CC
Creative Commons (CC) is an international nonprofit organization founded in 2001 that develops and maintains a suite of free, standardized copyright licenses enabling creators to legally share their works while specifying conditions for reuse, such as attribution requirements or restrictions on commercial exploitation.[1] These tools aim to bridge the gap between full copyright retention and public domain dedication, facilitating broader access to educational, cultural, and scientific content without requiring creators to abandon all rights.[2] Headquartered in the United States with a global network of affiliates, CC has seen billions of works released under its licenses, including images, music, and academic publications, influencing movements like open educational resources and open access publishing.[3] The organization's origins trace to efforts by legal scholar Lawrence Lessig and collaborators to address perceived overreach in digital copyright law, culminating in the release of the first CC license versions in December 2002.[3] Early adoption surged, with over one million works licensed by 2003 and nearly five million by 2004, driven by integrations into platforms like Flickr and Wikipedia's media repository.[4] Key achievements include the evolution of the license suite to version 4.0 in 2013, which improved compatibility with international treaties and machine-readable metadata standards, and advocacy for policy reforms promoting open sharing of government data and educational materials.[3] By sustaining a "commons" of reusable content, CC has supported advances in areas like research reproducibility and cultural preservation, though its impact is measured more through licensed outputs than through centralized metrics.[5] Despite widespread acclaim for democratizing access, CC has faced criticism over license complexity and unintended consequences, such as the ambiguous "non-commercial" clause in some variants, which courts have interpreted variably and which complicates enforcement.[6] Scholarly authors have expressed ambivalence toward permissive licenses like CC BY, citing concerns over uncompensated commercial repurposing by third parties, including use in AI training datasets, without explicit opt-in mechanisms.[7] Additionally, the fragmented set of license options has drawn critique for failing to establish a uniform open standard, potentially undermining interoperability in open projects, while rare but notable lawsuits highlight risks of misuse, such as improper attribution or violations in stock image repositories.[8] CC has responded by opposing overbroad enforcement tactics and by clarifying its opposition to mandatory upload filters that could stifle sharing.[9][10]
Communication and Documentation
Carbon copy
A carbon copy is a duplicate of an original document produced by interleaving carbon paper (a thin sheet coated with a pigmented waxy substance, typically containing carbon black) between the original sheet and one or more blank sheets, such that pressure from writing or typing transfers the pigment to create identical copies.[11] This method produced multiple copies simultaneously without separate rewriting, relying on mechanical impression rather than chemical reproduction. The technology originated in the early 19th century, with Italian inventor Pellegrino Turri developing an early form of carbon paper around 1801 to enable his blind correspondent to write legibly.[12] Englishman Ralph Wedgwood formalized and patented the process on October 7, 1806, describing it as an "apparatus for producing duplicates of writings," initially integrated into a noctograph device for tactile writing by the visually impaired.[13] Wedgwood's innovation shifted toward general duplication, and the term "carbon copy" had entered English usage by 1895 to denote such replicas made via carbon transfer.[14] By the mid-20th century, carbon paper had become ubiquitous in offices, available in interleaved sets or as carbon-infused typewriter ribbons for efficient multi-copy generation.[15] In business and official communication, carbon copies facilitated record-keeping, distribution, and accountability; for instance, memos or letters often included notations like "cc:" listing secondary recipients, signaling transparency or archival needs without implying the copy's independence from the original.[16] This practice peaked in typewriter-era bureaucracies, where sets of three or more copies were common for approvals, audits, or legal retention, the fainter impression on interleaved sheets distinguishing duplicates from originals.[17] Usage declined sharply after the 1950s with the advent of photocopiers in 1959 and, later, digital scanning, which rendered carbon methods obsolete through the higher fidelity, scalability, and cleanliness of electrostatic copying.[18] The legacy persists in digital correspondence, where "cc" denotes sending an identical copy of a message to additional parties for informational purposes, echoing the original sense of non-primary distribution in electronic workflows.[19] Archival standards still recognize carbon copies as contemporaneous duplicates, valued for their authenticity in historical records despite potential degradation from pigment smudging or paper aging.[16]
Closed captioning
Closed captioning refers to the display of time-synchronized text representing spoken dialogue, speaker identification, and non-speech audio elements such as sound effects and music cues on television broadcasts, videos, and other visual media, enabling access for deaf and hard-of-hearing individuals while remaining optional for other viewers via decoder activation.[20][21] Unlike open captions, which are permanently embedded and visible to all, closed captions are encoded separately in the broadcast signal or file metadata, typically using the CEA-608 standard carried on line 21 of analog NTSC signals or the CEA-708 standard embedded in digital ATSC streams.[22][23] Closed captioning originated in the early 1970s through federally funded engineering efforts in the United States, with initial testing and refinement supported by the Department of Education and PBS, culminating in the first public broadcasts on March 16, 1980, by the ABC, NBC, and PBS networks.[24][25] These early implementations required external decoder hardware, which became commercially available that year, coinciding with expanded programming that included selected national shows.[22] By the 1990s, the transition to digital television moved captioning from analog line-based encoding to embedded data streams, improving reliability and enabling multilingual support, though legacy analog methods persisted for compatibility.[26] Technical standards emphasize caption accuracy, synchrony with the audio, completeness in conveying aural content, placement that avoids obscuring important visuals, and consistency of formatting, as outlined by the Federal Communications Commission (FCC) under 47 CFR § 79.1.[23][27] Captions must replicate spoken words in the original language, identify speakers where this is unclear from the visuals, and describe key non-dialogue sounds, with display guidelines limiting on-screen text to one to three lines for readability.[28][29] Human-generated captions achieve higher fidelity than automated systems, which often exhibit error rates of 30-40% or more in challenging audio conditions such as accents, overlapping speech, or background noise, potentially misrepresenting content.[30][31] In the United States, FCC regulations, enacted via the Telecommunications Act of 1996 and phased in through 2006, mandate closed captioning for 100% of new, non-exempt English-language video programming on television, with exemptions for live or pre-recorded content where captioning imposes an undue burden, such as short local advertisements or archived foreign-language programming.[32][33] These rules extend to internet-distributed video originally aired on television, requiring equivalent or superior quality, and include mandates effective September 16, 2024, for "readily accessible" caption display settings on televisions, streaming devices, and apps to simplify activation for users with low vision or limited dexterity.[34][35] Non-compliance can result in fines, though self-implementing exemptions apply in cases of technical infeasibility, underscoring the balance between accessibility mandates and practical broadcaster constraints.[36][37]
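As a rough illustration of what a single caption cue encodes under these standards (timing for synchrony, text, an optional speaker label, and placement), consider the following minimal C sketch; the field names are hypothetical and are not drawn from the CEA-608/708 specifications:

    /* Illustrative model of one closed-caption cue: timing, speaker,
     * text, and vertical placement. Field names are hypothetical. */
    #include <stdio.h>

    typedef struct {
        double start_s;      /* when the cue appears, seconds from start */
        double end_s;        /* when the cue is cleared */
        const char *speaker; /* speaker label; NULL if clear from visuals */
        const char *text;    /* dialogue or sound description, 1-3 lines */
        int row;             /* screen row, chosen to avoid key visuals */
    } CaptionCue;

    int main(void) {
        CaptionCue cue = { 12.5, 15.0, "NARRATOR", "[thunder rumbling]", 1 };
        printf("%.1f-%.1fs %s: %s\n", cue.start_s, cue.end_s,
               cue.speaker, cue.text);
        return 0;
    }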
Units and Measurement
Cubic centimetre
The cubic centimetre (symbol: cm³) is a unit of volume in the metric system, defined as the volume of a cube with each side measuring one centimetre.[38] It is derived from the SI base unit of length, the metre: 1 cm³ equals 10⁻⁶ cubic metres (m³).[38] The abbreviation "cc" is also commonly used, particularly in technical and medical contexts.[39] One cubic centimetre is equivalent to one millilitre (mL) and thus to 0.001 litres.[38] This equivalence stems from the metric system's decimal structure, which simplifies conversions; for instance, 1000 cm³ equals 1 litre.[38] In terms of mass, 1 cm³ of water at its maximum density (at approximately 4°C) has a mass of 1 gram, a relation that historically defined the gram.[40] The cubic centimetre emerged as part of the metric system developed in France in the late 18th century, with initial proposals in 1795 and practical implementation by 1799, amid the French Revolution's drive to standardize measurements.[41] French scientists, including Antoine Lavoisier, based the system on natural invariants such as the Earth's meridian quadrant, yielding the centimetre as 1/100 of a metre and the cubic centimetre as its volumetric extension.[42] The gram was explicitly tied to the mass of 1 cm³ of pure water, linking volume directly to mass in the original definitions. In scientific and engineering applications, the cubic centimetre measures small volumes precisely, as in laboratory experiments, pharmaceutical dosages, and fluid dynamics calculations.[38] It is preferred for its decimal scaling within the SI, avoiding the fractional conversions common with non-metric units such as the cubic inch (1 in³ ≈ 16.387 cm³).[43] Though the SI's coherent unit of volume is the cubic metre, the cubic centimetre's widespread adoption supports reproducibility in fields like chemistry and biology.[38]
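These relations reduce to multiplication by fixed constants, as the following illustrative C sketch shows; the sample volume of 250 cm³ is arbitrary:

    /* Conversions for the cubic centimetre, using the relations above:
     * 1 cm^3 = 1e-6 m^3 = 1 mL, 1000 cm^3 = 1 L, 1 in^3 ~= 16.387 cm^3. */
    #include <stdio.h>

    int main(void) {
        double cc = 250.0;  /* example volume in cm^3 */
        printf("%.1f cm^3 = %g m^3\n", cc, cc * 1e-6);
        printf("%.1f cm^3 = %.1f mL = %.3f L\n", cc, cc, cc / 1000.0);
        printf("%.1f cm^3 = %.2f in^3\n", cc, cc / 16.387);
        return 0;
    }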
Intellectual Property and Licensing
Creative Commons
Creative Commons is a U.S.-based nonprofit organization that develops and maintains a set of free, standardized copyright licenses enabling creators to grant public permissions for reuse of their works while retaining certain rights.[44] Founded on December 16, 2001, by legal scholar Lawrence Lessig, computer scientist Hal Abelson, and activist Eric Eldred in San Francisco, California, the organization emerged as a response to the perceived overreach of traditional copyright law in the digital age, aiming to facilitate broader sharing of creative content without full relinquishment of authorship.[4] Its mission centers on building a "commons" of freely accessible knowledge and culture, promoting open access as an alternative to all-rights-reserved models.[45] The core offerings are six main license variants under version 4.0, released in 2013, which combine four basic attributes: BY (attribution required), SA (share-alike: derivatives must carry the same license), NC (non-commercial use only), and ND (no derivatives allowed).[46] For instance, CC BY permits broad reuse, including commercial adaptation, as long as the original creator is credited, while CC BY-NC-SA bars commercial exploitation and mandates identical licensing for derivatives.[44] Additionally, CC0 provides a tool for waiving all copyright and related rights in order to place works in the public domain. Earlier versions, such as 1.0 in 2002 and 2.0 in 2004, evolved to address compatibility and international applicability, and some older suites were retired due to disuse or enforceability concerns.[47] Adoption has grown significantly since inception: by 2003, over one million works carried CC licenses, rising to nearly five million by 2004.[4] More recently, Wikipedia's editions collectively host over 55 million articles under CC BY-SA, the Metropolitan Museum of Art has dedicated more than 492,000 images via CC0 or similar tools, and Khan Academy shares over 100,000 educational resources under CC BY-NC-SA.[44] These licenses underpin open educational resources, scholarly publishing, and cultural archives, with integrations in repositories like Flickr and partnerships advancing open science initiatives.[48] In 2025, Creative Commons launched a strategic plan through 2028 emphasizing resilient open infrastructure amid challenges such as AI data scraping.[49] Despite widespread use, CC licenses face criticism for practical limitations, including interoperability problems between variants (e.g., mixing SA- and non-SA-licensed material), the disclaimer of warranties that leaves users without guarantees of quality or non-infringement, and the non-commercial (NC) clause, which can hinder innovation by restricting market-based incentives.[50] Scholarly authors report confusion and ambivalence: nearly half are unfamiliar with license mechanics, and while 75% share their research, many do not opt for full openness due to career or control concerns.[7] Proponents argue these tools democratize access, but detractors, including some intellectual property advocates, contend they fragment the copyright landscape without addressing the root causes of overprotection, sometimes enabling "reintermediation" by new platforms that profit from licensed content.[51] Empirical evidence shows varied enforcement success, with legal disputes highlighting ambiguities in attribution and derivative definitions.[52]
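The count of six variants follows mechanically from the four attributes: BY appears in every license, SA and ND are mutually exclusive (a license cannot both require share-alike derivatives and forbid derivatives), and NC varies independently, giving 2 × 3 = 6 combinations, as this illustrative C sketch enumerates:

    /* Enumerates the six CC license variants: BY is mandatory, SA and
     * ND are mutually exclusive, NC is independent, so 2 * 3 = 6. */
    #include <stdio.h>

    int main(void) {
        const char *comm[]  = {"", "-NC"};        /* commercial use allowed or not */
        const char *deriv[] = {"", "-SA", "-ND"}; /* share-alike, no-derivs, or neither */
        for (int c = 0; c < 2; c++)
            for (int d = 0; d < 3; d++)
                printf("CC BY%s%s\n", comm[c], deriv[d]);
        return 0;
    }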
Computing and Software
Programming and compilers
In Unix-like operating systems, cc serves as the conventional command-line interface for invoking the C compiler, originating from the early development of the C programming language at Bell Labs.[53] Developed by Dennis Ritchie around 1972–1973 for the PDP-11 minicomputer running Version 3 Unix, the initial cc compiler translated C source files into object code and executables, using a recursive descent parser tailored to the PDP-11 architecture; it was initially implemented in assembly language before becoming partially self-hosting in C.[54] The command standardized C compilation workflows, accepting source files ending in .c, options for optimization or debugging, and linker specifications, and producing an executable named a.out by default.[55][56]
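A minimal example of this default workflow, using a hypothetical file name, might look as follows:

    /* hello.c -- minimal example of the classic cc workflow:
     *     cc hello.c       produces the default executable a.out
     *     ./a.out          runs it
     */
    #include <stdio.h>

    int main(void) {
        printf("hello from cc\n");
        return 0;
    }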
The cc command's syntax typically follows cc [options] source_files -o output_executable, with options enabling preprocessing only (-E), compilation without linking (-c, generating .o object files), and inclusion of libraries (e.g., -lm for math functions).[57] On historical systems such as V7 Unix (1979), cc directly invoked the PDP-11-specific backend, compiling and linking multiple source files in a single invocation.[56] Modern implementations commonly alias cc to the GNU Compiler Collection (GCC) or Clang, which extend the original with support for C standards beyond K&R C (ANSI C89, C99, C11, and later), cross-compilation, and advanced optimizations such as -O2 for speed or -g for debugging symbols.[58] On Linux distributions, for instance, cc typically links to gcc unless overridden, ensuring portability while preserving the Unix tradition.[53]
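As a sketch of how these options combine, the following hypothetical two-step build compiles with optimization and debug symbols, then links against the math library:

    /* root.c -- demonstrates the options described above, compiled in
     * two steps so the object file can be inspected or reused:
     *     cc -O2 -g -c root.c      compile only: produces root.o
     *     cc root.o -o root -lm    link, pulling sqrt() from libm
     */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        printf("sqrt(2.0) = %f\n", sqrt(2.0));
        return 0;
    }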
Distinctions arise with the uppercase CC command, which on platforms like Solaris denotes the C++ compiler (e.g., Sun Studio's CC for C++ sources), whereas cc remains dedicated to C; this convention carries over into build systems like Makefiles, where the variables $CC and $CXX specify the C and C++ compilers respectively.[59] In portable software projects, using $CC in build scripts allows substitution of vendor-specific compilers (e.g., IBM's xlc, or Microsoft's cl.exe via compatibility layers), promoting reproducibility across environments without hardcoding gcc.[60] Despite developments such as GCC's 1987 debut with support for multiple architectures, cc endures as the traditional, de facto standard interface, emphasizing simplicity and forward compatibility in compiler toolchains.[61]
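A sketch of this practice, with the Makefile fragment shown in comments (assuming a POSIX-style make, whose CC macro defaults to cc):

    /* main.c -- trivial program used to illustrate portable builds.
     * A minimal Makefile (sketch) avoids hardcoding gcc by deferring
     * to the CC macro:
     *
     *     main: main.c
     *             $(CC) $(CFLAGS) -o main main.c
     *
     * Running `make CC=clang` or `make CC=xlc` then swaps compilers
     * without editing the build script. */
    #include <stdio.h>

    int main(void) {
        puts("compiled by $(CC)");
        return 0;
    }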