Parallel
In geometry, '''parallel lines''' are lines in a plane that do not meet; that is, two distinct parallel lines never cross each other.
The term "parallel" has numerous other meanings and applications across various fields, surveyed in the sections below.
For the AI startup founded in 2023 by Parag Agrawal, see Parallel Web Systems.[1]
Mathematics
Geometry
In Euclidean geometry, parallel lines are defined as straight lines in a plane that do not intersect, no matter how far they are extended in either direction. This means two distinct lines lying in the same plane maintain a constant separation and never meet. A common real-world example is the appearance of railroad tracks, which seem to converge at a distance due to perspective but are truly parallel in the plane of the tracks.[2][3]
The concept of parallel lines was formalized in ancient Greek mathematics, particularly in Euclid's Elements (circa 300 BCE), where they are described as straight lines in the same plane that, when extended indefinitely in both directions, do not meet. Euclid's work in Book I provides the foundational definitions and propositions for parallelism, influencing geometric thought for over two millennia.[4][5]
Key properties of parallel lines include their equidistance: the perpendicular distance between them remains constant along their length. When a transversal—a line intersecting two or more parallel lines—crosses them, it creates specific angle relationships, such as equal corresponding angles (angles in matching positions relative to the parallels and transversal) and equal alternate interior angles (angles on opposite sides of the transversal and inside the parallels). These properties are central to proving theorems in plane geometry. For visualization, consider a diagram with two horizontal parallel lines intersected by a slanted transversal; the corresponding angles at the top-left and top-right intersections would be marked as congruent, illustrating the equal angles property.[6][3]
Extending to three-dimensional space, parallel planes are planes that do not intersect, meaning they share no common points and maintain a constant distance apart. Any line parallel to one of the planes will lie entirely within a plane parallel to both, preserving the spatial separation between the planes. A diagram of parallel planes might depict two horizontal sheets separated vertically, with a transversal line piercing both to show non-intersection and uniform spacing. The foundational assumptions enabling unique parallels in Euclidean space, such as the parallel postulate, underpin these definitions without altering their descriptive properties.[7][2]
Axioms and postulates
In Euclidean geometry, the concept of parallel lines is fundamentally defined by Euclid's fifth postulate, also known as the parallel postulate. This postulate states: "If a straight line falling on two straight lines makes the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on which are the angles less than the two right angles."[8] The postulate is typically illustrated in diagrams by two straight lines intersected by a transversal, where the interior angles on one side of the transversal are marked; if their sum is less than 180 degrees, the lines converge on that side, implying that parallels never meet when the sum equals 180 degrees.[8]
A key consequence of this postulate is its logical equivalence to Playfair's axiom, published by John Playfair in 1795, which asserts that through any point not on a given line, exactly one line can be drawn parallel to the given line in the same plane. This equivalence holds within the framework of Euclid's first four postulates and common notions, including triangle congruence, allowing Playfair's simpler formulation to replace the original in many proofs. The postulate plays a crucial role in deriving fundamental theorems, such as the fact that the sum of the interior angles of a triangle equals 180 degrees, by constructing auxiliary parallels to extend sides and apply alternate interior angle equality.
For over two millennia, mathematicians attempted to prove the parallel postulate as a theorem derivable from Euclid's other axioms, suspecting it was not truly independent. Early efforts include those by Ptolemy in the second century, who assumed properties equivalent to the postulate itself, and the fifth-century commentator Proclus, who critiqued earlier attempts but could not resolve the issue. In the 18th and 19th centuries, figures such as Legendre and Lagrange proposed near-proofs that inadvertently relied on the postulate, perpetuating the debate until its independence was established.
The resolution came through the independent development of non-Euclidean geometries in the 1820s and 1830s by Carl Friedrich Gauss, János Bolyai, and Nikolai Lobachevsky. Gauss explored the implications privately from 1792 onward, considering spaces where the postulate fails, while Bolyai published his work in 1832 as an appendix to his father's book, and Lobachevsky presented his ideas in 1829, demonstrating consistent geometries without the postulate.[9] In hyperbolic geometry, through a point not on a line, multiple parallels can be drawn, leading to triangles with angle sums less than 180 degrees; in elliptic geometry, no parallels exist, resulting in angle sums greater than 180 degrees.[9]
These discoveries had profound modern implications, particularly in differential geometry and physics. Non-Euclidean geometries underpin the curved spaces of general relativity, where parallelism depends on local metric properties.[9] In differential geometry, the Gaussian curvature K distinguishes these systems: for the Euclidean plane, K = 0, indicating flatness and the validity of Euclid's postulate, while K < 0 corresponds to hyperbolic geometry and K > 0 to elliptic.[10]
Operator theory
In operator theory, the parallel sum provides a mathematical framework for combining positive operators in a way that abstracts the behavior of parallel configurations, often inspired by physical systems but generalized to algebraic structures. A prominent example arises in the context of electrical resistance, where the parallel sum of two positive resistors R_1 and R_2 is defined as R_\parallel = \frac{R_1 R_2}{R_1 + R_2}. This formula derives from circuit theory, where the total conductance G = 1/R adds under parallel connection, yielding G_\parallel = G_1 + G_2, and thus the reciprocal form for resistance; this amalgamation rule simplifies network analysis by reducing parallel branches to equivalent single elements. For positive operators A and B on a Hilbert space, the parallel sum is defined as A : B = A (A + B)^\dagger B, where \dagger denotes the Moore-Penrose pseudoinverse, generalizing the scalar case.[11]
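The definition can be checked numerically. The following Python sketch (using NumPy; the helper name parallel_sum is illustrative, not a standard library function) computes A : B = A (A + B)^+ B and reduces to the resistor formula in the scalar case:

import numpy as np

def parallel_sum(A, B):
    # Parallel sum A : B = A (A + B)^+ B, with ^+ the Moore-Penrose
    # pseudoinverse; A and B are assumed positive semidefinite.
    return A @ np.linalg.pinv(A + B) @ B

# Scalar sanity check: resistors of 2 and 3 ohms in parallel give
# R1*R2/(R1 + R2) = 6/5 = 1.2 ohms.
print(parallel_sum(np.array([[2.0]]), np.array([[3.0]])))  # [[1.2]]

# Matrix case: the result is again positive semidefinite and is
# dominated by both A and B in the Loewner order.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 3.0]])
print(parallel_sum(A, B))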
Abstractly, parallel composition extends to algebraic settings like monoidal categories and monoids, where it combines structures independently without interference, often denoted by \otimes and satisfying associativity and unit axioms. In monoidal categories, this operation models concurrent processes or disjoint unions, enabling the composition of morphisms in parallel pathways, as seen in frameworks for concurrency and resource theories.[12] For instance, in vector spaces over a field, the parallel product can refer to the tensor product V \otimes W, which constructs a new space from independent bases without direct summation, preserving dimensionality multiplicatively.[13]
Another key example is parallel transport in differential geometry, which moves vectors along curves on a manifold so that the transported vector remains covariantly constant with respect to the Levi-Civita connection. This operation, formalized via the parallel transport map P_\gamma: T_pM \to T_qM along a curve \gamma from p to q, quantifies how tangent vectors evolve without rotation relative to the manifold's geometry; on a curved manifold the result generally depends on the path taken, and this path dependence underpins concepts like holonomy.
In systems equipped with both addition and inversion (such as the positive reals under addition, with the reciprocal as inverse), the parallel combination generalizes to a \parallel b = (a^{-1} + b^{-1})^{-1}, capturing the harmonic mean structure and extending the resistor model to abstract networks or semilattices. This operation is associative in appropriate settings and finds use in optimizing flows or resistances in graph-theoretic models.[14]
Science and engineering
Physics
In mechanics, parallel forces act along parallel lines of action. A special case is the couple: a pair of forces equal in magnitude and opposite in direction acting along non-collinear parallel lines, which produces no net translational force but a pure torque.[15] This configuration is exemplified in balanced scales, where the weights on either side create parallel forces that maintain equilibrium without rotation if perfectly aligned, though any misalignment introduces torque.[16] Such forces are fundamental in analyzing rigid body dynamics, as the turning effect of a couple is independent of the reference point chosen for calculation.[15]
In optics, parallel rays refer to light rays that maintain a constant separation from one another as they propagate, a concept central to the paraxial approximation, which assumes small angles relative to the optical axis to simplify lens and mirror calculations. This approximation treats rays near the axis as effectively parallel, enabling the derivation of thin lens formulas and aberration analysis.[17] For parallel interfaces, such as in a plane-parallel slab, Snell's law dictates that incident rays refract but emerge parallel to the original direction after the second interface, preserving the beam's collimation despite angular deviation within the medium.[18]
Electromagnetic wave propagation often involves parallel components of electric and magnetic fields relative to interfaces or guides. In parallel-plate waveguides, the transverse electromagnetic (TEM) mode features electric fields perpendicular to the plates and magnetic fields parallel to them, allowing propagation without cutoff frequency for the fundamental mode. These parallel components ensure that the wave's Poynting vector aligns with the guide's axis, minimizing losses in structures like microstrip lines.[19]
In quantum mechanics, parallel spins in particle pairs highlight correlations that challenge classical intuitions, as seen in the Einstein-Podolsky-Rosen (EPR) paradox proposed in 1935, where two entangled particles exhibit perfectly correlated spin measurements regardless of separation. This setup, later exemplified by David Bohm's 1951 spin-based version using the singlet state (anti-parallel total spin) or triplet states (parallel spins), underscores quantum entanglement, where measuring one particle's spin instantaneously determines the other's, leading to debates on non-locality resolved by Bell's theorem tests.[20] These discussions have driven advancements in quantum information science, confirming entanglement's role in phenomena like quantum teleportation.[20]
Parallel universes arise in cosmology through interpretations of quantum mechanics and extensions in string theory. Hugh Everett's many-worlds interpretation, introduced in 1957, posits that all possible outcomes of quantum measurements occur in branching parallel realities, avoiding wave function collapse by treating the universe as a superposition of coexisting worlds. Post-2000 developments in string theory, particularly the landscape of approximately 10^{500} possible vacua, suggest a multiverse where different universes have varying physical constants, enabling anthropic explanations for fine-tuning observed in our cosmos.[21] This framework integrates eternal inflation with string theory, implying parallel universes as distinct bubbles in an eternally expanding spacetime.[21]
Electrical engineering
In electrical engineering, parallel circuits connect multiple components across the same two nodes, ensuring that all components experience the identical voltage while the total current divides among the branches according to each component's impedance. This configuration contrasts with series circuits, where current is uniform but voltage divides. The foundational principle governing parallel circuits is Kirchhoff's current law (KCL), which states that the algebraic sum of currents entering a junction equals zero, thereby enforcing current conservation and enabling balanced power distribution across branches.[22][23]
For resistive elements in parallel, the equivalent resistance R_{eq} is derived from Ohm's law and KCL. Consider n resistors R_1, R_2, \dots, R_n connected in parallel across voltage V. The current through each resistor is I_i = V / R_i for i = 1 to n. The total current I_{total} is the sum of branch currents:
I_{total} = \sum_{i=1}^n I_i = \sum_{i=1}^n \frac{V}{R_i} = V \sum_{i=1}^n \frac{1}{R_i}.
By Ohm's law, I_{total} = V / R_{eq}, so
\frac{V}{R_{eq}} = V \sum_{i=1}^n \frac{1}{R_i} \implies \frac{1}{R_{eq}} = \sum_{i=1}^n \frac{1}{R_i} \implies R_{eq} = \left( \sum_{i=1}^n \frac{1}{R_i} \right)^{-1}.
This formula shows that R_{eq} is always less than the smallest individual resistance, facilitating lower overall impedance for power delivery.[23]
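As a worked example (a short Python sketch; the function name is illustrative), three resistors of 4, 6, and 12 ohms in parallel give 1/R_{eq} = 1/4 + 1/6 + 1/12 = 1/2, so R_{eq} = 2 ohms, smaller than the smallest branch:

def parallel_resistance(resistances):
    # R_eq = 1 / sum(1/R_i) for branches across the same two nodes.
    return 1.0 / sum(1.0 / r for r in resistances)

print(parallel_resistance([4.0, 6.0, 12.0]))  # 2.0 ohms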
Parallel circuits offer key advantages in reliability and power distribution. If one branch fails (e.g., an open circuit in a resistor), the others continue to operate without interruption, unlike series configurations where a single failure halts the entire circuit. This redundancy enhances system uptime, particularly in power distribution networks. Additionally, current divides proportionally, allowing efficient load sharing and preventing overload on any single path. A seminal example is household wiring, developed from Edison's systems of the 1880s onward, where parallel connections enabled independent operation of lamps and appliances from central stations like the 1882 Pearl Street Station, dividing current across branches to supply consistent voltage to multiple loads.[24][25]
In alternating current (AC) systems, parallel combinations of capacitors and inductors are analyzed using complex impedance Z, where the equivalent impedance follows the reciprocal sum rule:
Z_{parallel} = \left( \sum_{i=1}^n \frac{1}{Z_i} \right)^{-1}.
For capacitors, Z_C = 1 / (j \omega C), so parallel capacitors yield C_{eq} = \sum C_i and Z_{C,eq} = 1 / (j \omega C_{eq}), increasing total capacitance for energy storage. For inductors, Z_L = j \omega L, so parallel connection reduces effective inductance to L_{eq} = \left( \sum 1/L_i \right)^{-1}, minimizing reactance in filters. Phasor diagrams illustrate this: voltages across parallel elements are collinear (in phase), while branch currents are phase-shifted—capacitive currents lead the voltage by 90°, inductive currents lag by 90°—and the total current is their vector sum at the junction per KCL. These calculations are essential for designing resonant circuits and power factor correction networks.[26][27]
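The same reciprocal rule extends directly to phasor analysis with complex arithmetic. A brief Python sketch (component values are illustrative):

import cmath

def parallel_impedance(impedances):
    # Reciprocal sum rule: Z_parallel = 1 / sum(1/Z_i).
    return 1.0 / sum(1.0 / z for z in impedances)

f = 60.0                      # supply frequency in Hz
w = 2 * cmath.pi * f          # angular frequency
R, L, C = 100.0, 0.5, 10e-6   # ohms, henries, farads

Z_R = complex(R, 0.0)
Z_L = 1j * w * L              # inductive branch: current lags by 90 degrees
Z_C = 1.0 / (1j * w * C)      # capacitive branch: current leads by 90 degrees

Z = parallel_impedance([Z_R, Z_L, Z_C])
print(abs(Z), cmath.phase(Z)) # magnitude and phase of the equivalent impedance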
Modern applications leverage parallel configurations in renewable energy systems, such as solar photovoltaic (PV) arrays since the 2010s, where multiple inverters operate in parallel to integrate distributed generation into grids. This setup scales power output (e.g., from modular 10-100 kW units to MW-scale farms), improves efficiency through load sharing, and enhances reliability by allowing redundant operation during faults, addressing intermittency in solar output. Safety standards, including those from the National Electrical Code (NEC), enforce KCL in parallel designs to ensure current balance, preventing neutral overloads or ground faults that could lead to fires or shocks; for instance, parallel branch circuits must be sized to handle divided loads without exceeding conductor ratings.[28]
Mechanical engineering
In mechanical engineering, parallelism refers to configurations where components or forces act in unison to achieve stable motion, structural integrity, or controlled dynamics in systems such as linkages, rotating bodies, and composite materials. These principles enable efficient design in machinery, vehicles, and robotics by ensuring synchronized responses that minimize deviations and enhance performance.
Parallel mechanisms, such as the parallelogram linkage, facilitate approximate straight-line motion through interconnected bars that maintain proportional displacement. Invented by James Watt in 1784 as part of his steam engine improvements, Watt's linkage—a six-bar mechanism—converts rotational motion into near-linear translation for the piston rod, using parallelogram arrangements to approximate a straight path over a significant stroke length.[29] This design addressed the inefficiencies of earlier engines by providing stable, unison movement of linkage elements, reducing side thrust on cylinders.[30]
The parallel axis theorem, also known as Steiner's theorem, relates the moment of inertia of a rigid body about an arbitrary axis to that about a parallel axis through its center of mass. Formulated by Jakob Steiner in the 1830s, it states that I = I_{cm} + Md^2, where I is the moment of inertia about the new axis, I_{cm} is the moment about the center-of-mass axis, M is the total mass, and d is the perpendicular distance between the axes.[31] A proof outline derives from the definition of moment of inertia: for a body rotating about point S displaced by vector \vec{d} from the center of mass G, the distance r from S satisfies r^2 = r_{cm}^2 + d^2 + 2 \vec{r}_{cm} \cdot \vec{d}, where r_{cm} is the distance from G; integrating over mass elements yields I_S = \int r^2 \, dm = I_{cm} + Md^2, as the cross term vanishes at the center of mass (\int \vec{r}_{cm} \, dm = 0). This theorem simplifies calculations for off-center rotations in engineering designs like flywheels and gears.
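As a short worked example (Python; the values are illustrative), a uniform rod of mass M and length L has I_{cm} = ML^2/12 about its center, so the theorem gives ML^2/3 about one end:

def parallel_axis(I_cm, mass, d):
    # I = I_cm + M d^2 for an axis a distance d from the center of mass.
    return I_cm + mass * d**2

M, L = 2.0, 1.5
I_cm = M * L**2 / 12            # rod about its center
I_end = parallel_axis(I_cm, M, L / 2)
print(I_end, M * L**2 / 3)      # both 1.5 kg m^2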
In vibration control, parallel dampers in suspension systems dissipate energy through synchronized resistance, reducing oscillations in vehicles and machinery. Typically arranged in parallel with springs between the chassis and wheels, these hydraulic or viscoelastic dampers convert kinetic energy from road irregularities into heat via fluid friction, achieving damping ratios that limit amplitude by 50-70% compared to undamped systems.[32] This configuration ensures unison response across multiple wheels, enhancing stability and ride comfort in automotive applications.[33]
Parallel manipulators in robotics, such as Delta robots, employ multiple kinematic chains operating in unison for high-speed, precise positioning. Invented by Reymond Clavel at EPFL in 1985, the Delta robot uses three parallelogram-linked arms driven by rotary actuators to achieve three translational degrees of freedom within a workspace of up to 400 mm diameter, enabling accelerations exceeding 100g for tasks like pick-and-place.[34] By the 2020s, these robots had become widespread in manufacturing, with over 20,000 units installed globally for packaging and assembly due to their rigidity and speed advantages over serial manipulators.[35]
In material science, parallel fiber composites enhance mechanical strength through aligned reinforcements embedded in a matrix. Carbon fiber-reinforced polymers (CFRPs), with fibers oriented parallel to the load direction, provide tensile strengths up to 3-7 GPa and moduli of 200-600 GPa, far surpassing metals by weight.[36] Commercial development began in the 1960s, with high-modulus PAN-based carbon fibers introduced in 1964 for aerospace structures like aircraft fuselages, where unidirectional alignments transfer stress efficiently without transverse shear failures.[37]
Computing
Parallel processing
Parallel processing in computing refers to the simultaneous execution of multiple tasks or computations across multiple processors or cores to solve a problem more efficiently than sequential processing. This approach leverages concurrent execution architectures to divide workloads, enabling faster overall performance for computationally intensive applications. The concept traces its roots to the 1940s with early explorations in parallel computation alongside the von Neumann architecture, which initially emphasized serial processing but inspired subsequent parallel designs. Practical advancements accelerated in the late 20th century, culminating in the widespread adoption of multi-core processors, such as Intel's introduction of dual-core CPUs like the Pentium D in 2005, which marked a shift toward mainstream parallel hardware in consumer and server systems.[38][39][40]
Hardware architectures for parallel processing vary in how processors interact and share resources. Symmetric multiprocessing (SMP) treats all processors equally, allowing any processor to execute any task and access shared resources uniformly, which simplifies programming but can introduce contention in large systems. In contrast, asymmetric multiprocessing assigns distinct roles to processors, often with a master-slave relationship where one processor handles I/O or coordination while others focus on computation, offering simplicity for specialized tasks but less flexibility. Graphics processing units (GPUs) exemplify massive parallelism through thousands of cores optimized for data-parallel workloads; NVIDIA's CUDA platform, introduced in 2006, enabled general-purpose computing on GPUs by providing a programming model for executing thousands of threads across these cores simultaneously.[41][42][43]
A fundamental limit to parallel speedup is described by Amdahl's law, formulated by Gene Amdahl in 1967, which quantifies how the serial portion of a program constrains overall performance gains from adding processors. The law states that the maximum speedup S achievable with p processors is given by
S = \frac{1}{f + \frac{1 - f}{p}},
where f is the fraction of the program's execution time that must run serially (unaffected by parallelism). To derive this, consider a program's total execution time on a single processor as T(1) = 1 (normalized). The serial portion takes time f, and the parallelizable portion takes 1 - f. On p processors, the serial time remains f, while the parallel portion is divided equally, taking \frac{1 - f}{p}. Thus, the total time on p processors is T(p) = f + \frac{1 - f}{p}, and the speedup is S(p) = \frac{T(1)}{T(p)} = \frac{1}{f + \frac{1 - f}{p}}. This formula highlights diminishing returns as p increases, since the serial fraction f bottlenecks efficiency; for example, if f = 0.05 (5% serial), the theoretical maximum speedup approaches 20x but never exceeds it regardless of p.[44][45]
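A small Python sketch makes the diminishing returns concrete (the function name is illustrative):

def amdahl_speedup(f, p):
    # S = 1 / (f + (1 - f) / p), with f the serial fraction.
    return 1.0 / (f + (1.0 - f) / p)

# With f = 0.05, speedup approaches but never exceeds 1/f = 20.
for p in (1, 4, 16, 64, 1024):
    print(p, round(amdahl_speedup(0.05, p), 2))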
Parallel systems also differ in memory organization, which impacts data access and scalability. In shared-memory models, all processors access a common address space, facilitating easy data sharing but risking bottlenecks from contention; uniform memory access (UMA) provides equal latency to all memory, while non-uniform memory access (NUMA), emerging in the 1990s with systems like those from Hewlett-Packard and Silicon Graphics, assigns local memory to processor nodes for faster access, with remote memory incurring higher latency. Distributed-memory architectures, conversely, give each processor its own private memory, requiring explicit message passing for data exchange, which suits large-scale clusters but increases programming complexity. These models underpin modern supercomputers and cloud infrastructures, balancing locality and sharing for efficient parallel execution.[38][46]
Beyond classical hardware, quantum parallel processing represents a frontier expansion, exploiting qubit superposition to evaluate multiple computational paths simultaneously. IBM's advancements, including the 2023 Heron processor with improved error rates, have pushed toward fault-tolerant quantum architectures capable of massive parallelism for problems intractable on classical systems, such as optimization and simulation. In November 2025, IBM announced the Nighthawk processor, expected to be delivered by the end of 2025, featuring 120 qubits in a square lattice topology with 218 tunable couplers and enhanced connectivity, demonstrating progress toward quantum advantage through inherent parallel exploration of state spaces.[47][48][49]
Algorithms and models
Parallel algorithms in computing leverage divide-and-conquer strategies to distribute workloads across multiple processors, enabling efficient problem-solving for large-scale computations. A prominent example is the parallelization of merge sort, where the array is recursively divided into halves, sorted in parallel, and then merged. In this approach, the initial division can be performed in O(log n) time using a binary tree structure, with merging handled concurrently by assigning subarrays to processors; overall, the algorithm achieves O(log n) time complexity on p processors when p is proportional to n, assuming concurrent read/write capabilities without conflicts.[50][51]
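The divide-and-conquer structure can be sketched in Python, sorting the two halves in separate processes before a final merge (a minimal two-worker illustration of the idea, not the full O(log n)-time PRAM algorithm described above):

from concurrent.futures import ProcessPoolExecutor
from heapq import merge

def merge_sort(a):
    # Standard serial merge sort, run inside each worker.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    return list(merge(merge_sort(a[:mid]), merge_sort(a[mid:])))

def parallel_merge_sort(a, executor):
    # Top-level split: the two halves are sorted concurrently.
    mid = len(a) // 2
    left = executor.submit(merge_sort, a[:mid])
    right = executor.submit(merge_sort, a[mid:])
    return list(merge(left.result(), right.result()))

if __name__ == "__main__":
    data = [5, 3, 8, 1, 9, 2, 7, 4, 6, 0]
    with ProcessPoolExecutor(max_workers=2) as ex:
        print(parallel_merge_sort(data, ex))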
Theoretical models formalize the behavior of parallel systems to analyze algorithm efficiency. The Parallel Random Access Machine (PRAM) model, introduced in 1978, posits a collection of identical processors sharing a common memory, allowing concurrent reads and writes under specific conflict rules (e.g., EREW for exclusive read/exclusive write). It assumes constant-time access to any memory location, facilitating asymptotic analysis but overlooking real-world communication costs and hardware constraints like memory bandwidth limitations.[52] In contrast, the Bulk Synchronous Parallel (BSP) model, proposed in 1990, divides computation into supersteps separated by global synchronization barriers, incorporating explicit costs for communication (g \cdot h, where g is the per-unit communication cost and h is the maximum message volume per processor) and synchronization (l). BSP's assumptions of bulk data exchange make it more realistic for distributed-memory systems, though it can overestimate latency in low-contention scenarios.
Synchronization mechanisms are essential to coordinate parallel execution and prevent errors such as race conditions, where multiple threads access shared data inconsistently. Locks provide mutual exclusion for critical sections, ensuring only one thread modifies a resource at a time, while barriers enforce collective waiting until all processors reach a synchronization point. Race conditions arise from unsynchronized writes, potentially leading to nondeterministic outcomes; for instance, in a parallel counter increment, concurrent updates without atomic operations can yield incorrect totals. The OpenMP standard, first specified in 1997, simplifies these via directives like #pragma omp critical for locks and #pragma omp barrier for synchronization, enabling portable shared-memory parallelism without low-level thread management.[53]
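The counter example can be reproduced with Python's threading module, whose Lock plays the role that OpenMP's critical directive plays in C (a sketch; the exact number of lost updates varies from run to run):

import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        counter += 1          # read-modify-write is not atomic: a race

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:            # mutual exclusion, analogous to omp critical
            counter += 1

def run(worker):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(unsafe_increment))  # often less than 400000: updates are lost
print(run(safe_increment))    # always 400000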
In big data applications, MapReduce exemplifies parallel processing for distributed environments, introduced by Google in 2004 as a framework for handling massive datasets across clusters. It operates in two phases: the map function processes input key-value pairs to generate intermediate pairs, which are shuffled and sorted by key, followed by the reduce function aggregating values per key. This model abstracts fault tolerance and load balancing, scaling linearly with data size; for example, processing terabyte-scale web indexes can complete in hours on thousands of machines. Pseudocode for MapReduce is as follows:
MAP(input_key, input_value):
    for each intermediate_key in process(input_value):
        EMIT(intermediate_key, intermediate_value)

SHUFFLE_AND_SORT:
    group intermediate values by intermediate_key
    for each unique intermediate_key:
        REDUCE(intermediate_key, list(intermediate_values))
Here, the REDUCE function applies user-defined aggregation, such as summing values.[54]
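The canonical word-count example can be expressed in ordinary Python as a single-process sketch of the three phases (a real MapReduce framework distributes these phases across a cluster and handles faults):

from collections import defaultdict

def map_phase(documents):
    # MAP: emit (word, 1) for every word in every document.
    for doc_id, text in documents.items():
        for word in text.split():
            yield word, 1

def shuffle(pairs):
    # SHUFFLE_AND_SORT: group intermediate values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # REDUCE: user-defined aggregation, here summing the counts.
    return {key: sum(values) for key, values in groups.items()}

docs = {1: "to be or not to be", 2: "to parallelize or not"}
print(reduce_phase(shuffle(map_phase(docs))))
# {'to': 3, 'be': 2, 'or': 2, 'not': 2, 'parallelize': 1}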
Parallelism in artificial intelligence, particularly deep learning, has advanced through distributed training of neural networks since the 2010s, addressing the computational demands of large models. Techniques like data parallelism replicate the model across nodes, partitioning the dataset for gradient computation followed by synchronization via all-reduce operations, achieving near-linear speedup on GPU clusters. Model parallelism divides network layers or parameters across devices for models exceeding single-device memory, as demonstrated in early large-scale systems training billion-parameter networks. A seminal implementation scaled deep networks with billions of parameters using asynchronous stochastic gradient descent across thousands of cores, highlighting parallelism's role in enabling modern AI advancements.
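The data-parallel pattern reduces to a simple loop: each worker computes a gradient on its own shard, and an all-reduce averages the gradients before a synchronized update. A conceptual NumPy sketch (single process; in practice each shard lives on a separate node and all_reduce_mean is a collective communication step):

import numpy as np

def all_reduce_mean(grads):
    # Stand-in for an all-reduce across workers: average the gradients.
    return sum(grads) / len(grads)

def local_gradient(w, X, y):
    # Gradient of mean squared error for a linear model on one shard.
    return 2.0 * X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
shards = []
for _ in range(4):            # four workers, each with its own data shard
    X = rng.normal(size=(32, 3))
    shards.append((X, X @ true_w))

w = np.zeros(3)               # every worker holds the same model replica
for step in range(200):
    grads = [local_gradient(w, X, y) for X, y in shards]  # parallel on a cluster
    w -= 0.05 * all_reduce_mean(grads)                    # synchronized update
print(w)                      # converges toward true_w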
Language and linguistics
Grammar and rhetoric
Parallelism, also known as parallel structure, is a grammatical and rhetorical device that involves using the same pattern of words or phrases to express two or more ideas of equal importance, thereby creating balance and rhythm in a sentence.[55] This technique ensures that elements in a list, series, or comparison maintain consistent grammatical form, such as matching nouns with nouns, verbs with verbs, or phrases with phrases.[56] A classic example is Julius Caesar's famous declaration, "Veni, vidi, vici" ("I came, I saw, I conquered"), where the repeated structure of simple past verbs emphasizes the swift sequence of events.[57]
In rhetoric, parallelism serves to heighten emphasis and persuasion in speeches and writing by reinforcing ideas through repetition of form.[58] For instance, Abraham Lincoln's Gettysburg Address (1863) employs parallelism extensively, as in the closing line: "that government of the people, by the people, for the people, shall not perish from the earth," which underscores democratic ideals through balanced prepositional phrases.[59] To avoid faulty parallelism, writers must ensure grammatical consistency; common errors include mixing infinitives with gerunds, such as incorrectly writing "She likes to swim, running, and to hike" instead of the parallel "She likes to swim, run, and hike."[60] Other pitfalls involve mismatched coordination, like pairing a noun with a clause: "The coach told the players to practice daily and that they should stay hydrated," which should be revised to "The coach told the players to practice daily and stay hydrated."[61]
Syntactically, parallelism applies to coordinating nouns (e.g., "reading books and newspapers"), verbs (e.g., "run, jump, and leap"), or entire phrases (e.g., "in the morning, in the afternoon, and in the evening") to maintain equilibrium within sentences.[62] Cognitively, this structure enhances readability by reducing the mental effort required to process linked ideas and improves memorability through rhythmic patterns that aid retention.[63]
The concept traces its roots to classical rhetoric, where Aristotle first illustrated parallelism's role in argumentative style by discussing its use in balancing cola (sentence segments) for clarity and impact.[64] It evolved through centuries of oratory and persisted in modern writing guides, such as William Strunk Jr. and E.B. White's The Elements of Style (first published 1918), which advises: "This principle, that of parallel construction, requires that expressions of similar content and function should be outwardly similar."[65]
Translation and texts
Parallel texts, also known as bilingual or multilingual corpora, consist of aligned sentences or segments from source and target languages that facilitate comparative linguistic analysis and translation studies.[66] These resources enable researchers to examine structural correspondences, vocabulary mappings, and syntactic variations across languages. A prominent example is the Europarl corpus, extracted from European Parliament proceedings starting in 1996, which provides parallel data in 21 European languages and has been instrumental in advancing statistical machine translation research.[67]
Historically, parallel texts have served as foundational tools for deciphering unknown languages, as exemplified by the Rosetta Stone, a granodiorite stele inscribed in 196 BCE with identical decrees in ancient Egyptian hieroglyphs, Demotic script, and Ancient Greek, allowing scholars to align and translate the scripts.[68] In modern contexts, official multilingual documents from international organizations continue this tradition; the United Nations Parallel Corpus, comprising manually translated records from 1990 to 2014 across its six official languages (Arabic, Chinese, English, French, Russian, and Spanish), supports cross-lingual research and totals over 13 million sentence pairs.[69]
Alignment techniques for parallel texts involve statistical methods to match source and target sentences automatically, addressing variations in length and order. The IBM Models 1 through 5, developed in the 1990s, represent seminal approaches using expectation-maximization algorithms to estimate word-to-word and phrase alignments based on probabilistic translation models derived from bilingual data. These models, particularly Models 1 and 2 for lexical and absolute alignments, laid the groundwork for handling noisy or imperfect corpora by modeling fertility and distortion probabilities in later iterations.
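The core of IBM Model 1 fits in a few lines. The following Python sketch runs expectation-maximization on a toy two-sentence corpus (the data and variable names are illustrative, and the NULL word is omitted for brevity):

from collections import defaultdict

corpus = [("la maison", "the house"), ("la fleur", "the flower")]

f_vocab = {f for fs, _ in corpus for f in fs.split()}
e_vocab = {e for _, es in corpus for e in es.split()}
t = {(f, e): 1.0 / len(f_vocab) for f in f_vocab for e in e_vocab}

for _ in range(10):
    count = defaultdict(float)       # expected co-occurrence counts
    total = defaultdict(float)
    for fs, es in corpus:            # E-step: fractional alignment counts
        for f in fs.split():
            norm = sum(t[(f, e)] for e in es.split())
            for e in es.split():
                c = t[(f, e)] / norm
                count[(f, e)] += c
                total[e] += c
    for (f, e), c in count.items():  # M-step: re-estimate t(f|e)
        t[(f, e)] = c / total[e]

print(round(t[("maison", "house")], 3))  # rises toward 1.0 across iterations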
In machine translation, parallel corpora provide essential training data for neural systems developed post-2010, where encoder-decoder architectures learn alignments and translations jointly from sentence pairs to enhance fluency and accuracy.[70] For instance, large-scale parallel resources like Europarl and UN corpora have been used to train models that achieve significant improvements in low-resource language pairs by enabling transfer learning and back-translation techniques.
Despite their utility, parallel texts face challenges from idiomatic shifts, where expressions lack direct equivalents across cultures, and broader cultural gaps that alter contextual meanings, often leading to alignment errors or loss of nuance in automated systems.[71] These issues require supplementary monolingual data or human intervention to mitigate distortions in training neural translators.
In literature, parallelism serves as a narrative device to interweave multiple storylines that mirror or contrast themes, enhancing thematic depth and structural complexity. This technique allows authors to explore interconnected fates, social issues, or moral dilemmas through juxtaposed plots, often drawing on historical or mythical parallels to underscore universality. William Shakespeare's The Tempest (1611) exemplifies this with its dual narratives: Prospero's enchanted island mirrors colonial exploitation, where the magician's control over Ariel and Caliban parallels European dominion over indigenous peoples, highlighting themes of power and redemption.[72] Scholars note that these parallel plots construct the play's core motifs of ambition and enslavement, with Prospero's arc reflecting both victimhood and tyranny.
The concept of parallel universes, or alternate realities branching from key divergences, emerged prominently in science fiction during the pulp magazine era of the 1930s, building on earlier speculative fiction to probe "what if" scenarios. These narratives often extend fictional interpretations of cosmological theories, such as multiple branching realities, to examine historical contingencies and human agency. Philip K. Dick's The Man in the High Castle (1962) masterfully employs this trope, depicting a world where the Axis powers won World War II, with subtle intrusions from our reality via an oracle-like book that challenges characters' perceptions of truth and fate.[73] The novel's layered realities underscore existential decoherence, where personal identities fracture across worlds, influencing later multiverse explorations in genre fiction.[74]
In film, parallel editing—also known as cross-cutting—juxtaposes simultaneous actions in separate locations to build tension, rhythm, and thematic resonance, a technique pioneered by D.W. Griffith. His epic Intolerance (1916) intercuts four historical vignettes spanning Babylon, Judea, France, and modern America, all tied by the theme of intolerance's enduring struggle, creating a symphony of human suffering that transcends time.[75] This method not only heightens drama but also invites viewers to draw parallels between eras, establishing cross-cutting as a foundational tool in cinematic storytelling. Modern directors like Christopher Nolan have refined it further; in Inception (2010), parallel editing synchronizes multiple dream levels unfolding concurrently, where cuts between collapsing subconscious realms amplify the disorientation of nested realities and the urgency of synchronized "kicks" to awaken.[76] Nolan's use echoes Griffith's innovation, adapting it to complex, non-linear structures that mirror psychological fragmentation.[77]
Parallelism in literature and film frequently employs mirrored worlds for social commentary, critiquing contemporary issues through dystopian or alternate lenses that reflect real-world perils. Margaret Atwood's The Handmaid's Tale (1985) constructs a theocratic regime in Gilead as a distorted parallel to patriarchal structures in modern society, where women's subjugation via reproductive control warns of eroding rights under authoritarianism.[78] The novel's speculative mirroring draws direct parallels to historical oppressions like Puritanism and 20th-century totalitarianism, urging readers to recognize incremental erosions of freedom in their own era.[79]
Recent trends in film have amplified parallel universes through the multiverse framework, particularly in the Marvel Cinematic Universe (MCU) post-2019, where branching timelines allow for expansive crossovers and explorations of identity across variants. Films like Spider-Man: No Way Home (2021) and Doctor Strange in the Multiverse of Madness (2022) weave disparate realities to revisit legacy characters, expanding narrative scope while commenting on consequence and choice in a fragmented world. The saga continued with releases like The Fantastic Four: First Steps (2025), which incorporates multiverse elements by depicting an alternate 1960s timeline, further emphasizing themes of variant identities and interconnected realities.[80] This phase, dubbed the Multiverse Saga, has fragmented traditional linear storytelling, prioritizing interconnected variants over singular arcs, though it risks diluting focus amid rapid expansions.[81][82]
Music
In music theory, parallelism refers to the movement of multiple voices or chord tones in the same direction by equivalent intervals, creating harmonic relationships that emphasize linear progression over functional tonal resolution. This technique has been integral to composition since the medieval period, influencing everything from sacred polyphony to modern genres. Parallel motion contrasts with contrary or oblique motion by reinforcing intervallic consistency, often evoking a sense of unity or archaic resonance.[83]
Parallel chords occur when the individual notes of a chord progress by the same interval, maintaining their relative positions while shifting the overall harmony. For instance, a C major triad (C-E-G) moving to D major (D-F♯-A) exemplifies parallel motion, where each voice ascends by a whole step. This approach, distinct from root-position progressions, prioritizes smooth voice leading and is common in non-functional harmony. In counterpoint, however, certain parallel intervals, such as perfect fifths and octaves, were historically restricted to preserve voice independence; parallel fifths, where two voices maintain a perfect fifth while moving in the same direction (e.g., from C-G to D-A), were seen as fusing lines into a single entity, diminishing contrapuntal texture.[83]
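The uniform shift is simple pitch-class arithmetic. A small Python sketch transposes every voice of a triad by the same interval, reproducing the C major to D major example above:

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def transpose(chord, semitones):
    # Parallel motion: every voice moves by the same interval.
    return [NOTES[(NOTES.index(n) + semitones) % 12] for n in chord]

c_major = ["C", "E", "G"]
print(transpose(c_major, 2))   # ['D', 'F#', 'A']: D major, a whole step up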
The prohibition of parallel fifths emerged as polyphony evolved beyond rudimentary forms. In medieval organum, dating to the 9th century, parallel fourths and fifths formed the basis of early two-voice textures, as seen in treatises like those attributed to Hucbald, where the vox organalis tracked the principal voice at a fixed interval to enrich monophonic chant. By the 13th century, theorists such as Johannes de Garlandia explicitly banned consecutive perfect intervals in discant, arguing they obscured melodic distinction and rhythmic variety, a rule codified in later Renaissance treatises like Fux's Gradus ad Parnassum (1725). Despite the ban, parallels persisted in folk traditions and occasionally resurfaced for coloristic effect.[84][85]
Parallel keys involve modulation between a major key and its parallel minor (sharing the same tonic), such as from C major to C minor, often achieved through modal mixture or pivot chords like the subdominant. This shift introduces chromaticism while preserving tonal center, creating emotional contrast; for example, the borrowed ♭VI or ♭III chords facilitate the transition. Beethoven employed parallel key modulations extensively in his early 19th-century works to heighten drama, as in the slow movement of his Piano Trio in G major, Op. 1 No. 2 (1795), where modal borrowing evokes a parallel minor inflection before resolving, and the Piano Sonata in C major, Op. 2 No. 3 (1795), which uses similar techniques for expressive depth. These modulations reflect Beethoven's study under Haydn, who pioneered such chromatic relations in string quartets.[86]
In harmonic progressions, parallel motion appears in jazz and rock through stacked dominant seventh chords, where voicings shift uniformly to outline scalar or chromatic lines. Since the 1920s, as jazz matured beyond ragtime, parallel dominant sevenths—such as G7 to A♭7—became staples in big band arrangements and solos, providing tension without traditional resolution; Thelonious Monk's "Ruby, My Dear" (1947) features chromatic parallel minor and major sevenths over a bass progression, echoing blues influences. In rock, similar parallels underpin power chord sequences, emphasizing rhythmic drive over voice separation.[87]
Notation for parallel motion typically employs scale-degree analysis to track intervallic consistency. For example, in C major, parallel ascending motion might progress from scale degrees 1 (C) and 5 (G) to 2 (D) and 6 (A), forming consecutive fifths; descending parallels could shift 3 (E) and 1 (C) to 2 (D) and ♭7 (B♭) in a Mixolydian context. Interval tables outline permissible progressions: parallel thirds (e.g., 3-5 to 4-6) are consonant and fluid, while perfect intervals like fifths (1-5 to 2-6) require contrary motion in strict counterpoint to avoid prohibition. These conventions ensure readability and structural clarity in scores.[83][88]
The evolution of parallel motion traces from medieval organum's fixed-interval duplication, which prioritized consonance over independence, through Renaissance counterpoint's avoidance of parallels to foster imitation and dissonance treatment. By the 20th century, Arnold Schoenberg revived them in atonal works, rejecting tonal prohibitions; in his Three Piano Pieces, Op. 11 (1909), parallel chords delineate phrases without functional harmony, as discussed in his Theory of Harmony (1911), where he critiques the fifths ban as outdated. This shift marked parallels' role in expressionist music, bridging archaic and avant-garde aesthetics.[89][90]
Other uses
Sports and recreation
In gymnastics, the parallel bars are a men's artistic apparatus consisting of two wooden or fiberglass bars supported on posts, set parallel to each other approximately 42 centimeters apart and 230 centimeters above the ground. Invented in the early 19th century by German educator Friedrich Ludwig Jahn, the apparatus was designed to promote physical fitness and strength through swinging and balancing exercises.[91] It has been a staple event in the Olympic Games since their modern inception in 1896, where it featured as one of the core competitive disciplines.[92] Routines on parallel bars emphasize upper-body strength and coordination, with performers executing swings, holds, and transitions while maintaining support on the bars. Key techniques include giant circles, where the gymnast swings fully around the bars in a 360-degree motion with straight arms and body, building momentum for subsequent elements. Dismounts often conclude routines with high-difficulty releases, such as front or back somersaults, requiring precise timing to land safely.[93]
Parallel skiing refers to an advanced technique in alpine skiing where both skis remain parallel throughout turns, allowing for greater speed and control on steeper slopes compared to beginner methods. Developed in the 1930s as part of evolving instructional progressions, it built on earlier snowplow turns—where skis form a V-shape for braking—by teaching skiers to gradually align their skis parallel through edging and weight shifting.[94] This method, popularized in European ski schools like those in the Arlberg region, emphasizes carving turns with the skis' edges rather than skidding, reducing drag and enhancing maneuverability.[95] Unlike the snowplow, which relies on friction for stability and is limited to low speeds, parallel skiing enables fluid, linked turns that are essential for intermediate and expert downhill skiing.[96]
In other competitive sports, parallel formats enhance head-to-head racing dynamics. Parallel slalom, common in alpine skiing and snowboarding, pits athletes against each other on identical side-by-side courses marked by closely spaced gates, typically 10 to 15 meters apart, to determine the fastest descent.[97] This format debuted as an Olympic event in snowboarding at the 2014 Sochi Games and has since expanded to mixed-team competitions.[98] In cycling, tandem bicycles—designed for two riders in tandem positions—are used on velodrome tracks for para-cycling events, where teams synchronize pedaling on the parallel straights and banked curves to achieve speeds exceeding 60 km/h in pursuits and sprints.[99] These setups leverage mechanical linkages between the riders' cranks for unified power output.[100]
In everyday driving, parallel parking is a fundamental skill that involves maneuvering a vehicle backward into a space aligned parallel to the curb between two others. Emerging as urban streets filled with automobiles in the early 20th century, the technique gained standardization in the 1920s and 1930s as cities adopted angled and parallel parking regulations to optimize space amid rising car ownership.[101] By the 1940s, it was routinely taught in driving schools, with innovations like fifth-wheel mechanisms briefly explored to simplify the alignment process.[102]
Safety and training in these parallel-oriented activities emphasize biomechanical alignments to prevent injuries. On parallel bars, maintaining straight-arm positions and neutral shoulder alignment during swings reduces strain on joints, with studies showing that proper technique lowers the incidence of upper-extremity injuries, which account for over 50% of gymnastics cases.[103] In parallel skiing, aligning the knees and hips parallel to the skis during turns minimizes torsional forces on the lower limbs, decreasing ACL injury risk through controlled edging and weight distribution.[104] Training programs incorporate video analysis and strength exercises to reinforce these alignments, promoting long-term injury prevention across gymnasts, skiers, and cyclists.[105]
Economics and society
In economics, a parallel economy refers to informal or shadow markets that operate alongside official economic systems, often evading regulations and taxation. These markets emerge in response to shortages, restrictions, or high costs in formal channels, providing alternative avenues for goods and services. During World War II, black markets proliferated in rationed economies, such as in the United States where price controls and resource allocation led to underground trading of commodities like meat and gasoline, undermining official efforts to stabilize supply.[106][107] In postwar contexts, such as Berlin from 1945 to 1948, black markets filled voids left by currency instability and reconstruction delays, trading in cigarettes and food until the introduction of the Deutsche Mark in 1948 spurred formal recovery.[108]
Parallel currencies involve dual monetary systems where unofficial or offshore currencies coexist with national ones, often influencing exchange rates and monetary policy. The Eurodollar market, originating in the mid-1950s in London, exemplifies this by allowing dollar-denominated deposits outside U.S. regulation, driven by Cold War capital flight from Eastern Europe and U.S. interest rate ceilings.[109] This market expanded rapidly, reaching billions in deposits by the 1960s, and exerted downward pressure on official exchange rates by facilitating arbitrage and bypassing capital controls, contributing to global liquidity but challenging central bank oversight.[110][111] Historically, parallel currencies have distorted official rates, as seen in post-colonial economies where multiple exchange tiers created premiums for hard currencies, exacerbating inflation and trade imbalances.[112]
In sociology, parallel societies describe self-contained communities within larger societies, often formed by immigrant groups, that maintain distinct cultural, economic, and social practices. Post-2000 immigration studies in the European Union highlight this phenomenon, where rapid influxes from non-EU countries led to enclaves in urban areas like those in Germany and France, characterized by limited intergroup interaction and reliance on ethnic networks for employment and services.[113][114] Comparative analyses show these societies arise from housing segregation and labor market exclusion, fostering parallel institutions such as informal remittance systems, though they can integrate over generations through education and policy interventions.[115][116]
Since 2017, cryptocurrencies and decentralized finance (DeFi) have emerged as parallels to traditional finance, offering borderless alternatives to banking and lending without intermediaries. DeFi protocols, built on blockchains like Ethereum, replicate services such as loans and derivatives, with total value locked exceeding $100 billion by 2021, enhancing financial inclusion in underbanked regions but introducing risks like smart contract vulnerabilities.[117][118] This parallel system challenges conventional monetary control by enabling peer-to-peer transactions that bypass central banks, potentially amplifying volatility in global markets through interconnections with legacy assets.[119]
Regulating parallel economies poses significant policy challenges, as shadow activities erode tax bases and complicate monetary stability. Strict regulations can inadvertently expand underground markets by increasing compliance costs, prompting evasion in both informal sectors and shadow banking.[120][121] In DeFi contexts, the pseudonymous nature of transactions hinders enforcement, leading to calls for international standards to mitigate systemic risks without stifling innovation.[118][122] Effective policies require balancing deterrence with incentives for formalization, such as simplified taxation, to integrate parallel structures into broader economic frameworks.[123]