
Procedural programming

Procedural programming is a paradigm within the broader imperative style that organizes code into procedures or subroutines: sequences of instructions executed step by step to manipulate data and perform computations by modifying state. This approach views a program as a sequence of procedure calls, where data is passed between procedures as arguments or return values, focusing on the "how" of problem-solving through explicit state changes and sequential operations. Originating in the mid-20th century, it draws from the von Neumann architecture, where instructions and data share memory, enabling direct state changes via assignments and loops.

The paradigm emerged prominently with early high-level languages like Fortran in 1957 and COBOL in 1959, which replaced assembly language with more readable, procedure-based instructions for scientific and business computing, respectively. By the 1970s, structured procedural programming advanced the model through block-structured languages such as Pascal (1970), introducing features like local variables, conditional statements, and loops to enhance readability and maintainability while avoiding unstructured jumps like the infamous "goto" statement. Key languages exemplifying procedural programming include C, which combines low-level control with high-level abstractions, and BASIC, widely used for its simplicity in early personal computing. These languages prioritize code reusability through callable procedures, making it easier to break complex tasks into manageable, repeatable units.

Core features of procedural programming include modularity via procedures that encapsulate logic, sequential execution with control structures like if-else and while loops, and data separation in which variables are global or passed explicitly, allowing efficient memory use but requiring careful management to avoid side effects. Advantages encompass its straightforward mapping to hardware operations, resulting in high performance for algorithmic tasks, and ease of debugging through linear code flow.
However, for large-scale software, it can lead to challenges like code duplication and maintenance difficulties without additional discipline, prompting evolutions such as object-oriented paradigms that bundle data with procedures. Despite these limitations, procedural programming remains foundational in domains such as systems and embedded software, and it continues to shape multi-paradigm languages such as Python and Java.

History and Origins

Early Development

Procedural programming emerged as a paradigm that emphasizes the organization of code into step-by-step instructions executed sequentially, using procedures or subroutines to promote modularity and reusability, building upon the imperative roots of early machine programming by focusing on explicit control of program flow and data manipulation. This approach addressed post-World War II computing challenges, where the complexity of scientific and engineering calculations on early computers demanded more structured methods than raw machine code, influenced heavily by the von Neumann architecture outlined in 1945, which mandated sequential execution models for stored programs and data in a unified memory system. The architecture's design, privately circulated by John von Neumann, facilitated the development of languages that could express algorithmic steps in a human-readable form while aligning with the hardware's linear instruction processing.

Early procedural elements appeared in Konrad Zuse's Plankalkül, conceived in the 1940s with initial work from 1942 to 1945, which introduced concepts like conditional statements, loops, and subroutine-like structures for engineering computations, though it remained unpublished in comprehensive form until 1972 due to wartime disruptions and Zuse's focus on hardware construction. Similarly, Short Code, proposed by John Mauchly in 1949 as one of the first high-level languages for electronic computers like the BINAC, incorporated rudimentary procedural features such as arithmetic operations and conditional transfers, interpreted line-by-line to simplify mathematical programming over machine code. However, these were precursors; true procedural programming crystallized with Fortran in 1957, led by John Backus at IBM, which formalized subroutines for modular code reuse and DO loops as key control structures for iterating over scientific computations, enabling efficient translation of formulas into executable sequences.
The paradigm gained further traction with ALGOL 58, developed in 1958 through an international effort to standardize algorithmic notation; its successor, ALGOL 60, introduced block structures for encapsulating code segments with local variables and supported recursive procedures, thereby enabling nested scopes and more sophisticated abstraction in program design. This marked a pivotal shift toward procedural languages that balanced expressiveness with the sequential imperatives of von Neumann machines, laying groundwork for broader adoption in computational tasks.

Key Milestones

The development of ALGOL 60, formalized in the 1960 report by the ALGOL 60 committee, marked a pivotal advancement as the first block-structured language, incorporating call-by-value and call-by-name parameter passing along with lexical scoping for variable declarations and block structure. These features enabled more modular and readable code compared to prior languages, laying foundational principles for procedural programming by emphasizing hierarchical organization and precise control over data and execution flow.

ALGOL 60's innovations profoundly influenced later languages, serving as a direct precursor to Pascal, developed by Niklaus Wirth in 1970 specifically for educational purposes to instill modularity and disciplined programming practices. Pascal further refined procedural paradigms by enforcing strong typing to prevent type-related errors and deliberately limiting the use of goto statements, promoting structured control flows like for and while loops to eliminate unstructured "spaghetti code" and foster verifiable, maintainable programs. This approach aligned with the emerging structured programming movement, making Pascal a cornerstone for procedural design and readability in the 1970s. Meanwhile, C, created by Dennis Ritchie at Bell Labs in 1972, extended procedural capabilities into systems programming by introducing pointers for explicit memory management and low-level hardware access, allowing efficient bridging between abstract procedures and machine-level operations. External pressures, such as the 1970s oil crisis, accelerated the demand for efficient procedural code in embedded systems, particularly in automotive engine controls, where microprocessors enabled fuel-optimized algorithms to address energy shortages and rising costs. By the late 1980s, standardization efforts solidified these advancements with the ANSI C standard (X3.159-1989), ratified in 1989, which precisely defined procedural elements including functions for reusable code blocks, structs for composite data types, and overall syntax to promote portability, reliability, and consistent implementation across diverse platforms.

Core Concepts

Procedures and Functions

In procedural programming, a procedure is a named sequence of statements that performs a specific task and can be invoked multiple times from different parts of the program, thereby avoiding the repetition of inline code and promoting reusability. This concept was formalized in early languages like ALGOL 60, where a procedure declaration specifies an identifier, optional formal parameters, and a block of declarations and statements executed upon invocation. By encapsulating logic into such units, programmers can decompose complex problems into manageable subtasks, enhancing code maintainability without altering the sequential execution model.

Functions extend procedures by returning a value to the caller after execution, typically through a return statement, and they often include parameters to receive input arguments. In languages such as Pascal and C, functions are distinguished from void procedures by their specified return type, allowing their use in expressions, while local variables within the function provide data isolation to prevent unintended interference with external state. Parameters enable flexible input, with formal parameters acting as placeholders matched to actual arguments at call time, supporting parameterized reuse in procedural designs. The invocation of procedures and functions relies on a call stack mechanism, where each call pushes an activation record onto the stack to manage execution context. An activation record typically includes storage for parameters, local variables, the return address to resume the caller, and sometimes dynamic links for nested scopes, ensuring proper unwinding upon return. This stack-based approach handles nested calls efficiently, allocating and deallocating resources dynamically to support the program's call hierarchy. Recursion allows a procedure or function to invoke itself, enabling elegant solutions to problems with repetitive substructure, such as computing the factorial of a number.
For instance, a recursive factorial function defines the base case where factorial(0) or factorial(1) returns 1, and the recursive case as n * factorial(n-1), terminating via the base case to prevent infinite loops. Each recursive call adds an activation record to the call stack, with returns propagating values upward until resolution. Parameter passing in procedures can occur by value, where copies of arguments are made to avoid modifying originals, or by reference, where addresses are passed to allow direct alteration of caller data. In ALGOL 60 and Pascal, call-by-value copies scalar values into the activation record, while call-by-name in ALGOL or var parameters in Pascal enable reference-like behavior for efficiency with large data. C defaults to pass-by-value but simulates reference via pointers. The following pseudocode illustrates a procedure to sum an array, first by value (copying the array) and then by reference (using a pointer to the original): By Value (Array Copied):
procedure sumArrayByValue(arr: array of integer, size: integer) returns integer
    local sum: integer = 0
    for i from 0 to size-1 do
        sum := sum + arr[i]
    end for
    return sum
end procedure
This approach isolates the procedure but incurs copying overhead for large arrays. By Reference (Using Pointer):
procedure sumArrayByRef(arrPtr: pointer to array of integer, size: integer) returns integer
    local sum: integer = 0
    for i from 0 to size-1 do
        sum := sum + arrPtr^[i]  // Dereference pointer
    end for
    return sum
end procedure
Here, modifications via the pointer affect the original array if needed, optimizing for shared data. These procedures integrate with control structures like loops to sequence operations within the invoked block.

Control Flow and Sequencing

In procedural programming, control flow determines the order in which statements are executed within a program, enabling the implementation of algorithms through a series of imperative instructions. The default mode of execution is sequential, where statements are processed from top to bottom and, within the same line, from left to right, assuming no intervening control structures alter the path. This linear progression forms the foundation of imperative computation, allowing programmers to express step-by-step operations directly mirroring the intended logic of the task.

Conditional structures introduce branching based on boolean conditions, permitting alternative execution paths to handle decision-making. The canonical if-then-else construct evaluates a condition and executes one block of statements if true, optionally followed by an else block if false, thereby implementing selection as one of the three primitive control mechanisms identified in the structured program theorem. This mechanism ensures that programs can adapt to runtime data without unstructured jumps, promoting readability and maintainability. For example, in pseudocode resembling languages like C or Pascal:
if (x > 0) then
    y = x * 2;
else
    y = x * -1;
end if;
Such structures underpin the alternation primitive, which, combined with sequencing, allows the elimination of arbitrary jumps in favor of hierarchical control flow. Iterative structures facilitate repetition by executing a block of statements multiple times until a termination condition is met, addressing the need for loops in algorithmic tasks. Common forms include the while loop, which tests a condition before each iteration; the for loop, which initializes, tests, and increments a counter in a single construct; and the do-while loop, which executes the body at least once before testing. These realize the iteration primitive of structured programming, where loop invariants—logical assertions that remain true before and after each iteration—provide a basis for proving termination and correctness. For instance, in a loop summing numbers up to n, the invariant might state that the partial sum equals the sum of the first k integers for some k ≤ n, ensuring the loop halts when k exceeds n. Hoare's axiomatic framework formalizes this with preconditions, postconditions, and invariants to verify that loops terminate and achieve their intended effect without infinite execution.

Unconditional jumps via goto statements allow direct transfer to a labeled point in the code, offering flexibility but often leading to "spaghetti code" with tangled control paths. Although present in early procedural languages like Fortran and BASIC, goto was sharply criticized for undermining program structure and debuggability. In his influential 1968 letter, Edsger Dijkstra argued that goto fosters undisciplined branching, making it difficult to reason about program flow, and advocated instead for structured alternatives like conditionals and loops. This critique catalyzed the widespread adoption of structured programming principles, rendering goto deprecated in modern procedural languages. Transfer statements provide controlled deviations within structured flow, enabling early exits or modifications without full restructuring.
The return statement terminates a procedure prematurely and passes control (and optionally a value) back to the caller, essential for function-like procedures. Within loops, break exits the innermost loop immediately upon condition satisfaction, while continue skips the remainder of the current iteration and proceeds to the next, both preserving overall structure while allowing concise handling of special cases like sentinel values or error conditions. These constructs extend the basic primitives without introducing the hazards of goto, as endorsed in structured programming methodologies.

Programming Techniques

Modularity and Decomposition

Modularity in procedural programming is a fundamental principle that involves dividing a complex program into smaller, independent units known as modules or procedures, thereby enhancing overall maintainability, reusability, and ease of testing. This approach allows developers to focus on specific functionalities without affecting the entire system, reducing the risk of unintended side effects during modifications. By encapsulating related operations within discrete units, modularity promotes a structured development process that aligns with the sequential nature of procedural languages like C and Pascal. One primary technique for achieving modularity is top-down decomposition, which starts with a high-level specification of the main program and iteratively refines it into a hierarchy of subordinate procedures. This method, often called stepwise refinement, was formalized by Niklaus Wirth in his 1971 paper, where he demonstrated how to progressively detail abstract steps into concrete, implementable code while preserving program correctness at each level. For instance, a sorting algorithm might first be outlined as a high-level procedure calling sub-procedures for partitioning and recursion, gradually expanding each until fully coded. This hierarchical breakdown not only clarifies the program's structure but also enables early identification and isolation of design flaws. In contrast, the bottom-up approach to modularity builds programs by first developing and testing individual procedures or libraries of reusable functions, then integrating them to form the complete application. This technique is particularly useful in procedural environments where common utilities, such as string manipulation or mathematical operations, can be codified into libraries for repeated use across projects, fostering efficiency in large-scale software development. 
By prioritizing the creation of robust, self-contained components, bottom-up design supports incremental assembly and verification, as seen in early systems programming where foundational routines were assembled into higher-level applications.

A critical aspect of modularity is information hiding, which conceals the internal implementation details of a module from external modules, exposing only the necessary interface through parameters and return values. Introduced by David Parnas in his seminal work on system decomposition, this principle minimizes coupling between modules by restricting access to sensitive data structures and algorithms, thereby allowing internal changes without impacting dependent code. In procedural programming, information hiding is typically enforced through procedure definitions that abstract away low-level operations, such as hiding array manipulations within a search function that only requires input criteria. This abstraction layer not only simplifies comprehension but also bolsters system flexibility in evolving requirements.

The practical benefits of modularity and decomposition in procedural programming include a measurable reduction in overall system complexity, particularly through metrics like cyclomatic complexity, which quantifies the number of independent execution paths within a module. By limiting interdependence—such as through controlled procedure calls—modular designs typically yield lower cyclomatic values per unit (ideally under 10), correlating with fewer defects and simpler testing suites, as established in Thomas McCabe's 1976 analysis of program control flow. Scoping rules further reinforce these boundaries by localizing variables to specific procedures, preventing unauthorized access.
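In C, information hiding of this kind is conventionally approximated with the static storage-class specifier, which restricts a helper to its translation unit. The sketch below is illustrative; the module and the functions scan and contains are hypothetical examples, not drawn from a real codebase:

```c
/* A hypothetical search module illustrating information hiding: the
   linear-scan helper is 'static' (file-private), so other translation
   units can call only the public interface. */
#include <stddef.h>

/* Internal detail: invisible to other .c files. */
static size_t scan(const int *a, size_t n, int key) {
    for (size_t i = 0; i < n; i++)
        if (a[i] == key)
            return i;
    return n;  /* sentinel: not found */
}

/* Public interface: callers supply only the search criteria and never
   see how the lookup is implemented, so the algorithm can change
   without affecting dependent code. */
int contains(const int *a, size_t n, int key) {
    return scan(a, n, key) < n;
}
```

Replacing the linear scan with, say, a binary search would alter only this file; every caller of contains would be unaffected, which is precisely Parnas's criterion for a good module boundary.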

Scoping and Data Management

In procedural programming, scoping rules determine the visibility and accessibility of variables and other identifiers within a program, ensuring that data is managed predictably across different parts of the code structure. Lexical (or static) scoping, a cornerstone of most procedural languages, resolves references based on the textual structure of the source code rather than the order of execution at runtime. This approach was pioneered in ALGOL 60, where block structures—delimited by begin and end keywords—create nested scopes that limit visibility to the enclosing block, preventing unintended interactions between distant code segments. For instance, a variable declared within a block is accessible only from that block and any nested inner blocks, promoting data isolation and reducing errors from name clashes.

Procedural languages typically define multiple scope levels to organize data hierarchically: global scope for variables accessible throughout the entire program, local scope for variables confined to a specific procedure or function, and block-level scope for variables declared within statements like loops or conditionals. The lifetime of these variables is closely tied to their scope; a variable is created upon entry into its scope (allocation) and destroyed upon exit (deallocation), often managed via a stack-based environment. This mechanism supports encapsulation by allowing procedures to maintain private data without namespace pollution, a key motivation for structured programming practices. In languages like C, for example, a local variable in a function exists only during the function's execution, while block variables within if statements follow the same entry-exit lifecycle.

An alternative to lexical scoping is dynamic scoping, where variable resolution occurs at runtime by searching the call stack for the most recent binding of the identifier, rather than the static layout. This method, though less common in modern procedural languages due to its potential for unpredictable behavior, was employed in early variants of Lisp, such as Lisp 1.5, where function calls could access variables from the calling context dynamically.
Dynamic scoping simplifies certain interactive or interpretive environments but can lead to bugs when code is refactored, as variable meanings change based on execution paths. A notable feature enabled by lexical scoping in procedural languages is the handling of free variables in nested procedures, where an inner procedure can reference variables from its enclosing outer scope without explicit passing. This is exemplified in Pascal, which supports nested procedure declarations; an inner procedure treats outer local variables as free variables, accessing them read-only or modifiably depending on the language rules, effectively creating a form of closure-like behavior. For example, in Pascal code:
procedure Outer;
var x: integer;
procedure Inner;
begin
  x := x + 1;  // Accesses free variable x from Outer
end;
begin
  x := 0;
  Inner;
end;
Here, Inner resolves x to the outer scope lexically, maintaining consistent bindings within the nested structure. Such mechanisms enhance modularity and expressiveness but require careful management to avoid dangling references if outer scopes end prematurely. Data management in procedural programming extends scoping through parameter-passing conventions, primarily by value or by reference, which dictate how arguments are shared between procedures. Pass-by-value creates a copy of the argument's value for the procedure, ensuring the original remains unchanged and isolating side effects; this is the default in C, where scalar parameters like integers are duplicated on the stack. In contrast, pass-by-reference passes the address of the argument (via pointers in C or var parameters in Pascal), allowing the procedure to modify the caller's data directly, which is efficient for large structures but introduces risks of unintended modifications if aliases are not handled carefully. For instance, modifying a referenced array in a subroutine can alter the original data unexpectedly, potentially leading to bugs in multi-procedure interactions; programmers mitigate this by using const qualifiers or documentation to signal intent.

Language Implementations

Classical Languages

Fortran, one of the earliest high-level programming languages, developed in the 1950s, exemplifies procedural programming through its use of subroutines and functions to encapsulate reusable code blocks. A subroutine, declared with the SUBROUTINE keyword, performs operations without returning a value, while a function, declared with FUNCTION, computes and returns a single value to the caller. These constructs allow for modular code organization, where the main program calls subroutines or functions to handle specific tasks, promoting reuse and maintainability. Additionally, COMMON blocks provide a mechanism for sharing global data across subroutines and the main program by declaring named storage areas that multiple units can access, facilitating data persistence without formal parameters. The following code snippet illustrates a subroutine implementing the bubble sort algorithm to sort an array of integers in ascending order:
SUBROUTINE BUBBLESORT(N, A)
  INTEGER N, A(N), TEMP, I, J
  DO 10 I = 1, N-1
    DO 20 J = 1, N-I
      IF (A(J) > A(J+1)) THEN
        TEMP = A(J)
        A(J) = A(J+1)
        A(J+1) = TEMP
      END IF
  20 CONTINUE
  10 CONTINUE
END SUBROUTINE BUBBLESORT
This subroutine takes the array size N and the array A as parameters, repeatedly comparing and swapping adjacent elements until the array is sorted. ALGOL 60, developed in 1960, was a foundational procedural language that introduced block structure, local variables, and call-by-name/call-by-value parameters, influencing many subsequent languages. Procedures in ALGOL are defined with the 'procedure' keyword and can include declarations within begin-end blocks, enabling structured expression of algorithms. BASIC, introduced in 1964 by John Kemeny and Thomas Kurtz, popularized procedural programming for beginners through simple subroutines via GOSUB and RETURN statements, though early versions relied on line numbers and GOTO for control flow. Later dialects like structured BASIC added named procedures for better modularity.

Pascal, introduced in 1970 by Niklaus Wirth, emphasizes structured programming with procedures and functions that support clear decomposition and data abstraction. A procedure, defined using the PROCEDURE keyword, executes a sequence of statements without returning a value, while a function, defined with FUNCTION, returns a value of a specified type. Parameters can be passed by value (the default, creating local copies) or by reference using the VAR keyword, allowing modifications to the original data in the calling scope, which is essential for efficient data management in procedural designs. Pascal also supports nested procedures, enabling inner procedures to access variables from the enclosing scope, enhancing encapsulation within larger programs. The following Pascal program demonstrates nested procedures for computing the nth Fibonacci number recursively:
program Fibonacci;
var
  n, result: integer;

procedure ComputeFib(m: integer; var res: integer);

  procedure Fib(k: integer; var f: integer);
  var
    temp, prev: integer;  { locals: each recursive call gets private copies }
  begin
    if k <= 1 then
      f := k
    else begin
      Fib(k - 1, temp);
      Fib(k - 2, prev);
      f := temp + prev;
    end;
  end;

begin
  Fib(m, res);
end;

begin
  n := 10;
  ComputeFib(n, result);
  writeln('Fibonacci of ', n, ' is ', result);
end.
Here, variables are declared at the procedure level before 'begin'. The nested Fib procedure declares temp and prev as its own locals so that each recursive activation receives private copies on the stack; had they been declared in the enclosing ComputeFib—where Fib could still reach them as free variables—every recursive call would overwrite the same shared storage and corrupt intermediate results. The VAR parameter f returns each result to the caller by reference. C, standardized in 1989 but rooted in the early-1970s B language, implements procedural programming via functions that form the building blocks of programs, with the main function serving as the entry point. Function prototypes, declarations specifying return types and parameters, ensure type checking and enable forward references, typically placed at the file's top or in separate files. Local variables declared as static retain their values across function calls, providing persistent private state without global visibility, useful for maintaining state in utility functions. Because C lacks built-in modules, developers use header files (with the .h extension) to declare function prototypes and external variables, which are included via #include directives to share interfaces across multiple source files, facilitating large-scale procedural development.

Contemporary Applications

Procedural programming remains integral to systems programming, particularly in performance-critical environments where low-level control is essential. The Linux kernel, primarily implemented in C, exemplifies this through its reliance on modular procedures and functions to manage hardware interactions, memory allocation, and process scheduling, enabling efficient execution on diverse architectures. This procedural structure allows developers to optimize for speed and resource usage in real-time operating system tasks, such as interrupt handling and device drivers.

In embedded systems, C++ used in a procedural style continues to dominate for real-time control applications on microcontrollers, where predictability and minimal overhead are paramount. For instance, Arduino sketches leverage procedural constructs—like sequential function calls and loops—in a C++ environment to implement sensor data processing, motor control, and timing-critical operations in resource-constrained devices. This approach ensures deterministic behavior in applications ranging from robotics to IoT sensors, building on classical C foundations for direct hardware manipulation. Modern languages often incorporate procedural cores within hybrid paradigms to balance simplicity and advanced features. Python's def keyword defines reusable procedures that form the backbone of scripting tasks, allowing sequential execution of code blocks for data manipulation and automation, even as object-oriented and functional elements coexist. Similarly, Go emphasizes functions as first-class procedural units, enhanced by goroutines for lightweight concurrency, enabling efficient handling of networked services and parallel computations without the complexity of traditional threads. In the 2020s, procedural pipelines have gained traction in scientific scripting, particularly with Julia's design for high-performance numerical computing.
Julia's pipeline operator (|>) facilitates chaining of procedural functions to build workflows, such as preprocessing datasets and training models in machine learning pipelines, offering speed advantages over interpreted languages like Python for compute-intensive tasks. This trend supports reproducible, high-throughput workflows in scientific applications, from simulations to optimization algorithms. Legacy systems highlight procedural programming's enduring presence, with estimates of 220–800 billion lines of COBOL code still in use as of 2025, powering banking transactions worldwide. This code handles daily payments and account management through structured procedures that ensure reliability in high-volume operations. Refactoring these systems poses significant challenges, including talent shortages for maintenance and integration difficulties with modern APIs, often leading to incremental modernization rather than full rewrites.

Strengths and Challenges

Advantages

Procedural programming excels in simplicity due to its linear structure, which closely mirrors sequential human reasoning, rendering the code straightforward and intuitive to comprehend. This approach organizes instructions in a clear, top-down sequence, making it an ideal starting point for beginners, who can grasp fundamental concepts without the complexity of additional abstractions. The paradigm's efficiency stems from its imperative nature, enabling direct translation of code into machine instructions that closely align with hardware operations, thereby minimizing overhead compared to paradigms requiring interpreters or virtual machines. This low-level mapping supports high performance in resource-constrained environments, as the compiled code executes with minimal abstraction layers.

Reusability is a core strength, achieved through procedures that serve as modular building blocks, allowing developers to encapsulate logic and invoke it across multiple contexts, thereby adhering to the DRY (Don't Repeat Yourself) principle and reducing code duplication. This modularity, as discussed under programming techniques, fosters maintainable designs by promoting the reuse of well-defined units. A notable example of its scalability is the Unix operating system, developed in the 1970s using procedural C, where modular design enabled the codebase to expand from approximately 6,000 lines in 1973 to over 1 million lines in later decades while maintaining system integrity and performance. Additionally, the step-by-step execution model facilitates debugging, as developers can trace execution linearly with breakpoints, isolating issues efficiently without navigating complex interdependencies.

Limitations

One key limitation of procedural programming arises from its reliance on global state management, which often results in tight coupling between modules through shared variables. This approach can introduce unintended side effects, as modifications to a global variable in one procedure may unpredictably affect others, complicating debugging and increasing the risk of errors during maintenance. For instance, in languages like C, where global variables are commonly used, this implicit dependency hides interactions that are not evident from procedure signatures alone, leading to fragile code structures that are difficult to evolve.

In large-scale systems developed by multiple teams, procedural programming exacerbates coordination problems by allowing code to grow into monolithic structures without enforced encapsulation. Without built-in mechanisms to partition responsibilities, procedures can become interdependent, making it challenging for teams to work independently and coordinate changes effectively, which often results in conflicts and prolonged development cycles. This lack of inherent module boundaries contrasts with paradigms that impose clearer interfaces, contributing to reduced scalability as project size exceeds thousands of lines of code. Procedural programming's emphasis on mutable state also heightens security risks, particularly in low-level languages where direct memory manipulation is permitted. For example, in C, the absence of bounds checking on arrays can lead to buffer overflows, where excessive data writing corrupts adjacent memory and enables exploits like arbitrary code execution, compromising system integrity. Such vulnerabilities stem from the paradigm's focus on sequential data handling without safeguards against unchecked mutations, making it prone to runtime errors that attackers can leverage.
Studies from the late 1970s, such as analyses in Edward Yourdon and Larry Constantine's Structured Design (1979), highlighted higher error rates in unstructured programs compared to structured designs, particularly for systems exceeding 100,000 lines of code, attributing this to increased costs and complexity due to poor cohesion and excessive coupling. These findings underscored how procedural designs, without disciplined decomposition, amplify faults in large projects due to pervasive coupling and state dependencies. Additionally, the sequential nature of procedural programming poses significant challenges for parallelism, as its linear control flow and shared mutable state complicate multi-threading without language extensions. Implementing concurrent execution requires manual synchronization to avoid race conditions, which can introduce deadlocks or nondeterminism, demanding extensive refactoring that undermines the paradigm's simplicity in single-threaded contexts.
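The manual-synchronization burden can be sketched in Python (the counter and lock names are illustrative): every procedure that writes the shared counter must remember to take the same lock, a convention the paradigm itself does not enforce.

```python
import threading

counter = 0                      # shared mutable state
counter_lock = threading.Lock()  # manual synchronization, by convention only

def increment(times: int) -> None:
    global counter
    for _ in range(times):
        with counter_lock:       # forgetting this lock reintroduces a race
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 -- deterministic only because every writer locked
```

Nothing stops a fifth procedure from updating `counter` without the lock; the correctness of the whole program rests on programmer discipline rather than on the language.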

Comparisons to Other Paradigms

With Imperative Programming

Imperative programming represents a broad paradigm in which programs are constructed as sequences of commands that explicitly modify the state of a computational system, typically through operations that alter memory or variables. This approach contrasts with declarative paradigms by focusing on how to achieve a result via step-by-step instructions, often drawing from the von Neumann architecture, in which data and instructions share mutable storage. Procedural programming emerges as a structured subset of this imperative framework, introducing organized procedures or subroutines to manage complexity while retaining the core imperative mechanism of state mutation. Both imperative and procedural programming share fundamental traits, including the use of mutable state, assignment statements to update variables, and the allowance for side effects where operations can alter global or shared data beyond their immediate scope. For instance, in languages supporting either style, a variable might be assigned a value early in execution and later reassigned, enabling sequential processing of data flows. These elements align closely with hardware-level operations, as seen in low-level imperative code like assembly language, where direct memory manipulation predominates without higher-level abstractions. The key distinction lies in procedural programming's emphasis on modularity through subroutines—self-contained blocks of code that encapsulate related operations—thereby reducing reliance on unstructured control flows such as the unrestricted goto statements common in early imperative languages. This structuring promotes clearer program organization, making maintenance and debugging more feasible by limiting arbitrary jumps that can obscure logical flow. A foundational theoretical basis for this shift is the Böhm–Jacopini theorem, which demonstrates that any computable function can be realized using only three control structures: sequences of commands, selections (e.g., if-else), and iterations (e.g., while loops), without needing goto for equivalence.
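The three structures named by the Böhm–Jacopini theorem can be shown in a few lines of Python (a toy digit-sum classifier, chosen purely for illustration): sequence, iteration, and selection together express the control flow with no jumps at all.

```python
def classify_digit_sum(n: int) -> str:
    total = 0
    n = abs(n)            # sequence: statements run in order
    while n > 0:          # iteration: a loop replaces any backward jump
        total += n % 10
        n //= 10
    if total % 2 == 0:    # selection: if-else replaces a conditional jump
        return "even"
    else:
        return "odd"

print(classify_digit_sum(1234))  # digit sum is 10, so prints "even"
```

Any goto-based version of this routine can be rewritten this way, which is the equivalence the theorem guarantees.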
This transition from unstructured imperative styles, exemplified by early languages like FORTRAN that heavily depended on goto for control, to procedural approaches gained momentum in the 1970s amid the structured programming movement. Influential critiques, such as Edsger Dijkstra's 1968 letter decrying the goto statement's harmful effects on program readability, catalyzed the adoption of subroutine-based organization in subsequent imperative designs. By the mid-1970s, languages like Pascal exemplified this evolution, enforcing structured constructs to replace ad-hoc jumps while preserving imperative state changes.

With Object-Oriented Programming

In procedural programming, data and procedures remain separate, with functions operating on global or local variables passed as parameters, allowing for straightforward manipulation of data structures without inherent bundling. This separation facilitates direct algorithmic implementation but can lead to tighter coupling between modules in larger systems. In contrast, object-oriented programming (OOP) emphasizes encapsulation, where data (as attributes) and procedures (as methods) are bound together within classes or objects, promoting data hiding and modular organization. A pivotal historical contrast emerged in the 1970s, when Smalltalk, developed by Alan Kay and colleagues at Xerox PARC in the late 1960s and early 1970s, pioneered the object-oriented paradigm by treating everything as objects that communicate via messages, marking a shift from procedure-centric models. Meanwhile, the C language, created by Dennis Ritchie at Bell Labs in 1972, exemplified procedural programming through its focus on structured functions and explicit data handling for systems like Unix. This procedural foundation in C directly influenced C++, introduced by Bjarne Stroustrup in 1985 as a hybrid language that extended C with classes and objects while retaining procedural capabilities. The trade-offs between these paradigms are evident in their suitability for different tasks: procedural programming often proves simpler and more efficient for implementing straightforward algorithms, where data flow is linear and performance-critical, as seen in low-level systems code. OOP, however, excels in modeling real-world entities with complex interactions, where encapsulation reduces coupling compared to procedural approaches, enhancing maintainability for large-scale applications. Refactoring procedural code to OOP presents challenges, primarily involving the identification and wrapping of scattered state and functions into classes to achieve encapsulation, which can introduce temporary regressions and require extensive testing to preserve behavior. This often demands converting global variables into object attributes and standalone functions into methods, potentially increasing initial effort before yielding long-term benefits.
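The separation-versus-bundling contrast can be sketched in Python with a hypothetical bank-account example: the procedural version passes a plain dict to free functions, while the OOP version binds the same data and operations together behind methods.

```python
# Procedural style: data and procedures are separate.
def make_account(balance: float) -> dict:
    return {"balance": balance}

def deposit(account: dict, amount: float) -> None:
    account["balance"] += amount  # any code holding the dict may mutate it

# Object-oriented style: the same data and behavior are encapsulated.
class Account:
    def __init__(self, balance: float) -> None:
        self._balance = balance   # hidden behind methods by convention

    def deposit(self, amount: float) -> None:
        self._balance += amount

    @property
    def balance(self) -> float:
        return self._balance

acct = make_account(100.0)
deposit(acct, 50.0)
obj = Account(100.0)
obj.deposit(50.0)
print(acct["balance"], obj.balance)  # 150.0 150.0
```

Refactoring from the first style to the second is exactly the migration described above: the dict's fields become attributes and the free functions become methods.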

With Functional Programming

Procedural programming embodies an imperative approach, specifying how computations are executed through sequential statements that often involve side effects, such as modifying global variables or performing I/O operations, and rely on mutable state to track changes during execution. This paradigm treats functions as procedures that can alter external state, enabling direct control over the program's behavior but complicating reasoning about correctness due to unpredictable interactions. In contrast, functional programming adopts a declarative style, expressing computations as the evaluation of mathematical-like expressions that describe what result is desired, without prescribing the steps. It centers on pure functions—those whose output depends solely on inputs and which exhibit no side effects—while enforcing immutability of data to ensure predictability and composability. Recursion serves as the primary mechanism for repetition in functional code, replacing loops to maintain purity by avoiding mutable counters or accumulators. The core divergence in state handling underscores these paradigms: procedural code uses assignment to rebind variables to new values, as in x = x + 1, which mutates the existing state and can lead to cascading effects across the program. Functional programming, however, employs binding to associate immutable values with names at definition time, preventing reassignment and favoring function composition to build complex behaviors from simpler, verifiable units. Lisp, pioneered by John McCarthy in 1958, marked an early milestone in functional programming by introducing list processing and recursion as foundational elements, diverging from the procedural emphasis on mutable structures in languages like C, developed by Dennis Ritchie in 1972 for systems programming with explicit state management. Haskell, formalized in its 1990 report, pushed functional purity to its limits by mandating immutability and lazy evaluation, eliminating side effects entirely to achieve referential transparency. Illustrative of these differences in practice, procedural data processing often uses loops to traverse and modify collections iteratively, such as incrementing elements in an array via a loop that updates mutable memory. Functional equivalents apply higher-order functions like map to transform each element immutably into a new collection and reduce to aggregate results without altering originals, promoting declarative pipelines over imperative control. Hybrid applications blend these styles for efficiency; for instance, a procedural loop might handle low-level state updates in performance-sensitive sections, while functional map/reduce patterns manage higher-level data flows to leverage immutability for safer parallelism.
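The loop-versus-pipeline contrast reads naturally in Python (illustrative data): the procedural version mutates an accumulator and a list in place, while the functional version builds new values with `map` and `functools.reduce` and never alters the original collection.

```python
from functools import reduce

data = [1, 2, 3, 4]

# Procedural: mutate an accumulator and a list in place.
squares_proc = []
total_proc = 0
for x in data:
    squares_proc.append(x * x)
    total_proc += x * x

# Functional: describe the transformation; originals are never altered.
squares_fn = list(map(lambda x: x * x, data))
total_fn = reduce(lambda acc, s: acc + s, squares_fn, 0)

print(squares_proc, total_proc)  # [1, 4, 9, 16] 30
print(squares_fn, total_fn)      # [1, 4, 9, 16] 30
```

Both compute the same result; the functional form simply leaves `data` untouched, which is what makes such pipelines safe to run in parallel.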

With Logic Programming

Procedural programming requires the programmer to explicitly define the sequence of steps and operations used to compute results, making it well-suited for deterministic algorithms where outcomes follow a predictable path. In this paradigm, operations are executed in a prescribed order, often using constructs like loops and conditionals to manage execution. For instance, solving a problem involves detailing how data is transformed step by step, ensuring identical results for the same input. In contrast, logic programming adopts a declarative approach, where programmers specify facts about the domain and rules governing relationships, leaving the inference of solutions to the underlying engine. This engine employs unification to match query terms with rule heads by finding substitutions that make them identical, and resolution to derive conclusions by refuting negated queries through clause combination. A seminal example is Prolog, developed by Alain Colmerauer in 1972 at the University of Aix-Marseille as a tool for natural-language processing, which automates backtracking to explore alternative derivations when a path fails, without requiring explicit failure-handling code from the programmer. Unlike procedural languages such as Pascal—introduced in 1970 by Niklaus Wirth for structured, deterministic computation—Prolog avoids manual implementation of search mechanisms, simplifying the expression of relational queries. This distinction highlights key contrasts in applicability: procedural programming excels in scenarios demanding precise, linear control for reliable, step-wise processing, whereas logic programming is ideal for non-deterministic problems involving search and backtracking, such as planning tasks where multiple potential solutions must be evaluated. In procedural approaches, expressing such non-determinism requires added complexity, like nested loops or recursive routines to simulate exploration and reversion, which can obscure the core logic and increase error proneness. Logic systems, by delegating this to the inference engine, better support search-intensive domains such as planning and constraint solving.
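What Prolog's engine does automatically must be coded by hand in a procedural language. The Python sketch below (a toy subset-sum search, chosen only to illustrate the pattern) makes the explore-and-revert machinery explicit: each choice is tried, and undone when the derivation path fails.

```python
def subset_sum(nums, target, chosen=None):
    """Explicit backtracking search: try each option, undo it on failure."""
    if chosen is None:
        chosen = []
    if target == 0:
        return list(chosen)          # a solution was derived
    if not nums:
        return None                  # this derivation path fails
    head, rest = nums[0], nums[1:]
    chosen.append(head)              # explore: include head in the solution
    found = subset_sum(rest, target - head, chosen)
    if found is not None:
        return found
    chosen.pop()                     # revert: backtrack and try without head
    return subset_sum(rest, target, chosen)

print(subset_sum([3, 5, 2, 8], 10))  # [3, 5, 2]
```

In a logic language the programmer would state only the relation between a list, a subset, and its sum; the append/pop bookkeeping above is exactly the search mechanism the inference engine supplies for free.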
