
Stored-program computer

A stored-program computer is a digital computer architecture in which both instructions (the program) and data are stored in the same modifiable memory, enabling the machine to treat programs as data that can be altered during operation, a foundational concept for modern computing. This design, often associated with the von Neumann architecture, contrasts with earlier program-controlled machines where instructions were fixed via wiring or plugs, limiting flexibility. The concept was theoretically outlined by Alan Turing in his 1936 description of the universal Turing machine, but it gained practical prominence through John von Neumann's 1945 report, First Draft of a Report on the EDVAC, which proposed storing programs in electronic memory for the EDVAC project. The first working stored-program electronic digital computer was the Manchester Small-Scale Experimental Machine (SSEM), known as the "Baby," which successfully executed its initial program on June 21, 1948, at the University of Manchester, using Williams-Kilburn tube memory. Subsequent implementations, such as the EDSAC in 1949 at the University of Cambridge, demonstrated practical utility by running useful programs for scientific computation, solidifying the architecture's role in enabling operating systems, compilers, and the evolution of general-purpose computing.

Core Concepts

Definition and Principles

A stored-program computer is a type of machine in which both instructions and data are stored in the same addressable memory unit, allowing the processor to access and manipulate instructions in the same manner as data. This design enables instructions to be treated as modifiable data, facilitating dynamic alteration during execution. The von Neumann architecture serves as a primary embodiment of this principle, emphasizing the unified storage of code and data.

The core operational principles revolve around the separation of functionality from hardware, where the processor executes instructions stored in memory without inherent task-specific wiring. Central to this is the fetch-execute cycle, in which the processor retrieves an instruction from memory using an instruction pointer or program counter, decodes it, and carries out the operation before incrementing the pointer for the next instruction. This cycle allows for reprogramming simply by loading new instructions into memory, eliminating the need for physical reconfiguration to change tasks.

This architecture underpins computational universality, enabling the stored-program computer to simulate any other computing device or perform arbitrary computable functions, provided sufficient memory is available—a property conceptually aligned with Turing completeness. By representing instructions as data, the machine can implement loops, conditionals, and recursive procedures, mirroring the capabilities of a universal Turing machine.

Key components include the central processing unit (CPU), which encompasses the arithmetic logic unit (ALU) for computations, registers for temporary storage, and the instruction pointer; the memory unit, which holds both instructions and data in addressable locations; and the control unit, a component that orchestrates the fetch-execute cycle by generating control signals to coordinate operations across the CPU and memory.
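To make the cycle concrete, the sketch below implements a toy von Neumann-style interpreter in Python. The five-opcode instruction set, the tuple word format, and the run function are hypothetical conveniences, not a model of any historical machine; the point is only that instructions and data sit in one memory and flow through one program counter.

```python
# Hypothetical opcodes for a five-instruction accumulator machine.
LOAD, ADD, STORE, JMP, HALT = range(5)

def run(memory):
    """Fetch-decode-execute until HALT; returns the accumulator."""
    acc = 0   # accumulator register
    pc = 0    # program counter / instruction pointer
    while True:
        op, operand = memory[pc]   # fetch: the instruction is ordinary data
        pc += 1                    # advance to the next word
        if op == LOAD:             # decode and execute
            acc = memory[operand]
        elif op == ADD:
            acc += memory[operand]
        elif op == STORE:
            memory[operand] = acc
        elif op == JMP:
            pc = operand           # a jump simply rewrites the PC
        elif op == HALT:
            return acc

# Words 0-3 hold instructions, words 4-6 hold data -- one shared memory.
# The program computes memory[4] + memory[5] and stores it in memory[6].
memory = [(LOAD, 4), (ADD, 5), (STORE, 6), (HALT, 0), 7, 35, 0]
print(run(memory), memory[6])   # -> 42 42
```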

Distinction from Other Architectures

Fixed-program computers, such as mechanical calculators and early electromechanical devices, operate with instructions that are hardwired into the machine's circuitry or configured via physical means like plugs, switches, or punched tapes. For instance, the Harvard Mark I (1944), an electromechanical calculator developed by Howard Aiken and IBM, relied on punched paper tapes for instruction sequences and plugboards for wiring connections, making it inherently limited to predefined tasks without modifiable program storage. This design restricts flexibility, as altering the machine's behavior requires manual reconfiguration of hardware components, often involving extensive rewiring or replacement of tapes, which is time-consuming and error-prone.

In contrast, stored-program computers treat instructions as data stored in the same modifiable memory, enabling dynamic changes to the program without hardware intervention. This allows for self-modifying code, where programs can alter their own instructions during execution, and facilitates easier debugging and adaptation by simply loading new instruction sets into memory. Fixed-program systems, however, demand physical rewiring or redesign for any functional changes, prohibiting such runtime modifications and tying the machine's capabilities to its initial engineering.

The Harvard architecture exemplifies an alternative design with physically separate memory spaces and pathways for instructions and data, which in its pure form—as seen in the Harvard Mark I—does not support stored programs, since instructions reside in non-modifiable storage like tapes rather than shared, alterable memory. The von Neumann architecture, foundational to stored-program computers, employs a unified memory for both, allowing seamless access but introducing the von Neumann bottleneck, where the shared bus limits concurrent instruction fetching and data operations, potentially constraining performance as processing speeds increase. While Harvard designs avoid this through parallel access paths, they increase hardware complexity and cost compared to the simpler unified approach.

Stored-program architectures offer key advantages in portability and generality, as programs can be distributed and executed as files across compatible machines without hardware alterations, promoting widespread reusability and rapid iteration. This shifts development costs from expensive physical reconfigurations to more efficient software modifications, enabling broader applicability in diverse computational tasks.
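The contrast can be miniaturized in a few lines of Python. In this illustrative sketch, a single writable list stands in for unified von Neumann memory, while an immutable tuple stands in for a pure Harvard instruction store; the opcode tuples are invented placeholders.

```python
# Unified (von Neumann) model: one writable store holds code and data,
# so a program may legally overwrite its own instructions.
unified_memory = [("SUB", 1), 5]   # word 0: an instruction, word 1: data
unified_memory[0] = ("ADD", 1)     # legal: code is just modifiable data

# Pure Harvard model: instructions live in a separate, non-writable
# store (a tuple here), so the equivalent rewrite is rejected outright.
harvard_code = (("SUB", 1),)       # immutable instruction store
harvard_data = [5]                 # separate, writable data store
try:
    harvard_code[0] = ("ADD", 1)
except TypeError as err:
    print("pure Harvard store refuses modification:", err)
```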

Historical Development

Precursors and Theoretical Foundations

The Analytical Engine, conceived by Charles Babbage in the 1830s, represented an early conceptual precursor to programmable computing, though it was never constructed during his lifetime. Babbage's design incorporated punched cards—inspired by the Jacquard loom—for inputting both data and operation sequences, allowing the machine to perform a variety of calculations under user-defined instructions rather than fixed mechanical paths. This separation of program specification from hardware wiring laid foundational ideas for flexibility in computation, even as programs were externally supplied via cards rather than stored internally.

In the early 1940s, Konrad Zuse advanced programmable machines with the Z3, a relay-based computer completed in 1941, which executed instructions read from punched film in a sequential manner. Unlike earlier fixed-wiring calculators, the Z3 featured a program interpretation unit that processed instructions for arithmetic operations, including floating-point calculations, enabling it to solve complex numerical problems. However, its programs were not stored in modifiable memory but fed externally, without support for conditional jumps, limiting it to linear execution and distinguishing it from fully stored-program systems.

Alan Turing's 1936 paper, "On Computable Numbers, with an Application to the Entscheidungsproblem," provided a rigorous theoretical foundation by introducing the universal Turing machine, a hypothetical device capable of simulating any other Turing machine given a description of its rules encoded on an infinite tape. This model demonstrated that a single machine could read and execute arbitrary instructions from its tape—functioning as both memory and program storage—proving the universality of computation and paving the way for machines where software and data shared the same medium. As Turing described, "It is possible to invent a single machine which can be used to compute any computable sequence," with the tape holding the "standard description" of the target machine's behavior.

John von Neumann's 1945 "First Draft of a Report on the EDVAC" formalized the stored-program principle for electronic computers, proposing that instructions and data be encoded in binary and stored interchangeably in high-speed memory, allowing programs to be modified during execution. Drawing on Turing's universality, von Neumann outlined a control organ that fetched and decoded instructions from memory, enabling efficient reprogramming without hardware alterations—a key evolution from tape- or card-based inputs. This architecture emphasized the equivalence of instructions and data, stating that "the logical control [would] be so entirely disconnected from the specific arithmetic processes that, with suitable instructions, any of the arithmetic organs can be used for any type of arithmetic process."
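Turing's universality argument lends itself to a compact demonstration: one fixed interpreter can execute any machine supplied to it as a transition table, making the "program" data on a par with the tape. The Python sketch below is a toy rendering of that idea, with an invented table format and a unary incrementer as the example machine.

```python
def simulate(table, tape, state="start", head=0):
    """Run a Turing machine given as {(state, symbol): (write, move, next)}."""
    tape = dict(enumerate(tape))              # sparse tape; blank cells are "_"
    while state != "halt":
        symbol = tape.get(head, "_")
        write, move, state = table[(state, symbol)]
        tape[head] = write                     # the rules are just data
        head += {"R": 1, "L": -1}[move]
    return "".join(tape[i] for i in sorted(tape))

# Example machine: a unary incrementer that scans right over 1s and
# appends a 1 at the first blank cell.
inc = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(simulate(inc, "111"))   # -> 1111
```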

First Practical Implementations

The Manchester Baby, officially known as the Small-Scale Experimental Machine (SSEM), was the world's first electronic stored-program computer to successfully execute a program from its electronic memory. Built by Frederic C. Williams, Tom Kilburn, and Geoff Tootill at the University of Manchester, it ran its inaugural program on June 21, 1948, which involved finding the highest proper factor of the number 2^18 (262,144) by trial division, testing every integer downwards from 2^18 - 1 until a factor was found. This demonstration proved the viability of storing both data and instructions in the same modifiable electronic memory, a key departure from earlier machines like ENIAC that required physical reconfiguration for reprogramming.

The Baby's design centered on a single Williams-Kilburn tube for main memory, capable of storing 1,024 bits organized as 32 words of 32 bits each. Instructions were 32-bit words, featuring a simple format with 3 bits for the function code and 13 bits for addressing, enabling basic operations like subtraction, storing to memory, and conditional jumps across its limited 32-word store. Though not intended for practical computation, the machine's success validated the Williams-Kilburn tube's reliability for storage, paving the way for larger systems.

Following the Baby, the Electronic Delay Storage Automatic Calculator (EDSAC) emerged as the first practical general-purpose stored-program computer, operational at the University of Cambridge under Maurice Wilkes. Completed in 1949 and running its first program on May 6 of that year, EDSAC performed scientific calculations such as generating tables of squares and primes, marking the transition from experimental prototypes to usable tools for research. Its design incorporated initially 512 words of mercury delay-line memory (later upgraded to 1,024 words in 1952) and subroutines for common operations, enabling efficient handling of complex numerical tasks in fields like physics and crystallography.

Concurrent developments included the BINAC, completed in 1949 by J. Presper Eckert and John Mauchly for the Northrop Aircraft Company, which became the first operational stored-program computer in the United States, with dual processors and magnetic tape for input. In parallel, the Soviet Union's MESM (Small Electronic Calculating Machine), developed by Sergei Lebedev and completed in 1950, represented an independent effort as the first stored-program electronic computer in continental Europe, used initially for rocketry and nuclear research calculations.

These implementations overcame significant challenges from prior designs like ENIAC, which lacked stored programs and relied on manual wiring and switches for reconfiguration, making reprogramming time-consuming and error-prone. The shift required innovations in reliable, random-access electronic memory—such as the Williams-Kilburn tube and mercury delay lines—to enable instructions and data to coexist dynamically, inspired by John von Neumann's 1945 report outlining the stored-program architecture. This evolution addressed ENIAC's limitations in flexibility, allowing machines like the Baby and EDSAC to execute programs electronically without hardware alterations.
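The arithmetic of that inaugural program is easy to restate in modern terms. The Python sketch below repeats the search for the highest proper factor of 2^18 by trial division downward from 2^18 - 1. Note that the Baby had no divide instruction and tested divisibility by repeated subtraction, so the modulo operator here is a shorthand for the original method, not a model of it.

```python
N = 2**18  # 262,144, the number used in the Baby's first program

def highest_proper_factor(n):
    """Try each candidate downward from n - 1 until one divides n."""
    for candidate in range(n - 1, 0, -1):   # 262143, 262142, ...
        if n % candidate == 0:              # Baby: repeated subtraction
            return candidate

print(highest_proper_factor(N))  # -> 131072 (2**17), after ~131k trials
```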

Expansion and Commercialization

The transition from experimental prototypes to commercial viability marked a pivotal phase in the development of stored-program computers during the early 1950s. Building on foundational systems like the EDSAC, which became operational in 1949 as the first practical general-purpose stored-program electronic computer, manufacturers began producing machines for sale to government and private entities. This shift enabled broader application beyond academic and military research, with companies investing in scalable designs that emphasized reliability and efficiency.

A landmark in this expansion was the UNIVAC I, developed by Eckert and Mauchly and completed by Remington Rand after acquiring their company in 1950. Delivered to the U.S. Census Bureau on March 31, 1951, it was the first commercial stored-program computer placed into production, featuring magnetic tape drives for input and output to handle large datasets efficiently. The system, which used 5,200 vacuum tubes and weighed 29,000 pounds, was sold for over $1 million per unit, with 46 ultimately delivered to customers including utilities, insurance firms, and the U.S. military. IBM entered the market the following year with the 701 Defense Calculator, its first production stored-program computer, targeted at scientific computing. Announced in 1952, the 701 incorporated punched-card readers and punches for data handling, marking IBM's significant corporate commitment to the technology and resulting in 19 units rented primarily to national laboratories, the Weather Bureau, and defense contractors.

The commercialization extended globally, with the United Kingdom producing early examples based on academic designs. The Ferranti Mark 1, a refined commercial version of the Manchester Mark 1, was delivered to the University of Manchester in February 1951 as the world's first commercially available general-purpose stored-program computer; nine units were sold between 1951 and 1957, including exports to Canada, the Netherlands, and Italy. Similarly, LEO I, developed by J. Lyons & Co. and inspired by the EDSAC, ran its first business application—a bakery valuation program—on November 17, 1951, establishing it as the first stored-program computer dedicated to commercial processes like production scheduling. By the late 1950s, advancements incorporated transistors for improved performance, as seen in the IBM 7090, announced in 1958 and deployed starting in 1959 as the company's first commercial transistorized stored-program system, offering six times the speed of its vacuum-tube predecessor for scientific and administrative tasks.

Stored-program computers saw rapid adoption across defense, scientific, and business sectors in the 1950s, driven by their versatility in handling diverse workloads without hardware rewiring. In defense, systems like the IBM 701 supported calculations for national laboratories and military applications, while UNIVAC machines aided census and intelligence processing for the U.S. government. Scientific use expanded through machines like the 701 for engineering simulations at aircraft manufacturers and weather agencies, and business applications emerged with LEO I automating clerical tasks at Lyons and UNIVAC deployments in insurance and utilities. The inherent flexibility of stored-program architectures—allowing software modifications to adapt to new needs—facilitated iterative improvements, contributing to reductions in physical size, power consumption, and overall costs as circuit integration progressed over the decade.

This flexibility extended to telecommunications by the mid-1960s, where stored-program control revolutionized switching systems. Bell Labs introduced the No. 1 Electronic Switching System (1ESS) in Succasunna, New Jersey, on May 30, 1965, as the first large-scale stored-program-controlled telephone exchange, replacing electromechanical relays with a computer directing call routing via programmable instructions. Capable of handling up to 80,000 calls per hour and supporting features like call forwarding and speed dialing, the 1ESS used 731,000 bytes of memory and marked a shift toward software-defined networks in telecommunications.

Technical Aspects

Memory and Instruction Storage

In the stored-program computer architecture, instructions and data are stored in a unified memory space, allowing the same access mechanisms to reach both without distinction between program code and operands. This model treats memory as a linear array of addressable locations, where each location holds a fixed-size word—typically 32 or 40 bits in early designs—that encodes both instructions and data in binary. An instruction word generally consists of an opcode (a few bits specifying the operation, such as addition or branching) followed by operand fields (bits indicating memory addresses or immediate values), enabling programs to be loaded, modified, and executed dynamically from the same storage as variables and constants.

Early implementations relied on innovative but limited storage technologies to realize this unified model. The Manchester Baby (1948), the first functional stored-program computer, used Williams tube memory—cathode-ray tubes storing bits as electrostatic charges on the screen's phosphor coating, refreshed periodically to prevent decay. This provided random access to 32 words of 32 bits each (1,024 bits total, equivalent to 128 bytes), sufficient for simple programs but prone to interference from nearby tubes. Similarly, the EDSAC (1949) employed mercury delay-line memory, where ultrasonic pulses circulated in tubes of liquid mercury to represent bits serially, offering non-random access but higher density; its 32 delay lines stored 512 words of 35 bits (approximately 17.9 kilobits).

Advancements in the 1950s shifted toward more reliable and faster media. Magnetic drum memory, introduced around 1950 in machines like the ERA 1103, used a rotating cylinder coated with ferromagnetic material to store bits magnetically, providing non-volatile storage with capacities up to several kilobytes and serving as auxiliary storage for instructions in some designs. By mid-decade, magnetic core memory—tiny ferrite rings wired in a matrix, magnetized to represent bits—became dominant, offering true random access with access times under 1 microsecond; it was employed in systems like the IBM 700/7000 series from the late 1950s, with initial capacities of 1,000 to 12,000 words (approximately 36 to 432 kilobits for 36-bit words). This technology persisted into the 1970s until the advent of semiconductor RAM, exemplified by Intel's 1103 DRAM chip (1970), which packed 1,024 bits per chip using MOS transistors for volatile storage, enabling rapid scaling to megabyte-level main memories in commercial machines.

Memory addressing in these systems evolved from basic absolute schemes, where operands directly specified fixed locations in the unified space, to relative addressing for modularity, particularly in larger programs. In the Manchester Baby, absolute addressing used the 5 significant bits of the address field (supporting 32 locations) to reference any word for instructions or data. Capacities started small—hundreds to thousands of bits in prototypes like the Baby and EDSAC—but grew exponentially; by the 1960s, core-based systems routinely offered 64 to 256 kilobits, reaching megabit scales with semiconductor integration by the mid-1970s, vastly expanding programmable complexity.

The unified storage of instructions and data introduced inherent security risks, as programs could inadvertently or maliciously overwrite code via address errors, akin to conceptual buffer overflows where data spills into instruction space. Early designers recognized this vulnerability in the von Neumann model, where modifiable instructions enabled self-altering code but risked system instability from erroneous writes. This shared access also contributed to the von Neumann bottleneck, limiting throughput as the processor serially fetched both code and data over a single bus.
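How opcode and operand fields share a single memory word can be shown with a few bit operations. The sketch below assumes a Baby-like split, with a 13-bit address field in the low bits and a 3-bit function code above it; the exact bit positions and helper names are illustrative, not a faithful SSEM decoder.

```python
ADDR_BITS, FUNC_BITS = 13, 3
ADDR_MASK = (1 << ADDR_BITS) - 1            # low 13 bits: the address field

def encode(func, addr):
    """Pack a function code and an address into one instruction word."""
    return (func << ADDR_BITS) | (addr & ADDR_MASK)

def decode(word):
    """Unpack an instruction word back into (function code, address)."""
    return (word >> ADDR_BITS) & ((1 << FUNC_BITS) - 1), word & ADDR_MASK

word = encode(0b011, 21)         # e.g. function 3 operating on memory line 21
print(bin(word), decode(word))   # -> 0b110000000010101 (3, 21)
```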

Programming and Execution

In stored-program computers, instructions typically consist of an operation code (opcode) specifying the desired action and one or more operand fields indicating the locations of inputs or results in memory. For example, in the Manchester Baby, the world's first operational stored-program computer, completed in 1948, each 32-bit instruction included a 5-bit field for the memory address (supporting up to 32 lines), a 3-bit function code for the operation (enabling eight possible operations such as subtraction or conditional jumps), and remaining bits that were ignored by the hardware. Addressing modes in these early systems were limited but essential for flexibility; common modes included immediate (where the operand value is embedded directly in the instruction, though rare in initial designs), direct (using the address field to access memory directly), and indirect (where the address field points to a location containing the effective address, often used in jump instructions like the Baby's JMP, which loaded an indirect address into the program counter). These formats allowed instructions to treat memory uniformly for code and data, enabling dynamic behavior.

The execution of programs follows the fetch-execute cycle, orchestrated by the control unit to process instructions sequentially from memory. In the first step, the processor fetches the instruction at the address stored in the program counter (PC), incrementing the PC to point to the next instruction; this mirrors the design outlined in John von Neumann's 1945 report, where the central arithmetic part and central control coordinate to retrieve binary-coded orders from memory. The fetched instruction is then decoded to identify the opcode and operands, determining the operation (e.g., arithmetic or branching) and resolving addresses via the specified mode. Execution performs the operation, such as adding values from memory to the accumulator or storing results back to memory, with any updates to registers or the PC occurring as needed; finally, results are written if required, and the cycle repeats. This iterative process, with instruction execution times of about 1.2 milliseconds (approximately 800 instructions per second) in the Manchester Baby, enabled automatic computation without manual reconfiguration, distinguishing stored-program systems from prior wired-program machines.

Early programming for stored-program computers relied on low-level assembly languages, where human-readable mnemonics were translated into machine code via rudimentary assemblers or loaders. In the EDSAC (1949), programmers wrote code using symbolic opcodes (e.g., "A" for add to accumulator, "S" for subtract) on paper, converting them to 17-bit binary instructions (5-bit opcode, 10-bit address, 1-bit long/short modifier) punched onto paper tape; the machine's "initial orders"—a fixed 31-instruction bootstrap loader stored in the first 32 memory words—automatically assembled and loaded the program by scanning the tape, substituting symbols for binary values, and placing instructions in memory starting at word 32. This subroutine-library approach, detailed by Maurice Wilkes, minimized manual binary coding errors and supported reusable code blocks for common tasks like square roots, marking an early step toward systematic software development.

A key feature of these systems was self-modification, where programs could alter their own instructions in memory to adapt during execution, exploiting the uniformity of code and data. For instance, in loop-based programs, an instruction might modify the address field of a subsequent instruction to increment a pointer, as in vector arithmetic where an add instruction's address is updated mid-execution to process elements sequentially without redundant jumps, saving precious memory in the 1K-word limit; the sketch below illustrates the idiom.
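The following Python sketch mimics that idiom on a hypothetical accumulator machine, with invented opcode names and three-field instruction tuples: each pass of the loop rewrites the address field of its own add instruction, so successive iterations sum consecutive array elements without a separate index register.

```python
ADDM, INCADDR, DECJNZ, HALT = range(4)   # hypothetical opcodes

def run(mem):
    acc, pc = 0, 0
    while True:
        op, a, target = mem[pc]; pc += 1
        if op == ADDM:                     # acc += mem[a]
            acc += mem[a]
        elif op == INCADDR:                # the self-modifying step:
            opc, addr, t = mem[a]          # read instruction word a as data,
            mem[a] = (opc, addr + 1, t)    # bump its address field, write back
        elif op == DECJNZ:                 # decrement counter, loop if nonzero
            mem[a] -= 1
            if mem[a] != 0:
                pc = target
        elif op == HALT:
            return acc

mem = [
    (ADDM, 6, 0),      # word 0: acc += mem[6]; address field mutates each pass
    (INCADDR, 0, 0),   # word 1: rewrite word 0 to point one element further
    (DECJNZ, 5, 0),    # word 2: count down mem[5]; jump back to word 0
    (HALT, 0, 0),      # word 3
    0,                 # word 4: unused
    4,                 # word 5: loop counter = array length
    10, 20, 30, 40,    # words 6-9: the array being summed
]
print(run(mem))        # -> 100, with mem[0] now reading (ADDM, 10, 0)
```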
This technique, while efficient for optimization in resource-constrained environments, introduced complexity and bugs, as seen in early applications where self-modifying jumps facilitated conditional branching but required careful tracking of modified locations.

Debugging and testing in early stored-program computers involved manual intervention and basic output mechanisms due to the absence of sophisticated tools. Programmers used single-step execution modes, such as EDSAC's "Single E.P." button, to advance one instruction at a time while observing indicator lights for register states or memory contents; for deeper analysis, operators halted the machine to read delay-line (or equivalent) memory via console switches or generated dumps by printing selected memory words onto teleprinter output. In practice, errors such as those in a 1949 EDSAC program for evaluating the Airy function were identified through tape re-punching after manual code reviews, with jumps logged on paper to trace execution paths, highlighting the labor-intensive nature of verifying programs in these pioneering systems.

Impact and Legacy

Influence on Modern Computing

The stored-program paradigm, as articulated in John von Neumann's 1945 EDVAC report, established the foundational architecture for modern general-purpose computers by enabling instructions and data to reside in the same memory, a principle that underpins dominant instruction set architectures (ISAs) like x86 and ARM. This von Neumann model allows processors to fetch, decode, and execute programs dynamically from memory, providing the flexibility that defines contemporary computing systems from desktops to embedded devices. x86, prevalent in servers and personal computers, and ARM, which powers most mobile processors, both implement this shared-memory approach, ensuring compatibility with a vast array of software while evolving through generations of hardware improvements.

The paradigm's influence extends to the software ecosystem, where treating programs as modifiable data in memory facilitated the creation of high-level languages and supporting tools. FORTRAN, introduced by IBM in 1957, was among the first such languages, allowing programmers to write in mathematical notation that compilers could translate into machine code stored and executed in memory, revolutionizing scientific computing. This enabled the development of operating systems like UNIX in the 1970s, which manage memory allocation for both data and executable programs, supporting multitasking and resource sharing across diverse hardware. Compilers and interpreters, integral to modern programming, rely on this stored-program concept to generate and load code dynamically, fostering portability and abstraction layers that separate application logic from hardware specifics.

By decoupling program functionality from fixed hardware wiring, the stored-program approach has driven scalability across computing scales, from 1950s mainframes to today's cloud infrastructures, where virtual machines emulate the stored-program model on distributed hardware to handle surges in processing demands. This flexibility aligns with Moore's law, as transistor density increases primarily benefit software optimization rather than requiring architectural overhauls, allowing systems to scale performance through layered memory hierarchies like caches and SSDs. Consequently, the paradigm permeates ubiquitous applications: smartphones execute stored apps via ARM processors, servers run workloads on x86 clusters, and embedded devices process data in constrained environments, all leveraging the same core principle for efficient, reprogrammable operation. The standardization of ISAs around this model, whether RISC's simplified instructions in ARM or CISC's complex ones in x86, ensures compatibility and continuity in both hardware and software domains.

Challenges and Limitations

The stored-program design, while revolutionary, introduced the von Neumann bottleneck, where instructions and data share the same memory bus, leading to contention and delays during access. This shared pathway limits overall system throughput, as the processor must serially fetch instructions and data, creating a fundamental performance constraint that has persisted despite advances in hardware speed. Such delays are inherent to the fetch-execute cycle in this architecture.

Early implementations of stored-program computers relied on vacuum tubes, which suffered from high failure rates, often causing sudden program crashes and operational downtime. For instance, the ENIAC, an influential early electronic computer with precursors to stored-program concepts, experienced vacuum tube failures approximately every two days due to overheating and wear, highlighting the reliability challenges of the era. To address these issues, engineers introduced basic error detection methods, such as parity bits appended to data words to detect single-bit errors during transmission or storage, improving fault awareness without correcting errors automatically.

The unified memory for instructions and data in stored-program systems also created vulnerabilities by allowing instructions to be modified as easily as ordinary data, facilitating code injection and enabling precursors to modern malware such as viruses and worms. This architectural feature made it possible for erroneous or malicious alterations to corrupt instructions, compromising system integrity; for example, buffer overflow exploits, rooted in this design, were demonstrated in the 1988 Morris worm, which infected thousands of computers by injecting and executing code in memory areas intended for data.

Power and size constraints further limited early stored-program computers, as vacuum tube-based designs required substantial energy and physical space, restricting scalability until the advent of transistors. The EDSAC, one of the first practical stored-program machines, utilized around 3,000 vacuum tubes and consumed several kilowatts of power while occupying a room-sized footprint, illustrating how tube heat dissipation and cooling needs imposed practical barriers to larger systems.

To mitigate the von Neumann bottleneck, alternatives like modified Harvard architectures—featuring separate memory buses for instructions and data—have been explored, particularly in digital signal processors (DSPs) for real-time applications. These designs, such as the Super Harvard Architecture in Analog Devices' SHARC DSPs, allow parallel access to code and data, reducing contention and improving performance in bandwidth-sensitive tasks without fully abandoning stored-program principles.
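As a concrete illustration of that parity scheme, the sketch below appends an even-parity bit to a word and verifies it on readback; the helper names and 8-bit word width are assumptions. Any single flipped bit makes the count of 1s odd and is detected, though the scheme cannot locate or correct the error.

```python
def add_even_parity(word):
    """Append a parity bit so the stored word has an even count of 1 bits."""
    parity = bin(word).count("1") & 1
    return (word << 1) | parity

def has_even_parity(stored):
    """True if the stored word still carries an even number of 1 bits."""
    return bin(stored).count("1") % 2 == 0

w = add_even_parity(0b1011_0010)   # four 1s, so the parity bit is 0
assert has_even_parity(w)          # an intact word passes the check
assert not has_even_parity(w ^ (1 << 5))   # any single bit flip is caught
print("single-bit error detected")
```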
