
36-bit computing

36-bit computing encompasses computer architectures that employ a 36-bit word as the fundamental unit of representation, addressing, and execution, a design choice prominent in mainframe systems during the mid-20th century. This word size facilitated efficient handling of scientific and engineering calculations, where it provided adequate precision for floating-point operations while accommodating common character data formats. The adoption of 36 bits stemmed from early computing's roots in punch-card processing and teletype systems, which used 6-bit character encodings; a 36-bit word thus held exactly six such characters, optimizing storage for alphanumeric data in commercial and scientific applications. Additionally, it supported balanced floating-point formats, typically allocating bits for sign, mantissa, and exponent to meet the demands of numerical computations in physics and engineering, the primary drivers of early electronic computers. Pioneering systems like the IBM 701, introduced in 1952 as a defense-oriented calculator, and the UNIVAC 1103A, introduced in 1956 with magnetic core memory, established 36-bit designs in the scientific computing domain. In the 1960s, Digital Equipment Corporation (DEC) advanced 36-bit technology with the PDP-6 in 1964, its first entry into large-scale systems, followed by the PDP-10 in 1967, which became a cornerstone for timesharing environments. The PDP-10's architecture, featuring 16 general-purpose registers and support for multitasking, powered influential software ecosystems, including early versions of EMACS, TeX, and the ARPANET's initial implementations, while running operating systems like TOPS-10 and TENEX. Successors such as the DECSYSTEM-20, introduced in 1976, extended this lineage into the late 1970s and 1980s, emphasizing compatibility and expandability up to 256K words of memory. By the mid-1960s, the rise of standardized 8-bit byte-addressable systems, exemplified by IBM's System/360 in 1964, began supplanting 36-bit architectures in favor of 32-bit words for broader compatibility across commercial and scientific workloads. Despite this shift, 36-bit systems persisted in niche applications until the early 2000s, leaving a legacy in software portability challenges and the evolution of modern computing paradigms.

Introduction and Fundamentals

Definition and Basic Characteristics

36-bit computing encompasses computer architectures where the fundamental data unit, termed a word, consists of 36 bits, serving as the standard width for integers, memory addresses, and other data elements. This configuration allowed for compact representation of numerical and textual information in early systems. A key characteristic is the 36-bit word's equivalence to six 6-bit characters, enabling efficient storage of alphanumeric data via encodings like Fieldata, where each character occupies 6 bits. The word size aligned well with early mainframe requirements for precision in scientific and engineering computations, offering a balance between computational capability and memory efficiency. Compared to smaller word sizes like 32 bits, the 36-bit architecture provided advantages in data packing, such as accommodating up to 10 decimal digits per word in fixed-point representations, which enhanced storage efficiency for numerical tasks without additional overhead. This efficiency extended to character handling, packing six characters directly into one word versus fractional or partial use in narrower formats. Typical operations on these data units included fixed-point arithmetic, such as 36-bit addition and subtraction executed in a single cycle, multiplication of two 36-bit operands yielding a 72-bit product, and division of a 72-bit dividend by a 36-bit divisor.
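The packing and double-length arithmetic described above can be illustrated with a short Python sketch (illustrative only; it models no particular historical instruction set):

```python
WORD_MASK = (1 << 36) - 1

def pack_sixbit_chars(codes):
    """Pack six 6-bit character codes into one 36-bit word, first code in the high bits."""
    assert len(codes) == 6 and all(0 <= c < 64 for c in codes)
    word = 0
    for c in codes:
        word = (word << 6) | c
    return word

def multiply_36(a, b):
    """36x36 unsigned multiply yielding a 72-bit product as (high word, low word)."""
    product = (a & WORD_MASK) * (b & WORD_MASK)
    return (product >> 36) & WORD_MASK, product & WORD_MASK

print(2**36 - 1)                       # unsigned maximum: 68719476735
print(pack_sixbit_chars([1, 2, 3, 4, 5, 6]))
hi, lo = multiply_36(2**35 + 7, 2**34 + 1)
print(f"{hi:012o} {lo:012o}")          # era-typical octal display of the double-length result
```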

Rationale for 36-Bit Architecture

The 36-bit word length in early computing architectures was selected in part to accommodate the precision requirements of contemporary mechanical calculators, such as Friden models, which typically handled up to 10 decimal digits in their registers. Representing a signed 10-digit decimal number requires approximately 34 bits for the magnitude (since \log_2(10^{10}) \approx 33.2), plus an additional bit for the sign, making 36 bits a practical choice that provided sufficient headroom without excess waste. Another key motivation was the efficiency of storing text data using prevailing 6-bit encodings, which were common in the 1950s for representing uppercase letters, digits, and basic symbols in business and scientific applications. A 36-bit word could thus hold exactly six such characters (6 × 6 = 36 bits), enabling compact and aligned storage without partial-word fragmentation, which optimized memory usage in resource-constrained systems. This word size also struck a balance between computational precision for scientific workloads and practical addressing limits. For instance, using 18 bits for addresses within a 36-bit word allowed up to 2^{18} = 262,144 words of addressable memory (adequate for many early applications) while leaving ample bits for operand manipulation in the floating-point and fixed-point operations essential to scientific and engineering tasks. In the vacuum-tube era, designers favored word-aligned operations over flexible byte boundaries to simplify hardware implementation, reducing the complexity of addressing logic, shifting mechanisms, and arithmetic units that would otherwise require additional vacuum tubes and wiring for sub-word handling. This design choice minimized costs and improved reliability in systems where tube failures were a common issue.
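The digit-capacity arithmetic is easy to verify; a few lines of Python reproduce the figures cited above:

```python
import math

magnitude_bits = math.log2(10**10)       # ~33.22 bits for a 10-digit magnitude
print(f"log2(10^10) = {magnitude_bits:.2f}")
print(f"signed minimum = {math.ceil(magnitude_bits) + 1} bits")   # 34 magnitude + 1 sign = 35

print(f"decimal digits per 36-bit word = {36 * math.log10(2):.1f}")  # ~10.8
print(f"18-bit address space = {2**18} words")                        # 262144
```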

Historical Evolution

Early Developments (1950s)

The inception of 36-bit computing in the 1950s marked a significant advancement in scientific computation, primarily driven by military and research demands for handling complex numerical problems. The ERA 1103, introduced in 1953 and derived from the classified ATLAS II system, was one of the earliest commercial 36-bit machines, designed for high-performance scientific applications such as statistical and mathematical analyses. It featured a 36-bit word length, with 1,024 words of high-speed electrostatic storage and 16,384 words of magnetic drum storage, enabling efficient processing of large datasets in defense-related tasks. The UNIVAC 1103, a close variant marketed the same year by Remington Rand (which had acquired ERA), shared this architecture and targeted similar scientific computing needs, establishing 36-bit systems as a standard for precision calculations. IBM contributed prominently to early 36-bit adoption with the IBM 701, announced in 1952 as the "Defense Calculator," which utilized a 36-bit word size to perform intensive simulations, including thermonuclear feasibility calculations for the hydrogen bomb project at Los Alamos. This system offered 2,048 words of electrostatic memory using Williams tubes, expandable to 4,096 words, providing an initial addressable limit suitable for the era's computational demands. The follow-on IBM 704, introduced in 1954, enhanced this foundation with dedicated floating-point hardware and magnetic core memory starting at 4,096 words (expandable to 32,768 words), prioritizing reliability and speed for engineering and scientific workloads. The 36-bit word size emerged from practical constraints of vacuum-tube technology, where circuitry was commonly grouped into 6-bit modules for improved reliability and to align with 6-bit character encodings like BCD, allowing six characters per word and supporting up to 24,576–98,304 characters across typical memory configurations of 4,096–16,384 words. These early systems laid the groundwork for broader adoption in the following decade.

Peak Usage and Advancements (1960s–1970s)

The 1960s marked the peak era for 36-bit computing, driven by the transition to transistor-based architectures that enhanced reliability, speed, and efficiency over vacuum-tube predecessors. The IBM 7090, announced in 1958 and widely adopted throughout the decade, represented a pivotal advancement as IBM's first commercial transistorized scientific mainframe, offering significantly reduced power consumption and air-conditioning needs compared to earlier models like the 709. Installed at research institutions throughout the early 1960s and frequently upgraded to the 7094 variant, the 7090 excelled in scientific computations, supporting applications in physics simulations and early computer music synthesis at places like Princeton. Similarly, General Electric's GE-600 series, introduced in the mid-1960s, provided a competitive family of 36-bit mainframes for large-scale scientific and business tasks, featuring drum- and disc-oriented systems with integrated software support. Following GE's exit from the computer market, Honeywell acquired the division in 1970, rebranding and evolving the GE-600 into the Honeywell 6000 series, which maintained 36-bit compatibility while introducing enhancements like the Extended Instruction Set for string processing and business applications. These systems solidified 36-bit prominence in scientific, government, and university environments; although IBM held over 60% of the overall mainframe market by the late 1960s, 36-bit architectures powered significant high-performance workloads in these niches until the mid-1970s. A key technical advancement was the expansion of memory addressing to 18 bits, enabling up to 262,144 words of core storage (roughly 1 MB) across models like the Honeywell 6000 and DEC PDP-10, which facilitated larger datasets for complex simulations. 36-bit systems played a central role in pioneering timesharing and multiprogramming, enabling multiple users to interact concurrently via remote terminals and laying groundwork for networked computing. The GE-645, an enhanced GE-600 variant delivered to MIT in 1967, hosted the Multics operating system, which implemented segmented virtual memory and supported timesharing for up to hundreds of users, influencing modern OS designs. Honeywell continued Multics commercialization post-merger, deploying it on 6000-series machines for government and academic sites until the 1980s. In networking experiments, DEC's PDP-10s served as early ARPANET nodes, such as at the University of Utah in 1969, running the TENEX OS to handle packet-switched communications and resource sharing across institutions. These innovations underscored 36-bit computing's versatility in fostering collaborative, multi-user environments critical to research in the 1960s and 1970s.

Key Systems and Implementations

IBM 700/7000 Series

The 700/7000 series marked IBM's initial foray into 36-bit computing, starting with the IBM 701, announced in 1952 and oriented toward defense and scientific applications. This vacuum-tube machine used a 36-bit word length and featured 2,048 words of electrostatic memory based on cathode-ray tubes, enabling it to perform over 16,000 additions or subtractions per second. Designed primarily for complex calculations in defense contexts, the 701 established the foundational 36-bit architecture that influenced subsequent models in the series. Building on the 701, the IBM 704, introduced in 1954, added significant advancements including hardware support for floating-point operations and index registers to facilitate more efficient programming for scientific workloads. It transitioned to magnetic core memory, expandable to 32K words, which provided greater reliability and capacity compared to earlier electrostatic storage. These features made the 704 suitable for demanding scientific tasks, solidifying the 36-bit word as a standard for handling both fixed-point and floating-point data in high-precision computations. The series evolved further with the transistorized IBM 7090, announced in 1958 and shipped in 1959, which offered roughly six times the performance of its vacuum-tube predecessor, the IBM 709, through the use of solid-state logic on Standard Modular System cards. The subsequent IBM 7094, released in 1962, enhanced real-time processing capabilities with indirect addressing, additional index registers (up to seven), and support for input/output channels, making it ideal for applications like the SABRE airline reservation system and the Ballistic Missile Early Warning System (BMEWS). Retaining the 36-bit word, the 7094 utilized up to 32K words of core memory and achieved approximately 229,000 basic operations per second, such as additions, with a 2.18 μs cycle time. The 700/7000 series persisted into the mid-1960s, with production of models like the 7094 continuing until 1969 to support legacy installations, even as IBM shifted toward the System/360 architecture announced in 1964. This transition emphasized byte-addressable 8-bit data paths in the S/360, but the 36-bit internals of the 7000 series influenced compatibility features, such as emulators in higher-end S/360 models that allowed execution of 700-series software, ensuring a smoother migration for users reliant on 36-bit scientific computing.

DEC PDP-6 and PDP-10

The PDP-6, introduced by Digital Equipment Corporation (DEC) in 1964, marked DEC's entry into 36-bit computing as its first large-scale system designed for general-purpose scientific data processing. With a 36-bit word length and support for memory capacities ranging from 8K to 64K words, it included 18-bit physical addressing along with protection and relocation registers to facilitate secure multitasking. Operating at approximately 0.25 MIPS, the PDP-6 was particularly suited for control and laboratory applications due to its modular design and compatibility with high-performance peripherals. Building on the PDP-6 architecture, the PDP-10 series, produced from 1967 to 1983, evolved into DEC's flagship 36-bit line, emphasizing scalability and interactive use. Early models featured the KA10 processor, a transistor-based implementation delivering enhanced performance over its predecessor, while later variants like the KS10 utilized AMD 2901 bit-slice components and an Intel 8080A control processor to reduce costs without sacrificing core functionality. Memory expanded significantly, supporting up to 512K words in advanced configurations, which enabled handling of complex workloads in research environments. The PDP-10 played a central role in the development of the ARPANET, serving as a primary host for early networking protocols, and powered seminal AI research at institutions like Stanford's Artificial Intelligence Laboratory, where customized variants facilitated innovative software experiments. A hallmark of the PDP-10 was its advanced memory management, including paging hardware that supported virtual memory schemes like those in the TENEX operating system, providing a 256K-word virtual address space segmented into 512-word pages. This system employed demand paging, associative mapping via the BBN pager interface, and working-set algorithms to manage page faults and core allocation efficiently, minimizing thrashing in multiuser scenarios. Complementing this, the PDP-10 offered high-speed I/O through dedicated busses and multichannel controllers, enabling rapid data transfer to peripherals such as disks, tapes, and network interfaces essential for real-time and timesharing operations. The architecture's enduring impact is evident in its prolonged commercial deployment, notably at CompuServe, where the company relied on PDP-10 systems and compatible successors for core services like billing and routing from the 1970s through the early 2000s, licensing the architecture to sustain operations even after DEC's shift to VAX.

UNIVAC 1100 Series and Successors

The UNIVAC 1103, introduced in 1953 as the first commercial system in the lineage leading to the 1100 series, featured a 36-bit word architecture with vacuum-tube logic and initial high-speed memory of 1,024 words, supplemented by drum storage of 12,288 words. This model marked an early advancement in 36-bit computing for scientific and engineering applications, building on prior UNIVAC efforts in large-scale data processing. The upgraded 1103A of 1956 replaced the Williams-tube storage with magnetic core memory of up to 12,288 words. The subsequent UNIVAC 1105, released in 1958, expanded core memory capacity to 12,288 words of 36 bits while retaining drum storage up to 32,768 words, enhancing performance for both scientific workloads and emerging business uses through improved input/output capabilities via up to 24 tape drives. The 1100 series proper began evolving in the 1960s with models like the 1107 in 1962, transitioning to transistorized logic and thin-film memory, and progressed through the decade with systems such as the 1108 (1964) and 1110 (1971), achieving up to 1 million words of 36-bit storage by the early 1970s using plated-wire memory. These systems incorporated dual addressing modes, including standard 18-bit effective addressing and extended modes with 24-bit indexing for larger spaces, enabling efficient handling of complex programs. High reliability was a design principle, with features like error-correcting memory and modular redundancy achieving availability rates of 90-98% even in early models, making the series ideal for transaction-heavy environments in banking and government sectors, such as payment processing and financial record-keeping. The 2200 series later emerged as a complement to the high-end 1100 line, introducing semiconductor memory with 16K-bit chips to replace core storage, starting with models like the 2200/10 and offering capacities up to 524,288 words while maintaining full compatibility with 1100 series software. This shift improved speed and reliability for distributed transaction applications, with semiconductor memory reducing access times to 100-200 nanoseconds. Sperry's merger with Burroughs to form Unisys in 1986 led to the ClearPath Dorado series in the 1990s, which virtualizes the 36-bit environment on modern Intel-based hardware, supporting legacy applications through emulation while scaling to millions of transactions; as of 2025, ClearPath OS 2200 Release 21.0 continues to provide ongoing maintenance for critical banking and government workloads.

Other Notable Systems

The GE-600 series, introduced by General Electric in 1964, represented a family of 36-bit mainframe computers designed for scientific, engineering, and business applications. These systems featured 36-bit words with 18-bit addressing, supporting up to 262,144 words of core memory in configurations like the GE-635 model, which emphasized multiprocessor capabilities and compatibility with peripherals from earlier GE designs. The architecture included two 36-bit accumulators and eight 18-bit index registers, enabling efficient handling of floating-point operations and large datasets typical of 1960s computing workloads. Following GE's exit from the computer business in 1970, Honeywell continued and enhanced the line as the 6000 series, maintaining full compatibility while incorporating integrated circuits for improved performance. Models in the series offered expandable memory up to 1,048,576 36-bit words and supported advanced timesharing through the GECOS operating system with remote access via DATANET interfaces. Notably, the GE-645 variant, modified for the Multics project in collaboration with MIT and Bell Labs, pioneered secure multi-user timesharing with segmented virtual memory limited to 256K words per segment, influencing modern operating system designs. Multics on these 36-bit platforms ran until the early 1980s at many sites, demonstrating the architecture's suitability for interactive computing environments. In the 1980s, the Symbolics 3600 series emerged as a specialized 36-bit architecture tailored for artificial intelligence and symbolic processing, particularly Lisp-based applications. Introduced in 1983, these single-user workstations used a 36-bit word format with a tagged memory system, where each word included a 2-bit major type tag and optional 4-bit minor tag for runtime type checking and garbage collection, addressing the demands of dynamic Lisp data structures. The processor employed a stack-oriented design with 28-bit virtual addressing across 256-word pages, supporting up to 4 megabytes of physical memory in later configurations, and executed 17-bit instructions optimized for list processing and interactive development. This tagged approach, derived from MIT Lisp machine concepts, provided hardware acceleration for AI tasks, distinguishing the 3600 from general-purpose 32-bit systems of the era. Lesser-known 36-bit implementations included the cryptologic ATLAS line developed for specialized U.S. government applications in the early 1950s: the ATLAS II employed a 36-bit word size, compared with the 24-bit word of the earlier ATLAS I, to balance performance and memory efficiency in scientific processing. Similarly, hybrid laboratory systems like the DEC LINC-8 (1966-1969) and its successor PDP-12 integrated 12-bit processors with software support for 36-bit floating-point operations, allowing limited use of 36-bit formats through optional packages that handled 36-bit single-precision arithmetic with 24-bit mantissas. These niche systems extended 36-bit principles to biomedical and experimental research without full hardware adoption.
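The tagged-word idea can be sketched in Python; the field layout below (major tag in the top two bits, minor tag next) is a simplified placeholder rather than the 3600's exact format:

```python
MAJOR_TAG_BITS = 2
MINOR_TAG_BITS = 4
DATA_BITS = 36 - MAJOR_TAG_BITS - MINOR_TAG_BITS   # 30 here; real layouts varied

def make_tagged(major, minor, data):
    """Build a 36-bit tagged word: [major:2][minor:4][data:30] (illustrative layout)."""
    assert major < 4 and minor < 16 and data < (1 << DATA_BITS)
    return (major << 34) | (minor << 30) | data

def decode_tagged(word):
    """Recover (major tag, minor tag, data) from a 36-bit word."""
    return (word >> 34) & 0b11, (word >> 30) & 0b1111, word & ((1 << DATA_BITS) - 1)

FIXNUM, POINTER = 0, 1          # hypothetical major-tag assignments
w = make_tagged(FIXNUM, 0, 42)
print(decode_tagged(w))         # (0, 0, 42): type check without touching the heap
```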

Technical Specifications

Word Structure and Data Types

In 36-bit computing architectures, the fundamental unit of data is a 36-bit word, which could represent a single-precision fixed-point integer or be divided into two 18-bit halves for purposes such as indexing or half-word operations. This structure facilitated efficient handling of both arithmetic and address-related computations, with bits typically numbered from 0 (most significant) to 35 (least significant). Signed integers were commonly represented in sign-magnitude format, using bit 0 for the sign (0 for positive, 1 for negative) and 35 bits for the magnitude, yielding a range of \pm(2^{35} - 1). Unsigned integers utilized the full 36 bits for magnitude, ranging from 0 to 2^{36} - 1, though some implementations employed ones' or two's complement for signed values to simplify arithmetic. Half-word integers (18 bits) extended this flexibility for smaller operands. Floating-point numbers followed a standardized single-precision format across many 36-bit systems: a sign bit (bit 0), an 8-bit exponent (bits 1-8, excess-128 bias for a range of -128 to +127), and a 27-bit normalized fraction (bits 9-35). This provided approximately 8 decimal digits of precision, suitable for scientific computations, with double precision extending to 72 bits over two words for greater accuracy. Decimal representation used 6-bit binary-coded decimal (BCD) encoding, compatible with punched-card standards, allowing up to six digits per 36-bit word with zone bits for alphanumeric data or separate sign handling. This format supported commercial applications requiring exact decimal arithmetic, with conversion instructions handling BCD-to-binary operations. Bit-level operations emphasized field extraction and manipulation, particularly for 6-bit fields aligned with character encodings, using instructions to load, deposit, or mask arbitrary bit strings within a word. These capabilities, including logical shifts and rotates, enabled precise control over sub-word data without full-word overhead.
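A hedged Python sketch of the single-precision layout described above (sign, excess-128 exponent, 27-bit fraction); details such as negative-number encoding and rounding varied by machine and are simplified here:

```python
import math

def encode_float36(value):
    """Encode a positive number into: sign (1 bit, here always 0) |
    exponent (8 bits, excess-128) | fraction (27 bits, normalized to [0.5, 1))."""
    assert value > 0
    exp = math.floor(math.log2(value)) + 1      # choose exponent so fraction lands in [0.5, 1)
    frac = value / (2 ** exp)
    return ((exp + 128) << 27) | int(frac * (1 << 27))

def decode_float36(word):
    """Invert the encoding: recover fraction and exponent, then scale."""
    exp = ((word >> 27) & 0xFF) - 128
    frac = (word & ((1 << 27) - 1)) / (1 << 27)
    return frac * 2 ** exp

w = encode_float36(math.pi)
print(f"{w:012o}")            # the 36-bit word, shown in octal
print(decode_float36(w))      # ~3.14159265: about 8 decimal digits survive
```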

Memory Addressing and Limits

In 36-bit computing architectures, memory addressing typically employed 18-bit addresses embedded within instruction words, enabling direct access to up to 2^{18} = 262,144 words of memory, which equates to roughly 1.2 MB given the 36-bit (4.5-byte) word size. This standard configuration balanced the need for efficient scientific computation with the technological constraints of the era, where addresses were extracted from specific fields of the 36-bit instruction word. Early implementations in the 1950s, such as the IBM 701, imposed stricter limits with 12-bit addressing of 18-bit half-words, supporting a maximum of 4,096 half-words or 2,048 full 36-bit words in standard configurations, though hardware expansions could extend this to 4,096 words. These constraints reflected the nascent state of electrostatic and drum storage technologies, prioritizing reliability over capacity in vacuum-tube-based systems. To overcome physical addressing limitations, later 36-bit systems introduced segmentation and paging mechanisms; for instance, the DEC PDP-10 under the TOPS-20 operating system supported up to 4 million words of memory through these techniques, vastly expanding effective addressable space beyond the original hardware bounds. This virtual addressing allowed programs to operate in larger spaces while maintaining compatibility with the core 18-bit scheme. Physical memory in 1970s 36-bit systems, reliant on core and early semiconductor technology, scaled up to 4M words in larger production models like the PDP-10 KI10 and KL10 variants, though smaller models were limited to 256K-512K words, and I/O interfaces and bus speeds often created bottlenecks that limited practical throughput for data-intensive applications. A modern echo of these addressing paradigms appears in the x86 architecture's PSE-36 extension, which enables 36-bit physical addressing to support up to 64 GB of memory in 32-bit modes, drawing on the historical utility of extended bit widths for memory expansion.
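The PDP-10's documented instruction layout illustrates how the 18-bit address field was embedded in the word: opcode in bits 0-8, accumulator in 9-12, indirect bit 13, index register in 14-17, and the address Y in bits 18-35. A short Python decoder (illustrative sketch, ignoring effective-address recursion through indirection):

```python
def decode_pdp10(word):
    """Split a 36-bit PDP-10 instruction word into its standard fields."""
    assert 0 <= word < (1 << 36)
    return {
        "opcode":   (word >> 27) & 0o777,     # bits 0-8
        "ac":       (word >> 23) & 0o17,      # bits 9-12, accumulator select
        "indirect": (word >> 22) & 1,         # bit 13
        "index":    (word >> 18) & 0o17,      # bits 14-17
        "address":  word & 0o777777,          # bits 18-35: the 18-bit Y field
    }

# MOVE 1, 2000(3): illustrative encoding with octal opcode 200
word = (0o200 << 27) | (1 << 23) | (0 << 22) | (3 << 18) | 0o2000
print(decode_pdp10(word))
# The 18-bit address field caps directly addressable memory at 2**18 = 262144 words.
```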

Character Encoding Schemes

In 36-bit computing systems, character encoding schemes were designed to efficiently pack textual data into the fixed 36-bit word size, often prioritizing compatibility with existing standards or domain-specific needs. Early systems commonly employed 6-bit encodings, which allowed six characters per word and supported up to 64 distinct symbols, sufficient for uppercase letters, digits, punctuation, and control codes. IBM's binary-coded decimal (BCD) encoding, used in systems like the IBM 704 and 709, represented alphanumeric characters in a 6-bit format derived from punched-card standards, with each word holding six characters for tasks such as business accounting. This scheme mapped digits 0-9 to patterns 000000 through 001001, while letters A-Z occupied zone-based groups (A-I, J-R, S-Z), enabling direct compatibility with electromechanical tabulators. Similarly, FIELDATA, a code developed under U.S. military auspices in the late 1950s, was standardized in MIL-STD-188A for communication systems and adopted in 6-bit form in UNIVAC 1100 series computers, encoding 64 characters including uppercase letters, numerals, and military-specific symbols. DEC's SIXBIT, introduced for PDP-6 and PDP-10 systems, provided a 6-bit subset of ASCII characters (codes 32-95 decimal), packing six per word for efficient storage of printable text in operating systems like TOPS-10. As the 7-bit ASCII standard emerged in 1963, adaptations were necessary for 36-bit architectures to minimize wasted bits. The common 5/7 packing scheme stored five 7-bit ASCII characters in 35 bits of a word, leaving one bit unused, as implemented on DEC systems for text files and terminals under TOPS-10 and TOPS-20. Storing 8-bit ASCII variants, such as those with parity bits, typically packed four characters per word (32 bits used, four wasted), though this was less efficient and rarer in pure 36-bit environments. In Multics running on the GE-645 and Honeywell 6180, a 9-bit byte scheme was used to encode ASCII characters, allowing four characters per word (36 bits exactly) with the extra bits often reserved for parity or extension, supporting text and higher-density data in the system's hierarchical file system. DEC's RADIX-50 encoding optimized alphanumeric data for PDP-10 and PDP-11 systems by treating strings as base-40 numbers (50 octal, hence the name) using a 40-character repertoire (space, A-Z, 0-9, period, dollar sign, and one implementation-specific symbol), encoding three characters per 16 bits on 16-bit machines or six full characters plus four extra bits per 36-bit word, commonly for filenames and symbols in assemblers.
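Two of these packings, SIXBIT and the 5/7 ASCII scheme, can be modeled in a few lines of Python (illustrative; byte ordering follows the high-bits-first convention described above):

```python
def pack_sixbit(text):
    """DEC SIXBIT: subtract 32 from ASCII codes 32-95, pack six per 36-bit word."""
    assert len(text) <= 6
    word = 0
    for ch in text.upper().ljust(6):          # pad with spaces (SIXBIT code 0)
        code = ord(ch) - 32
        assert 0 <= code < 64, f"{ch!r} not representable in SIXBIT"
        word = (word << 6) | code
    return word

def pack_ascii7(text):
    """5/7 packing: five 7-bit ASCII characters in bits 0-34, low bit unused."""
    assert len(text) <= 5
    word = 0
    for ch in text.ljust(5, "\0"):
        word = (word << 7) | (ord(ch) & 0x7F)
    return word << 1                          # leave the 36th bit spare

print(f"{pack_sixbit('FILNAM'):012o}")        # six characters in one word
print(f"{pack_ascii7('HELLO'):012o}")         # five characters in one word
```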

Software Environment

Operating Systems

Several operating systems were developed specifically for 36-bit computing architectures, leveraging the word size to enable efficient multitasking, timesharing, and resource sharing in multi-user environments. These systems pioneered features like virtual memory and paging that influenced subsequent designs, optimizing for the hardware's capabilities in handling large address spaces and complex workloads. Digital Equipment Corporation's TOPS-10, introduced in the late 1960s for the PDP-10, evolved from a simple monitor for the earlier PDP-6 into a robust system supporting both timesharing and batch processing. It used priority-based scheduling with round-robin quanta for interactive users, allowing multiple terminals to share resources while protecting user address spaces through hardware-enforced modes. This enabled efficient timesharing in academic and research settings, with a modular design accommodating varying memory configurations up to 512K words. Meanwhile, TENEX, developed by Bolt, Beranek and Newman (BBN) in 1969 for modified PDP-10s, introduced demand-paged virtual memory, expanding effective address spaces to 256K words per process and supporting multiprogramming with low overhead. Its innovations in file management and command completion influenced later systems, most directly TOPS-20 and its user interfaces. Multics, initiated in the mid-1960s for the GE-645 (and later Honeywell systems), represented a landmark in secure, multi-user timesharing with its hierarchical file system, the first of its kind, treating directories themselves as files for organized storage and access. It employed access control lists on every file entry for granular security, including mandatory controls to prevent unauthorized access, and utilized 9-bit bytes for full ASCII support, facilitating efficient data encoding in its segmented memory model. These features enabled reliable resource sharing among hundreds of users while emphasizing protection rings for multitasking integrity. For the UNIVAC 1100 series, EXEC II, deployed in the early 1960s, was a drum-oriented batch system that managed sequential program execution and I/O overlaps via "symbionts" for peripheral buffering, supporting early transaction-like workloads on systems like the 1107 and 1108 with minimal 65K-word configurations. By the 1970s, OS 1100 advanced this with integrated transaction processing through the Transaction Interface Package (TIP), enabling real-time database access for applications such as banking, complete with locking and deadlock detection for concurrent operations in multiprocessor setups.
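TENEX's 512-word pages imply a simple address split: an 18-bit virtual word address divides into a 9-bit page number and a 9-bit offset. The Python sketch below models only that split and a toy page table; the real BBN pager added associative mapping hardware and fault handling:

```python
PAGE_WORDS = 512                              # TENEX page size, in 36-bit words
OFFSET_BITS = PAGE_WORDS.bit_length() - 1     # 9

def split_virtual(addr18):
    """Split an 18-bit virtual word address into (page number, offset)."""
    assert 0 <= addr18 < (1 << 18)
    return addr18 >> OFFSET_BITS, addr18 & (PAGE_WORDS - 1)

page_table = {3: 0o700}      # hypothetical mapping: virtual page 3 -> physical page 0o700

def translate(addr18):
    """Translate a virtual word address, raising on an unmapped page (a page fault)."""
    page, offset = split_virtual(addr18)
    if page not in page_table:
        raise RuntimeError(f"page fault on virtual page {page}")
    return (page_table[page] << OFFSET_BITS) | offset

print(split_virtual(0o3777))                  # (3, 511): last word of virtual page 3
print(f"{translate(0o3777):o}")               # physical word address, in octal
```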

Programming Languages and Tools

Programming languages and tools for 36-bit computing were adapted to leverage the architecture's word size, often incorporating features for efficient handling of 6-bit characters, floating-point operations, and tagged data structures. Early high-level languages like Fortran emphasized numerical computation, while later adaptations of C and specialized Lisp implementations exploited the full 36-bit word for integers and pointers. Assembly languages provided low-level control with operators tailored to the word's structure, complemented by debuggers for interactive development. Fortran implementations on 36-bit systems, such as FORTRAN II, released in 1958 for the IBM 704 and 709, were optimized for the hardware's built-in floating-point and indexing features. This version introduced independent compilation of subroutines and separate assembly of object modules, enabling modular development and efficient linking for scientific applications. The optimizations allowed Fortran to generate code that directly utilized the 704's floating-point instructions, achieving high performance for mathematical computations without emulating operations in software. Adaptations of C for 36-bit architectures, including the implementation on Multics running on Honeywell 6180 systems, treated the integer type (int) as a full 36-bit value, providing a range suitable for the word size. Characters were represented using 9-bit bytes, with four such bytes packing into one 36-bit word, which facilitated compatibility with the system's native data packing while supporting C's string handling and portability features. This configuration allowed C programs to interface directly with 36-bit memory addressing and arithmetic, though it required adjustments for byte-oriented operations compared to 8-bit systems. Lisp environments on 36-bit hardware, notably the Symbolics 3600, utilized tagged 36-bit words to distinguish data types and enable efficient garbage collection. Each word included 4 tag bits for type identification (e.g., pointers versus numbers) and 32 data bits, allowing immediate representation of small integers and seamless relocation during collection. The collector employed an incremental copying algorithm, processing short-lived (ephemeral) objects frequently in a two-level scheme to minimize pauses, with hardware-assisted barrier checks reducing overhead to about 10-20% of mutator time. This design supported high-performance symbolic processing, with the collector scanning memory without boundary concerns between objects. Assembly programming on DEC PDP-10 systems relied on MACRO-10, which included byte-pointer operators for manipulating character slices within 36-bit words. Instructions like load byte (LDB) and deposit byte (DPB) allowed selective access to bit fields of arbitrary width, commonly used for packing six 6-bit characters per word, with byte pointers specifying a field's position and size. The DDT debugger complemented this by providing interactive examination and modification of memory, supporting commands to display words in octal or symbolic form and single-step execution of MACRO-10 code. These tools enabled precise control over the architecture's bit-level features for systems programming.
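The Multics C convention of four 9-bit bytes per 36-bit word can be sketched in Python (illustrative only; the real compiler's layout details are not modeled here):

```python
def pack_9bit(chars):
    """Pack four 9-bit character codes into one 36-bit word, Multics-style."""
    assert len(chars) == 4
    word = 0
    for ch in chars:
        code = ord(ch)
        assert code < 512, "9-bit bytes hold codes 0-511"
        word = (word << 9) | code
    return word

def unpack_9bit(word):
    """Recover the four characters, high byte first."""
    return "".join(chr((word >> shift) & 0x1FF) for shift in (27, 18, 9, 0))

w = pack_9bit("Unix")         # any 7-bit ASCII fits, with two bits to spare per byte
print(f"{w:012o}", unpack_9bit(w))
```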

Decline and Modern Legacy

Transition to 8-Bit Architectures

The transition from 36-bit computing architectures to byte-oriented designs began prominently with the introduction of the IBM System/360 in 1964, which standardized the 8-bit byte as the fundamental unit of storage and addressing. This shift marked a departure from the 36-bit words and 6-bit character encodings prevalent in earlier IBM scientific mainframes like the 7090, necessitating significant software rewrites and conversion efforts for migration, as the System/360's 32-bit word length and byte-addressable memory were incompatible with prior 36-bit binary formats. The architecture's emphasis on scalability and uniformity across models addressed longstanding customer frustrations with incompatible upgrades but imposed migration challenges, including recompilation of assembly code and adaptation of data structures to fit the new byte boundaries. In the minicomputer sector, Digital Equipment Corporation (DEC) accelerated the move away from 36-bit systems with the VAX family, introduced in 1977 as a 32-bit extension of the successful 16-bit PDP-11. The company opted for the byte-addressable VAX design to leverage existing PDP-11 software and hardware ecosystems, facilitating easier transitions for users but requiring PDP-10 customers (many in academia and research) to port TOPS-10 applications to the VMS operating system, often involving substantial rework due to differences in word alignment and addressing. This migration highlighted compatibility hurdles, such as reformatting 36-bit data into 32-bit structures, yet VAX's performance and compatibility with PDP-11 environments hastened the adoption of byte-oriented computing. Economic pressures further drove the obsolescence of 36-bit systems, as advancing memory technologies in the 1970s favored power-of-2 addressing schemes that aligned efficiently with 8-bit bytes, reducing waste in memory allocation compared to the irregular 36-bit boundaries. The widespread adoption of the 7-bit ASCII standard in 1963, extended to 8 bits for parity and international characters, reinforced byte orientation, while cheaper dynamic RAM chips, typically organized in 8-bit widths, made non-byte designs increasingly uneconomical for new development. By the late 1970s, major vendors had shifted new designs to 8-, 16-, or 32-bit architectures, with 36-bit systems relegated to legacy roles; for instance, DEC ceased PDP-10 production in 1983, and Sperry Univac's 1100 series continued with developments like the 1100/90 in 1982 but saw no significant new 36-bit hardware generations after the mid-1980s. This timeline reflected the broader industry consolidation around byte-addressable memory, ending the dominance of 36-bit computing by the close of the 1980s.

Contemporary Uses and Influences

Unisys ClearPath Dorado systems, which maintain compatibility with the 36-bit architecture of the original UNIVAC 1100 series, continue to support mission-critical applications in sectors such as banking and defense as of 2025. These systems run the OS 2200 operating environment, enabling the execution of legacy 36-bit workloads without modification, including high-volume transaction processing for financial institutions and secure data handling in government and defense operations. Integration with Microsoft Azure has been available since 2020, allowing virtualization of Dorado environments in the cloud while preserving 36-bit compatibility, with the roadmap through 2025 including enhanced performance and security features. Unisys reported continued shipments and deployments of ClearPath systems into 2025, underscoring their role in modern hybrid infrastructures for organizations reliant on long-standing 36-bit applications. Legacy software migration from 36-bit 1100 series systems to contemporary platforms remains a key practice, particularly for COBOL and transaction-processing applications. Tools like Astadia's migration solutions facilitate the porting of OS 2200 applications to x86-based cloud environments, enabling transition while retaining functional equivalence for business-critical programs. Similarly, third-party solutions convert COBOL and assembler code from ClearPath environments to native x86 or Unix systems, reducing dependency on proprietary hardware without altering core logic. These migrations support ongoing operations in industries where 36-bit software handles complex calculations and record-keeping, ensuring compliance and efficiency in x86-dominated ecosystems. The influence of 36-bit addressing persists in modern architectures through features like PSE-36, which extends physical memory mapping to 64 GB using 4 MB pages in legacy 32-bit modes. This mechanism, introduced in Intel's P6-family processors, complemented related extensions like PAE, allowing efficient handling of larger memory spaces in a manner that echoes the extended word designs of 36-bit scientific and enterprise computing. In modern systems, such concepts underpin physical-memory management, enabling optimized addressing for workloads originally developed on 36-bit platforms. Niche revivals of 36-bit computing occur through emulation in retrocomputing communities and AI research exploring Lisp machine concepts. Projects at events like the Portland Retro Gaming Expo in 2025 demonstrate interactive 36-bit system emulations, preserving machines like the PDP-10 for educational and hobbyist use. In AI research, scholars revisit 36-bit Lisp machines, such as the Symbolics 3600, for their native support of symbolic processing, influencing modern studies on language-based runtimes and tagged architectures, with talks at the 2025 European Lisp Symposium highlighting Lisp's enduring role in prototyping. These efforts underscore 36-bit designs' contributions to foundational architectures, adapted via software on current hardware.
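The PSE-36 translation can be modeled directly from Intel's documented 4 MB page-directory-entry layout (physical address bits 31:22 in PDE bits 31:22, bits 35:32 in PDE bits 16:13); the Python sketch below is illustrative, and the sample entry values are hypothetical:

```python
def pse36_translate(pde, linear_addr):
    """Translate a 32-bit linear address through a PSE-36 4 MB page-directory entry.
    Layout per Intel's PSE-36 addendum: PDE[31:22] -> PA[31:22], PDE[16:13] -> PA[35:32]."""
    assert pde & 0x80, "PS bit (bit 7) must be set for a 4 MB page"
    base_low  = (pde >> 22) & 0x3FF       # physical address bits 31:22
    base_high = (pde >> 13) & 0xF         # physical address bits 35:32
    offset    = linear_addr & 0x3FFFFF    # 22-bit offset within the 4 MB page
    return (base_high << 32) | (base_low << 22) | offset

pde = (0x3 << 13) | (0x155 << 22) | 0x80  # hypothetical entry placing the page above 4 GB
print(hex(pse36_translate(pde, 0x00123ABC)))
print(f"max physical: {2**36 // 2**30} GB")   # 36 address bits -> 64 GB
```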

Applications Beyond Computing

Use in Field-Programmable Gate Arrays

Field-programmable gate arrays (FPGAs) have incorporated 36-bit arithmetic capabilities in their dedicated digital signal processing (DSP) blocks to support specialized applications requiring higher precision than standard 18-bit operations. These features enable efficient implementation of wide datapaths without excessive resource overhead, particularly in reconfigurable hardware where fixed-width multipliers and adders can be cascaded to form 36-bit units. The Lattice ECP3 family of FPGAs provides native support for 36-bit multipliers through cascading sysDSP slices, allowing configurations such as 36x36 multiplication for DSP-intensive tasks. This architecture is optimized for low-power applications, where the cascaded multipliers facilitate operations like filtering and transforms without spilling over into general-purpose logic resources. Similarly, the Intel Stratix series, including models such as the Stratix 10, integrates 36-bit adders and multipliers within variable-precision DSP blocks, supporting 36x36 multipliers by cascading multiple blocks or summing partial products for enhanced throughput. These blocks span multiple logic array blocks (LABs) and enable flexible precision scaling from 9-bit to 36-bit operands, accommodating high-performance designs in modern FPGA deployments. The primary advantages of 36-bit support in these FPGAs lie in their efficiency for porting legacy algorithms originally developed for 36-bit mainframes and for performing high-precision mathematics that avoids intermediate overflow in computations exceeding 18-bit ranges. By leveraging dedicated hardware blocks, designers achieve lower latency and reduced power consumption compared to emulating wider arithmetic in soft logic, making the approach suitable for resource-constrained environments. For instance, in communications signal processing, 36-bit DSP blocks are used to implement polyphase filters and adaptive equalizers, where the extended precision maintains accuracy in multi-rate processing chains. In scientific simulations, such as those emulating historical 36-bit systems like the PDP-10, FPGAs replicate original word lengths to run legacy code accurately, supporting research in computational history and validation without precision loss.
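The cascading scheme rests on ordinary partial-product decomposition: a 36x36 multiply splits each operand into 18-bit halves and combines four 18x18 products. The Python model below checks the arithmetic only; how the partial products map onto DSP slices differs per vendor:

```python
import random

MASK18 = (1 << 18) - 1

def mul36_from_18bit(a, b):
    """36x36 unsigned multiply assembled from 18x18 partial products,
    mirroring how cascaded DSP slices build a 72-bit result."""
    a_hi, a_lo = a >> 18, a & MASK18
    b_hi, b_lo = b >> 18, b & MASK18
    return ((a_hi * b_hi) << 36) + ((a_hi * b_lo + a_lo * b_hi) << 18) + a_lo * b_lo

for _ in range(1000):
    a, b = random.getrandbits(36), random.getrandbits(36)
    assert mul36_from_18bit(a, b) == a * b    # matches a full-width multiply exactly
print("all partial-product checks passed")
```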

Other Electronic Implementations

In telecommunications, particularly digital cellular systems, 36-bit data frames are employed in the Abis Transcoder Rate Adaptation Unit (A-TRAU) format for GSM networks, where each A-TRAU frame carries eight such 36-bit frames to efficiently transport voice data and control bits across the Abis interface. This structure supports synchronization with precise alignment for signaling and rate adaptation in base stations and mobile switching centers. In scientific and test instrumentation, 36-bit counters and timers are integrated into devices for high-precision timing and measurement. For instance, boards from Hewlett-Packard (later Agilent, now Keysight Technologies) such as the 10897B High Resolution Laser Axis Board utilize 36-bit position words to achieve fractional-wavelength accuracy in laser interferometry applications, enabling precise position measurement for metrology and stage control. Similarly, modern logic analyzers incorporate 36-bit timers and counters at each trigger level to qualify events with resolutions extending to nanoseconds, facilitating detailed protocol decoding and fault isolation in complex digital systems. These implementations provide extended range for capturing long-duration events without counter overflow, essential in applications like high-speed serial bus analysis. Custom application-specific integrated circuits (ASICs) occasionally employ 36-bit architectures for specialized signal processing, though such designs remain rare by 2025 due to the dominance of 32-bit and 64-bit standards. In niche audio processing, the STMicroelectronics STA309A multi-channel digital audio processor uses 24- to 36-bit precision internally at a 192 kHz sample rate to handle gain control, channel mixing, and attenuation for up to nine channels, supporting high-fidelity applications in professional audio equipment and automotive sound systems. This precision minimizes quantization noise in multi-channel configurations, such as 6-channel surround sound, where parallel 6-bit data paths per channel aggregate to 36 bits for efficient processing. Hybrid systems in radar and avionics often incorporate 36-bit interfaces to bridge legacy hardware with contemporary architectures, ensuring compatibility in high-resolution data handling. For example, front-end digital processors in some radar systems feature 512 × 36-bit program memories to support wide instruction formats for signal analysis, integrating older components with modern elements for enhanced target detection and tracking. These interfaces facilitate data transfer between 36-bit legacy modules and newer 64-bit processors, preserving precision in environments requiring sub-microsecond timing for phased-array radars.

  82. [82]
    Unisys ClearPath MCP Virtualization on Azure - Microsoft Learn
    Learn how to apply Unisys virtualization technologies to migrate a legacy Unisys ClearPath Forward Libra mainframe to Azure.
  83. [83]
    [PDF] ClearPath® MCP Roadmap and Strategy Update - Unisys
    Sep 26, 2024 · The ClearPath MCP roadmap, updated September 25, 2024, expires December 31, 2024, and is subject to change. It includes performance and ...Missing: bit legacy
  84. [84]
    [PDF] ClearPath® Forward Overview - Unisys
    Oct 6, 2025 · This presentation includes certain non-GAAP financial measures that exclude certain items such as postretirement expense; debt extinguishment,.
  85. [85]
    Unisys mainframe migration with Avanade AMT - Microsoft Learn
    This article describes how to use Avanade Automated Migration Technology (AMT) to migrate Unisys Master Control Program (MCP) source code and emulated MCP ...Missing: UNIVAC x86
  86. [86]
    Unisys Clearpath / MCP Migration - VerraDyne
    We provide migration services from Unisys Clearpath or MCP to Windows Or Unix. Programs are converted to chosen programming language Cobol, VB or C#.Missing: Fortran UNIVAC 1100 x86
  87. [87]
    Unicon Conversion Technologies Inc. Unisys Clearpath
    The converted system is no longer a Clearpath system; it has been fully converted to a true open system running in a true native open systems environment.<|separator|>
  88. [88]
    [PDF] Intel® 64 and IA-32 Architectures Software Developer's Manual
    This manual has five volumes: Basic Architecture, Instruction Set Reference AM, Instruction Set Reference NZ, System Programming Guide, Part 1, and System ...Missing: modern | Show results with:modern
  89. [89]
    Why you should forget about 4GiB of RAM on 32-bit systems and ...
    Apr 23, 2011 · To begin with I should also mention that PAE/PSE/PSE-36 are features that allow a 32-bit OS to see more than 4GiB of RAM. AWE and MMAP ...
  90. [90]
  91. [91]
    Computers at PRGE 2025
    After the demise of Seattle's Living Computer Museum, the ICM stepped up to preserve and share the history of 36-bit computing through interactive exhibits ...
  92. [92]
    Symbolics Technical Summary
    The Symbolics 3600 family is a line of 36-bit single-user computers ... bits – hence the name tagged architecture to describe 3600-family processors.
  93. [93]
    BACK TO THE FUTURE: LISP IN THE NEW AGE OF AI - YouTube
    May 21, 2025 · This is a talk given by Anurag Mendhekar at the European Lisp Symposium 2025 about Common Lisp's place in computing in the age of LLMs.
  94. [94]
    [PDF] DS1021 - LatticeECP3 Family Data Sheet - Farnell
    Apr 3, 2012 · • Multiply (36x36 by cascading across two sysDSP slices). • Multiply ... Table 2-9 shows the maximum number of multipliers for each member of the ...
  95. [95]
    [PDF] Enabling High-Performance DSP Applications with Stratix V ... - Intel
    FPGAs have traditionally supported 18-bit signal-processing datapaths. However, high-performance signal-processing designs require more than 18-bit precision.Missing: modern | Show results with:modern
  96. [96]
    [PDF] Embedded Signal Processing Capabilities in a Low Cost ECP3 FPGA
    Some other useful DSP operations that are supported with the new cascade feature are rounding, barrel shifting and creating 36x36 multipliers. Part of the ...
  97. [97]
    How can I implement multipliers larger than 36 bits in Stratix™... - Intel
    Multipliers with widths greater than 36-bits must be implemented using more than one DSP block. You must specify the width using the LPM_MULT megafunction ...
  98. [98]
    Stratix 10 NX Architecture - ACM Digital Library
    In this article, we will introduce the Stratix 10 NX device, which is a variant of FPGA specifically optimized for the AI application space.
  99. [99]
    [PDF] Signal Processing through Field Programmable Gate Arrays
    Oct 1, 2009 · FPGA designers can alternately switch between 9-bit, 18-bit or 36-bit or 18-bit complex math functions without changing the system hardware.
  100. [100]
    Putting A PDP-10 On An FPGA - Hackaday
    Jul 29, 2011 · Although PDP-10 emulators do exist, this project isn't an emulation – the system actually has the 36-bit word length of the original, ...
  101. [101]
    [PDF] Digital cellular telecommunications system (Phase 2+) - ETSI
    The format of the A-TRAU frame is given in Figure 5. An A-TRAU frame carries eight 36 bit-data frames. C Bits. Table 3. C1. C2.
  102. [102]
  103. [103]
    Logic Analyzer Features
    Each level also provides a 36-bit timer and a 36-bit counter to further qualify trigger events. The GoLogicXL also provides special hardware for serial bus ...
  104. [104]
  105. [105]
    [PDF] Radar Signal Processing - MIT Lincoln Laboratory
    The FDP featured a 512 × 36-bit program memory to support the wide instruction-word format, which was physically separate and distinct from two simul- taneously ...