Register machine

A register machine is an abstract model of computation consisting of a finite set of registers, each capable of storing an unbounded non-negative integer, and a control unit that executes a program of instructions to manipulate these registers, including incrementing a register's value, decrementing it if nonzero with conditional jumps, and unconditional jumps to other instructions, ultimately halting upon completion. Introduced by Marvin Minsky in 1961 as a simplified alternative to Turing machines for studying computability, register machines provide a direct framework for proving that all partial recursive functions are computable through register operations alone. Formally defined with instructions such as "increment register r and go to line k," "if register r is nonzero, decrement it and go to k, else go to l," and "halt," these machines demonstrate Turing completeness, meaning any computation performable by a Turing machine can be simulated by a register machine with a sufficient number of registers. Key variants include Minsky's original two-register model, which uses decrement-jumps to enable conditional behavior, and the more general formulations by Shepherdson and Sturgis in 1963, which formalized the instruction set for broader applicability in computability theory. Register machines are notable for their simplicity compared to tape-based models, facilitating proofs of undecidability and universality, such as the existence of a universal register machine that simulates any other given its description as input. In practice, they underpin analyses of computational complexity and have influenced the design of real-world processors, though their unbounded registers distinguish them from the bounded-register architectures of practical random-access computers.

Introduction

Overview

A register machine is an abstract model of computation consisting of a finite number of registers, each capable of storing arbitrarily large non-negative integers, along with a program composed of a finite sequence of instructions and a program counter that indicates the next instruction to execute. This model serves as an idealized computer for studying the limits of mechanical calculation, where computation proceeds by sequentially executing instructions that modify register contents or alter control flow. The core instructions typically include incrementing a specified register by 1 (with a specified next instruction line), decrementing a register by 1 only if its value is positive (with branches to different instructions depending on whether the value is zero or positive, preventing negative values), and halting. These operations allow the machine to perform basic manipulations of numerical data without requiring auxiliary storage beyond the registers themselves. For example, to compute the sum of two non-negative integers stored in registers R0 and R1 and place the result in R2, a simple program might proceed as follows: first, decrement R0 and branch to an increment of R2 if R0 was positive, repeating until R0 reaches zero; then, similarly decrement R1 while incrementing R2 until R1 is zero; finally, halt. This loop-based approach effectively adds the values by successive increments. Register machines demonstrate how basic arithmetic operations, such as addition and multiplication, can be simulated using only bounded control and unbounded register capacities, contrasting with tape-based models that rely on infinite linear memory. They are computationally equivalent to Turing machines, capable of computing any effectively computable function.
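
The addition loop just described can be sketched directly in ordinary code. The following Python fragment is only an illustrative translation (the function name and the use of Python variables for registers are assumptions for exposition, not part of the model): each loop iteration corresponds to one guarded decrement of a source register followed by one increment of R2.

def add_by_register_loops(r0: int, r1: int) -> int:
    """Compute r0 + r1 using only the unit increments and guarded decrements
    available to a register machine."""
    r2 = 0
    while r0 > 0:    # decrement R0 if positive, then branch to the increment of R2
        r0 -= 1
        r2 += 1      # increment R2
    while r1 > 0:    # drain R1 into R2 the same way
        r1 -= 1
        r2 += 1
    return r2        # halt: the sum is left in R2

assert add_by_register_loops(3, 4) == 7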

Basic Components and Operations

A register machine consists of a finite number of registers, denoted as R_0, R_1, \dots, R_n, each capable of storing a non-negative integer (natural numbers including zero). These registers serve as the primary storage mechanism for data during computation. Additionally, the machine includes a program counter (PC), which points to the index of the next instruction to be executed, and a fixed list of instructions stored in program memory, arranged linearly and numbered sequentially starting from 0 or 1. Execution begins with the PC set to the first instruction and all registers initialized to zero unless otherwise specified, proceeding by fetching and performing the instruction at the current PC location. The instruction set comprises a small number of operations designed to manipulate register contents and the flow of execution. The increment instruction, denoted I(r, k), increases the value in register r by 1 and then sets the PC to line k. This operation provides the means to build up values in registers through repeated application and allows for unconditional jumps by choosing an appropriate k. The decrement instruction, D(r, p, q), tests and conditionally modifies register r: if the value in r is greater than zero, it decrements r by 1 and sets the PC to line p; otherwise, if r is zero, it leaves r unchanged and sets the PC to line q. This allows for conditional branching based on whether a register is empty, enabling the machine to implement loops and decision structures. Unconditional jumps can be achieved using the increment instruction on an auxiliary register if necessary, so that no register holding a desired value is altered. To terminate computation, the halt instruction H stops execution entirely, leaving the final register values as the output. Programs for register machines are conventionally represented as a linear sequence of these instructions, with each line assigned a unique positive integer label corresponding to its position. Jumps and branches reference these labels to redirect the PC, ensuring that the machine's behavior is fully determined by the initial register configuration and the instruction list.
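
A small interpreter makes the execution model above concrete. The sketch below, in Python, uses an illustrative program encoding (a dictionary from line numbers to tuples) that is an assumption for exposition rather than a standard notation; it implements exactly the I(r, k), D(r, p, q), and H semantics described in this section.

def run(program, registers):
    """Execute a register machine program.
    program: {line: ("I", r, k) | ("D", r, p, q) | ("H",)}
    registers: {index: non-negative value}; missing registers are treated as 0."""
    pc = 1  # the program counter starts at the first instruction
    while True:
        op = program[pc]
        if op[0] == "I":                     # I(r, k): increment r, jump to k
            _, r, k = op
            registers[r] = registers.get(r, 0) + 1
            pc = k
        elif op[0] == "D":                   # D(r, p, q): decrement-and-branch
            _, r, p, q = op
            if registers.get(r, 0) > 0:
                registers[r] -= 1
                pc = p
            else:
                pc = q
        else:                                # H: halt, final registers are the output
            return registers

# The two-register addition from the Overview: R2 := R0 + R1.
addition = {1: ("D", 0, 2, 3), 2: ("I", 2, 1),
            3: ("D", 1, 4, 5), 4: ("I", 2, 3), 5: ("H",)}
print(run(addition, {0: 3, 1: 4, 2: 0}))     # {0: 0, 1: 0, 2: 7}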

Formal Definitions

Minsky Register Machine

The Minsky register machine is a computational model featuring a finite collection of registers R_1, R_2, \dots, R_k, each storing a non-negative integer that can grow without bound. The program consists of a finite, sequentially numbered list of instructions drawn from the set \{I(n, j), D(n, t, s), H\}, where n denotes a register index (1 \leq n \leq k) and t, s, j are instruction line numbers. Execution commences at instruction 1 with given initial register values and proceeds via the specified jumps, halting upon encountering H. This model, introduced by Marvin Minsky in 1961, provides a minimal instruction set capable of universal computation while facilitating proofs of equivalence to other models like the Turing machine. It is contemporaneous with Joachim Lambek's 1961 abacus machine, both refining earlier counter-based models into direct register paradigms. The semantics of the instructions are defined as follows. The instruction I(n, j) replaces the value in R_n with R_n + 1 and jumps to j. The instruction D(n, t, s) checks the value in R_n: if R_n = 0, it jumps to t; otherwise, it sets R_n := R_n - 1 and jumps to s. The instruction H terminates execution, returning the final register values as output. These operations ensure deterministic behavior while supporting conditional control based on register contents. Unconditional jumps can be simulated using auxiliary registers or specific configurations of the conditional instructions. Turing completeness of the Minsky register machine is established by constructing a simulator for an arbitrary Turing machine within the model using a fixed number of registers. The Turing machine's tape, state, and head position are encoded using numerical pairing functions (e.g., Cantor pairing) to combine multiple values into single registers: one register holds the encoded tape contents (position-symbol pairs via Gödel-like numbering for the finite non-blank portion), another tracks the current state and head position as a paired integer, and auxiliaries manage computation steps. To perform a computation step, the machine decodes the relevant portions via loops of increments and conditional decrements to extract symbols and positions, applies the transition rule (updating the encoding), and re-encodes the configuration, extending the tape encoding as needed by adjusting the numerical representation without requiring additional registers. Movement left or right updates the head position encoding accordingly. This simulation faithfully replicates any Turing machine computation, demonstrating equivalence in expressive power. A simple example program illustrates zero-testing for register R_1 (lines numbered starting from 1; execution reaches line 4 in the zero case and line 3 in the non-zero case): 1: D(1, 4, 2) # If R1 = 0, jump to 4; else decrement R1 and go to 2
2: I(1, 3) # Increment R1 back to restore its value, then go to 3
3: H # Non-zero case: halt (a larger program would continue from here)
4: H # Zero case: halt
This fragment tests R1 non-destructively: if R1 is zero it halts at line 4, and otherwise it restores R1's value and halts at line 3, so control reaches a different line depending on the outcome. For simulating indirect addressing (accessing a register indexed by the value in R_m), a subroutine employs counting and copying: loops with increments on an auxiliary register and conditional decrements on a copy of R_m match the index, and conditional jumps then identify and access the target register via sequences of I and D operations. This requires auxiliary registers for temporaries and demonstrates how the minimal instructions enable pointer-like access through bounded loops and tests.
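
The pairing functions invoked in the Turing-machine simulation above can be illustrated concretely. The Python sketch below (function names are assumptions for exposition) shows the Cantor pairing function and its inverse, the kind of encode/decode step a Minsky machine carries out with loops of increments and conditional decrements when packing two quantities, such as state and head position, into one register.

from math import isqrt

def pair(x: int, y: int) -> int:
    """Cantor pairing: a bijection from pairs of naturals to naturals."""
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z: int) -> tuple:
    """Invert the Cantor pairing function."""
    w = (isqrt(8 * z + 1) - 1) // 2      # largest w with w(w+1)/2 <= z
    t = w * (w + 1) // 2
    y = z - t
    return w - y, y

assert unpair(pair(7, 11)) == (7, 11)    # the round trip recovers both components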

Shepherdson-Sturgis Register Machine

The Shepherdson-Sturgis machine is a register machine model defined with a bounded set of instruction types, emphasizing deterministic execution and unbounded storage to compute partial recursive functions. Introduced in 1963, it uses a fixed, small repertoire of instruction types, independent of the input, demonstrating that such machines can simulate arbitrary Turing-complete computations without indirect addressing or unbounded instruction sets. Formally, the machine operates on an infinite sequence of registers R1, R2, ..., each holding a non-negative integer. A program is a finite list of numbered instructions (1 to m), drawn from four types:
  • INC(i): Increment the contents of register i by 1, then proceed to the next instruction.
  • DEC(i, j): Decrement the contents of register i by 1 (assuming it is positive); if it becomes zero after decrement, jump to instruction j, otherwise proceed to the next instruction.
  • ZERO(i, j): If the contents of register i is zero, jump to instruction j; otherwise, proceed to the next instruction.
  • JUMP(j): Unconditionally jump to instruction j.
Execution begins at instruction 1 with initial values in specified input registers and proceeds sequentially or via jumps until an attempt to execute a nonexistent instruction (e.g., beyond m or an invalid j), at which point the machine halts, with the output in a designated register. The model ensures deterministic flow through bounded branching, with registers providing unlimited storage capacity. A key property of this model is its ability to compute every partial recursive function using a fixed number of instruction types, here four, which suffice for Turing completeness by providing successor, predecessor, zero-testing, and unconditional branching. This small, fixed instruction set highlights the model's economy in proving the equivalence of register machines to other universal systems. For illustration, consider a program fragment to subtract the value in register 2 (y) from register 1 (x), assuming x ≥ y ≥ 0 and halting with the result in register 1 (instructions numbered starting from 1):
1: ZERO(2, 6)     // If y = 0, halt (result is x)
2: DEC(1, 6)      // Decrement x (assume x > 0); if now 0, jump 6
3: DEC(2, 1)      // Decrement y; if now 0, jump 1, else next
4: JUMP(1)        // Unconditional loop
5: (unused)
6: (halt)         // No instruction exists here, so execution stops; output in register 1
This loop effectively decrements x exactly y times by alternating decrements, halting when y reaches zero, or when x reaches zero first (which, under the assumption x ≥ y, only happens once the subtraction is already complete). The explicit ZERO test prevents over-decrementing, and DEC handles the conditional jump on zero after the decrement. Compared to Minsky's register machine, the Shepherdson-Sturgis variant is similar in structure but incorporates an explicit zero-test instruction, so conditional control does not have to be bundled into a single decrement-and-branch operation.
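
The fragment above can be checked with a small interpreter for the four instruction types. The Python sketch below uses an illustrative encoding (dictionaries and tuple opcodes are assumptions, not Shepherdson and Sturgis's notation) and treats a jump to a nonexistent instruction number as the halt condition, as described earlier.

def run_ss(program, regs):
    """Run a Shepherdson-Sturgis-style program until the program counter
    leaves the instruction list (the halting condition)."""
    pc = 1
    while pc in program:
        op = program[pc]
        if op[0] == "ZERO":                  # ZERO(i, j): jump to j if register i is 0
            _, i, j = op
            pc = j if regs.get(i, 0) == 0 else pc + 1
        elif op[0] == "DEC":                 # DEC(i, j): decrement; jump to j if now 0
            _, i, j = op
            regs[i] = regs.get(i, 0) - 1     # assumed positive, as in the text
            pc = j if regs[i] == 0 else pc + 1
        elif op[0] == "INC":                 # INC(i): increment, fall through
            regs[op[1]] = regs.get(op[1], 0) + 1
            pc += 1
        else:                                # ("JUMP", j): unconditional jump
            pc = op[1]
    return regs

subtract = {1: ("ZERO", 2, 6), 2: ("DEC", 1, 6), 3: ("DEC", 2, 1), 4: ("JUMP", 1)}
print(run_ss(subtract, {1: 5, 2: 3}))        # {1: 2, 2: 0}: 5 - 3 = 2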

Historical Development

Early Precursors (1950s)

In the early 1950s, Hao Wang developed formal systems that reduced problems in predicate calculus to arithmetic operations on natural numbers, establishing early conceptual foundations for register-based computation. His work emphasized the mechanization of logical inference through numerical manipulations, providing a bridge between logical formalisms and computational models. This approach highlighted the potential for computation without reliance on symbolic tapes, focusing instead on discrete operations akin to those in registers. An important precursor was Emil Post's tag systems from the 1940s, which perform computation using production rules to iteratively modify a string held in a single queue-like structure, simulating Turing machines without a tape. Wang's 1957 introduction of the B-machine marked a significant precursor to register machines, presenting an extremely simple, non-erasing Turing-equivalent model with a drastically reduced instruction set. The B-machine employs a finite program of instructions for moving along its storage, marking a square with a symbol (such as printing a 1), conditional transfer on the marked square, and halting, effectively achieving general computation from these few primitives. This design drew from tag system variants, where productions iteratively modify the string until a halting condition is met. Influenced by Wang's numerical focus and Post's string-based models, subsequent developments explored tape-less alternatives using production rules on strings, but these early models exhibited limitations in fully abstracting multi-register operations and unconditional generality, which required extensions in subsequent decades to achieve the mature register machine formulations.

Key Formulations (1960s)

In the early 1960s, Z. A. Melzak introduced a foundational model of computation using an "abacus machine" consisting of an infinite collection of "holes," each capable of holding a non-negative number of "pebbles," where the number of pebbles in a hole directly represents the value stored there. The machine's instructions operate by moving entire clumps of pebbles between holes to perform basic arithmetic: addition is achieved by transferring pebbles from one hole to another, thereby increasing the destination's count, while subtraction involves removing a clump from a hole, provided it contains at least that many pebbles. This pebble-based representation allows for intuitive simulation of numerical computations without relying on a linear tape structure. Building directly on Melzak's framework, Joachim Lambek refined the model in 1961 by atomizing the pebbles into indivisible units, treating each operation as a single increment or decrement of a counter held in one of countably many registers (analogous to the holes). Lambek's instructions simplify operations to increment (INC), which adds one unit to a specified register, and decrement (DEC) with a zero-test, which subtracts one unit from a register only if it is non-zero, otherwise branching based on the test result. These atomic steps, combined with conditional jumps, enable programming of partial recursive functions while using only a finite number of registers per program, paving the way for more streamlined models. Marvin Minsky first formalized and minimized the register machine in 1961, defining a model with a finite set of registers holding non-negative integers and a program of labeled instructions executed sequentially, with further details in his 1967 book. The core instructions are minimalist: increment a specified register (INC r), and a combined test "if register r is zero jump to instruction z, otherwise decrement r" (JZDEC r, z), along with unconditional jumps to other instructions. Minsky demonstrated Turing completeness by showing that two registers suffice to simulate any Turing machine, using counters to encode and manipulate tape positions and symbols through successive increments and decrements. A key innovation across these 1960s formulations was conceptualizing multiple registers as "cutting the tape" of earlier linear models, granting random access to data and overcoming sequential limitations for efficient arithmetic and control. Contemporaneously, J. C. Shepherdson and H. E. Sturgis developed a similar register machine in 1963, emphasizing instructions for zeroing, incrementing, and transferring values between registers to compute recursive functions.

Refinements and Variants (1970s Onward)

In the mid-1960s, Calvin C. Elgot and Abraham Robinson introduced the random-access stored-program (RASP) machine as a formal model bridging programming languages and computational theory, featuring a finite-state controller that executes instructions stored in an infinite array. The basic RASP operates without direct support for indirect addressing, restricting jumps and memory accesses to fixed locations or immediate values, which limits its efficiency for tasks requiring dynamic memory access. This limitation can be circumvented through techniques such as jump tables, where a table of addresses is precomputed and indexed to simulate indirect jumps, or self-modifying code that alters instructions during execution. Building on these foundations in the early 1970s, Juris Hartmanis explored hierarchies within register machine models, demonstrating how variations in register operations and addressing capabilities create distinct complexity classes based on time measures. In his analysis of random-access stored-program machines, Hartmanis established bounds in terms of basic register instructions, showing that stronger models with enhanced addressing yield stricter separations between solvable problems of varying difficulties. This work emphasized the role of register operations as a measure of computational resources, influencing subsequent classifications of feasible computations. A significant advancement came in 1973 with Stephen A. Cook and Robert A. Reckhow's formulation of the random-access machine (RAM), which explicitly incorporates indirect addressing by allowing load and store operations via register-held indices into the memory array. Unlike prior models, the Cook-Reckhow RAM assigns a logarithmic cost to operations based on operand size (specifically, the time for an addition or subtraction is proportional to the logarithm of the larger operand), while other instructions like indirect loads and conditional jumps incur unit cost. This cost model balances realism with analyzability, enabling precise simulations of Turing machines and hierarchies for time-bounded computations. These refinements profoundly shaped complexity theory from the 1970s onward, with the RAM serving as a unit-cost abstraction in discussions of polynomial-time solvability and the P versus NP question, facilitating proofs of hierarchy theorems and separations. The models' emphasis on register-based addressing and operation costs provided a practical alternative to Turing machines for analyzing efficiency in algorithm design.

Computational Equivalence

Relation to Turing Machines

Register machines are computationally equivalent to Turing machines, meaning that any function computable by one model is computable by the other. This equivalence establishes that both models capture the full class of partial recursive functions, providing a foundational result in computability theory. A Turing machine can simulate any register machine by encoding the contents of the registers as symbols on its tape. For a register machine with instructions such as increment, decrement, and conditional jump based on zero, the Turing machine uses a single tape divided into segments, each representing a register in unary notation (e.g., a sequence of marks whose length indicates the register's value), separated by special delimiters. The finite control of the Turing machine then mimics the register machine's program counter and executes each instruction by moving the head to the appropriate segment, performing operations like adding or removing a mark for increment/decrement, and branching based on whether a segment is empty. This simulation preserves the halting behavior, with the Turing machine halting if and only if the register machine does. Conversely, a register machine can simulate any Turing machine, even with as few as two registers in the case of counter machines. In the general case with a handful of unbounded registers, the simulation stores the tape contents in two registers: one encoding the squares to the left of the head and one encoding the scanned square together with the squares to its right, each as a number written in a base equal to the size of the tape alphabet. Additional registers maintain the current state of the Turing machine, the scanned symbol, and temporary values for transitions. To execute a step, the register machine extracts the scanned symbol by arithmetic on these encodings (division and remainder realized as loops of conditional decrements), applies the transition function, and shifts a digit between the two tape registers to reflect head movement. For the two-register variant, as shown by Minsky, the counters encode the configuration through a clever representation that simulates arbitrary numbers of registers or stacks, enabling full simulation via state transitions and counter manipulations. Formally, a partial function is computable by a register machine if and only if it is computable by a Turing machine. This theorem follows from bi-directional simulations that map configurations between the models: a register machine configuration (register values and program state) encodes uniquely into a Turing machine tape configuration, and vice versa, preserving computability and halting. The mappings ensure that the simulated computation proceeds identically, up to the simulation overhead. These simulations incur resource overheads, typically resulting in a significant slowdown in time for the simulating model relative to the simulated one, due to the need to scan or decode the encoded representations (e.g., traversing tape segments whose lengths are proportional to the register values, or looping over counters that encode the tape contents). Space overhead is linear in the maximum register values or tape length used.
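
The two-register tape encoding described above can be made concrete with a short sketch. In the Python fragment below (an illustration under assumed conventions, with B as the tape alphabet size and the scanned square stored as the least significant digit of the right-hand number), a head movement is just a digit shifted between the two encodings, exactly the arithmetic a register machine performs with loops of increments and conditional decrements.

B = 4  # assumed alphabet size: symbols 0..3, with 0 as the blank

def move_right(left: int, right: int):
    """Push the scanned symbol (lowest digit of `right`) onto `left`."""
    return left * B + right % B, right // B

def move_left(left: int, right: int):
    """Pop the nearest left-hand symbol and make it the scanned square."""
    return left // B, right * B + left % B

def write(right: int, symbol: int) -> int:
    """Overwrite the scanned square."""
    return (right // B) * B + symbol

# Tape ... 0 [1] 2 3 0 ... with the head on the 1:
left, right = 0, 1 + 2 * B + 3 * B * B
right = write(right, 2)                 # write a 2 under the head
left, right = move_right(left, right)   # move the head one square to the right
print(right % B)                        # the scanned symbol is now 2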

Precedence Over Other Models

Register machines demonstrate significant expressive power by subsuming or equating to several foundational models of computation, particularly those operating on counters or symbolic strings. Counter machines, which perform increment, decrement, and zero-testing operations on registers, are directly equivalent to register machines, as the latter incorporate these primitives along with unconditional jumps to achieve full generality. Abacus machines, defined with an infinite array of registers supporting addition, subtraction, and conditional branching on zero, likewise match the computational capabilities of register machines, serving as an alternative formulation that emphasizes arithmetic operations over explicit decrement. This equivalence arises because both models can simulate each other's instructions through straightforward translations of their instruction sets. A key aspect of their precedence is the ability to compute all partial recursive functions, extending beyond primitive recursive functions by incorporating the μ-operator for unbounded minimization. The μ-operator allows register machines to search for the smallest input satisfying a predicate, enabling the computation of functions like the Ackermann function that outstrip primitive recursion. Seminal work established that register machines with a finite set of instructions (such as increment, decrement by one, zero-test, and jumps) can realize this full class, providing a direct arithmetic basis for general computability. Register machines also precede string-manipulating models like Post normal systems and tag systems through simulations that encode symbolic strings numerically. In these simulations, strings are represented via Gödel numbering, where symbols map to primes and productions are executed by arithmetic operations on register-held numbers, effectively using counters to mimic rewriting rules. For tag systems, which append and delete fixed-length blocks from a queue-like string, register machines replicate the process by maintaining the queue's length and content in registers, applying rules via conditional branches and decrements. This numerical encoding preserves the halting behavior and undecidability inherent in these systems. In the broader context of the Church-Turing thesis, register machines align equivalently with the lambda calculus and the theory of recursive functions, all capturing the intuitive notion of effective calculability. While the lambda calculus offers a functional, applicative paradigm for defining computations via abstraction and application, register machines provide an imperative perspective akin to stored-program computers, manipulating state through sequential instructions. This equivalence underscores the thesis by demonstrating that diverse formalisms (arithmetic, symbolic, and applicative) yield the same class of computable functions. Despite their generality, register machines do not exceed the power of Turing machines; they are strictly equivalent, inheriting the same limitations such as undecidability of the halting problem. Any problem undecidable for Turing machines, like determining whether a given program halts, remains undecidable under register machine simulations, preserving the theoretical boundaries of computability.
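
The prime-based encoding of strings mentioned above can be illustrated with a short sketch. The Python fragment below shows one common variant (positions map to successive primes and symbols to exponents; the fixed prime list, suitable only for short strings, is an assumption for exposition): it maps a symbol sequence to a single natural number and back, the kind of value a register machine holds in one register while arithmetic loops mimic the rewriting rules.

PRIMES = (2, 3, 5, 7, 11, 13)             # enough for strings of length <= 6

def encode(symbols):
    """Goedel-style encoding: position i contributes PRIMES[i] ** (symbol + 1)."""
    n = 1
    for i, s in enumerate(symbols):
        n *= PRIMES[i] ** (s + 1)
    return n

def decode(n):
    """Recover the symbol sequence by repeated division, one prime per position."""
    out = []
    for p in PRIMES:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        if e == 0:
            break                          # exponent 0 marks the end of the string
        out.append(e - 1)
    return out

assert decode(encode([2, 0, 1])) == [2, 0, 1]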

Variants and Applications

Random Access Machine (RAM)

The random-access machine (RAM) is an abstract model that extends register machines by incorporating random access to an unbounded number of registers, treating them as addressable memory locations. It consists of a finite program executed sequentially on an infinite sequence of registers X_0, X_1, \dots, each holding an arbitrary integer (positive, negative, or zero). This model was introduced by Cook and Reckhow to analyze time-bounded computations, providing a framework closer to real computers than Turing machines while remaining theoretically tractable. The RAM's instruction set supports basic arithmetic, indirect addressing, and control flow. Key instructions include loading a constant (X_i \leftarrow c), addition and subtraction (X_i \leftarrow X_j + X_k, X_i \leftarrow X_j - X_k), indirect load (X_i \leftarrow X_{X_j}) and store (X_{X_i} \leftarrow X_j), conditional transfer to line m if X_j > 0 (TRA m if X_j > 0), and input/output (READ X_i, PRINT X_i). Indirect addressing allows the value in one register to specify the address of another, enabling efficient access to non-contiguous locations without sequential scanning. Each instruction's execution time is bounded by a length function l(n), typically defined as l(n) = 1 or l(n) = \lfloor \log_2 |n| \rfloor + 1 for |n| \geq 1, to account for the cost of handling numbers of varying sizes. Variants of the RAM differ primarily in their cost models, balancing simplicity and realism. The unit-cost RAM assumes constant time (l(n) = 1) for all operations, idealizing arithmetic on words of fixed width and simplifying analyses where number sizes remain bounded. In contrast, the logarithmic-cost RAM charges time proportional to the bit length of operands (l(n) = \lfloor \log_2 |n| \rfloor + 1), offering a more accurate accounting of multi-precision arithmetic on large integers, though it complicates proofs by introducing logarithmic factors. Both variants are Turing complete: a Turing machine simulates a logarithmic-cost RAM running in time T(n) in O(T(n)^2) time and a unit-cost RAM in O(T(n)^3) time; conversely, RAMs simulate Turing machines in O(T(n) \cdot l(T(n))) time. This equivalence makes the RAM a preferred model in algorithm analysis for efficient simulations of circuits or other machines. A representative example of the RAM's power is a program fragment for random array access using indirect addressing. Suppose an array A is stored in registers starting at address 100, with the index i held in X_1 and the result to be loaded into X_0. The code computes the target address X_2 \leftarrow X_1 + 100, then performs the indirect load X_0 \leftarrow X_{X_2}. This sequence accesses the desired array element in a constant number of steps under the unit-cost model, demonstrating how indirect addressing facilitates array operations without linear-time searches, a core advantage over simpler register machines.
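
The indexed-access fragment above corresponds to only a couple of RAM instructions. The Python sketch below (the register-file dictionary, the array base address 100, and the treatment of l(0) as 1 are assumptions for illustration) mirrors the computation X_2 ← X_1 + 100 followed by the indirect load X_0 ← X_{X_2}, and includes the logarithmic length function l(n) from the cost model.

def l(n: int) -> int:
    """Length function: assumed l(0) = 1, otherwise floor(log2 |n|) + 1."""
    return 1 if n == 0 else abs(n).bit_length()

X = {}                                   # the unbounded register file X_0, X_1, ...
for i, value in enumerate([10, 20, 30, 40]):
    X[100 + i] = value                   # array A stored from address 100 onward

X[1] = 2                                 # X_1 holds the index i
X[2] = X[1] + 100                        # X_2 <- X_1 + 100 (compute the address)
X[0] = X[X[2]]                           # X_0 <- X_{X_2}   (indirect load)
print(X[0])                              # 30, i.e. A[2], in a constant number of steps
print(l(X[2]))                           # 7: the logarithmic cost charged for handling X_2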

Modern Extensions and Uses

In the realm of concurrency, parallel register machines extend the classical model by incorporating multiple processors that operate on shared or distributed registers, enabling the modeling of concurrent computations whose ordering constraints are captured through event structures. These models formalize non-interleaving concurrency, where events are partially ordered, and have been mechanized in proof assistants to verify properties like safety and liveness. For instance, a parallel register machine formalized in this way uses event structures to represent execution traces, allowing for the extraction of configurations that correspond to specific runs. Quantum variants of register machines introduce qubit registers to simulate quantum operations, supporting recursive programs through instruction-level control flows that manipulate superpositions and entanglements. A quantum register machine architecture, defined with an instruction set supporting quantum control flow, enables efficient compilation and execution of quantum recursive programs by maintaining reversibility in quantum states. This model provides a purely quantum architecture for evaluating recursive algorithms, bridging classical register operations with quantum parallelism. Register machines find applications in proof assistants, where formalizations in systems like Coq verify undecidability results and computational properties. For example, Minsky register machines have been mechanized in Coq to prove the undecidability of Hilbert's tenth problem, encoding machine computations as Diophantine equations and establishing their equivalence to Turing machines within constructive type theory. In program semantics, register machines model the operational behavior of imperative languages, capturing state transitions via register updates and jumps to define denotational mappings for mutable state and control flow. In computational complexity theory, the random-access machine (RAM) variant of register machines underpins definitions of space-bounded classes like LOGSPACE, where the simulated workspace is restricted to O(log n) bits of register content within polynomial time. However, critiques highlight limitations of the unit-cost assumption, which treats arithmetic on O(log n)-bit words as constant time, potentially inflating efficiency claims; logarithmic-cost models adjust for bit operations to better align with real hardware constraints. Post-2000 developments include reversible register machines, which ensure every computation step is invertible to minimize energy dissipation, simulated efficiently by membrane computing systems with symport/antiport rules. These models support forward and backward execution, applicable to low-power devices. Additionally, register machines inform register-based virtual machines, such as the one used by the Lua interpreter, where register-based bytecode optimizes imperative code execution by reducing stack operations and improving instruction density.
