
Logic optimization

Logic optimization is the step in the very-large-scale integration (VLSI) design cycle where modifications are performed on a gate-level representation of a digital circuit to satisfy constraints such as area, power consumption, and delay. This process aims to transform an initial logic description—often derived from high-level behavioral specifications like hardware description languages (HDLs)—into an efficient implementation that minimizes resource usage while preserving functionality. It plays a pivotal role in electronic design automation (EDA), enabling the design of complex integrated circuits by automating the refinement of logic networks.

The origins of logic optimization trace back to the mid-20th century, with foundational work in the 1950s on two-level minimization using methods like the Quine-McCluskey algorithm, which systematically reduces sum-of-products (SOP) expressions to minimize the number of literals and terms. By the 1970s, as programmable logic arrays (PLAs) emerged, optimization techniques focused on covering prime implicants for efficient PLA programming using heuristic methods. The 1980s marked a breakthrough with multi-level logic synthesis, driven by academic and industrial efforts at institutions like UC Berkeley, where the MIS (Multi-level Interactive Synthesis) system introduced algebraic and Boolean transformations for arbitrary networks, significantly advancing scalability for VLSI designs.

Key techniques in logic optimization encompass both technology-independent and technology-dependent phases. Technology-independent optimization includes two-level methods such as the Espresso heuristic for minimization of incompletely specified functions, and multi-level approaches such as kernel extraction, common subexpression sharing, and algebraic factoring to reduce logic depth and size. Technology-dependent steps involve technology mapping, which matches optimized logic to a specific cell library using algorithms like tree covering or DAG matching to balance delay and area. Advanced methods incorporate automatic test pattern generation (ATPG) for don't-care exploitation and design rewiring to further enhance performance under multi-objective constraints. These techniques have evolved to address modern challenges, including low-power design, ensuring logic optimization remains essential for contemporary digital systems.

Introduction

Definition and Goals

Logic optimization is the process of finding an equivalent representation of a specified logic circuit by modifying a gate-level netlist, derived from high-level behavioral or register-transfer level (RTL) descriptions via initial synthesis, so that it meets one or more constraints while improving efficiency with respect to cost functions such as area, delay, or power consumption. This transformation preserves the functional behavior of the circuit, ensuring identical input-output mappings, and is a core step in electronic design automation (EDA) flows for digital integrated circuits.

The primary goals of logic optimization are to minimize chip area through reduced gate count and wiring length, decrease propagation delay for faster operation, and lower power usage to extend battery life in portable devices and reduce thermal issues. These objectives collectively contribute to lower costs and higher reliability in very-large-scale integration (VLSI) designs. Optimization algorithms balance these often conflicting aims, such as trading area for speed, under given constraints to achieve overall quality of results (QoR). In EDA tools, common constraints include strict timing budgets to meet clock frequencies, area limits to fit within die sizes, and power thresholds to comply with system specifications. These are enforced during synthesis to ensure the optimized netlist is manufacturable and performant.

Key metrics for assessing optimization effectiveness include the number of literals in expressions, which serves as a proxy for circuit complexity and gate area; the total count of gates, directly correlating to area and power; the number of logic levels along critical paths, which determines maximum delay; and fan-out, the number of loads driven by a gate, influencing signal speed, loading, and dynamic power due to increased wiring. Lower values in these metrics generally indicate better optimization, though trade-offs are evaluated holistically.
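
Since these metrics drive most optimization decisions, it helps to see how they can be computed. The following sketch assumes a toy netlist encoding (a Python dictionary mapping gate names to their type and fanins, not the format of any particular EDA tool) and counts literals in a sum-of-products expression, logic levels along the longest path, and maximum fan-out.

```python
# Toy quality-of-results metrics for a small combinational design.
# The netlist encoding (gate name -> (gate type, fanin list)) is illustrative only.

def literal_count(sop_terms):
    """Count literals in a sum-of-products given as a list of product terms,
    where each term is a list of literals such as 'a' or "a'"."""
    return sum(len(term) for term in sop_terms)

def logic_levels(netlist, primary_inputs):
    """Longest path, in gates, from any primary input to any gate output."""
    depth = {pi: 0 for pi in primary_inputs}

    def level(node):
        if node not in depth:
            _, fanins = netlist[node]
            depth[node] = 1 + max(level(f) for f in fanins)
        return depth[node]

    return max(level(g) for g in netlist)

def max_fanout(netlist, primary_inputs):
    """Largest number of gate inputs driven by any single signal."""
    loads = {s: 0 for s in list(netlist) + list(primary_inputs)}
    for _, fanins in netlist.values():
        for f in fanins:
            loads[f] += 1
    return max(loads.values())

if __name__ == "__main__":
    # f = a'b + ab', realized as two AND gates feeding an OR gate
    sop = [["a'", "b"], ["a", "b'"]]
    netlist = {
        "n1": ("AND", ["a", "b"]),   # inversions folded into the literals for brevity
        "n2": ("AND", ["a", "b"]),
        "f":  ("OR",  ["n1", "n2"]),
    }
    print(literal_count(sop))                 # 4 literals
    print(logic_levels(netlist, ["a", "b"]))  # 2 levels of logic
    print(max_fanout(netlist, ["a", "b"]))    # 2: each input drives both AND gates
```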

Historical Development

The foundations of logic optimization trace back to the mid-19th century with George Boole's development of Boolean algebra, formalized in his 1854 work An Investigation of the Laws of Thought, which provided a mathematical framework for expressing logical operations using binary variables and operations like AND, OR, and NOT. This abstract system laid the groundwork for representing and simplifying logical expressions, though its direct application to circuit design emerged later. In 1938, Claude Shannon extended Boolean algebra to practical engineering in his master's thesis A Symbolic Analysis of Relay and Switching Circuits, demonstrating how Boolean functions could model the behavior of electrical switching circuits, thereby bridging theoretical logic to the design of relay-based systems.

The 1950s marked the emergence of systematic methods for minimizing Boolean functions to reduce circuit complexity. Willard V. Quine introduced a tabular method for identifying prime implicants in 1952, published as "The Problem of Simplifying Truth Functions," which formed the basis for exact minimization by systematically covering minterms with irredundant product terms. This was extended by Edward J. McCluskey in 1956 with "Minimization of Boolean Functions," an algorithmic procedure that automated the process using tabular comparisons to group compatible minterms, enabling computer implementation for larger functions. Concurrently, Maurice Karnaugh developed the map method in 1953, described in "The Map Method for Synthesis of Combinational Logic Circuits," offering a graphical representation of truth tables arranged in a grid to facilitate visual identification of adjacent minterms for simplification.

By the 1980s, the rise of very-large-scale integration (VLSI) demanded more scalable approaches, leading to heuristic methods that traded exactness for efficiency on complex circuits. Robert K. Brayton and colleagues introduced the Espresso heuristic in their 1984 book Logic Minimization Algorithms for VLSI Synthesis, building on earlier 1982 work, which iteratively expanded, covered, and reduced implicants to achieve near-optimal two-level realizations for programmable logic arrays (PLAs). These techniques were integrated into electronic design automation (EDA) flows during the 1980s and 1990s, as VLSI complexity outpaced exact methods, with commercial tools incorporating heuristic minimization to automate logic synthesis in ASIC design. Theoretical insights into the inherent challenges of two-level minimization were solidified in the mid-2000s: in 2006, Christopher Umans, Tiziano Villa, and Robert K. Brayton proved that finding a minimum-cardinality disjunctive normal form (DNF) equivalent to a given DNF formula is Σ₂ᵖ-complete, highlighting the problem's position at the second level of the polynomial hierarchy and justifying the reliance on heuristics for practical instances.

The 1980s also saw a pivotal shift from two-level to multi-level optimization due to the limitations of flat representations in handling hierarchical VLSI designs. At the University of California, Berkeley, the MIS (Multi-level Interactive Synthesis) system, detailed in the 1987 IEEE paper "MIS: A Multiple-Level Logic Optimization System" by Robert K. Brayton et al., introduced techniques like algebraic factorization and don't-care exploitation to restructure Boolean networks, reducing gate count and delay while preserving functionality. This transition enabled EDA tools to manage increasingly complex circuits, setting the stage for modern synthesis flows.

Core Concepts

Boolean Algebra and Logic Functions

Boolean algebra constitutes the mathematical framework underlying logic optimization, enabling the representation and manipulation of logic functions in digital circuit design. Introduced by George Boole in 1847 as a calculus for logical reasoning, it operates on variables that assume only two values: 0 (false or absent) and 1 (true or present), symbolizing states in switching circuits. Claude Shannon extended this algebra to circuit design in 1938, demonstrating its equivalence to relay and switching circuit analysis, where variables correspond to open (0) or closed (1) contacts. Within this algebra, a literal is defined as a variable or its complement (negation, denoted A' or ¬A), serving as the basic building block for expressions. A minterm is a specific product of literals—one for each variable in the function—such that it evaluates to 1 for exactly one input combination and 0 otherwise; for n variables, there are 2^n possible minterms. Dually, a maxterm is a sum of literals that evaluates to 0 for exactly one input combination and 1 otherwise.

Key laws include the distributive, absorption, and De Morgan laws, which facilitate algebraic simplification. The distributive law states that AND distributes over OR (and vice versa): A \cdot (B + C) = (A \cdot B) + (A \cdot C), and dually A + (B \cdot C) = (A + B) \cdot (A + C). The absorption laws allow redundant terms to be eliminated: A + (A \cdot B) = A and A \cdot (A + B) = A. De Morgan's laws, formalized by Augustus De Morgan in 1847, transform negations of compound expressions: \overline{A + B} = \bar{A} \cdot \bar{B} and \overline{A \cdot B} = \bar{A} + \bar{B}. These laws, along with the commutative (A + B = B + A, A · B = B · A) and associative properties, form the core axioms for deriving further identities.

Logic functions, or Boolean functions, map binary inputs to binary outputs and are represented using truth tables, which exhaustively list all 2^n input combinations and corresponding outputs for n variables. The canonical sum-of-products (SOP) form expresses a function as a disjunction (OR) of minterms where the output is 1, while the product-of-sums (POS) form uses a conjunction (AND) of maxterms where the output is 0. For example, the function f(A, B) = A'B + AB' in SOP covers the minterms for inputs (1,0) and (0,1). Incompletely specified functions incorporate don't cares (denoted d or X), where certain input combinations yield unspecified outputs that can be assigned 0 or 1 to aid simplification without altering specified behaviors; this arises in practical circuit design due to unreachable states.

Fundamental Boolean identities include the idempotent laws A + A = A and A \cdot A = A, the complement laws A + A' = 1 and A \cdot A' = 0, and the consensus theorem, which eliminates redundant terms: xy + x'z + yz = xy + x'z. A tautology is an expression always evaluating to 1 (e.g., A + A'), while a contradiction always evaluates to 0 (e.g., A · A'); these serve as constants in derivations. Prerequisites for minimization involve identifying implicants—products covering one or more minterms where the function is 1—and selecting prime implicants, which are maximal implicants not subsumed by others, to form a minimal cover of the function's onset (1-values) while respecting don't cares.
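
These identities can be checked mechanically by enumerating truth tables, which is also how many optimization procedures validate local rewrites. The short Python sketch below exhaustively verifies De Morgan's law and the consensus theorem over all input assignments.

```python
from itertools import product

# Exhaustively verify two identities from the text over all input assignments:
#   De Morgan:  not(A or B)   ==  (not A) and (not B)
#   Consensus:  xy + x'z + yz ==  xy + x'z

def de_morgan_holds():
    return all((not (a or b)) == ((not a) and (not b))
               for a, b in product([False, True], repeat=2))

def consensus_holds():
    lhs = lambda x, y, z: (x and y) or ((not x) and z) or (y and z)
    rhs = lambda x, y, z: (x and y) or ((not x) and z)
    return all(lhs(x, y, z) == rhs(x, y, z)
               for x, y, z in product([False, True], repeat=3))

print(de_morgan_holds())   # True
print(consensus_holds())   # True
```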

Representations of Logic Circuits

Logic circuits in optimization are commonly represented at the two-level structure using sum-of-products (SOP) form, where the output is expressed as the logical OR of multiple product terms implemented via AND gates followed by an OR gate. This representation corresponds to AND-OR networks and is particularly suited for programmable logic arrays (PLAs), which realize arbitrary combinational functions through configurable AND and OR planes. The dual form, product-of-sums (POS), employs OR-AND networks, where the output is the logical AND of sum terms, offering an alternative for minimizing certain functions under different cost metrics. Multi-level representations extend beyond two levels to capture more complex hierarchies, typically modeled as directed acyclic graphs (DAGs) where nodes represent logic gates such as AND, OR, NOT, and XOR, and directed edges denote signal interconnections or wires. These DAGs form the basis of netlists in EDA tools, providing a structural description of the circuit's connectivity and functionality. Factored forms, such as (A + B)(A + C), offer a compact algebraic expression of multi-level logic by grouping shared subexpressions, reducing redundancy compared to expanded SOP or POS forms while preserving the underlying functionality.

Other representations include binary decision diagrams (BDDs), which provide a canonical, graph-based functional model of Boolean functions through a rooted, directed acyclic structure with decision nodes for each variable. BDDs enable efficient manipulation for verification and optimization tasks, contrasting with structural algebraic expressions like factored forms by focusing on the function's behavior rather than its gate-level structure. Word-level models, such as those using arithmetic expressions for multi-bit operations, abstract higher-level behaviors but are less common in bit-level logic optimization, where BDDs or netlists dominate. The choice of representation affects optimization efficacy: two-level SOP or POS forms facilitate exact minimization but can lead to large gate counts in complex circuits, while multi-level DAGs and factored forms support scalable heuristics at the cost of increased structural complexity. BDDs excel in exact manipulation and equivalence checking due to their canonicity but suffer from potential exponential blowup in node count relative to input size, limiting applicability to functions with fewer than roughly 20-30 variables under typical variable orderings. These trade-offs guide representation selection in synthesis flows, balancing compactness, manipulability, and computational overhead.
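
To make the contrast between structural and functional representations concrete, the sketch below builds a toy reduced ordered BDD by recursive Shannon cofactoring with structural hashing. It is a minimal illustration rather than a production BDD package such as CUDD, and the node encoding is purely illustrative.

```python
# Toy reduced ordered BDD built by recursive Shannon cofactoring.
# Terminals and the node encoding are illustrative; real packages (e.g., CUDD)
# memoize on cofactors instead of enumerating assignments exhaustively.

ZERO, ONE = ("0",), ("1",)
unique = {}                      # structural hashing: one copy per (var, low, high)

def mk(var, low, high):
    if low == high:              # reduction rule: drop a redundant test
        return low
    node = (var, low, high)
    return unique.setdefault(node, node)

def build(f, var, nvars):
    """f maps a tuple of bools (one per remaining variable) to a bool."""
    if var == nvars:
        return ONE if f(()) else ZERO
    lo = build(lambda rest: f((False,) + rest), var + 1, nvars)
    hi = build(lambda rest: f((True,) + rest), var + 1, nvars)
    return mk(var, lo, hi)

if __name__ == "__main__":
    xor3 = lambda bits: bits[0] ^ bits[1] ^ bits[2]
    root = build(xor3, 0, 3)
    print(len(unique))           # 5 internal nodes for the 3-input XOR
```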

Motivation

Cost and Performance Benefits

Logic optimization significantly reduces the area required for implementing circuits by minimizing the number of gates and interconnect wires, leading to smaller die sizes and lower manufacturing costs. This area reduction is particularly impactful in the context of Moore's Law scaling, where denser integration is possible but optimization ensures efficient use of the available space on the chip. For instance, typical logic synthesis techniques can achieve 20-50% reductions in circuit area for benchmark designs, directly translating to higher yields during fabrication since smaller dies are less likely to contain defects.

In addition to area savings, logic optimization improves performance by shortening critical path lengths, which minimizes propagation delays and enhances overall circuit speed. By restructuring logic to reduce depth and fanout, delays can be cut by 5-25% on average across standard benchmarks, allowing higher clock frequencies without additional hardware. Power consumption also benefits, as fewer switching nodes lower dynamic power dissipation; studies report 10-25% reductions in total power for optimized circuits compared to unoptimized equivalents, with internal power savings often reaching 25% through targeted switching-activity minimization.

Reliability is further enhanced by these optimizations, as fewer transistors and wires decrease the probability of defects and mitigate issues like electromigration and crosstalk. Electromigration, driven by high current densities, is addressed by reducing signal activities at nodes, improving mean time to failure (MTF) in proportion to the inverse square of current density; synthesis techniques balance this with area constraints to extend device lifetime on benchmark circuits. Lower switching activity also curbs hot-carrier effects and electromigration wear, reducing failure rates in scaled technologies. Economically, these benefits compound in production, where even modest die size reductions—such as 10-20%—can lead to substantial cost savings due to improved wafer yields and reduced material usage. In high-volume manufacturing, a 20% area reduction might save millions in fabrication costs for large-scale chips, enabling more competitive pricing and faster time-to-market while supporting Moore's Law-driven scaling.

Challenges in Digital Design

As integrated circuits scale to larger sizes, the complexity of logic optimization grows rapidly because of the inherent nature of Boolean functions: the number of gates required to realize an arbitrary function increases dramatically with the number of inputs, growing exponentially in n for almost all n-variable functions, as shown by Shannon's counting argument. This growth in gate count exacerbates timing closure issues in very-large-scale integration (VLSI) designs, where interconnect delays and signal propagation become dominant, making it difficult to meet performance specifications without extensive optimization effort.

In advanced technology nodes below 5 nm, such as those using FinFET or gate-all-around (GAA) transistors, designers face stringent constraints in balancing area, power consumption, and timing performance. Process variations, including random dopant fluctuations and line-edge roughness, introduce significant variability in transistor characteristics, leading to unpredictable delays and increased leakage currents that complicate reliable operation. Verification and testing of these large-scale circuits present additional hurdles, as the sheer size amplifies the need for comprehensive fault coverage to detect defects like stuck-at faults, while simulation times escalate due to the computational demands of fault simulation and waveform analysis across millions of gates. Traditional fault simulation becomes impractical for billion-gate designs, often requiring parallel or statistical techniques to manage the exponential rise in test pattern evaluation time. Emerging applications introduce further challenges, including power walls in mobile and Internet of Things (IoT) devices, where battery life constraints limit aggressive scaling, and thermal management issues in high-performance computing (HPC) systems, where power densities exceeding 100 W/cm² can cause hotspots and reliability degradation without advanced cooling strategies.

Classical Methods

Two-Level Logic Minimization

Two-level logic minimization seeks to derive the smallest sum-of-products (SOP) or product-of-sums (POS) expression equivalent to a given Boolean function, minimizing the number of product terms (implicants) and literals to reduce gate count in AND-OR or OR-AND realizations. This optimization is essential for two-level implementations, where the goal is to cover all required minterms (or maxterms) exactly while exploiting any flexibility, such as don't cares, to achieve an irredundant cover. The methods focus on identifying prime implicants—the largest possible cubes, which subsume smaller ones—and selecting a minimal set that covers the function.

Karnaugh maps offer a graphical method for manual minimization of SOP expressions by rearranging the truth table into a grid that reflects adjacency, enabling visual identification of adjacent 1s differing by one variable. Introduced by Maurice Karnaugh in 1953, the method begins with plotting the function's minterms as 1s on the map (practical for roughly 4-6 variables), marking don't cares if present, and grouping contiguous 1s (including don't cares if advantageous) into rectangles of size 1, 2, 4, 8, or 16 to form prime implicants, where each group eliminates the variables that change within it. The minimal cover is then obtained by selecting essential prime implicants (those uniquely covering some minterm) and resolving any remaining cyclic coverings by inspection to avoid redundant terms. This step-by-step grouping prioritizes larger implicants to reduce literals, yielding an exact minimum for small functions, though it relies on human pattern recognition and scales poorly beyond 6 variables due to the complexity of multidimensional maps.

The Quine-McCluskey algorithm provides an exact, tabular method for automating the process, suitable for software implementation on incompletely specified functions. Developed by Willard V. Quine in 1952 as a theoretical framework for simplifying truth functions and practically extended by Edward J. McCluskey in 1956, it starts by listing all minterms in binary and iteratively pairs those differing by one bit to form larger implicants, marking subsumed terms and repeating until no pairs remain, thus generating all prime implicants. The second phase constructs a prime implicant chart, identifying essentials and using Petrick's method—a Boolean consensus operation on covering conditions—to solve the exact set-covering problem for the minimal selection, ensuring no smaller SOP exists. With worst-case cost exponential in the number of variables (the number of prime implicants alone can grow on the order of 3^n/n) and an NP-hard covering step, it remains feasible for functions with roughly 8-12 variables but becomes intractable for larger n without heuristics.

Don't cares, representing input combinations where the output is irrelevant (e.g., unreachable states in sequential circuits), enhance minimization by allowing implicants to expand across these points, potentially merging terms and reducing literals. In Karnaugh maps, they are denoted as 'X' and optionally included in groups to form larger rectangles without altering the function's specified behavior. The Quine-McCluskey algorithm incorporates them during the pairing phase, treating don't-care minterms as wildcards that pair with more terms, and excludes them from the final cover requirements, enabling broader prime implicant generation. This exploitation is critical in PLA programming, where don't cares fill unused fuses or product terms, allowing a single minimized cover to fit smaller arrays by assigning 'X' outputs to optimize product-line counts.
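
A compact illustration of the tabular method's first phase is given below: the sketch repeatedly merges cubes that differ in a single bit until no merges remain, treating don't-care minterms as wildcards. The cube-string encoding and function names are illustrative; real implementations group cubes by the number of ones to limit comparisons.

```python
from itertools import combinations

# First phase of Quine-McCluskey: generate all prime implicants of a
# single-output function given as minterm and don't-care lists.
# Cubes are strings over {'0', '1', '-'}; this toy version compares all pairs
# and is intended only for small numbers of variables.

def combine(a, b):
    """Merge two cubes differing in exactly one specified bit, else None."""
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) == 1 and '-' not in (a[diff[0]], b[diff[0]]):
        i = diff[0]
        return a[:i] + '-' + a[i + 1:]
    return None

def prime_implicants(n, minterms, dont_cares=()):
    cubes = {format(m, f'0{n}b') for m in set(minterms) | set(dont_cares)}
    primes = set()
    while cubes:
        merged, used = set(), set()
        for a, b in combinations(cubes, 2):
            c = combine(a, b)
            if c is not None:
                merged.add(c)
                used.update((a, b))
        primes |= cubes - used       # cubes that could not be enlarged are prime
        cubes = merged
    return primes

if __name__ == "__main__":
    # f(A, B, C) = sum m(0, 1, 3, 6, 7) with don't cares {2, 4}
    print(sorted(prime_implicants(3, [0, 1, 3, 6, 7], [2, 4])))
    # ['--0', '-1-', '0--']  i.e. the primes C', B, and A'
```

For the 3-variable function used in the Karnaugh map example later in the article, the surviving cubes are '--0', '-1-', and '0--', corresponding to the prime implicants C', B, and A'.
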
These techniques find primary application in programmable logic arrays (PLAs), which directly realize two-level forms via AND and OR planes, where minimization cuts the number of products and inputs to those planes, reducing area, power, and folding complexity in custom designs. They also suit simple combinational blocks like decoders or multiplexers, but limitations arise for functions exceeding roughly 10 variables due to exponential state-space growth, often necessitating heuristic tools like Espresso for approximate solutions in practice.

Multi-Level Logic Synthesis

Multi-level logic synthesis focuses on optimizing circuits represented as interconnected networks of gates with more than two levels, aiming to reduce metrics such as gate count, literal count, and delay through restructuring operations that promote factor sharing and simplification. Unlike two-level minimization, which operates on sum-of-products or product-of-sums forms, multi-level synthesis allows hierarchical representations where intermediate signals can be factored or decomposed to exploit redundancies across the network. This approach emerged as a key advancement in the 1980s, enabling more scalable designs for complex VLSI circuits by iteratively applying transformations such as factoring, decomposition, and substitution.

Algebraic factorization is a foundational technique in multi-level synthesis, involving the identification and extraction of common algebraic factors to reduce literal counts and promote sharing. For instance, an expression like ABC + ABD can be factored as AB(C + D), where AB is a common factor removed through algebraic division, treating logic expressions as polynomials in their literals. This method relies on kernel extraction to find potential factors: a kernel of a function f is a cube-free quotient obtained by dividing f by a cube, with the dividing cubes called co-kernels. Seminal algorithms for kernel generation and algebraic division, developed in the early 1980s, form the basis for these operations and have been shown to yield significant area reductions on benchmarks.

Functional decomposition complements algebraic methods by breaking down complex functions into simpler sub-functions, often using Shannon expansion, which cofactors a function f with respect to a variable x as f = \bar{x} \cdot f_0 + x \cdot f_1, where f_0 = f(x=0) and f_1 = f(x=1). This expansion enables recursive decomposition, allowing the synthesis tool to identify intermediate signals that minimize the overall network depth or width. In multi-level contexts, such decompositions are applied selectively to nodes, balancing the trade-off between added levels and improved factorability. These techniques are integrated into restructuring flows that alternate between local optimizations and global resynthesis.

Node optimization in multi-level synthesis further leverages don't cares to enhance simplification, where cubes (products of literals) from the on-set and don't-care sets are combined to cover node functions more efficiently. Multi-level don't cares arise implicitly from network structure, such as input combinations that can never occur at an internal node or node values that are not observable at the outputs, and can be extracted to enlarge the don't-care sets used during minimization. Algorithms identify prime implicants or cubes that exploit these don't cares, enabling aggressive reductions without altering external functionality; for example, kernel-based methods pair with cube covers to detect extractable subcircuits. This approach improves area in industrial circuits by incorporating structural don't cares during optimization.

Practical implementations of multi-level synthesis are exemplified by the Berkeley MIS (Multi-level Interactive Synthesis) system, introduced in the mid-1980s, which pioneered script-based optimization flows combining algebraic and Boolean methods for network restructuring. MIS processes circuits in a factored representation, applying sequences of commands for extraction, elimination, decomposition, and simplification to achieve near-optimal results on MCNC benchmarks.
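
The weak-division step that underlies this kind of factoring can be sketched in a few lines. In the toy code below, cubes are encoded as frozensets of literal strings (an illustrative encoding, not that of any specific tool), and dividing ABC + ABD by AB recovers the quotient C + D with an empty remainder.

```python
# Weak (algebraic) division of a sum-of-products F by a divisor D.
# Cubes are frozensets of literal strings; the encoding is illustrative.

def weak_divide(F, D):
    """Return (quotient, remainder) such that F = D*quotient + remainder
    under algebraic (not Boolean) multiplication."""
    quotient = None
    for d in D:
        # cubes of F that contain the divisor cube d, with d stripped off
        q_d = {c - d for c in F if d <= c}
        quotient = q_d if quotient is None else quotient & q_d
    quotient = quotient or set()
    covered = {q | d for q in quotient for d in D}
    remainder = {c for c in F if c not in covered}
    return quotient, remainder

def cube(s):
    return frozenset(s.split())

if __name__ == "__main__":
    # F = ABC + ABD divided by AB gives quotient C + D and no remainder
    F = {cube("A B C"), cube("A B D")}
    D = {cube("A B")}
    q, r = weak_divide(F, D)
    print(sorted(sorted(c) for c in q))   # [['C'], ['D']]
    print(r)                              # set()
```
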
Its successor, the ABC system developed at Berkeley in the 2000s, extends these capabilities using And-Inverter Graphs (AIGs) for scalable multi-level scripting, supporting balance, rewrite, and refactor operations that integrate kernel extraction and don't-care handling for both combinational and sequential circuits. ABC's open-source framework has become a standard for academic and industrial prototyping, often achieving significant reductions in AIG node counts for verification and synthesis tasks.
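
A minimal sketch of the AIG idea follows, assuming a simplified literal encoding (even literals for nodes, with the low bit marking inversion); it is written in the spirit of AIG packages such as ABC's, but the class and method names here are illustrative only.

```python
# Toy And-Inverter Graph with structural hashing; literal encoding: literal
# 2*i refers to node i, the low bit marks inversion, and node 0 is the constant.
# Illustrative sketch, not the data structure of any particular tool.

class AIG:
    def __init__(self):
        self.nodes = []        # node i+1 is described by nodes[i]; None = input
        self.strash = {}       # (left literal, right literal) -> AND node literal

    def input(self):
        self.nodes.append(None)
        return 2 * len(self.nodes)          # even literal, no inversion

    @staticmethod
    def NOT(lit):
        return lit ^ 1                      # toggle the complement bit

    def AND(self, a, b):
        if a > b:
            a, b = b, a                     # canonical operand order
        if a == 0: return 0                 # const0 AND x = const0
        if a == 1: return b                 # const1 AND x = x
        if a == b: return a                 # x AND x = x
        if a == (b ^ 1): return 0           # x AND x' = const0
        key = (a, b)
        if key not in self.strash:          # structural hashing: reuse nodes
            self.nodes.append(key)
            self.strash[key] = 2 * len(self.nodes)
        return self.strash[key]

    def OR(self, a, b):                     # derived via De Morgan
        return self.NOT(self.AND(self.NOT(a), self.NOT(b)))

if __name__ == "__main__":
    g = AIG()
    x, y = g.input(), g.input()
    a1 = g.AND(x, y)
    a2 = g.AND(y, x)          # hashing returns the existing node
    print(a1 == a2)           # True
    f = g.OR(a1, g.NOT(x))
    print(len(g.strash))      # 2 AND nodes in total
```

Structural hashing of this kind lets rewriting passes detect and merge structurally identical subgraphs cheaply before heavier functional reasoning, such as fraiging, is applied.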

Heuristic Algorithms

Heuristic algorithms in logic optimization prioritize computational efficiency over guaranteed optimality, making them suitable for large-scale problems where exact methods are infeasible due to complexity. These approaches approximate minimal representations of binary or multi-valued logic functions by employing strategies such as iterative refinement, covering techniques, and local search, often running in polynomial time. They are particularly valuable in VLSI design flows, where speed enables rapid iteration during design exploration.

The Espresso algorithm, developed as a cornerstone for two-level logic minimization, extends classical methods like Quine-McCluskey by incorporating don't cares and multi-valued inputs for programmable logic array (PLA) optimization. It operates through an iterative loop involving expansion of cubes to cover the on-set while avoiding the off-set, extraction of an irredundant cover to eliminate superfluous cubes, and reduction of cubes to enable further improvement in later passes, using cofactor-based operations for prime implicant generation. Espresso-MV, its multi-valued variant, generalizes this to handle variables with more than two states via generalized cofactors, efficiently managing covering problems with heuristic selection based on cube weights and tautology checks to detect redundancies. This enables effective handling of don't cares by incorporating them into the off-set computation, yielding compact sum-of-products forms. The algorithm avoids exhaustive enumeration, achieving near-optimal results through perturbations like LAST_GASP to escape local minima.

Extensions of J. P. Roth's early methods advanced practical approaches by modeling logic functions as graphs to identify factorizations and decompositions iteratively, reducing multi-level circuits without full enumeration. These techniques, refined in subsequent work, use path-based analysis and implicant tables to approximate covers for functions with many variables. Genetic algorithms offer another class of heuristics for approximate minimization, evolving populations of logic expressions via mutation, crossover, and fitness evaluation based on gate or literal counts, and are particularly useful for non-standard representations like exclusive-or sum-of-products; for instance, genetic implementations on FPGA clusters have demonstrated scalable synthesis for complex networks.

On standard benchmarks like the MCNC suite, Espresso delivers near-optimal solutions in seconds to minutes on contemporary hardware, far outperforming exact solvers in runtime while maintaining practicality for circuits with hundreds of inputs. However, these heuristics can converge to suboptimal covers due to their greedy nature, occasionally missing global minima in cyclic or highly symmetric functions; consequently, they are often applied in pre-processing stages to guide exact optimization tools.
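
The flavor of the EXPAND step can be conveyed with a small sketch: each on-set cube greedily drops literals as long as the enlarged cube stays disjoint from the off-set, with don't cares simply omitted from the off-set. The explicit off-set check below is a simplification of the containment tests real Espresso uses, and the encoding is illustrative.

```python
# Simplified flavor of Espresso's EXPAND step: greedily drop literals from each
# on-set cube as long as the enlarged cube stays disjoint from the off-set.
# Cubes are strings over {'0', '1', '-'}; don't cares are simply absent from
# the off-set. The explicit overlap check is a simplification of real Espresso.

def intersects(cube, minterm_bits):
    return all(c == '-' or c == b for c, b in zip(cube, minterm_bits))

def expand(cube, offset_minterms, n):
    cube = list(cube)
    for i in range(n):
        if cube[i] == '-':
            continue
        trial = cube[:i] + ['-'] + cube[i + 1:]
        if not any(intersects(''.join(trial), format(m, f'0{n}b'))
                   for m in offset_minterms):
            cube = trial                   # dropping the literal kept the cube legal
    return ''.join(cube)

if __name__ == "__main__":
    n = 3
    onset = [0, 1, 3, 6, 7]                # same function as the two-level example
    offset = [5]                           # don't cares at 2 and 4 stay out of the off-set
    cover = {expand(format(m, f'0{n}b'), offset, n) for m in onset}
    print(sorted(cover))                   # ['--0', '-1-', '0--']
```

On this input the expanded cubes are exactly the three primes '--0', '-1-', and '0--'; a subsequent IRREDUNDANT pass would then discard '--0', since the on-set is already covered by the other two.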

Advanced Techniques

Exact Optimization Methods

Exact optimization methods in logic synthesis aim to find globally optimal circuit implementations with respect to specified cost metrics, such as gate count or delay, by exhaustively exploring the solution space while leveraging formal techniques to prune infeasible paths. These methods contrast with heuristics by providing provable optimality guarantees, though they are computationally intensive and typically applicable only to small-scale problems. Key approaches include formulations based on Boolean satisfiability (SAT) solvers and branch-and-bound algorithms, which encode the synthesis problem in a way that allows systematic enumeration and bounding to identify minimal representations.

SAT-based synthesis encodes logic minimization as a satisfiability problem, where the goal is to find an assignment that satisfies constraints representing the target function while minimizing structural costs, often using And-Inverter Graphs (AIGs) as the underlying representation. AIGs model networks as directed acyclic graphs with two-input AND nodes and inverters on edges, enabling efficient structural hashing and equivalence checking during optimization. In this framework, the problem is reduced to finding a minimal AIG that realizes the given function, with SAT solvers exploring topological families of circuits to ensure exhaustiveness; for instance, encodings can constrain node fan-ins or incorporate delay bounds to yield optimal multi-level networks. This approach has been extended to handle additional constraints, such as fixed fan-out or depth limits, by augmenting the SAT formula with additional clauses.

Branch-and-bound techniques apply systematic enumeration with pruning to the search space for multi-level and delay optimization, building networks incrementally while discarding branches that exceed known optimal costs. Early formulations use branching on literals or divisors to construct cycle-free multi-output realizations, with bounds derived from partial network costs to guarantee convergence to the global minimum. Modern variants adapt this for delay minimization by prioritizing cuts in AIGs and incorporating technology-specific metrics, enabling exact solutions for subcircuits where heuristics may fall short. The inherent complexity of exact optimization stems from the NP-hardness of circuit minimization, even for multi-output functions under general models, implying that no polynomial-time algorithm exists unless P = NP. In practice, these methods scale to functions with up to approximately 8-10 inputs for full multi-level optimization, with libraries of precomputed optimal networks limited to 5-6 inputs, beyond which runtime becomes prohibitive without approximations.

Tools like the ABC system from UC Berkeley incorporate exact synthesis capabilities, featuring commands such as "exact" for AIG-based minimization and "lutexact" for LUT-oriented synthesis, which leverage SAT encodings to compute provably optimal subnetworks. Complementing this, the fraig procedure in ABC performs functionally reduced AIG equivalence checking by iteratively resolving conflicts via SAT, ensuring structural optimizations preserve functionality during exact flows. These implementations facilitate integration into broader design flows for critical paths or small blocks where optimality is paramount.
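
The branch-and-bound idea can be illustrated on the covering subproblem of exact two-level minimization, which shares the same enumerate-and-prune structure: branch on an uncovered minterm, and prune any partial selection that can no longer beat the best cover found so far. The sketch below is illustrative, uses only cardinality bounds, and is not the SAT-based multi-level encoding described above.

```python
# Branch-and-bound for a minimum-cardinality prime implicant cover.
# Illustrative only: production tools use stronger lower bounds (e.g., maximal
# independent sets of minterms) or SAT/ILP formulations.

def min_cover(minterms, covers):
    """covers: prime name -> set of minterms it covers. Returns a smallest cover."""
    best = [list(covers)]                       # trivial incumbent: take all primes

    def recurse(uncovered, chosen, candidates):
        if not uncovered:
            if len(chosen) < len(best[0]):
                best[0] = list(chosen)          # new incumbent
            return
        if len(chosen) + 1 >= len(best[0]):     # bound: cannot beat the incumbent
            return
        m = min(uncovered)                      # branch on one uncovered minterm
        for p in [c for c in candidates if m in covers[c]]:
            recurse(uncovered - covers[p], chosen + [p],
                    [c for c in candidates if c != p])

    recurse(set(minterms), [], list(covers))
    return best[0]

if __name__ == "__main__":
    # Primes of the running 3-variable example: A' = {0,1,2,3}, B = {2,3,6,7}, C' = {0,2,4,6}
    covers = {"A'": {0, 1, 2, 3}, "B": {2, 3, 6, 7}, "C'": {0, 2, 4, 6}}
    print(min_cover({0, 1, 3, 6, 7}, covers))   # ["A'", 'B']
```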

Algebraic and Functional Decomposition

Algebraic methods form a cornerstone of logic optimization by treating logic expressions as polynomials in their literals, facilitating efficient restructuring through operations such as factoring, substitution, and decomposition. These techniques are particularly effective for multi-level circuits, where they identify and extract common subexpressions to minimize literal count and logic depth while preserving functionality. Unlike exhaustive Boolean methods, algebraic approaches rely on rules like weak division, which ignores remainders in polynomial division to approximate optimal factors quickly.

Substitution involves replacing part of one node's expression with an intermediate signal computed elsewhere in the network, enabling the discovery of nonlocal redundancies and promoting structural sharing. Resubstitution complements this by re-expressing an existing node as a function of other nodes already present, often revealing opportunities for collapsing or eliminating redundant logic. Both operations exploit don't cares via cofactors: for a function f and variable x, the positive cofactor f_{x=1} and negative cofactor f_{x=0} are computed, allowing substitutions to be applied only where they simplify the on-set without affecting the off-set. This cofactor-based reasoning ensures legality while maximizing simplification.

Decomposition breaks down complex functions into simpler subfunctions, with two primary types: disjunctive (parallel) and conjunctive (serial). In disjunctive decomposition, a function f(X, Y) with disjoint input sets X and Y is expressed as f = g(h_1(X), h_2(Y)), where g combines parallel subfunctions h_1 and h_2; this form is ideal for identifying independent computations. Conjunctive (serial) decomposition composes functions end-to-end as f(X) = g(h(X)), suitable for cascaded structures. The Ashenhurst-Curtis method integrates both for balanced decomposition, using decomposition charts to test feasibility and achieve equitable partitioning of inputs, which aids in detecting multi-output sharing without excessive depth increase.

Kernel computation underpins algebraic factoring by identifying candidate factors for extraction. A kernel of a function f is a cube-free quotient k = f / c of f by a cube c (a product of literals), called the co-kernel; cube-free means no single literal appears in every term of k. The full set of kernels K(f) is computed recursively: starting from f, for each literal l that appears in two or more cubes, compute the quotient f / l by weak division (ignoring cubes not divisible by l), record the quotient as a kernel when it is cube-free, and recurse on the quotients with the accumulated dividing cube as co-kernel until no further single-literal divisors exist. Co-kernels are thus the cubes c such that f = c \cdot k + r, where r is the remainder (often discarded in heuristics). To illustrate kernel extraction, consider f = ac + ad + bc + bd + e:
  • Dividing by the literal a (or b) yields the kernel k_1 = c + d, with co-kernel a (respectively b), since a(c + d) + b(c + d) = ac + ad + bc + bd.
  • Dividing by the literal c (or d) yields the kernel k_2 = a + b, with co-kernel c (respectively d); in addition, f itself is a kernel with trivial co-kernel 1, because no single literal divides every cube, and the lone cube e appears only in that top-level kernel.
For sharing detection across functions f and g, common divisors are identified by intersecting kernels from K(f) and K(g); if a kernel of f and a kernel of g share two or more cubes, the shared sub-expression can be extracted as a common intermediate node, reducing redundancy. This process, repeated iteratively, enables multi-output optimization. These algebraic and decomposition techniques typically reduce literal count by 10-30% in multi-level circuits by promoting subexpression sharing and restructuring, as demonstrated in evaluations where factoring via kernels yields substantial area savings without timing degradation. For instance, extraction of common double-cube divisors often achieves around 15% average literal reduction over initial representations. Such benefits are central to multi-level flows, where algebraic rules provide a fast foundation before finer Boolean refinements.
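
The recursive kernel computation described above can be prototyped compactly. The sketch below uses frozensets of literal strings as cubes (an illustrative encoding) and, applied to f = ac + ad + bc + bd + e, reports the kernels c + d (co-kernels a and b), a + b (co-kernels c and d), and f itself with the trivial co-kernel.

```python
from collections import Counter

# Recursive kernel/co-kernel extraction by algebraic division, following the
# scheme described above. Cubes are frozensets of literal strings; the code
# favors clarity over the efficiency of production implementations.

def divide(F, d):
    """Algebraic quotient of the cube set F by the single cube d."""
    return {c - d for c in F if d <= c}

def cube_free(F):
    """A sum of cubes is cube-free if no literal appears in every cube."""
    return len(F) > 1 and not frozenset.intersection(*F)

def kernels(F, co_kernel=frozenset()):
    """Return the set of (co-kernel, kernel) pairs of the cube set F."""
    result = set()
    if cube_free(F):
        result.add((co_kernel, frozenset(F)))
    counts = Counter(lit for c in F for lit in c)
    for lit, n in counts.items():
        if n >= 2:                               # only shared literals can yield kernels
            d = frozenset([lit])
            result |= kernels(divide(F, d), co_kernel | d)
    return result

def cube(s):
    return frozenset(s.split())

if __name__ == "__main__":
    # f = ac + ad + bc + bd + e, the example worked above
    F = {cube("a c"), cube("a d"), cube("b c"), cube("b d"), cube("e")}
    for co, k in sorted(kernels(F), key=lambda p: str(sorted(map(sorted, p[1])))):
        print(sorted(co), "->", sorted(sorted(c) for c in k))
```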

Technology-Dependent Optimizations

Technology-dependent optimizations in logic synthesis involve tailoring the implementation to a specific target technology and its physical constraints after the initial technology-independent restructuring. These techniques map abstract logic networks to actual gates or cells from a target library, accounting for factors like delay, area, power, and manufacturing variations to achieve optimal performance in application-specific integrated circuits (ASICs).

Technology mapping is a core process that transforms a technology-independent network into a gate-level implementation using cells from a predefined library, with the network often modeled as a directed acyclic graph (DAG) whose nodes represent functions and whose edges denote interconnections. Mapping can prioritize area minimization through complete covering of the DAG with library patterns or focus on delay by selecting matches that minimize critical path lengths. For instance, delay-oriented mapping employs dynamic programming to select covers that balance interconnect and gate delays, achieving up to 20% reductions in path delays compared to area-focused approaches on benchmark circuits. In lookup-table (LUT)-based mappings, the problem resembles bin packing, where logic cones are fitted into fixed-size LUTs to cover the network efficiently while respecting input limits.

Retiming repositions registers in synchronous circuits to minimize the clock period without altering functionality, effectively balancing pipeline stages by moving latches across combinational logic. The seminal formulation by Leiserson and Saxe uses a graph model in which edge weights represent delays and register counts, solving for a valid retiming via linear-programming or shortest-path methods to achieve the minimum feasible clock cycle. This approach can reduce the critical path delay by 15-30% in pipelined designs, as demonstrated on industrial circuits, while preserving sequential behavior. Retiming is particularly effective after technology mapping to fine-tune timing closure in the presence of library-specific gate delays.

Buffer insertion addresses interconnect-dominated delays in deep-submicron technologies by strategically placing buffers along long wires to mitigate resistance-capacitance (RC) effects and control signal slew. The van Ginneken dynamic programming algorithm optimally inserts buffers in tree-structured nets to minimize Elmore delay, considering buffer costs and electrical parameters, which can reduce wire delays by factors of 2-5 for global nets. For slew control, buffers boost drive strength to prevent excessive transition times that degrade timing margins, especially in high-fanout scenarios. This technique integrates with technology mapping to handle library-specific cells, ensuring compatibility with timing models.

Technology libraries, such as standard-cell and gate-array types, significantly influence optimization outcomes due to their structure and variability. Standard-cell libraries provide pre-characterized gates arranged in rows for flexible placement, enabling denser and faster implementations than gate arrays, which pre-fabricate transistor arrays and customize only metal layers, trading off area for quicker turnaround. Process variations, including intra-die fluctuations, amplify timing uncertainties in standard cells, necessitating variation-aware characterization in which libraries incorporate statistical models to derate delays by 10-20% for robust sign-off. Gate arrays exhibit lower variation sensitivity due to their uniform base but suffer from congestion limits. These library choices dictate which optimizations are feasible, with standard cells supporting more aggressive optimization in modern nodes.
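
A back-of-the-envelope Elmore-delay calculation shows why repeater insertion pays off on long wires. The sketch below uses evenly spaced identical buffers and made-up driver and buffer parameters (they are not from any real library), rather than the position-optimal dynamic programming of van Ginneken's algorithm.

```python
# Back-of-the-envelope Elmore-delay model for repeater (buffer) insertion on a
# long wire. All driver/buffer parameters below are made-up illustrative values,
# not a real library characterization, and buffers are spaced evenly rather than
# placed by van Ginneken's position-optimal dynamic program.

def segment_delay(r_drv, r_wire, c_wire, c_load):
    """Elmore delay of one stage: driving resistance r_drv, distributed wire
    resistance/capacitance r_wire/c_wire, lumped load c_load."""
    return r_drv * (c_wire + c_load) + r_wire * (c_wire / 2.0 + c_load)

def buffered_delay(n_buf, R, C, r_drv, c_sink, r_buf, c_buf, t_buf):
    """Total delay of a wire (total R, C) split into n_buf + 1 equal segments."""
    segs = n_buf + 1
    r, c = R / segs, C / segs
    delay = 0.0
    for i in range(segs):
        drv = r_drv if i == 0 else r_buf
        load = c_sink if i == segs - 1 else c_buf
        delay += segment_delay(drv, r, c, load)
        if i != segs - 1:
            delay += t_buf                 # intrinsic delay of each repeater
    return delay

if __name__ == "__main__":
    # ~2 mm wire: 200 ohm total resistance, 0.4 pF total capacitance (illustrative)
    params = dict(R=200.0, C=0.4, r_drv=100.0, c_sink=0.005,
                  r_buf=100.0, c_buf=0.005, t_buf=5.0)    # ohms, pF, ps
    for n in range(5):
        print(n, "repeaters:", round(buffered_delay(n, **params), 1), "ps")
```

With these illustrative numbers the total delay falls from roughly 80 ps unbuffered to a minimum of about 66 ps at two repeaters, after which the repeaters' own delay starts to dominate.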

Modern Developments

Machine Learning Applications

Machine learning techniques have increasingly been integrated into logic optimization flows since 2020, automating complex decisions in synthesis and backend processes to improve area, power, and performance metrics. These approaches leverage neural networks and reinforcement learning to navigate vast design spaces, often outperforming traditional heuristics by learning from data and predicting optimization outcomes. Key applications include enhancing resynthesis for area reduction and predicting routability in physical design stages, enabling faster iterations in electronic design automation (EDA) tools.

In logic synthesis, reinforcement learning (RL) has been applied to area-driven resynthesis, where agents learn optimal sequences of transformations tailored to specific circuits. For instance, an RL framework optimizes FPGA logic by selecting parameters in the resyn2 script, achieving average area reductions of 8.3-10.5% compared to the baseline resyn2 approach across benchmark circuits. This method models the synthesis flow as a sequential decision process, with rewards based on post-technology-mapping area, and demonstrates generalization to unseen designs. Similarly, RL-enhanced AIG rewriting yields average cell area reductions of 13-21% over greedy baselines on industrial benchmarks, with delay penalties of under 5%.

Backend flow optimization benefits from neural networks that predict placement and routing outcomes to guide early decisions. Convolutional neural networks, such as RouteNet, predict detailed-routing violations and hotspots in mixed-size designs with 76% accuracy for overall routability and a 62% true positive rate for hotspots at low false-positive rates, enabling sub-second evaluations during placement. Graph neural networks (GNNs) further analyze netlists by representing circuits as graphs, propagating features like toggle rates for power estimation or identifying congestion hotspots before placement with improved accuracy over traditional methods. These GNNs excel at capturing structural dependencies in netlists, supporting early prediction tasks across synthesis and physical design flows.

Recent advances include Logic Neural Networks (LNNs), which combine differentiable logic operations with neural architectures for interpretable hardware implementations. LNNs enable end-to-end inference on FPGAs using only lookup tables, while maintaining full interpretability through gate-level analysis. Additionally, machine learning aids don't-care identification by treating care sets as training data and optimizing for generalization, reducing gate counts by up to 53% (e.g., from 1141 to 537 gates) with small accuracy trade-offs of around 2% in approximate-computing scenarios.

Integration of these ML techniques into commercial tools such as Vivado and Quartus has accelerated through ML-assisted editions, providing around 10% average quality-of-results gains through automated optimization. However, challenges persist, including training data scarcity, as proprietary designs limit dataset sizes, and explainability issues, where black-box models hinder adoption in safety-critical applications. Addressing these requires synthetic data generation and hybrid neuro-symbolic approaches to ensure verifiable optimizations. As of 2025, further developments include hardware-aware frameworks such as GraNNite for efficient GNN execution on accelerators.

FPGA-Specific Optimizations

Field-programmable gate arrays (FPGAs) require logic optimization techniques tailored to their reconfigurable architecture, which consists of configurable logic blocks (CLBs), lookup tables (LUTs), and programmable interconnect. Unlike fixed hardware, FPGA optimization must balance area, delay, and power while accounting for limited routing resources and the ability to support partial reconfiguration. These constraints lead to specialized methods that prioritize efficient mapping to LUTs and clustering into hardware slices, often evaluated with academic flows such as VPR.

LUT mapping decomposes the network into k-input LUTs, where k typically ranges from 4 to 6, to minimize area or delay. Seminal algorithms like FlowMap employ a labeling scheme to find a delay-optimal covering of the network with k-LUTs, incorporating don't cares to reduce the number of LUTs by exploiting unspecified inputs in multi-level networks. This approach achieves area-delay trade-offs by first optimizing for minimum depth (delay) and then refining for area using don't cares; for instance, in networks with many shared sub-functions, don't cares allow logic to be merged across LUTs, improving packing density. Technology mapping for FPGAs builds on the same principles as ASIC mapping but uses LUT-specific cost models distinct from ASIC gate libraries.

Following LUT mapping, packing and clustering group the resulting LUTs and associated flip-flops into CLBs or slices to maximize resource utilization within the constraints of the FPGA architecture. The VPR tool suite implements a timing-driven packer that iteratively clusters logic based on connectivity, feasibility, and timing criticality, achieving routable designs with minimal channel width. This process respects intra-cluster limits, such as the number of pins per cluster, to avoid routing bottlenecks that could inflate overall delay by 15-30% in dense designs. Academic research using VPR has shown that cluster sizes of 4-10 LUTs per logic block offer good area-delay trade-offs for modern FPGAs. Recent advances integrate machine learning to accelerate packing, addressing the combinatorial cost of exploring cluster configurations; for example, the Imbalanced Large Graph Learning (ImLG) framework uses graph neural networks to predict cluster feasibility during placement while maintaining quality comparable to traditional heuristics on VPR benchmarks. In power optimization for edge applications, vendors demonstrate low-power FPGA flows for inference tasks, reducing power consumption in battery-constrained devices.

A key distinction from ASIC optimization lies in the emphasis on programmable routing resources, which can consume up to 80% of FPGA area, necessitating early routability-aware decisions during logic synthesis. Partial reconfiguration further enables runtime optimization, allowing selective updates to logic partitions without a full device reload, which reduces reconfiguration overhead by orders of magnitude compared to reloading the entire device; ASICs, by contrast, are fixed after fabrication. This reconfigurability supports adaptive optimizations, such as swapping in power-efficient logic variants for specific workloads, unattainable in ASICs.
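
Cut enumeration, the core step of LUT mapping, can be sketched briefly: each node's k-feasible cuts are obtained by merging the cuts of its fanins and discarding any with more than k leaves. The network encoding below is illustrative, and real mappers additionally prune dominated cuts and attach depth or area costs to each cut.

```python
from itertools import product

# k-feasible cut enumeration, the core of LUT technology mapping: each 2-input
# node's cuts are formed by merging the cuts of its fanins and keeping those
# with at most k leaves. The network encoding is illustrative.

def enumerate_cuts(nodes, k):
    """nodes: dict name -> (fanin1, fanin2) for gates, or None for primary inputs,
    listed in topological order. Returns name -> set of cuts (frozensets of leaves)."""
    cuts = {}
    for name, fanins in nodes.items():
        trivial = frozenset([name])             # the node itself is a 1-leaf cut
        if fanins is None:
            cuts[name] = {trivial}
            continue
        a, b = fanins
        merged = {ca | cb for ca, cb in product(cuts[a], cuts[b]) if len(ca | cb) <= k}
        merged.add(trivial)
        cuts[name] = merged
    return cuts

if __name__ == "__main__":
    # f = (a AND b) AND (c AND d), a small tree of 2-input gates
    nodes = {"a": None, "b": None, "c": None, "d": None,
             "n1": ("a", "b"), "n2": ("c", "d"), "f": ("n1", "n2")}
    cuts = enumerate_cuts(nodes, k=4)
    print(sorted(len(c) for c in cuts["f"]))    # [1, 2, 3, 3, 4]
```

For the small AND tree in the example, the output node has cuts of sizes 1, 2, 3, 3, and 4; the 4-leaf cut {a, b, c, d} shows that the whole function fits in a single 4-input LUT.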

Examples

Karnaugh Map Example

To illustrate the application of the Karnaugh map for two-level logic minimization, consider the Boolean function f(A, B, C) = \sum m(0, 1, 3, 6, 7) with don't cares at minterms 2 and 4. The method facilitates simplification by arranging minterms in a grid where adjacent cells differ by one variable, allowing groups of 1s (and don't cares treated as 1s when advantageous) to represent simplified product terms. The 3-variable Karnaugh map for this function, with rows labeled by A and columns by BC in Gray-code order (00, 01, 11, 10), is plotted as follows, placing 1s at the specified minterms, Xs at the don't cares, and 0s elsewhere:
A \ BC | 00 | 01 | 11 | 10
  0    |  1 |  1 |  1 |  X
  1    |  X |  0 |  1 |  1
The optimal grouping consists of a quad encompassing the cells for minterms 0, 1, 3, and the don't care at 2 (the entire top row), which simplifies to the term \overline{A} since A is constant at 0 while B and C vary across all combinations. A second quad covers minterms 3, 6, 7, and the don't care at 2 (columns 11 and 10 in both rows), simplifying to the term B since B is constant at 1 while A and C vary; the don't care at 2 may be reused in both groups. The don't care at 4 is left ungrouped, as it does not enable any larger grouping. The resulting minimized sum-of-products (SOP) expression is f(A, B, C) = \overline{A} + B. To verify equivalence, compare the truth tables of the original function and the minimized expression. The original specifies outputs of 1 at minterms 0, 1, 3, 6, 7; 0 at minterm 5; and don't care (X) at 2 and 4. The minimized expression yields identical outputs at the specified positions (1 where required, 0 at 5) and consistent assignments at the don't cares (1 at 2, 0 at 4).
A B C | Decimal | Original f | Minimized f = \overline{A} + B
0 0 0 |    0    |     1      |      1
0 0 1 |    1    |     1      |      1
0 1 0 |    2    |     X      |      1
0 1 1 |    3    |     1      |      1
1 0 0 |    4    |     X      |      0
1 0 1 |    5    |     0      |      0
1 1 0 |    6    |     1      |      1
1 1 1 |    7    |     1      |      1
This example reduces the original SOP form, which requires five product terms (\overline{A} \overline{B} \overline{C} + \overline{A} \overline{B} C + \overline{A} B C + A B \overline{C} + A B C), to two terms, highlighting the visual efficiency of Karnaugh maps for functions with up to four variables by enabling rapid identification of prime implicants.
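
The equivalence claimed in the table can also be confirmed programmatically; the short check below enumerates all eight input combinations and compares the minimized expression against the specified on-set, skipping the two don't-care positions.

```python
from itertools import product

# Exhaustive check that the minimized expression agrees with the original
# specification f(A, B, C) = sum m(0, 1, 3, 6, 7) on every non-don't-care input.

ONSET = {0, 1, 3, 6, 7}
DONT_CARES = {2, 4}

minimized = lambda a, b, c: (not a) or b      # f = A' + B

ok = all(
    bool(minimized(a, b, c)) == (idx in ONSET)
    for idx, (a, b, c) in enumerate(product([0, 1], repeat=3))
    if idx not in DONT_CARES
)
print(ok)   # True
```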

Espresso Heuristic Example

The Espresso heuristic is applied to the 9sym benchmark from the MCNC suite, a symmetric function with 9 inputs and 1 output that equals 1 when the input has exactly 3, 4, 5, or 6 ones; the initial two-level representation consists of 158 product terms with 516 literals. The optimization process begins by generating prime implicants using Quine-McCluskey-like expansion and reduction steps, identifying cubes that cannot be enlarged further without covering off-set minterms. Essential prime implicants are then selected—those that uniquely cover certain minterms—forming the initial cover; for 9sym, this identifies a subset of primes that must be included to avoid missing required functionality. The cover is iteratively refined by solving a weighted set-covering problem to select a minimal set of remaining primes, followed by simplification via consensus operations, where pairwise consensus (a resolution-like combination) between implicants removes subsumed cubes and detects blocking conditions to reduce redundancy without altering the function. The resulting minimized sum-of-products expression for 9sym has 86 terms and 516 literals, achieving a term reduction from 158 to 86 while maintaining functional equivalence. Compared to exact minimization methods such as full branch-and-bound Quine-McCluskey, Espresso yields the globally optimal cover for 9sym in 0.12 seconds on contemporary hardware, matching exact results; for more complex benchmarks in the MCNC suite (e.g., apex or pdc), exact methods can require hours or days due to the exponential enumeration of primes, whereas Espresso consistently delivers near-optimal solutions (within 5-10% of the minimum literal count) in seconds, enabling practical scalability in EDA flows. In real scenarios, Espresso's consensus-based simplification efficiently exploits don't cares—such as those derived from unreachable states in multi-level circuits or spectral don't cares—to further compact representations, often reducing literals by 10-20% beyond basic covering; its extension to multi-valued variables via Espresso-MV allows optimization of encoded domains (e.g., radix-4 literals for multi-valued functions), supporting advanced mapping in PLAs and ASICs.

References

  1. [1]
    [PDF] Design rewiring using atpg - Computer Engineering Group
    Abstract—Logic optimization is the step of the very large scale integration (VLSI) design cycle where the designer performs.
  2. [2]
    [PDF] Logic Synthesis in a Nutshell - UCSD CSE
    Jul 13, 2010 · While logic optimization was finding its first commercial use for remapping, designers at major corporations, such as IBM, had already been ...
  3. [3]
    [PDF] SYNTHESIS AND OPTIMIZATION OF DIGITAL CIRCUITS
    Giovanni De Micheli. Stanford University. Technische Universitat Darmstadt ... 10.3.4 Concurrent Logic Optimization and Library Binding*. 533. 10.3.5 ...
  4. [4]
    [PDF] Logic Optimization and Synthesis: Trends and Directions in Industry
    Abstract—Logic synthesis is a key design step which optimizes abstract circuit representations and links them to technology.Missing: definition | Show results with:definition<|control11|><|separator|>
  5. [5]
    [PDF] Lecture 5: Gate Logic Logic Optimization Overview - eia.udg.edu
    The whole logic design problem is to create a circuit that meets some functional specification. • How was this spec given to you? • How do you know what it ...
  6. [6]
    An Investigation of the Laws of Thought
    Self-taught mathematician and father of Boolean algebra, George Boole (1815–1864) published An Investigation of the Laws of Thought in 1854. In this highly ...
  7. [7]
    A symbolic analysis of relay and switching circuits - DSpace@MIT
    A symbolic analysis of relay and switching circuits. Author(s). Shannon, Claude Elwood,1916-2001. Thumbnail. Download34541425-MIT.pdf (16.35Mb). Advisor.
  8. [8]
    The Problem of Simplifying Truth Functions
    THE PROBLEM OF SIMPLIFYING TRUTH FUNCTIONS. W. V. QUINE, Harvard University. The formulas of the propositional calculus, or the logic of truth functions, are ...
  9. [9]
    [PDF] Minimization of Boolean Functions - vtda.org
    Minimization of Boolean Functions*. E. J. MCCLUSKEY, Jr. (Manuscript received June 26, 1956). A systematic procedure is presented for writing a Boolean ...
  10. [10]
    [PDF] The Map Method for Synthesis of Combinational Logic Circuits
    1953. Karnaugh-Synthesis of Combinational Logic Circuits. 597. Page 6. Three-Dimensional. Maps ... Paper 53-197, recommended by the AIEE Instru- ments and ...Missing: original | Show results with:original
  11. [11]
    Logic Minimization Algorithms for VLSI Synthesis - SpringerLink
    This book discusses logic minimization for effective circuits, with the development of ESPRESSO-II, a two-level minimization program, and its potential for ...
  12. [12]
    A Brief History of EDA - SemiWiki
    Aug 5, 2012 · Before EDA, circuits were designed by hand. EDA began as an industry in 1981, and grew with the ASIC business and fabless models.
  13. [13]
    [PDF] Complexity of Two-Level Logic Minimization - Caltech Authors
    The expand step in ESPRESSO attempts to expand as much as possible the set of minterms covered by a given term by removing literals from that term. One way to ...
  14. [14]
  15. [15]
    [PDF] The Mathematical Analysis of Logic - Project Gutenberg
    Project Gutenberg's The Mathematical Analysis of Logic, by George Boole. This ... pdf.pdf or 36884-pdf.zip *****. This and all associated files of ...
  16. [16]
    [PDF] A Symbolic Analysis of Relay and Switching Circuits
    Claude E. Shannon**. 1. Introduction. In the control and protective ... A Symbolic Analysis of Relay and Switching Circuits. -(n-1). To a-. NUMBERS. Xn.
  17. [17]
    Formal logic (1847) : De Morgan, Augustus, 1806-1871
    Aug 9, 2019 · Formal logic (1847). by: De Morgan, Augustus, 1806-1871. Publication ... PDF download · download 1 file · PNG download · download 1 file · SINGLE ...
  18. [18]
    Archie Blake. Canonical expressions in Boolean algebra ...
    Archie Blake. Canonical expressions in Boolean algebra. Dissertation Chicago 1937. Lithographed. The University of Chicago Libraries, Chicago1938, ii + 60 pp.
  19. [19]
    [PDF] Programmable Logic Array
    Nov 30, 2001 · A programmable logic array (PLA) maps a set of. Boolean functions in canonical, two-level sum-of- product form into a geometrical structure. A ...<|control11|><|separator|>
  20. [20]
    [PDF] Logic Synthesis for Disjunctions of Boolean Functions
    A Boolean network (or netlist, or circuit) is a directed acyclic graph (DAG) with nodes corresponding to logic gates and edges corresponding to wires ...
  21. [21]
    A primer on digital circuit synthesis - YosysHQ Yosys 0.45 ...
    At the logical gate level the design is represented by a netlist that uses only cells from a small number of single-bit cells, such as basic logic gates (AND, ...
  22. [22]
    [PDF] Multilevel logic synthesis - Proceedings of the IEEE - People @EECS
    May 12, 2025 · The paper is organized as follows: in sections II and III we define basic notation and discuss the representation of combinational logic by an ...
  23. [23]
    [PDF] Graph-Based Algorithms for Boolean Function Manipulation12
    In this paper we present a new data structure for representing Boolean functions and an associated set of manipulation algorithms.
  24. [24]
    [PDF] Introduction to Logic Synthesis with ABC
    • BDD is a useful representation discovered around 1986. – Canonical (for a given function, there is only one BDD). – Very good, but only if (a) it can be ...
  25. [25]
    Impact on Delay, Power and Area of Machine Learning-based ...
    From the techniques evaluated, CGP-based logical optimization shows a reduction in area, power, and delay of more than 50% on average compared to the mixed-ML ...Missing: benefits | Show results with:benefits
  26. [26]
    Implementation of Area & Power Optimized VLSI Circuits Using ...
    Aug 9, 2025 · Implementation Of Area & Power Optimized VLSI Circuits Using Logic Techniques · From the results, the proposed 8-bit PASTA offers 52.6% maximum ...
  27. [27]
    [PDF] Logic Rewiring for Delay and Power Minimization
    Due to these facts, it becomes apparent that circuit optimization remains an important task in the overall design cycle. Traditionally, logic optimization is ...<|control11|><|separator|>
  28. [28]
    [PDF] Trace-Driven Logic Synthesis: Application to Power Minimization
    The problem of trace driven logic optimization for mini- mum switching activity is: Given a logic circuit represented as a set of Boolean functions and a ...Missing: scholarly | Show results with:scholarly<|control11|><|separator|>
  29. [29]
    [PDF] LOGIC SYNTHESIS FOR LOW POWER
    Peak power is critical for power grid and power supply circuits design. Even though peak power is a serious issue, we will focus on average power optimization.
  30. [30]
    [PDF] Logic Synthesis for Reliability - An Early Start to ... - Purdue e-Pubs
    Designing reliable CMOS chips involve careful circuit design with attention directed to some of the potential reliability problems such as electromigration ...
  31. [31]
    Sequential Logic Optimization using Cycle Time Utilization in ...
    Sep 5, 2025 · A case study demonstrates how seemingly modest die size reductions can translate to substantial manufacturing cost savings at scale. The ...
  32. [32]
    [PDF] Advancing Moore's Law in 2014! - Intel
    Aug 11, 2014 · • Better than normal area scaling. • Extensive design-process co-optimization. • Micro-architecture optimizations for Cdyn reduction. ~1.6x per ...
  33. [33]
    [PDF] High-level area and power estimation for vlsi circuits
    In that paper,. Shannon proved that the asymptotic complexity of Boolean functions is exponential in the number of inputs. , and that for large , almost every ...Missing: growth closure
  34. [34]
    Timing Closure - Cadence
    Increased Complexity: Increasing gate count is making designs more complex and achieving timing closure more challenging.
  35. [35]
    CMOS Scaling for the 5 nm Node and Beyond - NIH
    To improve the gate control ability, FinFETs have been developed for the technology nodes beyond 22 nm. For the ultra-scaled sub-5 nm technology nodes, GAA ...
  36. [36]
  37. [37]
    Fault Simulation Techniques for Growing Chip Complexity - Synopsys
    Jun 1, 2022 · Fault simulation is a key verification step when silicon designers get an indication of how resilient their chip might be to faults or errors.
  38. [38]
    [PDF] Digital Logic Testing and Simulation
    ... problems. Digital test and verification present major hurdles to continued progress. ... However, these problems have been exacerbated by the growing number of ...
  39. [39]
    Power and thermal challenges in mobile devices - ResearchGate
    Thermal management can be broadly classified into reactive, if decisions are based on current or past temperature, or proactive, if decisions are based on ...
  40. [40]
    [PDF] Thermal and Power Challenges in High Performance Computing ...
    This paper provides an overview of the thermal and power challenges in emerging high performance computing platforms. The advent of new ...
  41. [41]
    [PDF] The Map Method For Synthesis of Combinational Logic Circuits
    Manuscript submitted March 17, 1953; made available for printing April 23, 1953. M. KARNAUGH is with the Bell Telephone Laboratories, Inc., Murray Hill, N. J.
  42. [42]
    [PDF] Minimization of Boolean Functions - MIT
    Minimization of Boolean Functions. E. J. McCLUSKEY, Jr. (Manuscript received June 26, 1956). A systematic procedure is presented for writing a Boolean ...
  43. [43]
    [PDF] Chapter 1 TWO-LEVEL LOGIC MINIMIZATION - ResearchGate
    Abstract. Two-level logic minimization is a central problem in logic synthesis, and has applications in reliability analysis and automated reasoning. This ...
  44. [44]
  45. [45]
    ABC: An Academic Industrial-Strength Verification Tool - SpringerLink
    ABC: An Academic Industrial-Strength Verification Tool. Conference paper. pp ... multiple-level logic optimization system. IEEE Trans. CAD 6(6), 1062 ...
  46. [46]
  47. [47]
    [PDF] multiple-valued logic minimization for pla synthesis
    Jun 5, 1986 · Espresso-MV. The program Espresso-MV implements the heuristic and exact logic minimization algorithms described earlier, as well as heuristic ...
  48. [48]
    [PDF] development
    The algorithm of the present paper, while related to that of Ref. 1, is substantially different both in concept and in technique.
  49. [49]
    Genetic Algorithm for Boolean minimization in an FPGA cluster
    Mar 5, 2010 · This paper shows a parallel genetic programming (PGP) Boolean synthesis implementation based on a cluster of FPGAs that takes full advantage of ...
  50. [50]
    Doing two-level logic minimization 100 times faster
    The 134 examples of the MCNC benchmark [33] have been used for years as a wide range of representative Boolean functions in VLSI. ESPRESSO-EXACT can minimize 114.
  51. [51]
    [PDF] SAT Based Exact Synthesis using DAG Topology Families
    SAT based exact synthesis is a powerful technique, with applications in logic optimization, technology mapping, and synthesis for emerging technologies. ...
  52. [52]
    [PDF] FRAIGs: Functionally Reduced AND-INV Graphs - People @EECS
    AND-INV graphs (AIGs) are Boolean networks composed of two-input AND-gates and inverters. They can be used to represent and manipulate large Boolean ...
  53. [53]
    An Algorithm for NAND Decomposition Under Network Constraints
    A branch-and-bound algorithm is presented for the synthesis of multioutput, multilevel, cycle-free NAND networks to realize an arbitrary given set of ...
  54. [54]
    [PDF] Enabling Exact Delay Synthesis - People @EECS
    Aug 8, 2017 · Complete ASIC design results, using a commercial design flow and 51 benchmarks, show that our synthesis techniques reduce the worst negative ...
  55. [55]
    [PDF] NP-Hardness of Circuit Minimization for Multi-Output Functions
    This paper shows that minimizing the circuit size of a multi-output Boolean function is NP-hard, the first such result for general Boolean circuits.
  56. [56]
    [PDF] BESWAC: Boosting Exact Synthesis via Wiser SAT Solver Call
    May 12, 2024 · Abstract—SAT-based exact synthesis is a critical technique in logic synthesis to generate optimal circuits for given Boolean functions.
  57. [57]
    [PDF] ABC: An Academic Industrial-Strength Verification Tool
    This paper introduces ABC, motivates its development, and illustrates its use in formal verification. Keywords: Model checking, equivalence checking, logic ...
  58. [58]
    [PDF] Logic Synthesis in a Nutshell - FLOLAC
    Oct 16, 2008 · We have d + e being a level-0 kernel of both functions. Extraction results in L = d + e, X = a·b·(c·L + f + g) + h, and Y = a·i·(c ...
  59. [59]
    [PDF] SAT-Based Logic Optimization and Resynthesis - People @EECS
    The paper proposes an integrated SAT-based logic optimization useful as part of tech-independent synthesis and as a new post-mapping resynthesis. The ...
  60. [60]
    [PDF] the decomposition of switching functions - People @EECS
    Robert L. Ashenhurst. If values are assigned to all the variables of a function, the result is a function of no variables ...
  61. [61]
    A new approach to the design of switching circuits - Internet Archive
    Jun 30, 2010 · A new approach to the design of switching circuits. by: Curtis, Herbert Allen. Publication date: 1962. Publisher: Van Nostrand Reinhold.
  62. [62]
    [PDF] Multi-Level Logic Synthesis - UCSD CSE
    “MIS: A Multiple-Level Logic Optimization System,” IEEE Transactions on CAD of ICs, vol. CAD-6, no. 6, November 1987, pp. 1062-1081. Giovanni De Micheli ...
  63. [63]
    [PDF] Boolean decomposition using two-literal divisors - UPCommons
    The results show improvements of 15% on average, and up to 50% in some examples, w.r.t. algebraic decomposition.
  64. [64]
    DAGON: Technology binding and local optimization by DAG matching
    Technology binding is the process of mapping a technology independent description of a circuit into a particular technology. This paper outlines a formalism ...
  65. [65]
    [PDF] Reducing Structural Bias in Technology Mapping
    Technology mapping based on DAG-covering suffers from the problem of structural bias: the structure of the mapped netlist depends strongly on the subject graph.
  66. [66]
    [PDF] Retiming Synchronous Circuitry
    Aug 20, 1986 · In this paper we exhibit a polynomial-time algorithm for determining a retiming of a circuit that minimizes clock period. ...
  67. [67]
    Retiming synchronous circuitry | Algorithmica
    This paper describes a circuit transformation called retiming in which registers are added at some points in a circuit and removed from others in such a way ...
  68. [68]
    [PDF] Optimal Wire Sizing and Buffer Insertion for Low Power ... - UCSD CSE
    Previous work on post-layout buffer insertion includes [17], in which van Ginneken gave an elegant polynomial time algorithm for delay-optimal buffer ...
  69. [69]
    [PDF] Slew-Aware Buffer Insertion for Through-Silicon-Via-Based 3D ICs
    We propose a buffer insertion algorithm that further reduces delay by considering slew explicitly.
  70. [70]
  71. [71]
    Process and environmental variation impacts on ASIC timing
    In this paper we will discuss the sources of variation; by introducing the concepts of systematic inter-die variation, systematic intra-die variation and intra- ...
  72. [72]
    Process Variation Tolerant Standard Cell Library Development ...
    These techniques are useful in analyzing the impact of process variations on performance and power in nanometer CMOS designs. In this extended abstract, we ...
  73. [73]
    [PDF] Logic Synthesis Meets Machine Learning: Trading Exactness for ...
    Dec 15, 2020 · To evaluate the algorithms proposed by the participants, we created a set of 100 benchmarks drawn from a mix of standard problems in logic ...
  74. [74]
    Area-Driven FPGA Logic Synthesis Using Reinforcement Learning
    Jan 31, 2023 · We demonstrate conclusive learning by the RL agent and show significant FPGA area reductions vs. the conventional approach (resyn2).
  75. [75]
    [PDF] Area-Driven FPGA Logic Synthesis Using Reinforcement Learning
    Jan 17, 2023 · The immediate next step is to extend the evaluation to depth-driven synthesis, automate the selection of episode length as well as fine-tuned a ...
  76. [76]
    [PDF] AI-driven Reinforcement Learning-Based Logic Synthesis Framework
    Feb 8, 2023 · More specifically, we will use reinforcement learning and representation learning algorithms to improve a classical logic rewriting optimization ...
  77. [77]
    [PDF] RouteNet: Routability Prediction for Mixed-Size Designs Using ...
    Nov 8, 2018 · The proposed method, called RouteNet, can either evaluate the overall routability of cell placement solutions without global routing or predict ...
  78. [78]
    Routing Congestion Prediction Using Deep Graph Neural Networks
    Dec 5, 2019 · We present a graph-based deep learning method for quickly predicting logic-induced routing congestion hotspots from a gate-level netlist before placement.
  79. [79]
    [PDF] GRANNITE: Graph Neural Network Inference for Transferable Power ...
    During training, GRANNITE learns how to propagate average toggle rates through combinational logic: a netlist is represented as a graph, register states and ...
  80. [80]
    Logic Neural Networks for Efficient FPGA Implementation
    Summary of Logic Neural Networks (LNNs) for FPGA implementation.
  81. [81]
    [PDF] Introducing Vivado® ML Editions
    Vivado ML Editions are state-of-the-art EDA with ML algorithms, offering ML-based design optimization, 10% average QoR gain, and hierarchical compile of ...
  82. [82]
    EDA Challenges Machine Learning - Semiconductor Engineering
    Dec 14, 2017 · Many tasks in EDA could be perfect targets for machine learning, except for the lack of training data. What might change to fix that?
  83. [83]
    The Dawn of AI-Native EDA: Opportunities and Challenges of ... - arXiv
    This paper argues for a paradigm shift from AI4EDA towards AI-native EDA, integrating AI at the core of the design process.
  84. [84]
    [PDF] VPR: A New Packing, Placement and Routing Tool for FPGA Research
    Abstract. We describe the capabilities of and algorithms used in a new FPGA CAD tool,. Versatile Place and Route (VPR). In terms of minimizing routing area, ...
  85. [85]
    Imbalanced Large Graph Learning Framework for FPGA Logic ...
    Aug 7, 2023 · In this work, we propose an imbalanced large graph learning framework, ImLG, for prediction of whether logic elements will be packed after placement.
  86. [86]
    Lattice to Showcase Advanced Edge AI Solutions at New-Tech 2025 ...
    May 6, 2025 · Lattice technology demos will be featured in the Telsys Ltd. booth, showcasing industry-leading low power, small form factor FPGAs and application-specific ...
  87. [87]
    [PDF] Partial Reconfiguration for Design Optimization
    Abstract—FPGA designers have traditionally shared a similar design methodology with ASIC designers. Most notably, at design time, FPGA designers commit to a ...
  88. [88]
    [PDF] Simplifying Logic Circuits with Karnaugh Maps
    A useful K-map is one of three variables. • Each square represents a 3-variable minterm or maxterm. • All of the 8 possible 3- ...
  89. [89]
    [PDF] a Boolean Minimizer - BOOM
    Both "easy" and "hard" MCNC benchmarks were solved and compared with the solutions obtained by ESPRESSO. In many cases the time needed to find the minimum ...