Petrick's method
Petrick's method is a technique in Boolean algebra for determining all minimum sum-of-products (SOP) solutions from a prime implicant chart, enabling the identification of irredundant covers for minimizing logic functions.[1]
Developed by Stanley R. Petrick in 1956 as part of a technical report for the Air Force Cambridge Research Center, the method provides a systematic way to resolve the selection of prime implicants in cases where the Quine-McCluskey algorithm encounters cyclic dependencies or non-essential implicants that do not uniquely cover minterms.[2] It is particularly useful for synthesizing efficient digital circuits by reducing the number of gates and literals required in the implementation of Boolean expressions.[3]
The core of Petrick's method involves constructing a product-of-sums (POS) expression from the prime implicant table, where each sum term represents the options for covering a specific minterm column after removing essentials. This POS form is then algebraically simplified into its minimal SOP equivalent, with each product term in the SOP corresponding to a potential minimal cover of implicants; the solutions with the fewest literals are selected as optimal.[1][3] While computationally intensive for large functions—exponential in the number of variables—it remains a foundational exact method for logic minimization, often implemented in software tools for functions up to 15 variables or more.[1]
Background
Development and History
Petrick's method was developed by Stanley R. Petrick in 1956 while working at the U.S. Air Force Cambridge Research Center. In a technical report titled A Direct Determination of the Irredundant Forms of a Boolean Function from the Set of Prime Implicants, Petrick presented the technique as part of broader research in switching theory. This report, designated AFCRC-TR-56-110 and dated April 10, 1956, addressed challenges in synthesizing efficient logic circuits by minimizing Boolean expressions.
The method was specifically introduced to solve the covering problem in Boolean minimization, where the goal is to select a minimal set of prime implicants that collectively cover all necessary minterms without redundancy. Petrick's approach transformed the selection process into a systematic algebraic expansion, allowing for the identification of all irredundant sum-of-products forms from a given set of prime implicants. This innovation built upon prior work in prime implicant generation, providing a direct path to minimal covers essential for practical circuit design.[4]
In 1962, Insley B. Pyne and Edward J. McCluskey introduced significant improvements to Petrick's method, focusing on reducing computational redundancy in prime implicant tables.[5] Their paper, "The Reduction of Redundancy in Solving Prime Implicant Tables," published in the IRE Transactions on Electronic Computers (vol. EC-11, no. 4, pp. 473–482), proposed algorithmic refinements that eliminated unnecessary terms during the expansion process, enhancing efficiency for functions with larger numbers of variables.[5] These enhancements made the method more feasible for implementation in early computational environments, addressing limitations in Petrick's original formulation.[6]
Petrick's method and its refinements played a pivotal role in the early stages of computer-aided design (CAD) for digital logic circuits, facilitating automated tools for Boolean function minimization during the 1950s and 1960s. By enabling systematic determination of minimal logic expressions, it contributed to the development of more compact and reliable switching networks, influencing foundational software for electronic design automation.
Mathematical Foundations
Petrick's method operates within the framework of Boolean algebra, specifically addressing the minimization of sum-of-products (SOP) expressions for Boolean functions. An SOP expression represents a Boolean function as a disjunction (logical OR) of conjunctions (logical AND) of literals, where each literal is a variable or its complement. The primary goal of minimization is to reduce the number of literals and product terms while preserving the function's logical behavior, thereby simplifying circuit implementations and lowering costs.[7][8]
Central to this process are minterms, which are the canonical product terms corresponding to the rows of a truth table where the function evaluates to 1 (the ON-set). Each minterm includes every variable exactly once, either complemented or uncomplemented, fully specifying a single input combination; a function of n variables has 2^n possible minterms, and the ON-set is denoted collectively as \sum m(i) for the indices i where the function is true. An implicant is any product term that covers one or more minterms without covering any combination from the OFF-set (where the function is 0). Prime implicants are the maximal such terms: no literal can be removed from a prime implicant without it covering some combination in the OFF-set, so each one represents the broadest possible grouping of adjacent minterms.[7][8]
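As a brief illustration (a hypothetical function chosen only for this paragraph), consider the three-variable function

f(A, B, C) = \sum m(1, 3, 5, 7)

whose minterms are m_1 = A'B'C, m_3 = A'BC, m_5 = AB'C, and m_7 = ABC. The term A'C covers m_1 and m_3 and no OFF-set combination, so it is an implicant; it is not prime, because the literal A' can be dropped to give C, which still covers only ON-set combinations. The single literal C covers all four minterms and cannot be reduced further, so C is a prime implicant.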
The prime implicant chart, also known as the covering table, organizes this information as a matrix with rows representing prime implicants and columns representing minterms of the ON-set. A 1 (or 'X') is placed at the intersection if the prime implicant covers that minterm, visually depicting the coverage relationships essential for minimization.[7][8]
This setup frames the minimization problem as an instance of the set covering problem: identify the smallest subset of prime implicants whose union covers every minterm in the ON-set at least once, with don't care conditions (minterms where the function value is unspecified) allowing optional coverage to further optimize. The method provides an exact solution to this covering challenge for irredundant SOP forms.[7][8]
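The set-covering view can be made concrete with a small data-structure sketch. The following Python fragment is purely illustrative (the chart contents and helper name are hypothetical): it stores a prime implicant chart as a mapping from implicants to the ON-set minterms they cover and checks whether a chosen subset of implicants forms a valid cover.

```python
# Hypothetical reduced chart: each prime implicant maps to the ON-set minterms it covers.
chart = {
    "P1": {0, 1},
    "P2": {1, 2},
    "P3": {2, 3},
}
on_set = {0, 1, 2, 3}  # minterms that must be covered

def is_cover(selection, chart, on_set):
    """Return True if the selected implicants jointly cover every ON-set minterm."""
    covered = set().union(*(chart[p] for p in selection))
    return on_set <= covered

print(is_cover({"P1", "P3"}, chart, on_set))  # True: {0,1} and {2,3} cover everything
print(is_cover({"P2"}, chart, on_set))        # False: minterms 0 and 3 remain uncovered
```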
Algorithm Description
Core Steps
Petrick's method provides a systematic procedure to derive the minimal sum-of-products (SOP) expression from a prime implicant chart after essential prime implicants have been identified. The process begins by reducing the chart to focus on non-essential implicants that cover the remaining minterms, ensuring computational efficiency before proceeding to the core minimization phase.[3][9]
The first step involves reducing the prime implicant chart by selecting all essential prime implicants—those that are the only cover of at least one minterm—and eliminating the rows of those implicants together with the columns of every minterm they cover. This reduction simplifies the problem by fixing the parts of the cover that must be included, leaving only a cyclic or otherwise unresolved subchart for further processing. Essential implicants are identified by scanning each minterm column for a column that is marked by exactly one implicant row, as sketched below.[10][3]
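The following Python sketch of this scan and the subsequent reduction assumes the chart is stored as a mapping from implicant labels to the sets of minterm indices they cover; the representation and helper names are illustrative assumptions, not part of the original formulation.

```python
def essential_implicants(chart, on_set):
    """Identify prime implicants that are the sole cover of at least one minterm."""
    essentials = set()
    for minterm in on_set:
        covering = [p for p, covered in chart.items() if minterm in covered]
        if len(covering) == 1:            # column marked by exactly one implicant
            essentials.add(covering[0])
    return essentials

def reduce_chart(chart, on_set, essentials):
    """Remove the essential rows and every minterm column they cover."""
    covered = set().union(*(chart[p] for p in essentials)) if essentials else set()
    remaining = on_set - covered
    reduced = {p: chart[p] & remaining for p in chart
               if p not in essentials and chart[p] & remaining}
    return reduced, remaining
```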
In the second step, the remaining prime implicants in the reduced chart are labeled as variables, such as P_1, P_2, \dots, P_k, where each P_i represents the inclusion of the corresponding implicant in the final cover. A product-of-sums expression P is then formed, where for each remaining minterm j, a sum term includes all P_i that cover j:
P = \prod_{j} \left( \sum_{i \in \text{covering } j} P_i \right)
This expression P encodes all possible combinations of implicants that cover every minterm in the reduced chart, with P = 1 indicating a valid cover.[9][3][10]
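For instance, in a hypothetical reduced chart where one remaining minterm is covered only by P_1 and P_2 and the other only by P_2 and P_3, the expression and its expansion are

P = (P_1 + P_2)(P_2 + P_3) = P_2 + P_1 P_3

so the valid irredundant covers are \{P_2\} alone or \{P_1, P_3\} together, with the cheaper of the two (by implicant and literal count) chosen in the final step.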
In the third step, P is multiplied out into sum-of-products form using the distributive law and then reduced with the absorption identity X + XY = X, so that each surviving product term corresponds to an irredundant selection of prime implicants covering all remaining minterms. While this full expansion can be computationally intensive, practical implementations may prune during expansion or fall back on heuristics for larger functions.[3]
Finally, in the fourth step, the minimal solutions are chosen from these product terms based on criteria such as the fewest implicants or the smallest total number of literals, ensuring the irredundant cover with the lowest cost. In a chart with cyclic dependencies, multiple minimal covers may exist, and the method enumerates all of them before the optimal one is chosen.[9][10][3]
Logical Expansion Process
The logical expansion process in Petrick's method centers on constructing a Boolean expression that encodes the covering constraints from the prime implicant chart and algebraically deriving the minimal selections of implicants. The initial product P is formed as a product-of-sums expression, where for each minterm in the function, a sum term is created from the variables representing the prime implicants that cover that minterm. Specifically, P = \prod_{m} \left( \sum_{p_i \text{ covering } m} p_i \right), with each p_i denoting the selection of prime implicant i. This ensures P = 1 exactly when the selected implicants collectively cover all required minterms; redundant selections are still present at this stage and are removed later by absorption.[11]
The POS form of P is expanded to SOP using the distributive law and Boolean simplification rules, such as idempotence and absorption, to obtain all product terms representing valid covers. For instance, expanding (p_1 + p_2)(p_1 + p_3) and simplifying yields p_1 + p_2 p_3, and multiplying in further factors produces the complete set of conjunctions representing potential covers. This expansion systematically generates every valid product term, though it can result in a large number of terms for functions with many minterms.[1]
The expanded sum-of-products is then minimized by applying the absorption law, which eliminates subsumed terms via identities like X + X Y = X, removing any product that is covered by a broader term. These algebraic reductions prune the expression to its irredundant form, ensuring only essential combinations remain without altering the logical coverage.[12]
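A minimal sketch of this pruning step, assuming each expanded product term is stored as a frozenset of implicant labels (a representational choice made for illustration only):

```python
def absorb(products):
    """Apply X + XY = X: drop any product term that is a proper superset of another term."""
    products = set(products)
    return {p for p in products if not any(q < p for q in products)}  # q < p: proper subset

# Example: {p1} absorbs {p1, p2}, so only the irredundant terms remain.
terms = {frozenset({"p1"}), frozenset({"p1", "p2"}), frozenset({"p2", "p3"})}
print(absorb(terms))  # {frozenset({'p1'}), frozenset({'p2', 'p3'})}
```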
In the minimized expansion, the candidate solutions are the product terms built from the fewest implicants, each corresponding to a minimal set of prime implicants that fully covers the function. Optimality is then assessed by counting the literals contributed by the selected implicants, as fewer literals indicate selections requiring fewer gate inputs or lower overall cost in the resulting sum-of-products implementation; for example, a term like p_1 p_4 may represent a two-implicant cover superior to longer alternatives. This identification completes the process, yielding all minimal sum-of-products forms directly from the algebraic structure.[1]
Implementation
Pseudocode Representation
Petrick's method can be implemented algorithmically by processing a prime implicant chart, which serves as the input representing the coverage of minterms by prime implicants after essential implicants have been identified and removed.[1][3] The output is a set of minimal sum-of-products (SOP) terms that cover all minterms with the fewest literals. Key functions include mapping minterms to their covering implicants, constructing the product-of-sums expression P, expanding P via the distributive law to generate all possible product terms, and minimizing by applying absorption laws to eliminate redundant terms.[13][3]
Suitable data structures facilitate efficient manipulation: an array or list of prime implicants, each indexed and associated with the minterms it covers; and for the sum terms in P, a structure (e.g., a list or tuple) tracking the implicant indices and literal counts involved in each sum.[1] These allow tracking coverage without explicit Boolean variable assignment, focusing on combinatorial expansion. The overall flow begins with reducing the chart by essentials, builds P as a product of sums (one per remaining minterm, summing its covering implicants), expands to all combinations representing potential covers, and selects those with the minimal total literal count via simplification.[13][3]
The following pseudocode outlines a generalized implementation, assuming the input chart is a matrix chart where rows are prime implicants (0 to m-1) and columns are minterms (0 to n-1), with 1 indicating coverage. It uses lists for implicants and terms, and basic set operations for absorption.
function petricksMethod(chart):
    // Assume essential prime implicants have already been removed; chart is the residual chart
    m = number of prime implicants  // rows
    n = number of minterms          // columns

    // Step 1: for each minterm, collect the list of covering implicants (row indices)
    coveringSums = empty list of size n
    for j from 0 to n-1:
        sum_j = empty list
        for i from 0 to m-1:
            if chart[i][j] == 1:
                sum_j.append(i)
        coveringSums[j] = sum_j

    // Step 2: build the product-of-sums P as a list of sums (each sum is a list of implicant indices)
    P = coveringSums  // P = [coveringSums[0], coveringSums[1], ..., coveringSums[n-1]]

    // Step 3: expand P using the distributive law to generate all product terms;
    // each product term is a set of implicant indices covering every remaining minterm
    allProducts = expandProduct(P)

    // Nested helper: recursively multiply the sums into products
    function expandProduct(sumsList):
        if sumsList is empty:
            return [empty set]
        firstSum = sumsList[0]
        rest = sumsList[1:]
        subProducts = expandProduct(rest)
        result = empty list
        for term in firstSum:
            for sub in subProducts:
                result.append(sub union {term})
        return result

    // Step 4: apply absorption (X + XY = X) by discarding any product that is a proper
    // superset of another, then keep the products with the minimal total literal count
    allProducts = [p in allProducts such that no other q in allProducts is a proper subset of p]
    minimalCovers = empty list
    minLiterals = infinity
    for productSet in allProducts:
        totalLiterals = 0
        for implicantIdx in productSet:
            totalLiterals += literalCount(implicantIdx)  // precomputed per implicant
        if totalLiterals < minLiterals:
            minLiterals = totalLiterals
            minimalCovers = [productSet]
        elif totalLiterals == minLiterals:
            minimalCovers.append(productSet)

    // Step 5: output minimal SOP terms (map each set back to its implicant expressions)
    return minimalCovers  // list of sets, each convertible to an SOP term
This pseudocode generates all valid covers exhaustively, which is practical only for small charts; pruning during expansion (for example, absorbing subsumed partial products as they are formed) improves scalability. Literal counts are assumed precomputed from the implicant representations.[1][13][3]
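A compact runnable companion to the pseudocode is sketched below in Python. It assumes the reduced chart is supplied as a mapping from implicant labels to the sets of minterms each covers, and that implicant cost is measured by literal count; these representational choices and helper names are illustrative assumptions rather than part of the published method.

```python
from itertools import product

def petrick(chart, literal_count):
    """chart: {implicant_label: set of covered minterms}, essentials already removed.
    literal_count: {implicant_label: number of literals}.
    Returns all minimum-cost covers, each as a frozenset of implicant labels."""
    # Steps 1-2: one sum per minterm, listing the implicants that cover it.
    minterms = set().union(*chart.values())
    sums = [[p for p, cov in chart.items() if m in cov] for m in sorted(minterms)]

    # Step 3: distribute the product of sums; choosing one implicant per sum gives a cover.
    products = {frozenset(choice) for choice in product(*sums)}

    # Step 4: absorption (X + XY = X): drop covers that strictly contain another cover.
    irredundant = [p for p in products if not any(q < p for q in products)]

    # Step 5: keep only the covers with the smallest total literal count.
    cost = lambda p: sum(literal_count[i] for i in p)
    best = min(cost(p) for p in irredundant)
    return [p for p in irredundant if cost(p) == best]

# Usage on the cyclic chart of the worked example below (every implicant has two literals):
chart = {"A'B'": {0, 1}, "A'C'": {0, 2}, "B'C": {1, 5},
         "BC'": {2, 6}, "AC": {5, 7}, "AB": {6, 7}}
for cover in petrick(chart, {p: 2 for p in chart}):
    print(sorted(cover))   # prints the two minimal three-implicant covers
```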
Computational Aspects
Petrick's method lends itself well to computer implementation, as its manual application becomes exceedingly tedious for prime implicant charts with more than a modest number of terms due to the exponential growth in the expansion of product-of-sums expressions. The algorithmic nature of forming covering clauses and multiplying them out into sum-of-products allows for straightforward encoding in software, enabling exact minimization for functions up to approximately 15 variables, depending on the function's structure and implementation efficiency.[1]
In handling don't care conditions, the method incorporates them by excluding don't care minterms from the required coverage columns of the prime implicant chart, while still allowing them to enlarge prime implicants during implicant generation. Implicants that span don't-care positions may therefore be selected optionally, broadening the options for achieving a minimal cover without altering the function's specified behavior and often yielding more compact expressions.[7]
Petrick's method is commonly integrated with prime implicant generation tools, such as the Quine-McCluskey algorithm, serving as a post-processing step to resolve the set covering problem and identify all minimal solutions from the generated implicants. This modular approach facilitates its use in broader logic minimization pipelines.[14]
In modern logic synthesis, adaptations of Petrick's method appear in academic and industrial tools for exact two-level optimization, particularly in the Berkeley ABC system for handling small to medium combinational circuits where precise minimization is needed despite computational cost. These implementations often limit its application to subsystems or initial stages before heuristic refinement.[15]
For optimization, pruning strategies during the expansion phase, including pre-reduction of the implicant table via row and column dominance to eliminate redundant choices and selective branching to avoid exhaustive multiplication of clauses, significantly mitigate the risk of full enumeration exploding in size for larger charts. Such techniques maintain exactness while improving runtime efficiency.[3]
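As an illustration of such table pre-reduction, the sketch below performs one pass of row and column dominance under the same hypothetical dictionary representation used earlier; an exact minimizer would interleave such passes with essential-implicant extraction until the chart stops shrinking.

```python
def dominance_reduce(chart, minterms, cost):
    """One pass of row and column dominance on a residual chart {implicant: set of minterms}."""
    # Row dominance: drop implicant r if another implicant s of no greater cost covers
    # every remaining minterm that r covers.
    rows = dict(chart)
    for r in sorted(rows):
        if any(s != r and (rows[r] & minterms) <= (rows[s] & minterms) and cost[s] <= cost[r]
               for s in rows):
            del rows[r]

    # Column dominance: drop minterm c1 if some other minterm c2 is covered only by
    # implicants that also cover c1, since covering c2 then covers c1 automatically.
    cols = set(minterms)
    for c1 in sorted(minterms):
        if c1 not in cols:
            continue
        covers_c1 = {p for p in rows if c1 in rows[p]}
        for c2 in sorted(cols):
            if c2 == c1:
                continue
            covers_c2 = {p for p in rows if c2 in rows[p]}
            if covers_c2 and covers_c2 <= covers_c1:
                cols.remove(c1)
                break
    return rows, cols
```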
Worked Example
Setup and Prime Implicant Chart
To illustrate the setup for Petrick's method, consider the 3-variable Boolean function f(A, B, C) = \sum m(0, 1, 2, 5, 6, 7), which includes the minterms corresponding to binary inputs 000, 001, 010, 101, 110, and 111.[1]
The minterms are explicitly: m_0 = A'B'C', m_1 = A'B'C, m_2 = A'BC', m_5 = AB'C, m_6 = ABC', and m_7 = ABC. A prime implicant is a product term that covers one or more minterms and cannot be further simplified by combining with adjacent terms without including 0s of the function. Using the Quine-McCluskey algorithm or a Karnaugh map, the prime implicants for this function are identified as A'B' (covering m_0, m_1), A'C' (covering m_0, m_2), B'C (covering m_1, m_5), BC' (covering m_2, m_6), AC (covering m_5, m_7), and AB (covering m_6, m_7).[1]
The prime implicant chart is constructed with rows representing these prime implicants and columns representing the minterms. An X is placed in each cell where a prime implicant covers the corresponding minterm, as shown below:
| Prime Implicant | m_0 | m_1 | m_2 | m_5 | m_6 | m_7 |
|---|---|---|---|---|---|---|
| A'B' | X | X | | | | |
| A'C' | X | | X | | | |
| B'C | | X | | X | | |
| BC' | | | X | | X | |
| AC | | | | X | | X |
| AB | | | | | X | X |
In this chart, no essential prime implicants exist, as each minterm is covered by exactly two prime implicants, requiring the full application of Petrick's method to find minimal covers.[1]
Step-by-Step Application
In the absence of essential prime implicants in the prime implicant chart for the example function f(A, B, C) = \sum m(0,1,2,5,6,7), the full chart is retained for coverage analysis. The prime implicants are labeled as follows: K = A'B' covering minterms 0 and 1, N = A'C' covering minterms 0 and 2, Q = B'C covering minterms 1 and 5, L = BC' covering minterms 2 and 6, M = AC covering minterms 5 and 7, P = AB covering minterms 6 and 7.[1]
To find the minimal covering sets, form the covering expression as the product of sums corresponding to each minterm's covering options (here the label P denotes the prime implicant AB, not the covering expression itself):
(K + N)(K + Q)(N + L)(Q + M)(L + P)(M + P) = 1
This condition holds exactly when every minterm is covered by at least one selected prime implicant.[1]
Next, expand this product into a sum-of-products form by successively applying the distributive law and eliminating redundant terms through absorption (e.g., XY + XYZ = XY). The expansion reduces to
KLM + KNMP + NQLM + KQLP + NQP
and each term represents a potential covering set of prime implicants; one way to reach this result is shown step by step below.[1]
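The reduction can be carried out, for example, by multiplying the sums in pairs and absorbing redundant terms as early as possible; the grouping below is one convenient order, not the only one:

(K + N)(K + Q) = K + NQ
(N + L)(L + P) = L + NP
(Q + M)(M + P) = M + QP
(K + NQ)(L + NP)(M + QP) = KLM + KNMP + NQLM + KQLP + NQP

where the final equality uses the absorption law to remove the four-implicant products subsumed by NQP.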
To identify the minimal expressions, evaluate the terms by the total number of implicants (and literals), prioritizing those with the fewest for cost efficiency in implementation. The minimal terms are KLM and NQP, each with 3 implicants and 6 literals. Substituting the prime implicants yields two equivalent minimal sum-of-products expressions:
f = A'B' + BC' + AC
or
f = A'C' + B'C + AB
Verification confirms both expressions cover all required minterms without including don't-cares or zeros: A'B' + BC' + AC covers 0 (A'B'), 1 (A'B'), 2 (BC'), 5 (AC), 6 (BC'), and 7 (AC); similarly for the alternative. This completes the application, yielding irredundant covers of equal cost.[1]
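The verification can also be carried out mechanically. The short Python check below is an illustrative sketch (the helper names are arbitrary) confirming that both minimal expressions realize f(A, B, C) = \sum m(0, 1, 2, 5, 6, 7):

```python
def bits(i):
    """Decode minterm index i as (A, B, C) with A the most significant bit."""
    return (i >> 2) & 1, (i >> 1) & 1, i & 1

def cover1(i):            # f = A'B' + BC' + AC
    a, b, c = bits(i)
    return (not a and not b) or (b and not c) or (a and c)

def cover2(i):            # f = A'C' + B'C + AB
    a, b, c = bits(i)
    return (not a and not c) or (not b and c) or (a and b)

on_set = {0, 1, 2, 5, 6, 7}
assert {i for i in range(8) if cover1(i)} == on_set
assert {i for i in range(8) if cover2(i)} == on_set
print("both minimal covers realize the same function")
```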
Limitations and Comparisons
Challenges and Complexity
Petrick's method encounters significant challenges due to its inherent computational demands, particularly in the logical expansion phase where a product-of-sums expression is converted to sum-of-products form. This expansion process can generate an exponential number of terms, as each clause in the product contributes multiplicatively to the total output size.[16]
In the worst case, for a function with k minterms where the i-th minterm is covered by n_i prime implicants, the expansion may produce up to \prod_{i=1}^k n_i terms, reflecting the combinatorial explosion inherent to the covering problem. This growth renders the method highly sensitive to the size of the prime implicant chart, with time and space complexity reaching O(\prod n_i). For Boolean functions with more than 7 or 8 variables, the running time increases dramatically, often making exhaustive application unrealistic without computational aids or approximations.[17][18]
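In the worked example above, for instance, each of the six minterms is covered by exactly two prime implicants (n_i = 2 for every i), so the raw expansion contains at most

\prod_{i=1}^{6} n_i = 2^6 = 64

product terms before absorption collapses them to a handful of irredundant covers.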
Manual application of Petrick's method is particularly tedious for charts involving more than a modest number of minterms, as the combinatorial proliferation demands meticulous algebraic manipulation that quickly becomes error-prone and time-consuming. The method's scalability is further limited for functions with a large number of variables, where the sheer volume of prime implicants and potential terms overwhelms practical computation absent specialized heuristics.[1]
Additionally, the approach is sensitive to the handling of don't care conditions during prime implicant generation; if don't cares are not correctly incorporated to expand implicants without requiring coverage, the resulting chart may yield suboptimal minimal covers despite the method's exact nature.[19]
Relation to Other Minimization Techniques
Petrick's method serves as a complementary technique to the Quine-McCluskey algorithm in Boolean function minimization. The Quine-McCluskey algorithm systematically generates all prime implicants through a tabular comparison process, but it leaves the selection of a minimum cover from these implicants unresolved, particularly in cases involving cyclic dependencies among minterms. Petrick's method addresses this by constructing a Boolean expression from the prime implicant chart and expanding it to identify the minimal sum-of-products cover, ensuring an exact solution to the covering problem.[7]
In contrast to heuristic-based approaches like the Espresso algorithm, Petrick's method provides an exact minimization but at the cost of exhaustive computation. Espresso employs iterative heuristics—such as expansion, reduction, and irredundancy checks—to approximate optimal covers for large-scale functions, making it more practical for electronic design automation (EDA) tools handling complex circuits. Petrick's method, while guaranteeing the global minimum, becomes computationally prohibitive for functions with many variables due to the exponential growth in the expansion of product-of-sums terms.[20]
Petrick's expansion technique forms the foundational algebraic basis for branch-and-bound methods in exact logic minimization solvers. These modern variants are integrated into EDA tools like those from Synopsys and Cadence and apply pruning and bounding to mitigate the exhaustive nature of the original method while preserving optimality.[10]
As of 2025, Petrick's method continues to influence practical implementations, such as in MongoDB's query optimization using modified Quine-McCluskey and Petrick's techniques for simplifying Boolean expressions.[21]
A key advantage of Petrick's method is its guarantee of finding all minimal solutions, providing verifiable optimality that heuristic methods like Espresso cannot always ensure. However, it is more laborious than visual aids such as Karnaugh maps for functions with fewer than six variables, where the latter's intuitive grouping avoids algebraic expansion altogether, and it lags behind heuristics for larger problems due to its tedious manual application and high computational demands.[1]