IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, VOL. 17, NO. 11, NOVEMBER 1998
Fast Heuristic and Exact Algorithms for
Two-Level Hazard-Free Logic Minimization
Michael Theobald and Steven M. Nowick
Abstract— None of the available minimizers for two-level
hazard-free logic minimization can synthesize very large circuits.
This limitation has forced researchers to resort to manual and
automated circuit partitioning techniques.
This paper introduces two new two-level hazard-free logic
minimizers: ESPRESSO-HF, a heuristic method loosely based on
ESPRESSO-II, and IMPYMIN, an exact method based on implicit data
structures. Both minimizers can solve all currently available
examples, which range up to 32 inputs and 33 outputs. These
include examples that have never been solved before. For the
more difficult examples that can be solved by other minimizers,
our methods are several orders of magnitude faster.
As by-products of these algorithms, we also present two
additional results. First, we propose a fast new method to check
if a hazard-free covering problem can feasibly be solved. Second,
we introduce a novel reformulation of the two-level hazard-free
logic minimization problem by capturing hazard-freedom
constraints within a synchronous function through the addition
of new variables.
I. INTRODUCTION
ASYNCHRONOUS design has been the focus of much
recent research activity [10]. In fact, asynchronous design
has been applied to several large-scale control and data-path
circuits and microprocessors [14], [22], [15], [23], [35], [39],
[20], [2].
A number of methods have been developed for the design
of asynchronous controllers. Much of the recent work has
focused on Petri Net-based methods [18], [1], [16], [5], [40],
[6], [7] and burst-mode methods [27], [9], [25], [43], [17],
[31], [13]. These two classes of design methods differ in
fundamental aspects: the delay model and how the circuit
interacts with its environment [10]. Petri Net-based methods
typically synthesize circuits to work correctly regardless of
gate delays (speed-independent delay model), and the environment is allowed to respond to the circuit’s outputs without
timing constraints (input/output mode). In contrast, burst-mode
methods synthesize combinational logic to work correctly
regardless of gate and wire delays, but the correct sequential
operation depends on timing constraints. In particular, the environment must wait for a circuit to stabilize before responding with new inputs (fundamental mode).

Manuscript received December 30, 1997; revised May 13, 1998. This work was supported by the National Science Foundation under Grant MIP-9501880 and by an Alfred P. Sloan Research Fellowship. This work is an extended version of two recent papers presented at the International Symposium on Advanced Research in Asynchronous Circuits and Systems, March 1998, and the 33rd Design Automation Conference, 1996. This paper was recommended by Associate Editor E. Macii.
The authors are with the Department of Computer Science, Columbia University, New York, NY 10027 USA.
Publisher Item Identifier S 0278-0070(98)09489-5.
The focus of this paper is on fundamental-mode asynchronous circuits, such as burst-mode machines. Burst-mode
methods have recently been applied to several large and
real-world design examples, including a low-power infrared
communications chip [19], a second-level cache controller
[26], an SCSI controller [41], a differential equation solver
[42], and an instruction-length decoder [4].
An important challenge for any asynchronous synthesis
method is the development of optimized CAD tools. In synchronous design, CAD packages have been critical to the
advancement of modern digital design. In asynchronous design, however, a key constraint is to provide hazard-free logic,
i.e., to guarantee the absence of glitches [38]. Much progress
has been made in developing hazard-free synthesis methods,
including tools for exact two-level hazard-free logic minimization [29], optimal state assignment [12], [31], synthesis for
testability [28] and low-power logic synthesis [24]. However,
these tools have been limited in handling large-scale designs.
In particular, hazard-free two-level logic minimization is
a bottleneck in most asynchronous CAD packages. While
the currently used Quine–McCluskey-like exact hazard-free
minimization algorithm, HFMIN [12], has been effective on
small- and medium-sized examples, and is used in several
existing CAD packages [27], [9], [25], [43], [17], [13], it
has been unable to produce solutions for several large design
problems [17], [31]. This limitation has been a major reason
for researchers to invent and apply manual as well as automated techniques for partitioning circuits before hazard-free
logic minimization can be performed [17].
A. Contributions of This Paper
This paper introduces two very efficient two-level logic minimizers for hazard-free multioutput minimization: ESPRESSO-HF and IMPYMIN.
ESPRESSO-HF is an algorithm to solve the heuristic hazard-free two-level logic minimization problem. The method is
heuristic solely in terms of the cardinality of the solution. In
all cases, it guarantees a hazard-free solution. The algorithm
is based on ESPRESSO-II [30], [11], but with a number of
significant modifications to handle hazard-freedom constraints.
It is the first heuristic method based on ESPRESSO-II to
solve the hazard-free minimization problem. ESPRESSO-HF
also includes a new and much more efficient algorithm to
check for the existence of a hazard-free solution, without
generating all prime implicants.
IMPYMIN is an algorithm to solve the exact hazard-free two-level logic minimization problem. The algorithm uses an implicit approach that makes use of data structures such as binary
decision diagrams (BDD’s) [3] and zero-suppressed (Z)BDD’s
[21]. The algorithm is based on a novel theoretical approach
to hazard-free two-level logic minimization. The generation
of dynamic-hazard-free prime implicants is reformulated as
a synchronous prime implicant generation problem. This is
achieved by capturing hazard-freedom constraints within a
synchronous function by adding new variables. This technique
allows us to use an existing method for fast implicit generation
of prime implicants. Moreover, our novel approach can be
nicely incorporated into a very efficient implicit minimizer
for hazard-free logic. In particular, the approach makes it
possible to use the implicit set covering solver of SCHERZO
[8], the state-of-the-art minimization method for synchronous
two-level logic, as a black box.
Both ESPRESSO-HF and IMPYMIN can solve all currently
available examples, which range up to 32 inputs and 33 outputs. These include examples that have never been previously
solved. For examples that can be solved by the currently fastest
minimizer HFMIN, our two minimizers are typically several
orders of magnitude faster. In particular, IMPYMIN can find a
minimum-size cover for all benchmark examples in less than
813 s, and ESPRESSO-HF can find very good covers—at
most 3% larger than a minimum-size cover—in less than 105 s.
ESPRESSO-HF and IMPYMIN are somewhat orthogonal. On
the one hand, ESPRESSO-HF is typically faster than IMPYMIN.
On the other hand, IMPYMIN computes a cover of minimum
size, whereas ESPRESSO-HF is not guaranteed to find a minimum cover but typically does find a cover of very good
quality.
B. Paper Organization
Section II gives background on circuit models, hazards, and
hazard-free minimization. Section III describes the ESPRESSO-HF
algorithm for heuristic hazard-free minimization. Section IV
introduces a new approach to hazard-free minimization where
hazard-freedom constraints are captured by a constructed synchronous function, leading to a new method for computing
dynamic-hazard-free prime implicants. Based on the results of
Section IV, Section V introduces our new implicit method for
exact hazard-free minimization, called IMPYMIN. Section VI
presents experimental results and compares our approaches
with related work, and Section VII gives conclusions.
II. BACKGROUND

The material of this section focuses on hazards and hazard-free logic minimization and is taken from [12] and [29]. For simplicity, we focus on single-output functions. A generalization of these definitions to multioutput functions is straightforward and is described in [12].

A. Circuit Model

This paper considers combinational circuits, which can have arbitrary finite gate and wire delays (an unbounded wire delay model [29]). A pure delay model is assumed as well (see [38]).

B. Multiple-Input Changes

Definition II.1: Let A and B be two minterms. The transition cube [A, B] from A to B has start point A and end point B and contains all minterms that can be reached during a transition from A to B. More formally, [A, B] is the uniquely defined smallest cube that contains A and B: supercube(A, B). An input transition or multiple-input change from input state (minterm) A to B is described by transition cube [A, B].

A multiple-input change specifies what variables change value and what the corresponding starting and ending values are. Input variables are assumed to change simultaneously. (Equivalently, since inputs may be skewed arbitrarily by wire delays, inputs can be assumed to change monotonically in any order and at any time.) Once a multiple-input change occurs, no further input changes may occur until the circuit has stabilized. In this paper, we consider only transitions where the function f is fully defined, that is, where f(X) is defined for every X ∈ [A, B].

C. Function Hazards

A function f that does not change monotonically during an input transition is said to have a function hazard in the transition.

Definition II.2: A function f contains a static function hazard for the input transition from A to B if and only if 1) f(A) = f(B) and 2) there exists some input state X ∈ [A, B] such that f(X) ≠ f(A).

Definition II.3: A function f contains a dynamic function hazard for the input transition from A to B if and only if 1) f(A) ≠ f(B) and 2) there exists a pair of input states X and Y such that a) X ∈ [A, B] and Y ∈ [X, B] and b) f(X) = f(B) and f(Y) = f(A).

If a transition has a function hazard, no implementation of the function is guaranteed to avoid a glitch during the transition, assuming arbitrary gate and wire delays [29], [38]. Therefore, we consider only transitions that are free of function hazards.1

1 Sequential synthesis methods, which use hazard-free minimization as a substep, typically include constraints in their algorithms which insure that no transitions with function hazards are generated [27], [43].

D. Logic Hazards

If f is free of function hazards for a transition from input state A to B, an implementation may still have hazards due to possible delays in the logic realization.

Definition II.4: A circuit implementing function f contains a static (dynamic) logic hazard for the input transition from minterm A to minterm B if and only if 1) f(A) = f(B) (f(A) ≠ f(B)) and 2) for some assignment of delays to gates and wires, the circuit's output is not monotonic during the transition interval.

That is, a static logic hazard occurs if f(A) = f(B) = 1 (0), but the circuit's output makes an unexpected 1 → 0 → 1 (0 → 1 → 0) transition. A dynamic logic hazard occurs if f(A) = 1 and f(B) = 0 (f(A) = 0 and f(B) = 1), but the circuit's output makes an unexpected 1 → 0 → 1 → 0 (0 → 1 → 0 → 1) transition.
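To make these definitions concrete, here is a minimal sketch, not part of the paper's tool flow, assuming an illustrative encoding of minterms as strings over {'0','1'} and cubes as strings over {'0','1','-'}; all helper names are ours.

```python
# Minimal sketch of Definition II.1 (transition cubes) and Definitions
# II.2/II.3 (function hazards). Minterms are strings over {'0','1'};
# cubes are strings over {'0','1','-'} ('-' = variable not fixed).
from itertools import product

def supercube(a, b):
    """Transition cube [a, b]: smallest cube containing minterms a and b."""
    return ''.join(x if x == y else '-' for x, y in zip(a, b))

def cube_contains(cube, m):
    """True if minterm (or cube) m lies entirely inside cube."""
    return all(c == '-' or c == x for c, x in zip(cube, m))

def minterms_of(cube):
    """All minterms reachable during the transition, i.e., covered by cube."""
    opts = [('0', '1') if c == '-' else (c,) for c in cube]
    return [''.join(p) for p in product(*opts)]

def has_function_hazard(f, a, b):
    """f: dict minterm -> 0/1, assumed fully defined on [a, b]."""
    t = supercube(a, b)
    if f[a] == f[b]:
        # static hazard: some intermediate state differs from the endpoints
        return any(f[x] != f[a] for x in minterms_of(t))
    # dynamic hazard: x in [a,b] with f(x) = f(b), then y in [x,b] with f(y) = f(a)
    return any(f[x] == f[b] and x != b and
               any(f[y] == f[a] for y in minterms_of(supercube(x, b)))
               for x in minterms_of(t))
```

For example, with f = {'00': 1, '01': 0, '10': 1, '11': 1}, the transition from 00 to 11 has a static function hazard, since it can pass through 01, where f is 0.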
E. Conditions for a Hazard-Free Transition

We now review conditions to ensure that a sum-of-products implementation F is hazard-free for a given input transition (for details, see [29]). Assume that t = [A, B] is the transition cube corresponding to a function-hazard-free transition from input state A to B for a function f. We say that f has a 0 → 0 (0 → 1, 1 → 0, 1 → 1) transition in cube t.

Lemma II.5: If f has a 0 → 0 transition in cube t, then the implementation is free of logic hazards for the input change from A to B.

Lemma II.6: If f has a 1 → 1 transition in cube t, then the implementation is free of logic hazards for the input change from A to B if and only if t is contained in some cube of cover F (i.e., some product must hold its value at 1 throughout the transition).

The conditions for the 1 → 0 and 0 → 1 cases are symmetric. Without loss of generality, we consider only a 1 → 0 transition.2

Lemma II.7: If f has a 1 → 0 transition in cube t, then the implementation is free of logic hazards for the input change from A to B if and only if every cube of F intersecting t also contains A (i.e., no product may glitch in the middle of a 1 → 0 transition).

Lemma II.8: If f has a 1 → 0 transition from input state A to B that is hazard free in the implementation, then for every input state X ∈ [A, B] where f(X) = 1, the transition subcube [A, X] is contained in some cube of cover F (i.e., every 1 → 1 subtransition must be free of logic hazards).

0 → 0 transitions and 1 → 1 transitions are called static transitions; 1 → 0 transitions and 0 → 1 transitions are called dynamic transitions.
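As an illustration of Lemmas II.6 and II.7, the following sketch checks the two conditions for a single transition cube, using the same hypothetical cube-as-string encoding ('0'/'1'/'-'); the helper names are ours, not the paper's.

```python
# Illustrative checks of Lemmas II.6 and II.7 for one specified transition.
# A cover is a list of product cubes; cubes are strings over {'0','1','-'}.

def contains(cube, other):
    return all(c == '-' or c == o for c, o in zip(cube, other))

def intersects(c1, c2):
    """Cubes intersect unless they disagree in some fixed position."""
    return all(a == '-' or b == '-' or a == b for a, b in zip(c1, c2))

def static_1_1_ok(cover, t):
    """Lemma II.6: some product must hold its value at 1 throughout the
    1 -> 1 transition, i.e., contain the whole transition cube t."""
    return any(contains(p, t) for p in cover)

def dynamic_1_0_ok(cover, t, start):
    """Lemma II.7: every product intersecting the 1 -> 0 transition cube t
    must also contain its start point (no product may glitch mid-transition)."""
    return all(contains(p, start) for p in cover if intersects(p, t))
```

For instance, the cover ['1-', '-1'] satisfies Lemma II.6 for transition cube '1-' but not for '--', since no single product contains the whole cube.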
F. Required and Privileged Cubes

The cube [A, B] in Lemma II.6 and the maximal subcubes [A, X] in Lemma II.8 are called required cubes. Each required cube must be contained in some cube of cover F to ensure a hazard-free implementation, as more formally presented in Definition II.9.

Definition II.9: Given a function f and a set T of specified function-hazard-free input transitions of f, every cube [A, B] ∈ T corresponding to a 1 → 1 transition, and every maximal subcube [A, X] ⊆ [A, B] where f is 1 and [A, B] ∈ T is a 1 → 0 transition, is called a required cube.

Lemma II.7 constrains the products that may be included in a cover F. Each 1 → 0 transition cube is called a privileged cube, since no product in the cover may intersect the cube unless it also contains its start point. If a product intersects a privileged cube but does not contain its start point, it illegally intersects the privileged cube and may not be included in the cover, as presented more formally in Definition II.10.

Definition II.10: Given a function f and a set T of specified function-hazard-free input transitions of f, every cube [A, B] ∈ T corresponding to a 1 → 0 transition is called a privileged cube.
2 A 0 → 1 transition from A to B has the same hazards as a 1 → 0 transition from B to A.
Finally, we define a useful special case. A privileged cube
is called trivial if the function is only 1 at the start point and is
0 for all other minterms included in the transition cube. In this
case, any product that intersects such a privileged cube always
covers the start point. All trivial privileged cubes can safely
be removed from consideration without loss of information.
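The construction of required and privileged cubes (Definitions II.9 and II.10) can be sketched as follows; the encoding and helper names are illustrative assumptions, and the enumeration is brute force over minterms.

```python
# Illustrative generation of required cubes (Definition II.9) and privileged
# cubes (Definition II.10). f maps minterms to 0/1; each transition is a
# (start, end) minterm pair, assumed function-hazard-free. Brute force.
from itertools import product

def supercube_of(points):
    cols = list(zip(*points))
    return ''.join(c[0] if len(set(c)) == 1 and '-' not in c else '-' for c in cols)

def minterms_of(cube):
    opts = [('0', '1') if c == '-' else (c,) for c in cube]
    return [''.join(p) for p in product(*opts)]

def required_and_privileged(f, transitions):
    required, privileged = [], []
    for a, b in transitions:
        t = supercube_of([a, b])
        if f[a] == 1 and f[b] == 1:            # 1 -> 1: the cube itself
            required.append(t)
        elif f[a] == 1 and f[b] == 0:          # 1 -> 0: privileged cube ...
            privileged.append((t, a))
            on_subcubes = []                   # ... plus maximal ON subcubes [a, x]
            for x in minterms_of(t):
                s = supercube_of([a, x])
                if all(f[y] == 1 for y in minterms_of(s)) and s not in on_subcubes:
                    on_subcubes.append(s)
            required += [c for c in on_subcubes
                         if not any(c != d and set(minterms_of(c)) <= set(minterms_of(d))
                                    for d in on_subcubes)]
    return required, privileged
```

For a two-variable function that is 1 exactly on '10' and '11', the single 1 → 0 transition from '11' to '00' yields the privileged cube '--' with start point '11' and the maximal required cube '1-'.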
G. Hazard-Free Covers
A hazard-free cover of function is a cover (i.e., set of
implicants) of whose AND-OR implementation is hazard free
for a given set of specified input transitions. (It is assumed
below that the function is defined for all specified transitions;
the function is undefined for all other input states.)
Theorem II.11 (Hazard-Free Covering [29]): A sum of products F is a hazard-free cover for function f for the set of specified input transitions if and only if:
a) no product of F intersects the OFF-set of f;
b) each required cube of f is contained in some product of F;
c) no product of F intersects any (nontrivial) privileged cube illegally.
Theorem II.11(a) and (c) determine the implicants that may appear in a hazard-free cover of a function, called dynamic-hazard-free (dhf-) implicants.
Definition II.12: A dhf-implicant is an implicant that does not intersect any privileged cube of f illegally. A dhf-prime implicant is a dhf-implicant contained in no other dhf-implicant. An essential dhf-prime implicant is a dhf-prime implicant that contains a required cube contained in no other dhf-prime implicant.
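A direct, illustrative test for Definition II.12 might look as follows, again over a hypothetical cube-as-string encoding: a cube is a dhf-implicant if it avoids the OFF-set and never intersects a privileged cube without covering its start point.

```python
# Illustrative test for Definition II.12: a dhf-implicant must avoid the
# OFF-set and must not intersect any privileged (cube, start) pair without
# containing the start point.

def contains(cube, other):
    return all(c == '-' or c == o for c, o in zip(cube, other))

def intersects(c1, c2):
    return all(a == '-' or b == '-' or a == b for a, b in zip(c1, c2))

def illegally_intersects(p, priv):
    cube, start = priv
    return intersects(p, cube) and not contains(p, start)

def is_dhf_implicant(p, off_set, privileged):
    return (not any(intersects(p, q) for q in off_set)
            and not any(illegally_intersects(p, v) for v in privileged))
```

For instance, with OFF-set cube '0-' and privileged cube '--' starting at '11', the product '1-' is a dhf-implicant, but '10' is not: it clips the privileged cube without covering its start point.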
Theorem II.11(b) defines the covering requirement for a hazard-free cover of f: every required cube of f must be covered, that is, contained in some cube of the cover. Thus, the two-level hazard-free logic minimization problem is to find a minimum cost cover of a function using only dhf-prime implicants where every required cube is covered.
The difference between two-level hazard-free logic minimization and the well-known classic two-level logic minimization problem (e.g., solved by the Quine–McCluskey algorithm) is that, in the hazard-free case, dhf-prime implicants replace prime implicants as the covering objects, and required cubes replace minterms as the objects to be covered.
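To make the covering formulation concrete, here is a toy exact covering step in this spirit: choose a minimum subset of candidate dhf-prime implicants that contains every required cube. The brute-force subset search is purely illustrative (exponential) and is not the algorithm used by any of the minimizers discussed here.

```python
# Toy exact covering step: pick a minimum subset of candidate dhf-prime
# implicants containing every required cube. Brute force over subsets,
# purely illustrative; cubes are strings over {'0','1','-'}.
from itertools import combinations

def contains(cube, other):
    return all(c == '-' or c == o for c, o in zip(cube, other))

def min_cover(dhf_primes, required_cubes):
    for k in range(1, len(dhf_primes) + 1):
        for subset in combinations(dhf_primes, k):
            if all(any(contains(p, r) for p in subset) for r in required_cubes):
                return list(subset)
    return None   # covering conditions unsatisfiable with these candidates
```

Returning None models exactly the unsatisfiable case discussed next: when no candidate set can contain every required cube, no hazard-free cover exists.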
In general, the covering conditions of Theorem II.11 may
not be satisfiable for an arbitrary Boolean function and set of
transitions [38], [29]. This case occurs if conditions (b) and
(c) cannot be satisfied simultaneously.
A hazard-free minimization example is shown in Fig. 1. There are four specified transitions. One is a 1 → 1 transition; it gives rise to one required cube [see part (a)]. Another is a 0 → 0 transition; it gives rise neither to required cubes nor to privileged cubes. The remaining two are 1 → 0 transitions. Each of these two transitions gives rise to two required cubes [see (a)] and one privileged cube [see (b)]. A minimum hazard-free cover is shown in part (c). Each required cube is covered, and no product in the cover illegally intersects any privileged cube. In contrast, the cover in part (d) is not hazard free, since priv-cube-1 is intersected illegally (highlighted minterm) by
Fig. 1. Two-level hazard-free minimization example. (a) shows the set of required cubes (shaded) and the set of transition cubes (dotted). (b) shows the set of privileged cubes (shaded). (c) shows a minimal hazard-free cover. (d) shows a minimum-cost cover that is not hazard free, since it contains a logic hazard.
one of its products. In particular, this product may lead to a glitch during the corresponding dynamic transition.
H. Exact Hazard-Free Minimization Algorithms

A single-output exact hazard-free minimizer has been developed by Nowick and Dill [29]. It has recently been extended to hazard-free multivalued minimization3 by Fuhrer et al. [12]. The latter method, called HFMIN, has been the fastest minimizer for exact hazard-free minimization.

HFMIN makes use of ESPRESSO-II to generate all prime implicants, then transforms them into dhf-prime implicants, and finally employs ESPRESSO-II's MINCOV to solve the resulting unate covering problem. Each of the algorithms used in the above three steps is critical, i.e., each has a worst-case run time that is exponential. As a result, HFMIN cannot solve several of the more difficult examples.

Very recently, Rutten [33], [32] has proposed an alternative exact method. However, his method has yet to be evaluated on difficult examples, e.g., on those that cannot be easily solved by HFMIN (see Section VI-C for more details).

3 It is well known that multioutput minimization can be regarded as a special case of multivalued minimization [30].

III. HEURISTIC HAZARD-FREE MINIMIZATION: ESPRESSO-HF

A. Overview
The goal of heuristic hazard-free minimization is to find
a very good, but not necessarily exactly minimum, solution
to the hazard-free covering problem. The basic minimization strategy of ESPRESSO-HF for hazard-free minimization
is similar to the one used by ESPRESSO-II. However, we use
additional constraints to ensure that the resulting cover is
hazard free, and the algorithms are significantly different.
One key distinction is in the use of the unate recursive paradigm in ESPRESSO-II, i.e., recursively decomposing operations into efficiently solvable suboperations on unate functions. To the best knowledge of the authors, the unate recursive paradigm cannot be applied directly to ESPRESSO-II-like heuristic hazard-free minimization. (In [33] and [32], a
unate recursive method was proposed, but only for use in the
dhf-prime generation step for exact hazard-free minimization;
see Section VI-C.) The intuitive reason for this observation
is that the operators in ESPRESSO-II manipulate covers. For
example, the “many” ON-set minterms (objects to be covered)
can typically be stored compactly and manipulated efficiently
as an ON-set cover of “a few” cubes. In contrast, required
Fig. 2. The ESPRESSO-HF algorithm.
cubes cannot be combined into larger cubes without loss of information, so the basis for the unate recursive paradigm, i.e., the manipulation of covers, no longer applies.
We therefore follow the basic steps of ESPRESSO-II, modified to incorporate hazard-freedom constraints, but without
the use of unate recursive algorithms. However, because of
the constraints and granularity of the hazard-free minimization
problem, high-quality results are still obtained rapidly even for
large examples.
In this subsection, we describe the basic steps of the algorithm, concentrating on the new constraints that must be incorporated to guarantee that the cover is hazard free. We then describe the individual steps in detail in later subsections.
As in ESPRESSO-II, the size of the cover is never increased. In addition, after an initial phase, the cover always represents a valid solution, i.e., a cover of the function that is also hazard free. Pseudocode for the algorithm is shown in Fig. 2.
The first step of ESPRESSO-HF is to read in PLA files specifying a Boolean function and a set of specified function-hazard-free transitions. These inputs are used to generate the set of required cubes, the set of privileged cubes and their corresponding start points, and the OFF-set. Generation of these sets is immediate from the earlier lemmas (see also [29]).4
The set of required cubes can be regarded both as an initial cover of the function and as a set of objects to be covered. Unlike ESPRESSO-II, however, the given initial cover does not in general represent a valid solution: while it is a cover of the function, it is not necessarily hazard free. Therefore, processing begins by first expanding each required cube into the uniquely defined minimum dhf-implicant covering it, or by detecting that this is impossible, denoted by "undefined." The latter case indicates that the hazard-free minimization problem has no solution (see Section III-J). Otherwise, the result is an initial hazard-free cover and a set of objects to be covered.

4 The algorithm does not need an explicit cover for the don't care set because the operations only require the OFF-set to check if a cube is valid.
The next step is to identify essential dhf-implicants using a modified EXPAND step. The algorithm uses a novel approach to identifying equivalence classes of implicants, each of which is treated as a single implicant. Essential implicants, as well as all required cubes covered by them, are then removed from the cover and from the set of objects to be covered, respectively, resulting in a smaller problem to be solved by the main loop. Before the main loop, the current cover is also made irredundant.
Next, as in ESPRESSO-II, ESPRESSO-HF applies the three
operators REDUCE, EXPAND, and IRREDUNDANT to the
current cover until no further improvement in the size of the
cover is possible. Since the result may be a local minimum,
the operator LAST_GASP is then applied to find a better
solution using a different method. EXPAND uses new hazard-free notions of essential parts and feasible expansion. The other steps differ from ESPRESSO-II as well.
At the end, there is an additional step, MAKE_DHF_PRIME, to make the resulting implicants dhf-prime, since it is desirable to obtain a cover that consists of dhf-prime implicants. The motivation for this step will be made clear in the remaining subsections.
In addition to the steps shown in Fig. 2, our implementation
has several optional pre- and postprocessing steps.
B. Dhf-Canonicalization of Initial Cover

In ESPRESSO-II, the initial cover of a function is provided by its ON-set. This cover is a seed solution, which is iteratively improved by the algorithm. By analogy, in ESPRESSO-HF, the initial cover is provided by the set of required cubes. However, unlike ESPRESSO-II, our initial specification does not in general represent a solution: though it is a cover, it is not necessarily hazard free. Therefore, processing begins by expanding each required cube into the uniquely defined minimum dhf-implicant containing it. This expansion represents a canonicalization step, transforming a potentially hazardous initial cover into a hazard-free initial cover.
Fig. 3. Canonicalization example.

Example: Consider the function shown in the Karnaugh map of Fig. 3. A set of specified multiple-input transitions is indicated by arrows. There are two 1 → 0 transitions, each corresponding to a privileged cube with its own start point. The initial cover is given by the set of required cubes. This cover is hazardous. In particular, consider the required cube corresponding to the 1 → 1 transition. This required cube illegally intersects the first privileged cube, since it intersects that cube but does not contain its start point. To avoid the illegal intersection, the required cube must be expanded to the smallest cube that also contains that start point: the supercube of the required cube and the start point. However, this new cube now illegally intersects the second privileged cube, since it does not contain that cube's start point. Therefore, the cube in turn must be expanded to the smallest cube containing the second start point. The resulting expanded cube has no illegal intersections and is therefore a dhf-implicant.

In this example, the final expanded cube is a hazard-free expansion of the original required cube, called a canonical required cube; it can therefore replace that required cube in the initial cover. (Note that such a canonicalization is feasible if and only if the hazard-free covering problem has a solution; see Section III-J.)
It is easy to see that each required cube has a unique corresponding canonical required cube. Suppose there are two distinct minimal dhf-implicants, c1 and c2, which contain some required cube r. In this case, we now show that we can construct a dhf-implicant that is smaller than either cube: the intersection c = c1 ∩ c2. Clearly, implicant c contains r. Furthermore, if c were not a dhf-implicant, then it would intersect some privileged cube p illegally, i.e., intersect p but not contain its start point s. However, this would mean that both original implicants, c1 and c2, intersected p, but at least one of them (say c1) did not contain s. As a result, c1 would not be a dhf-implicant, since it would illegally intersect p, thus contradicting our assumption. Therefore, c is a dhf-implicant that contains r, and so c1 and c2 could not have been minimal. In sum, each required cube has a unique corresponding canonical required cube, which contains it.
Based on the above discussion, the initial set R of required cubes is replaced by the corresponding set Rc of canonical required cubes. This set is then minimized with respect to single-cube containment. Rc is a valid hazard-free cover of the function to be minimized and is used as an initial cover for the minimization process. Interestingly, Rc has a second role as well: it can also be used as the initial set of objects to be covered. In particular, Rc defines a new covering problem: each cube of Rc (not R) must be contained in some dhf-implicant. It is straightforward to show that the two covering problems are equivalent, i.e., a dhf-implicant contains a required cube in R if and only if it also contains the corresponding canonical required cube in Rc. To see this, suppose that a dhf-implicant contained a required cube but did not contain its canonical required cube. In this case, it could not be a dhf-implicant, since it must illegally intersect at least one of the privileged cubes that caused the required cube to be expanded into its canonical required cube.

In the above example, any dhf-implicant that contains the original required cube must also contain its canonical required cube. Therefore, the hazard-free minimization problem is unchanged, but canonical required cubes are now used. An advantage of using Rc is that it may have smaller size than R, i.e., it may be a more efficient representation of the problem. Also, since the cubes in Rc are in general larger than the corresponding ones in R, the EXPAND operation may be sped up.

To conclude, the new set Rc of canonical required cubes replaces the original set R of required cubes as both i) the initial cover and ii) the set of objects to be covered. Henceforth, the term "set of required cubes" will be used to refer to set Rc.
We formalize the notion of canonicalization below.
Definition III.1: The dhf-supercube of a set C of cubes with respect to function f and transitions T, denoted supercube_dhf(C), is the smallest dhf-implicant containing the cubes of C. (The superscript identifying f and T is omitted when it is clear from the context.) supercube_dhf is computed by the simple algorithm shown in Fig. 4.

The canonical required cube of a required cube r can now be defined as the dhf-supercube of the set {r}. The computation of dhf-supercubes for larger sets will be needed to implement some of the operators presented in the sequel.
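The fixpoint computation described in the canonicalization example can be sketched roughly as follows, under the same illustrative encoding: grow the plain supercube until no privileged cube is illegally intersected, and return "undefined" (here, None) if the cube ever meets the OFF-set. This is our reading of the algorithm in Fig. 4, not the authors' code.

```python
# Sketch of the supercube_dhf fixpoint (cf. Fig. 4). Cubes are strings over
# {'0','1','-'}; privileged entries are (cube, start) pairs; None = undefined.

def supercube_of(cubes):
    cols = list(zip(*cubes))
    return ''.join(c[0] if len(set(c)) == 1 and '-' not in c else '-' for c in cols)

def contains(cube, other):
    return all(c == '-' or c == o for c, o in zip(cube, other))

def intersects(c1, c2):
    return all(a == '-' or b == '-' or a == b for a, b in zip(c1, c2))

def supercube_dhf(cubes, privileged, off_set):
    s = supercube_of(cubes)
    changed = True
    while changed:
        changed = False
        if any(intersects(s, q) for q in off_set):
            return None                          # not an implicant: undefined
        for cube, start in privileged:
            if intersects(s, cube) and not contains(s, start):
                s = supercube_of([s, start])     # absorb the missing start point
                changed = True
    return s
```

Termination is guaranteed because the cube only grows, and a cube over n variables can grow at most n times.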
C. EXPAND
In ESPRESSO-II, the goal of EXPAND is to enlarge each
implicant of the current cover in turn into a prime implicant.
As an implicant is expanded, it may contain other implicants
of the cover that can be removed; hence the cover cardinality is
reduced. If the current implicant cannot be expanded to contain
another implicant completely, then, as a secondary goal, the
implicant is expanded to overlap as many other implicants of
the current cover as possible.
In ESPRESSO-HF, the primary goal is similar: to expand a
dhf-implicant of the current cover to contain as many other
dhf-implicants of the cover as possible. However, EXPAND
in ESPRESSO-HF has two major differences. First, unlike
ESPRESSO-II, expansion in some literal (i.e., “raising of entries”) may imply that other expansions be performed. That is,
raising of entries is now a binate problem, not a unate problem.
In addition, ESPRESSO-HF’s EXPAND uses a different strategy
for its secondary goal. By the hazard-free covering theorem,
each required cube needs to be contained in some cube of the
cover. Therefore, as a secondary goal, an implicant is expanded
to contain as many required cubes as possible.
We now describe the implementation of EXPAND in
ESPRESSO-HF. Pseudocode for the expansion of a single cube
is shown in Fig. 5.
Fig. 4. Supercube_dhf computation.
Fig. 5. Expand (for a cube a).
1) Determination of Essential Parts and Update of Local Sets: As in ESPRESSO-II, free entries are maintained to accelerate the expansion [30]. Free entries indicate which literals of an implicant are candidates for removal during the expansion process.

To explain this concept in a unified way, the current implicant and its free entries are represented in positional cube notation [30]. As an example, the four-variable implicant ab'd is represented in positional cube notation as 01 10 11 01, i.e., literal x (x') is encoded as 01 (10). Thus, each literal in the implicant has a corresponding 0 in the positional cube notation, and changing, or raising, the 0 to 1 corresponds to removing this literal from the implicant.

Initially, a free entry is assigned a 1 (0) if the current implicant to be expanded has a 0 (1) in the corresponding position. For the above example, the free entries are 10 01 00 10. An overexpanded cube is defined as the cube where all of its free entries have been raised simultaneously.
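The positional cube notation and free entries described above can be sketched as follows; the encoding and names are illustrative, mapping our cube-string form onto two bits per variable.

```python
# Illustrative positional cube notation: two bits per variable, 01 for the
# positive literal, 10 for the negative literal, 11 for an absent variable.

ENC = {'1': '01', '0': '10', '-': '11'}

def to_positional(cube):
    return ' '.join(ENC[c] for c in cube)

def free_entries(cube):
    """A free entry is 1 wherever the positional notation has a 0."""
    return ' '.join(''.join('1' if bit == '0' else '0' for bit in ENC[c])
                    for c in cube)

def overexpand(cube):
    """Raise all free entries simultaneously: every literal is removed."""
    return '-' * len(cube)
```

For the four-variable implicant written '10-1' in the cube-string form, to_positional gives 01 10 11 01 and free_entries gives 10 01 00 10, matching the example above.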
As in [30], an essential part is one that can never, or
always, be raised. However, our definition of essential parts is
different from ESPRESSO-II, since a hazard-free cover must be
maintained. We determine essential parts in procedure update,
described below.
First, we determine which entries can never be raised and remove them from the free set. This is achieved by searching for any cube in the OFF-set that has distance 1 from the current implicant in this entry, using the same approach as in ESPRESSO-II.
Next, we determine which parts can always be raised, raise them, and remove them from the free set. This step differs from ESPRESSO-II. In ESPRESSO-II, a part can always be raised if it is 0 in all cubes of the OFF-set; that is, it is guaranteed that the expanded cube will never intersect the OFF-set. In contrast, in ESPRESSO-HF, we must ensure that an implicant is also hazard free: it cannot intersect the OFF-set, nor can it illegally intersect a privileged cube. Unlike ESPRESSO-II, this is achieved by searching for any column that i) has only 0's in the OFF-set and ii) where, for each privileged cube having a 1 in this column, the corresponding start point will be contained by the expanded cube.
Example: Fig. 1(a) indicates the set of required cubes, which forms an initial hazard-free cover. Consider the cube 11010101 (in positional cube notation). As in ESPRESSO-II, the zero entries of two of its literals can never be raised, since the resulting cube would intersect the OFF-set. However, after updating the free entries, ESPRESSO-II indicates that the remaining literal can always be raised, since the resulting cube will never intersect the OFF-set. In contrast, in ESPRESSO-HF, raising this entry results in an illegal intersection with a privileged cube, so it cannot "always be raised."
Since hazard-free minimization is somewhat more constrained, the expansion of a cube can be accelerated by the following new operations on two local sets: a local set of privileged cubes and a local OFF-set. These sets are associated with the cube being expanded; initially, the local privileged set is assigned the set of privileged cubes, and the local OFF-set the set of OFF-set cubes. Both sets are updated as expansion proceeds (in procedure update).
1) Remove privileged cubes from the local privileged set where the corresponding start point is already covered by the cube (since no further checking for illegal intersection is required).
2) Move privileged cubes from the local privileged set to the local OFF-set if the overexpanded cube does not include the corresponding start points (since the cube can never be expanded to include these start points, one must avoid intersection with these privileged cubes entirely).
3) Move privileged cubes from the local privileged set to the local OFF-set where the start point intersects the OFF-set (the cube can never be expanded to include these start points; therefore, one must avoid intersection with these privileged cubes entirely).
2) Detection of Feasibly Covered Cubes of the Cover: In ESPRESSO-II, a cube c of the cover is expanded through a supercube operation. A cube c' of the cover is said to be feasibly covered by c if supercube(c, c') is an implicant. In ESPRESSO-HF, this definition needs to be modified to ensure hazard-free covering after expansion of cube c.
Definition III.2: A cube c' of the cover is dhf-feasibly covered by c if dhf-supercube(c, c') is defined.
This definition ensures that the resulting expanded cube, dhf-supercube(c, c'), is i) an implicant (does not intersect the OFF-set) and ii) also a dhf-implicant (does not intersect any privileged cube illegally). Effectively, this definition canonicalizes the resulting supercube to produce a dhf-implicant. That is, dhf-supercube(c, c') may properly contain supercube(c, c'), since the former may be expanded through a series of implications in order to reach the minimum dhf-implicant that contains both c and c'. Using this definition, the following is an algorithm to find dhf-feasibly covered cubes of the cover.
While there are cubes in the cover that are dhf-feasibly covered by cube c, iterate the following: Replace c by dhf-supercube(c, c'), where c' is a dhf-feasibly covered cube such that the resulting cube will cover as many cubes of the cover as possible. Covered cubes are then removed, using the "single-cube-containment" operator, thus reducing the cover cardinality. Determine essential parts and update local sets (see above).
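The greedy merging loop of this step can be sketched as follows. This is a minimal illustration over positional cubes, with the hazard-freedom test abstracted as a caller-supplied predicate `is_dhf_implicant` standing in for the paper's dhf-supercube canonicalization; the names are ours, not ESPRESSO-HF's:

```python
# Cubes are tuples with one 2-bit entry per variable (01/10/11).
def supercube(c1, c2):
    # Smallest cube containing both: bitwise OR per position.
    return tuple(a | b for a, b in zip(c1, c2))

def contains(big, small):
    return all(s | b == b for s, b in zip(small, big))

def expand_cube(c, cover, is_dhf_implicant):
    """Repeatedly merge c with a feasibly covered cube, preferring the merge
    that absorbs the most cover cubes; absorbed cubes are dropped (SCC)."""
    cover = [d for d in cover if d != c]
    while True:
        best, best_gain = None, 0
        for d in cover:
            sc = supercube(c, d)
            if not is_dhf_implicant(sc):
                continue  # d is not dhf-feasibly covered by c
            gain = sum(contains(sc, e) for e in cover)
            if gain > best_gain:
                best, best_gain = sc, gain
        if best is None:
            return c, cover
        c = best
        cover = [e for e in cover if not contains(c, e)]  # single-cube containment

# Toy use: with no OFF-set and no privileged cubes every cube is a
# dhf-implicant, so a'b' (10 10) and a'b (10 01) merge into a' (10 11).
c, rest = expand_cube((2, 2), [(2, 1)], lambda sc: True)
print(c)  # (2, 3)
```

In the real algorithm the predicate would be replaced by the dhf-supercube computation, which both checks feasibility and canonicalizes the result.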
3) Detection of Feasibly Covered Cubes of : Once cube
can no longer be feasibly expanded to cover any other
we still continue to expand it. This is motivated
cube of
by the hazard-free covering theorem, which states that each
required cube needs to be contained in some cube of the cover.
Therefore, as a secondary goal, cube is expanded to contain
as many required cubes as possible. The strategy used in this
substep is similar to that used in the preceding one, i.e., while
that are dhf-feasibly covered by cube
there are cubes in
iterate the following.
by
where is a dhfReplace
feasibly covered required cube such that the resulting cube
will cover as many required cubes not already contained in
as possible. Covered required cubes are then removed, using
the “single-cube-containment” operator. Determine essential
parts and update local sets (see above).
4) Constraints on Hazard-Free Expansion: In ESPRESSO-II, an implicant is expanded until no further expansion is
possible, i.e., until the implicant is prime. Two steps are used:
i) expansion to overlap a maximum number of cubes still
covered by the overexpanded cube and ii) raising of entries to
find the largest prime implicant covering the cube.
In ESPRESSO-HF, however, we do not implement these
remaining EXPAND steps, based on the following observation.
The result of our EXPAND steps (see Sections III-C2 and
III-C3) guarantees that a dhf-implicant can never be further
expanded to contain additional required cubes. Therefore,
by the hazard-free covering theorem, no additional objects
(required cubes) can possibly be covered through further
expansion. In contrast, in ESPRESSO-II, expansion steps i)
and ii) may in fact result in covering of additional ON-set
minterms. Because of this distinction, the benefit of further
expansion is mitigated. Therefore, in general, our EXPAND
algorithm makes no attempt to transform dhf-implicants into
dhf-prime implicants. However, since expansion to dhf-primes
is important for literal reduction and testability, it is included as
a final postprocessing step: MAKE_DHF_PRIME (see Fig. 2).
D. Essentials
Essential prime implicants are prime implicants that need
to be included in any cover of prime implicants. Therefore, it
Fig. 6. Essential example.
is desirable to identify them as soon as possible to make the
resulting problem size smaller. On the one hand, we know of
no efficient solution for identifying the essential dhf-primes
using the unate recursion paradigm of ESPRESSO-II. On the
other hand, the hazard-free minimization problem is highly
constrained by the notion of covering of required cubes, thus
allowing a powerful new method to classify essentials as
equivalence classes.
Example: Consider Fig. 6. The required cube shown is covered by precisely two dhf-prime implicants, which cover no other required cubes. Neither of the two is an essential dhf-prime, since the required cube is covered by both. And yet, clearly, one of the two (but not both) must be included in any cover of dhf-primes. Also, if we assume the standard cost function of cover cardinality, the two dhf-primes are of equal cost.
Definition III.3: Two dhf-prime implicants are equivalent if
they cover the same set of required cubes. An equivalence class
of dhf-prime implicants is maximal if its dhf-primes cover a set
of required cubes that is not covered by any other equivalence
class. A maximal equivalence class of dhf-prime implicants is
essential if its dhf-primes cover at least one required cube that
is not covered by any other maximal equivalence class.
In the above example, the set consisting of the two dhf-primes is an equivalence class, since both dhf-primes cover the same set of required cubes. In fact, the class is an essential equivalence class, since it is the only equivalence class that covers the required cube.
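Grouping dhf-primes by the set of required cubes they cover can be sketched directly. This is a simplified reading of Definition III.3 (the essentiality test below checks only whether a class covers a required cube no other class covers, leaving out the maximality machinery); primes and required cubes are opaque labels, and all names are ours:

```python
from collections import defaultdict

def equivalence_classes(covers):
    # covers: prime -> set of required cubes it covers.
    classes = defaultdict(list)
    for prime, req_set in covers.items():
        classes[frozenset(req_set)].append(prime)
    return classes

def essential_classes(classes):
    """Return the classes that cover some required cube covered by no
    other class (a simplified essentiality check)."""
    essential = []
    for req_set, primes in classes.items():
        others = set().union(*(r for r in classes if r != req_set))
        if req_set - others:
            essential.append(primes)
    return essential

# The Fig. 6 situation: two primes cover exactly the same single required
# cube, so together they form one essential equivalence class.
covers = {"p": {"r1"}, "q": {"r1"}}
print(essential_classes(equivalence_classes(covers)))  # [['p', 'q']]
```

Either member of an essential class may then be chosen for the cover, which is exactly the freedom the equivalence-class formulation preserves.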
ESPRESSO-II computes essentials after initial EXPAND and
IRREDUNDANT steps. In contrast, ESPRESSO-HF computes
essentials as part of a modified EXPAND-step. The new
algorithm is outlined as follows.
The algorithm starts with the initial hazard-free cover of required cubes. To simplify the presentation, assume that one seed cube is selected and expanded greedily, using EXPAND, to a dhf-implicant t. This implicant is characterized by the set of required cubes that it contains. Dhf-implicant t thus corresponds to an equivalence class of dhf-primes that cover this set. Since EXPAND guarantees that t covers a maximal number of required cubes, this equivalence class is also maximal. Moreover, this class is an essential equivalence class if t contains some required cube which cannot be expanded into any other maximal equivalence class.
To check if t can be expanded into a different maximal equivalence class, a simple pairwise check is used: for a required cube r contained in t and each required cube r' not covered by t, determine if a dhf-feasible expansion covering both r and r' exists. If no such feasible expansion exists for r, then r is called a distinguished required cube, and therefore the equivalence class corresponding to t is essential. Otherwise, the process is repeated for every required cube contained in t. If t corresponds to an essential equivalence class, then t is removed from the cover. In addition, all required cubes covered by t are removed, since it is ensured that they will be covered. This step can result in "secondary essential" equivalence classes. In fact, due to the removal of required cubes, more dhf-prime implicants become equivalent to each other. As a consequence, further equivalence classes may become essential.
The procedure iterates until all essentials are identified.
The above discussion seems to imply that the essentials
step is more or less quadratic in the number of required cubes,
i.e., very inefficient. However, by making use of techniques
similar to the ones described in the EXPAND step (Section III-C), e.g., by using an overexpanded cube, the number of necessary dhf-supercube calls can be reduced dramatically. Therefore,
in practice, essentials can be identified efficiently, and the
problem size is usually significantly reduced (see Section VI).
E. REDUCE
The goal of the REDUCE operator is to set up a cover that
is likely to be made smaller by the following EXPAND step.
To achieve this goal, each cube in the cover is maximally reduced in turn to a cube such that the resulting set of cubes is still a cover.
ESPRESSO-II uses the unate recursive paradigm to maximally
reduce each cube. Since ESPRESSO-HF is a required-cube covering algorithm, there is no obvious way to use this paradigm.
Fortunately, the hazard-free problem is more constrained,
making it possible to use an efficient enumerative approach
based on required cubes.
Our REDUCE algorithm is as follows. The algorithm reduces each cube in the cover in order. In particular, a cube c is reduced to the smallest dhf-implicant that covers all required cubes uniquely covered by c (i.e., contained in no other cube of the cover). That is, if R is the set of required cubes that are uniquely covered by c, then c is replaced by the smallest dhf-implicant containing R.
Note that the outcome of this algorithm depends on the order in which the cubes of the cover are processed. Suppose cube c is reduced before cube c', that both c and c' cover some required cube r, and that no other cube of the cover covers r. If c is reduced to a cube that does not cover r, then c' cannot be reduced to a cube that does not cover r.
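The required-cube-based reduction can be sketched on positional cubes as follows. Each cover cube is shrunk to the supercube of the required cubes only it covers; the paper additionally canonicalizes the result to a dhf-implicant, which this sketch (with names of our choosing) omits:

```python
# Cubes are tuples with one 2-bit entry per variable (01/10/11).
def contains(big, small):
    return all(s | b == b for s, b in zip(small, big))

def supercube_of(cubes):
    out = cubes[0]
    for c in cubes[1:]:
        out = tuple(a | b for a, b in zip(out, c))
    return out

def reduce_cover(cover, required):
    """Reduce each cover cube, in order, to the smallest cube covering the
    required cubes that no other cover cube covers."""
    cover = list(cover)
    for i, c in enumerate(cover):
        unique = [r for r in required
                  if contains(c, r)
                  and not any(contains(d, r) for j, d in enumerate(cover) if j != i)]
        if unique:
            cover[i] = supercube_of(unique)
    return cover

# Two variables a, b.  Cover: b' (11 10) and a (01 11).
# Required cubes: a'b' (10 10), ab' (01 10), ab (01 01).
# b' shrinks to a'b', since ab' is also covered by a.
print(reduce_cover([(3, 2), (1, 3)], [(2, 2), (1, 2), (1, 1)]))
# [(2, 2), (1, 3)]
```

The in-order processing also exhibits the order dependence noted above: later cubes are reduced against the already-reduced earlier ones.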
F. Irredundant

ESPRESSO-II uses the unate recursive paradigm to find an irredundant cover. In our case, we cannot employ the same algorithm, since a "redundant cover" (according to covering of minterms) may in fact be irredundant with respect to covering of required cubes.

Therefore, as in REDUCE, our approach is required-cube based. Considering the hazard-free covering theorem, it is straightforward that IRREDUNDANT can be reduced to a covering problem of the required cubes by the cubes in the current cover. That is, the problem reduces to a minimum-covering problem of i) required cubes using ii) dhf-implicants in the current cover. In practice, the numbers of required cubes and cover cubes usually make the covering problem manageable. ESPRESSO-II's MINCOV can be used to solve this covering problem exactly or heuristically (using its heuristic option).

G. Last Gasp

The inner loop of ESPRESSO-HF may lead to a suboptimal local minimum. The goal of LAST_GASP is to use a different approach to attempt to reduce the cover size. In ESPRESSO-II, each cube of the cover is independently reduced to the smallest cube containing all minterms not covered by any other cube of the cover. In contrast, ESPRESSO-HF computes, for each cube, the smallest dhf-implicant containing all required cubes that are not covered by any other cube in the cover.

As in ESPRESSO-II, cubes that can actually be reduced by this process are added to an initially empty set. Each such cube is then expanded in turn, using the EXPAND operator, with the goal of covering at least one other cube of this set and, if achieved, the expanded cube is added to the cover. Finally, the IRREDUNDANT operator is applied, with the hope of escaping the above-mentioned local minimum.

H. Make-dhf-prime

The cover constructed so far does not necessarily consist of dhf-primes. It is usually desirable, as a last step, to expand each dhf-implicant of the cover to make it dhf-prime. This can be achieved by a modified EXPAND step. A simple greedy algorithm will expand an implicant to a dhf-prime: while dhf-feasible, raise a single entry of the implicant.

I. Pre- and Postprocessing Steps

ESPRESSO-HF includes optional pre- and postprocessing steps. In particular, the efficiency of ESPRESSO-HF depends very much on the size of the ON-set and OFF-set covers that are given to it. Thus, ESPRESSO-HF includes an optional preprocessing step that uses ESPRESSO-II to find covers of smaller size for the initial ON-set and OFF-set.5 ESPRESSO-HF also includes a postprocessing step to reduce the literal count of a cover, similar to ESPRESSO-II's MAKE_SPARSE.

5 The ON-set and OFF-set are necessary to form the initial set of required cubes. More important, the OFF-set is used to check if a cube expansion is valid (see Fig. 4).

J. Existence of a Hazard-Free Solution

As indicated earlier, for certain Boolean functions and sets of transitions, no hazard-free cover exists [29]. The currently used exact hazard-free minimization method HFMIN is only able to decide if a hazard-free solution exists after generating all dhf-prime implicants: a hazard-free solution does not exist if and only if the dhf-prime implicant table includes at least one required cube not covered by any dhf-prime implicant. However, since the generation of all dhf-primes may very well be infeasible6 for even medium-sized examples, it is important to find an alternative approach.

We now introduce a new theorem to check for the existence of a hazard-free solution without the need to generate all dhf-prime implicants. This theorem leads directly to a fast and simple algorithm that is incorporated into ESPRESSO-HF.

Theorem III.4: Given a function f and a set T of specified function-hazard-free input transitions of f, a solution of the two-level hazard-free logic minimization problem exists if and only if dhf-supercube(r, r) is defined for each required cube r.

The proof is immediate from the discussion in Section III-B.

Fig. 7. Existence example.

Example: Consider the Boolean function in Fig. 7, with four specified input transitions. To check for existence of a hazard-free solution, we compute dhf-supercube(r, r) for each required cube r. For all but one required cube, the result is the required cube itself, since no privileged cube is intersected illegally. For the remaining required cube, a privileged cube is intersected illegally, so the cube must be expanded to contain that privileged cube's start point. The expanded cube then intersects a second privileged cube illegally, and the resulting further expansion intersects the OFF-set. Thus, dhf-supercube does not exist for this required cube, and a hazard-free cover does not exist for this example.

IV. A NOVEL APPROACH TO INCORPORATING HAZARD-FREEDOM CONSTRAINTS WITHIN A SYNCHRONOUS FUNCTION

Having discussed the heuristic hazard-free minimization problem in the previous section, we now shift our discussion to the exact hazard-free minimization problem. We begin by presenting a novel technique that recasts the dhf-prime implicant generation problem as a prime generation problem for a new synchronous function with extra inputs. Based on this approach, we present a new implicit method for exact two-level hazard-free logic minimization in Section V.

6 This refers to "explicit representations"; we will show later that "implicit representations" very often are feasible.

A. Overview and Intuition
We first give a simple overview of our entire method. Details and formal definitions are provided in the remaining sections. Our approach is to recast the generation of dhf-prime implicants of an asynchronous function (f, T) into the generation of prime implicants for a new synchronous function g. Here, hazard-freedom constraints are incorporated into the function by adding extra inputs. (The exact definition of g is given in Section IV-B.) An overview of the method is best illustrated by a simple example.
Example: Consider Fig. 8. The Karnaugh map in part (a) represents a function (f, T) defined over three variables. The shaded area corresponds to the only nontrivial privileged cube of (f, T) (the second privileged cube, [101, 100], is trivial; see Section II-F). We now define a new synchronous function g, shown in part (b). Function g is obtained from f by adding a single new variable z1; that is, g is defined over four variables. In general, to generate g, one new z-variable is added for each nontrivial privileged cube in (f, T). Next, the prime implicants of the synchronous function g are computed (shown in part (b) as ovals). Finally, a simple filtering procedure is used to filter out those prime implicants of g that correspond to implicants of (f, T) that intersect the privileged cube illegally. The remaining prime implicants of g are shown in part (c). We then "delete" the z-dimension from the prime implicants and obtain the entire final set of dhf-prime implicants of (f, T) (part (d)).
Our approach is motivated by the fact that dhf-prime implicants are more constrained than prime implicants of the same function. While prime implicants are maximal implicants that do not intersect the OFF-set of the given function, dhf-prime implicants, in addition, must also not intersect privileged cubes illegally. Thus, there are two different kinds of
constraints for dhf-prime-implicants: “maximality” constraints
and “avoidance of illegal intersections” constraints. Our idea
is to unify these two types of constraints, i.e., to transform
the avoidance constraints into maximality constraints so that
dhf-primes can be generated in a uniform way. Intuitively,
this unification can be achieved by adding auxiliary variables,
i.e., by lifting the problem into a higher dimensional Boolean
space.
In summary, the big picture is as follows. The definition of g ensures that all dhf-prime implicants of (f, T), dhf-Prime(f, T), can be easily obtained from the set of prime implicants of g, Prime(g). While Prime(g) may also include certain products that are not hazard free, these latter ones are filtered out easily using a postprocessing step.
B. The Auxiliary Synchronous Function g

We now explain how the synchronous function g is derived. For simplicity, assume for now that f is a single-output function.

Fig. 8. Example for recasting prime generation. (a) shows the function (f, T) whose dhf-primes are to be computed. (b) shows the auxiliary synchronous function g and its primes. (c) shows primes of g that do not intersect illegally. (d) shows the final dhf-primes of (f, T) after deleting the z1 variable.

Suppose f is defined over the set of variables x1, ..., xn and that the set of transitions T gives rise to the set of nontrivial privileged cubes p1, ..., pl. The idea is to define a function g over x1, ..., xn, z1, ..., zl; that is, one new variable is added per privileged cube. Formally, g is defined as follows:

g = f * (z1' + p1') * (z2' + p2') * ... * (zl' + pl').

Function g is the product of f and some function that depends on the new inputs. The intuition behind the definition of g is that, in the zi' half of the domain, g is defined as f, while in the zi half of the domain, g is defined as f but with the ith privileged cube pi "filled in" with all 0's (i.e., pi is "masked out").
Example: As an example, Fig. 8(a) shows a Boolean function (f, T) with privileged cube p1 (highlighted in gray). Fig. 8(b) shows the corresponding new function g with added variable z1. In the z1' half, function g is identical to f. In the z1 half, g is identical to f except that g is 0 throughout the entire cube p1, which corresponds to the privileged cube in the original function (f, T). In particular, function g is defined as g = f * (z1' + p1').
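The construction of g can be checked by brute-force enumeration of minterms, consistent with the definition above (toy sizes only; the masking convention, zi = 1 masks out pi, follows the negative-unate discussion below, and all function names are ours):

```python
from itertools import product

def in_cube(cube, mt):
    # Positional-cube membership: bit 0b01 allows value 1, bit 0b10 allows 0.
    return all(e & (1 if b else 2) for e, b in zip(cube, mt))

def build_g(f_minterms, priv_cubes):
    """g = f * prod_i (zi' + pi'): in the zi = 1 half, privileged cube pi
    is filled in with 0's."""
    l = len(priv_cubes)
    g = set()
    for x in f_minterms:
        for z in product((0, 1), repeat=l):
            if all(not (zi and in_cube(p, x)) for zi, p in zip(z, priv_cubes)):
                g.add(x + z)  # x-bits followed by z-bits
    return g

# Toy 2-variable f = {a'b', a'b, ab} with one privileged cube p1 = a' (10 11).
g = build_g({(0, 0), (0, 1), (1, 1)}, [(2, 3)])
# z1 = 0 half equals f; in the z1 = 1 half the a' column is masked out.
print(sorted(g))  # [(0, 0, 0), (0, 1, 0), (1, 1, 0), (1, 1, 1)]
```

Only the minterm outside p1 survives in both halves, which is exactly the property prime generation over g exploits.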
C. Prime Implicants of Function g

To understand the role of function g, we consider its prime implicants, Prime(g).
We start by considering a function that has only one privileged cube p1. Let q be any implicant of the function g that is contained in the z1' plane of g. Since the z1' plane is defined as f, q also corresponds to an implicant of f. Now, consider the expansion of q into the z1 plane of function g. There are two possibilities: either i) q can expand into the z1 plane or ii) q cannot expand into the z1 plane.
In case i), expansion of q into the z1 plane means that g is identical to f in the expanded region. Therefore, q does not intersect privileged cube p1 in the original function (if it did, g would have all 0's in p1 in the z1 plane, and expansion would be impossible). In case ii), expansion into the z1 plane is impossible. In this case, q must intersect p1, which has all 0's in the z1 plane of function g.
In summary, q may or may not be able to expand from the z1' plane into the z1 plane. Expansion can occur precisely if q does not intersect the privileged cube p1 in the original function, since function g is identically defined in both planes "outside" the privileged cube. Expansion cannot occur if q intersects the privileged cube p1, because in the z1 plane, the privileged cube is filled in entirely with 0's.
of in
Example: Consider the minterm
of
Fig. 8(b), which corresponds to the minterm
can be expanded into the
plane into the prime implicant
(shaded oval). Intuitively, the expansion is
of :
does not intersect the privileged cube, i.e.,
possible since
which corresponds to the privileged cube
of
the cube
However, the implicant
the original function
(oval with thick dark border) of cannot be expanded into the
plane: it intersects the privileged cube, and therefore
plane is filled with
the corresponding region in the
0’s. Note that prime generation is an expansion process that
continues until no further expansion is possible.
Let us now consider the general case, i.e., where (f, T) may have more than one privileged cube. We show that the support variables of each prime of g precisely define which privileged cubes are intersected by the corresponding implicant in f.
Let q be any prime implicant of g. Each x-literal of q may be a positive or negative literal.7 However, each z-literal of q can only be a negative literal. The reason is that g is a negative unate function in the z-variables (by the definition of g), and therefore prime implicants of g will never include positive z-literals. We indicate by qx the restriction of q to the x-literals. Note that qx is an implicant of f by the definition of g.
We now show that the presence, or absence, of z-literals in prime implicant q indicates precisely which privileged cubes are intersected by qx. If q includes literal zi', then qx intersects privileged cube pi in function f. To see this, note that since q is prime, q clearly cannot be further expanded into the zi plane. As a result, qx must intersect privileged cube pi in the original function f. On the other hand, if q does not include zi', then qx does not intersect pi. Intuitively, the primes of g are maximal in two senses: they are maximally expanded in f, or maximally nonintersecting of privileged cubes, in some combination, which is explicitly indicated by the set of support of the primes.
In sum, the key observation is that the set of support of a prime implicant of g immediately indicates which privileged cubes are intersected by the corresponding implicant qx in f. This observation will be critical in obtaining the final set of dhf-prime implicants of (f, T).
7 Note that q may not depend on all of the x-variables: some may not appear here.
D. Transforming Prime(g) into dhf-Prime(f, T)

Once Prime(g) is computed, dhf-Prime(f, T) can be directly computed. The important insight for this computation is that the prime implicants of g fall into three classes with respect to a specific privileged cube pi. Each prime q is distinguished based on if and how it intersects the privileged cube pi, i.e., based on the intersection of qx with pi:
Class 1) prime implicants q that do not intersect the privileged cube, i.e., qx does not intersect pi;
Class 2) prime implicants q that intersect the privileged cube legally, i.e., qx intersects pi and contains its start point;
Class 3) prime implicants q that intersect the privileged cube illegally, i.e., qx intersects pi but does not contain the start point.
Prime implicants that fall into Classes 2) and 3) (i.e., qx intersects some privileged cube) can be immediately identified by the observation of the previous subsection. Those that fall into Class 3) can then be identified, and removed, using a simple containment check: determine if qx contains the start point of each intersected privileged cube.
The set dhf-Prime(f, T) can therefore be computed as follows. Start with the set Prime(g). Filter out all prime implicants that fall in Class 3) with respect to the first privileged cube. Then, filter out all prime implicants that fall in Class 3) with respect to the second privileged cube, and so on. Finally, one obtains a set such that each of its elements is a valid dhf-implicant of (f, T) if restricted to the x-variables. The reason is that, first, all primes of g are implicants of f if restricted to the x-variables and, second, the filtering removed any element that intersected any privileged cube illegally. Therefore, the set only includes dhf-implicants. In fact, it also contains all dhf-prime implicants of (f, T). This fact will be proved in the next subsection.
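The filtering step can be sketched explicitly on positional cubes (an explicit-set illustration of our own, not the implicit ZBDD computation of Section V): a prime of g carrying a zi'-literal intersects privileged cube pi and survives only if its x-part contains the start point si; then the z-dimension is deleted and single-cube containment is applied.

```python
# Positional cubes (0b01/0b10/0b11 per variable); the l z-positions
# follow the n x-positions.
def contains_minterm(cube, mt):
    return all(e & (1 if b else 2) for e, b in zip(cube, mt))

def scc(cubes):
    # Single-cube containment: drop cubes contained in other cubes.
    def inside(a, b):
        return a != b and all(x | y == y for x, y in zip(a, b))
    return [c for c in cubes if not any(inside(c, d) for d in cubes)]

def dhf_primes(primes_of_g, start_points, n):
    kept = []
    for q in primes_of_g:
        x_part, z_part = q[:n], q[n:]
        # A zi'-literal (entry 0b10) marks intersection with pi:
        # Class 3 unless the x-part contains the start point si.
        if all(z != 0b10 or contains_minterm(x_part, s)
               for z, s in zip(z_part, start_points)):
            kept.append(x_part)  # delete the z-dimension
    return scc(kept)

# Toy: n = 2, one privileged cube, start point s1 = ab = (1, 1).
# (3,1,2): cube b with z1'-literal, contains s1 -> kept   (Class 2).
# (2,3,2): cube a' with z1'-literal, misses s1  -> dropped (Class 3).
# (1,3,3): cube a with no z-literal             -> kept   (Class 1).
print(dhf_primes([(3, 1, 2), (2, 3, 2), (1, 3, 3)], [(1, 1)], n=2))
# [(3, 1), (1, 3)]
```

The surviving x-parts are dhf-implicants, and after SCC they are exactly the dhf-primes in this toy instance.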
Example: Fig. 8(b) shows function g and its prime implicants, Prime(g). Fig. 8(c) shows the result of filtering out primes that illegally intersect regions corresponding to privileged cubes in (f, T). In this case, the prime drawn with a thick dark border falls into Class 3) with respect to p1: it is deleted since it has a z1'-literal, i.e., intersects the region corresponding to privileged cube p1, and, in addition, does not contain the start point. However, the shaded oval falls into Class 1): it is not deleted since it does not have a z1'-literal and therefore does not intersect the region corresponding to the privileged cube p1. The remaining two primes fall into Class 2): they intersect the region corresponding to p1 and also contain the start point. Fig. 8(d) shows the result of the final step, which deletes the z-literals in each cube. We obtain the final set of dhf-prime implicants of (f, T).
Note that the introduction of the z1-variable ensures that a dhf-implicant of (f, T) which is not a prime implicant of f (since it is contained by some prime implicant of f) is nevertheless generated.
E. Formal Characterization of dhf-Prime(f, T) in Terms of Function g

In this subsection, based on the above discussion, we present the main result of this section: a new formal characterization of dhf-Prime(f, T). We use the following notations. gzi and gzi' denote the positive and negative cofactors of g with respect to variable zi, respectively. RemZ denotes an operator on a set of cubes that removes all z-literals of each cube.8 The SCC-operator on a set of cubes (single-cube-containment) removes those cubes contained in other cubes.

Theorem IV.1: Given (f, T), let p1, ..., pl be the set of nontrivial privileged cubes,9 and s1, ..., sl the set of corresponding start points. Define g = f * (z1' + p1') * ... * (zl' + pl'). Then the set dhf-Prime(f, T) can be expressed as follows:

dhf-Prime(f, T) = SCC( Intersection over i = 1, ..., l of ( { q in RemZ(Prime(g)) : q is contained in pi' } union { q in RemZ(Prime(g)) : q contains si } ) ).  (1)
Intuition: For each privileged cube pi, the first set in (1) includes implicants that do not intersect the privileged cube pi. The second set includes implicants that legally intersect pi, i.e., contain the corresponding start point si. The intersection operation ensures that only those implicants remain that are legal with respect to all privileged cubes, i.e., that are dhf-implicants. The SCC-operator removes implicants contained in other implicants to yield the final set of dhf-prime implicants.
Proof: "Subset" direction (any product in dhf-Prime(f, T) is also contained in (1)).
Let q be in dhf-Prime(f, T). Then q does not intersect any privileged cube illegally; i.e., for each privileged cube, it holds that q either contains the corresponding start point or does not intersect the privileged cube at all.
Suppose q intersects some of the privileged cubes legally and does not intersect the remaining ones, i.e., q is an implicant of pi' for each nonintersected privileged cube pi. Then the cube qz, obtained from q by adding the literal zi' for each legally intersected privileged cube pi, is an implicant of g. Moreover, qz is a prime implicant of g because of the following.
1) Removing any zi'-literal results in a cube that is not an implicant of (zi' + pi') and hence not an implicant of g.
2) Removing any positive or negative x-literal results in a cube such that its restriction to the x-literals is not a dhf-implicant (q itself is a dhf-prime implicant). Thus, the restriction either intersects the OFF-set of f or illegally intersects pi for some privileged cube pi; in either case, the resulting cube is no longer an implicant of g.
Hence q = RemZ(qz) is contained in RemZ(Prime(g)). Furthermore, for each privileged cube pi, q is by construction in at least one of the two sets associated with pi. Therefore, q is contained in the intersection of those sets. Also, q cannot be filtered out by the SCC-operator, since by construction all cubes contained in (1) are dhf-implicants and q is a dhf-prime implicant. Thus, q is contained in (1).
8 RemZ can formally be expressed by existential quantification over the z-variables, i.e., RemZ(P) = { x in {x1, x1', x2, x2', ..., xn, xn'}* | there exists z in {z1, z1', z2, z2', ..., zl, zl'}* such that xz is in P }.
9 In the theorem, pi' denotes the complement function of pi. Example: if p1 = x1 x2 x4, then p1' = x1' + x2' + x4'.
"Superset" direction (any product contained in (1) is also contained in dhf-Prime(f, T)).
We show the contrapositive: assume q is not in dhf-Prime(f, T); then q is not contained in (1).
Case i) q is a dhf-implicant that is strictly contained in some dhf-prime implicant. Then q is filtered out by the SCC-operator and therefore not contained in (1).
Case ii) q is not a dhf-implicant. By construction, all cubes contained in (1) are dhf-implicants: the intersection ensures that each cube is valid with respect to all privileged cubes; i.e., for each privileged cube, the cube either does not intersect it or else contains the start point. Thus, q cannot be contained in (1).
F. Multioutput Case
For simplicity of presentation only, it was assumed that f is a single-output function. However, it is well known [34] that
multioutput logic minimization can be reduced to single-output
minimization. Based on this theorem, the above characterization carries over in a straightforward way to multioutput
functions. All examples given later in the experimental results
section are multioutput functions.
V. EXACT HAZARD-FREE MINIMIZATION: IMPYMIN
Based on the ideas of the previous section, we are now able
to present a new exact implicit minimization algorithm for
multioutput two-level hazard-free logic.
A. Implicit Two-Level Logic Minimization: SCHERZO
We first briefly review the state-of-the-art synchronous exact two-level logic minimization algorithm, called SCHERZO
[8], which forms a basis of our new hazard-free implicit
minimization method.
SCHERZO has two significant differences from classic minimization algorithms like the well-known Quine–McCluskey
algorithm:
• SCHERZO uses data structures like BDD’s [3] and ZBDD’s
[21] to represent Boolean functions and sets of products
very efficiently. Thus, the complexity of the minimization problem is shifted, and the cost of the cyclic core
computation10 is now independent of the number of
products (e.g., the number of prime implicants) that are
manipulated.
10 A set covering problem can be reduced in size by repeated elimination
of essential elements and application of dominance relations. The remaining
set covering problem (if any) is called the cyclic core.
• SCHERZO includes new algorithms that operate on these data structures. The motivation is that the logic minimization problem can be considered as a set covering problem over a lattice. More specifically, both the covering objects and the objects to be covered are subsets of the lattice of all Boolean products (over the set of literals). A new cyclic core computation algorithm then uses two endomorphisms, which operate on the covering objects and on the objects to be covered, respectively, to capture dominance relations and to compute a fixpoint, which can be shown to be isomorphic to the cyclic core.
Algorithm: SCHERZO
Input: Boolean function f.
Output: All minimum two-level implementations of f.
1) Compute the ZBDD of Prime(f) (the set of all prime implicants of f, i.e., the covering objects). Here, f is given as a BDD.
2) Compute the ZBDD of the set of ON-set minterms of f (i.e., the objects to be covered).
3) Solve the implicit set covering problem. (Note that containment replaces the membership relation usually used to describe the relation between the two sorts of objects of a covering problem, since our set covering problem is considered over a lattice, as explained above.)
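The covering step can be illustrated in explicit form (the implicit ZBDD formulation is SCHERZO's contribution; this brute-force sketch with names of our own is only for intuition): choose a minimum subset of covering objects so that every object to be covered is contained in some chosen one.

```python
from itertools import combinations

def min_cover(objects, primes, covers):
    """Exact minimum cover by exhaustive search (toy sizes only).
    covers(p, o) is True iff covering object p covers object o."""
    for k in range(1, len(primes) + 1):
        for pick in combinations(primes, k):
            if all(any(covers(p, o) for p in pick) for o in objects):
                return list(pick)
    return None  # no cover exists

# Toy instance: prime A covers {m1, m2}; prime B covers {m2, m3}.
table = {("A", "m1"), ("A", "m2"), ("B", "m2"), ("B", "m3")}
print(min_cover(["m1", "m2", "m3"], ["A", "B"],
                lambda p, o: (p, o) in table))  # ['A', 'B']
```

SCHERZO (and IMPYMIN below) solve the same problem, but with the objects represented implicitly as ZBDD's, so the cost does not grow with the number of products.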
B. Implicit Two-Level Hazard-Free Logic Minimization: IMPYMIN

Nowick/Dill reduced two-level hazard-free logic minimization to a unate covering problem (see Section II) where each required cube must be covered by at least one dhf-prime implicant. As with synchronous logic minimization in SCHERZO, hazard-free logic minimization can also be considered over the lattice of the set of products (over the set of literals). The major difference from synchronous two-level logic minimization is the setting up of the covering problem. In particular, a method is needed that computes the set dhf-Prime(f, T) efficiently, preferably in an implicit manner. To do so, we use the new characterization of dhf-Prime(f, T) of Section IV. The algorithm is as follows.
Algorithm: IMPYMIN
Input: Boolean function f, set of input transitions T.
Output: All minimum hazard-free two-level implementations of (f, T).
1) Compute the ZBDD of the set of dhf-prime implicants of (f, T).
2) Compute the ZBDD of the set of required cubes of (f, T).
3) Solve the implicit unate set covering problem.
We now explain each of the steps in detail.
1) Computation of the ZBDD of the dhf-Prime Implicants: Suppose that f is given as a BDD (if f is given as a set of cubes, we first compute its BDD). From the BDD representing f, we can easily compute a BDD for the auxiliary synchronous function and then the ZBDD of its prime implicants using an existing recursive algorithm [8]. From this ZBDD, we can then compute the final ZBDD of the dhf-prime implicants using Theorem IV.1. It remains to show that the necessary operations for these steps can be implemented efficiently on ZBDD's.
• Computing the first intermediate set of Theorem IV.1: Assuming that positive and negative literal nodes of the same variable are always adjacent in the ZBDD, we only need to traverse the ZBDD once. We replace each node labeled with an added variable by the result of the following operation: we compute the set union of the two successors, corresponding to those products that include the positive literal and to those products that do not depend on the variable. The resulting ZBDD may actually include nonprimes, i.e., cubes contained in other cubes. However, these cubes are filtered out by the single-cube-containment operator (see below).
• Computing the second intermediate set of Theorem IV.1: analogously.
• Computing the literal deletion: this operator deletes all literals of the added variables in the ZBDD. We traverse the ZBDD, and at each positive or negative literal of an added variable, we replace the corresponding node with the ZBDD corresponding to the union of the two successors.
• Single-Cube Containment (SCC): The last task, the application of the SCC operator, which removes cubes contained in other cubes, is actually not performed in this step, since it is automatically handled in Step 3 of the algorithm.
To summarize, based on Theorem IV.1, we can compute the covering objects, the dhf-prime implicants, in an implicit manner.
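On explicit cube sets, the literal-deletion and SCC operations above have a straightforward analogue. The sketch below uses our own names and represents cubes as sets of literals rather than ZBDD's; an actual implicit implementation works node by node on the ZBDD instead:

```python
def delete_added_literals(cubes, added_vars):
    """Drop every literal of an added (auxiliary) variable from every
    cube; this mirrors replacing each such ZBDD node by the union of
    its two successors."""
    return {frozenset(lit for lit in cube if lit[0] not in added_vars)
            for cube in cubes}

def scc(cubes):
    """Single-cube containment: discard any cube contained in another
    cube.  With cubes as literal sets, cube c is contained in cube d
    exactly when d's literals are a proper subset of c's."""
    return {c for c in cubes if not any(d < c for d in cubes)}

# Two cubes over variables a, b and an added variable z: a*z and
# a*b'*z'.  Deleting the z-literals yields a and a*b'; SCC then
# removes a*b', which is contained in a.
cubes = {frozenset({('a', 1), ('z', 1)}),
         frozenset({('a', 1), ('b', 0), ('z', 0)})}
result = scc(delete_added_literals(cubes, {'z'}))
assert result == {frozenset({('a', 1)})}
```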
2) Computation of the ZBDD of the Required Cubes: From the set of input transitions T, the set of required cubes can easily be computed (see [29]). This set can be stored as a ZBDD.
3) Solving the Implicit Covering Problem: The implicit set covering problem can then be solved analogously to Step 3 of SCHERZO, i.e., passed directly to the unate set covering solver of SCHERZO.
C. A Note on the Efficiency of IMPYMIN
IMPYMIN appends new variables during the construction of the synchronous function used for dhf-prime generation. It is worth pointing out that the algorithm does not become unattractive even in cases where many added variables are necessary. Such cases typically arise when there are many dynamic transitions, and hence many privileged cubes. In practice, the addition of many variables does not necessarily imply that the BDD for the auxiliary function will be much larger than the BDD for f (see Section VI-D).
Experimental results also indicate that IMPYMIN has significantly better run time than existing asynchronous methods on
large examples. It also performs hazard-free logic minimization nearly as efficiently as synchronous logic minimization for
many examples. One reason is that the new characterization of
the set of dhf-prime implicants, presented in Section IV, makes
it possible to use state-of-the-art synchronous techniques for
implicit prime generation and implicit set covering solving
(see Section VI-D for a detailed discussion).
VI. EXPERIMENTAL RESULTS AND
COMPARISON WITH RELATED WORK
Prototype versions of our two new minimizers ESPRESSO-HF11 and IMPYMIN were run on several well-known benchmark circuits [12], [37] on an UltraSPARC 140 workstation (memory: 89 MB real/230 MB virtual).
A. Comparison of Exact Minimizers: IMPYMIN Versus HFMIN
The table in Fig. 9 compares our new exact minimizer
IMPYMIN with the currently fastest available exact minimizer,
HFMIN, by Fuhrer et al. [12].
For smaller problems, HFMIN is faster. It should be noted, though, that our implementation is not yet optimized.12 However, the bottleneck of HFMIN already becomes clearly visible for medium-sized examples. For sd-control and stetson-p2, IMPYMIN is more than 3 times faster; for the benchmark pscsi-pscsi, it is more than 15 times faster.
For very large examples, IMPYMIN outperforms HFMIN by
a large factor. While HFMIN cannot solve stetson-p1 within
20 h, IMPYMIN can solve it in just 813 s. The superiority of
implicit techniques becomes very apparent for the benchmark
cache-ctrl. While HFMIN gives up (after many minutes of run
time) because the 230 MB of virtual memory is exceeded, our
method can minimize the benchmark in just 301 s.
B. Comparison of Our New Methods:
IMPYMIN Versus ESPRESSO-HF
Fig. 10 compares our two new minimizers ESPRESSO-HF
and IMPYMIN. Besides run time and size of solution, the table
also reports the number of essentials (for ESPRESSO-HF) and
the number of variables that need to be added (for IMPYMIN).
The two minimizers are somewhat orthogonal.
On the one hand, IMPYMIN computes a cover of minimum size, whereas ESPRESSO-HF is not guaranteed to find a minimum cover but typically does find a cover of very good quality. In particular, ESPRESSO-HF always finds a cover that is at most 3% larger than the minimum cover size. It is worth pointing
out that many examples were very positively influenced by
our new notion of essentials. Quite a few examples can be
minimized by just the essentials step, resulting in a guaranteed
minimum solution; e.g., dram-ctrl and pe-send-ifc.
11 Our implementation is not a simple modification of the ESPRESSO-II
code. We do not reuse any ESPRESSO-II code. The reason is that while we use
the same set of main operators—EXPAND, REDUCE, IRREDUNDANT—the
algorithms that implement these operators, as explained in detail in Section III,
are actually very different from ESPRESSO-II.
12 Our BDD package is still very inefficient. In particular, it includes a
static (i.e., not a dynamic) hashtable. The hashtable for small examples
is unnecessarily large. In fact, the run time is completely dominated by
initializing the hashtables. If we use an appropriately sized hashtable for smaller examples, experiments indicate that IMPYMIN can solve the small examples as fast as HFMIN.
Fig. 9. Comparison of exact hazard-free minimizers (#c—number of cubes
in minimum solution, time—run time in seconds).
On the other hand, ESPRESSO-HF is typically faster than
IMPYMIN. However, since neither tool has been highly optimized for speed, we think it is very important to analyze the
intrinsic advantages and disadvantages of the two methods.
Intuitively, both methods overcome the three bottlenecks of
HFMIN—prime implicant generation, transformation of prime
implicants to dhf-prime implicants, and solution of the covering problem—each of which is solved by an algorithm with
exponential worst case behavior. However, the way in which
ESPRESSO-HF and IMPYMIN overcome these bottlenecks is
very different. Whereas IMPYMIN uses implicit data structures
(but still follows some of the same basic steps as HFMIN),
ESPRESSO-HF follows a very different approach. Thus, the
two methods are orthogonal in their approach to overcome
these bottlenecks. Moreover, while ESPRESSO-HF is faster than
IMPYMIN on all of our examples, this does not mean that this
is necessarily true for other examples.
In this context, it is important to note that the role that data structures like BDD's play in obtaining efficient implementations of CAD algorithms is very often misunderstood. Using
BDD’s, many CAD problems can now be solved much faster
than before the inception of BDD’s. However, the naive
approach of taking an existing CAD algorithm and augmenting
it with BDD’s does not necessarily lead to a good tool (see
discussion in [8]). In particular, it is not easily possible to
simply augment ESPRESSO-HF or HFMIN with BDD’s to obtain
a high-quality tool. Rather, a new theoretical formulation was
needed on the characterization of dhf-prime implicants (see
Section IV-E), on which the new exact implicit minimizer
could be based.
C. Comparison with Rutten’s Work
An interesting alternative approach to our new characterization of dhf-prime implicants (see Section IV-E) was recently
proposed by Rutten et al. [33], [32] as part of an exact
hazard-free minimization algorithm. Their new algorithm for computing dhf-prime implicants is very different from ours.
Fig. 10. Comparison of the heuristic hazard-free minimizer ESPRESSO-HF, the exact hazard-free minimizer IMPYMIN, the heuristic minimizer ESPRESSO-II, and the exact minimizer SCHERZO (#c—number of cubes in solution, time—run time in seconds, #e—number of essentials, #v—number of added variables, BDD—BDD sizes without/with added variables).
Their approach follows a divide-and-conquer paradigm. In particular, the dhf-prime generation problem is split into three subproblems with respect to a splitting variable. The first (second, third) subproblem generates those dhf-prime implicants that have a positive literal (negative literal, don't-care literal) for the splitting variable. The underlying reason why this approach may be efficient is that it allows one to determine illegal intersections of privileged cubes already during the splitting phase (see [33] for details), which can significantly reduce the recursion tree and lead quickly to terminal cases. In the merging phase of the divide-and-conquer approach, the solutions to the subproblems are then combined.
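The split itself can be pictured on explicit cube sets. Below is a minimal sketch of one divide step in our own formulation (Rutten's algorithm additionally prunes illegal intersections of privileged cubes during this step, which this sketch omits):

```python
def split_on(cubes, var):
    """One divide step: partition candidate products into the three
    subproblems whose cubes have a positive literal, a negative
    literal, or a don't-care (variable absent) for the splitting
    variable.  Cubes are represented as frozensets of (var, polarity)
    literals."""
    pos = {c for c in cubes if (var, 1) in c}
    neg = {c for c in cubes if (var, 0) in c}
    dc = cubes - pos - neg
    return pos, neg, dc

# Cubes x*y', x', and y: splitting on x sends them to the positive,
# negative, and don't-care subproblems, respectively.
cubes = {frozenset({('x', 1), ('y', 0)}),
         frozenset({('x', 0)}),
         frozenset({('y', 1)})}
pos, neg, dc = split_on(cubes, 'x')
assert pos == {frozenset({('x', 1), ('y', 0)})}
assert neg == {frozenset({('x', 0)})}
assert dc == {frozenset({('y', 1)})}
```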
However, it is worth pointing out that a major difference
between our work and Rutten’s is that his approach is not
based on implicit representations, while ours is. Furthermore,
while Rutten’s work is promising, it has not been fully
evaluated so far. In particular, he only presented run times
for functions that are significantly smaller than those that can
be handled by our two new methods. To be precise, on the
examples he reports, his own reimplementation of the existing
HFMIN tool never takes more than a few seconds. Thus, Rutten
evaluates his approach (and admittedly shows improvement)
only on examples that can already easily be solved by existing
algorithms. In contrast, as shown in the previous subsection,
our new methods are more powerful, since they can solve
examples efficiently that cannot be solved by HFMIN within
several hours of run time.
D. Comparison of Synchronous Versus
Asynchronous Minimization
We now compare our two new tools for two-level hazard-free minimization, ESPRESSO-HF and IMPYMIN, with the two corresponding state-of-the-art tools for two-level non-hazard-free minimization, ESPRESSO-II and SCHERZO. The table in
Fig. 10 compares both run time and cardinality of solution
for all four minimizers. The table also indicates the number of identified essentials for the two heuristic minimizers,
ESPRESSO-II and ESPRESSO-HF. Finally, for IMPYMIN, it re-
ports the number of added variables and their impact on
BDD size.
The run-time comparison indicates that, although our tools
are not implemented as efficiently as their synchronous counterparts, they are comparably fast. Interestingly, our tools are
actually faster than the synchronous tools for the two largest
examples, cache-ctrl and stetson-p1. For our set of benchmarks, this result seems to indicate that the more constrained asynchronous problem, which is to minimize a function without hazards for a given set of input transitions, may be easier than the corresponding synchronous problem, which is to minimize the same function without any specified input transitions and without hazard-freedom constraints.
The comparison in terms of cardinality of solution indicates
an increase in the asynchronous case compared with the synchronous case. In an earlier comparison [29], it was observed
that the logic overhead for the asynchronous case was never
greater than 6%. In contrast, in our table, there is a large
variation in overhead, ranging from 0% (stetson-p3) to 60%
(ssci-trcv-bm). The increase in overhead is due to the fact
that we now report on significantly more complex problems:
while [29] only performed single-output minimization, we
do multioutput minimization (on many of the same circuit
examples), including for functions ranging up to 32 inputs
and 33 outputs.
However, it is important to note that this table should not be
used to draw general conclusions regarding how much logic
overhead asynchronous designs incur due to the necessity to
avoid hazards. Our benchmark functions have been generated
by asynchronous synthesis methods, i.e., these functions do
not really make much sense in a synchronous system. On the
one hand, functions derived from asynchronous FSM’s must
have function-hazard-free input changes and critical race-free
state changes, unlike those derived from synchronous FSM’s.
On the other hand, asynchronous FSM’s are typically specified
in a more controlled environment, with more don’t cares. A
truly fair comparison on this interesting point is well beyond the scope of this paper.
Fig. 10 also compares the number of identified essentials, using both hazard-free and non-hazard-free algorithms.
ESPRESSO-HF’s new formulation of essential equivalence
classes typically allows many more essentials to be identified
than in ESPRESSO-II. For example, in cache-ctrl, ESPRESSO-HF
identifies 50 essentials (out of an exact minimum cover of 97
cubes), while ESPRESSO-II identifies only seven essentials (out
of an exact minimum cover of 80 cubes). Thus, ESPRESSO-HF
makes positive use of hazard-freedom constraints to obtain
a very strong formulation of essentials, which has a positive
impact on both run time and quality of solution.
Finally, the table indicates that adding (sometimes many) variables in IMPYMIN does not lead to an explosion in terms of BDD size. To incorporate hazard-freedom constraints, IMPYMIN (unlike SCHERZO) transforms the BDD of f into the BDD of an auxiliary function. The table, which compares the corresponding BDD sizes for the same BDD package and variable ordering, indicates that adding variables for this transformation increases the BDD size only by a small factor, typically about two, even for large examples. Thus, the BDD size of the auxiliary function is not much larger than the BDD size of f.
VII. CONCLUSIONS
We have presented two new minimization methods for multioutput two-level hazard-free logic minimization: ESPRESSO-HF, a heuristic method based loosely on ESPRESSO-II, and
IMPYMIN, an exact method based on implicit data structures.
Both tools can solve all examples that we have available.
These include several large examples that could not be minimized by previous methods.13 In particular, both tools can
solve examples that cannot be solved by the currently fastest
minimizer, HFMIN. On the more difficult examples that can
be solved by HFMIN, ESPRESSO-HF and IMPYMIN are typically
orders of magnitude faster.
Although ESPRESSO-HF is a heuristic minimizer, it almost
always obtains an exactly minimum-size cover. ESPRESSO-HF
also employs a new fast method to check for the existence
of a hazard-free solution, which does not need to generate all
dhf-prime implicants.
IMPYMIN performs exact hazard-free logic minimization
nearly as efficiently as synchronous logic minimization by
incorporating state-of-the-art techniques for implicit prime
generation and implicit set covering solving. IMPYMIN is based
on the new idea of incorporating hazard-freedom constraints
within a constructed synchronous function by adding extra
inputs. We expect that the proposed technique may very well
be applicable to other hazard-free optimization problems, too.
ACKNOWLEDGMENT
The authors would like to thank O. Coudert for very helpful
discussions and for his immense help with the experiments.
The authors would also like to thank the reviewers for insightful suggestions, B. Fuhrer for providing his tool HFMIN, and M. Singh for interesting discussions.
13 In publications on the 3D method (see, e.g., [43] and [41]), note that several of these examples appear, but only single-output minimization is performed.
REFERENCES
[1] P. A. Beerel, “CAD tools for the synthesis, verification, and testability of
robust asynchronous circuits,” Ph.D. dissertation, Stanford University,
Stanford, CA, 1994.
[2] M. Benes, S. M. Nowick, and A. Wolfe, “A fast asynchronous Huffman
decoder for compressed-code embedded processors,” in Proceedings of
the International Symposium on Advanced Research in Asynchronous
Circuits and Systems. New York: IEEE Computer Society Press, 1998.
[3] R. E. Bryant, “Graph-based algorithms for boolean function manipulation,” IEEE Trans. Comput., vol. C-35, pp. 677–691, Aug. 1986.
[4] W. Chou, P. A. Beerel, R. Ginosar, R. Kol, C. J. Myers, S. Rotem,
K. Stevens, and K. Y. Yun, “Optimizing average-case delay in the
technology mapping of domino dual-rail circuits: A case study of
an asynchronous instruction length decoding PLA,” in Proceedings of
the International Symposium on Advanced Research in Asynchronous
Circuits and Systems. New York: IEEE Computer Society Press, 1998.
[5] T.-A. Chu, “Synthesis of self-timed VLSI circuits from graph-theoretic
specifications,” Ph.D. dissertation, MIT Laboratory for Computer Science, Cambridge, June 1987.
[6] J. Cortadella, M. Kishinevsky, A. Kondratyev, L. Lavagno, and A.
Yakovlev, “Complete state encoding based on the theory of regions,”
in Proceedings of the International Symposium on Advanced Research
in Asynchronous Circuits and Systems. New York: IEEE Computer
Society Press, 1996.
[7] J. Cortadella, M. Kishinevsky, L. Lavagno, and A. Yakovlev, “Petrify:
A tool for manipulating concurrent specifications and synthesis of asynchronous controllers,” IEICE Trans. Fundamentals Electron. Commun.
Comput. Sci., vol. E80-D, no. 3, pp. 315–325, Mar. 1997.
[8] O. Coudert, “Two-level logic minimization: An overview,” Integration:
VLSI J., vol. 17, pp. 97–140, 1994.
[9] A. Davis, B. Coates, and K. Stevens, “Automatic synthesis of fast compact asynchronous control circuits,” in S. Furber and M. Edwards, Eds.,
Asynchronous Design Methodologies, vol. A-28 of IFIP Transactions.
Amsterdam, The Netherlands: Elsevier Science, 1993, pp. 193–207.
[10] A. Davis and S. M. Nowick, “An introduction to asynchronous circuit
design,” in A. Kent and J. G. Williams, Eds., Encyclopedia of Computer
Science and Technology, vol. 38, sup. 23, pp. 231–286, 1998.
[11] R. K. Brayton et al., Logic Minimization Algorithms for VLSI Synthesis.
Norwell, MA: Kluwer Academic, 1984.
[12] R. M. Fuhrer, B. Lin, and S. M. Nowick, “Symbolic hazard-free
minimization and encoding of asynchronous finite state machines,” in
1995 IEEE/ACM International Conference on Computer-Aided Design.
New York: IEEE Computer Society Press, 1995.
[13] R. M. Fuhrer, S. M. Nowick, M. Theobald, N. K. Jha, and L. Plana,
“Minimalist: An environment for the synthesis and verification of burstmode asynchronous machines,” in Proc. Int. Workshop Logic Synthesis,
1998.
[14] S. B. Furber, J. D. Garside, S. Temple, J. Liu, P. Day, and N. C. Paver,
“Amulet2e: An asynchronous embedded controller,” in Async97 Symp.,
ACM, Apr. 1997.
[15] J. Kessels and P. Marston, “Design asynchronous standby circuits for a
low-power pager,” in Async97 Symp., ACM, Apr. 1997.
[16] M. Kishinevsky, A. Kondratyev, A. Taubin, and V. Varshavsky, Concurrent Hardware: The Theory and Practice of Self-Timed Design. New
York: Wiley, 1994.
[17] P. Kudva, G. Gopalakrishnan, and H. Jacobson, “A technique for
synthesizing distributed burst-mode circuits,” in Proc. 33rd Design
Automation Conf., ACM, 1996.
[18] L. Lavagno and A. Sangiovanni-Vincentelli, Algorithms for Synthesis
and Testing of Asynchronous Circuits. Norwell, MA: Kluwer Academic, 1993.
[19] A. Marshall, B. Coates, and P. Siegel, “The design of an asynchronous
communications chip,” IEEE Design Test Comput. Mag., pp. 8–21, June
1994.
[20] A. J. Martin, S. M. Burns, T. K. Lee, D. Borkovic, and P. J. Hazewindus,
“The design of an asynchronous microprocessor,” in Proc. 1989 Caltech
Conf. Very Large Scale Integration, 1989.
[21] S. Minato, “Zero-Suppressed BDD’s for set manipulation in combinatorial problems,” in Proc. 30th Design Automation Conf., ACM, 1993.
[22] T. Nanya, Y. Ueno, H. Kagotani, M. Kuwako, and A. Takamura,
“TITAC: Design of a quasidelay-insensitive microprocessor,” IEEE
Design Test Comput. Mag., vol. 11, pp. 50–63, Summer 1994.
[23] L. S. Nielsen and J. Sparso, “A low-power asynchronous data path
for a fir filter bank,” in Proceedings of the International Symposium on
Advanced Research in Asynchronous Circuits and Systems (Async96). New York: IEEE Computer Society Press, 1996, pp. 197–207.
[24] S. M. Nowick and M. Theobald, “Synthesis of low-power asynchronous circuits in a specified environment,” in Proceedings of the 1997 International Symposium on Low Power Electronics and Design, 1997, pp. 92–95.
[25] S. M. Nowick and B. Coates, “UCLOCK: Automated design of high-performance unclocked state machines,” in Proc. IEEE Int. Conf. Computer Design, Oct. 1994, pp. 434–441.
[26] S. M. Nowick, M. E. Dean, D. L. Dill, and M. Horowitz, “The design of a high-performance cache controller: A case study in asynchronous synthesis,” in Proceedings of the Twenty-Sixth Annual Hawaii International Conference on System Sciences. New York: IEEE Computer Society Press, vol. I, pp. 419–427.
[27] S. M. Nowick and D. L. Dill, “Synthesis of asynchronous state machines using a local clock,” in IEEE International Conference on Computer Design. New York: IEEE Computer Society Press, Oct. 1991, pp. 192–197.
[28] S. M. Nowick, N. K. Jha, and F. Cheng, “Synthesis of asynchronous circuits for stuck-at and robust path delay fault testability,” IEEE Trans. Computer-Aided Design, vol. 16, pp. 1514–1521, Dec. 1997.
[29] S. M. Nowick and D. L. Dill, “Exact two-level minimization of hazard-free logic with multiple-input changes,” IEEE Trans. Computer-Aided Design, vol. 14, pp. 986–997, Aug. 1995.
[30] R. Rudell and A. Sangiovanni-Vincentelli, “Multiple valued minimization for PLA optimization,” IEEE Trans. Computer-Aided Design, vol. CAD-6, pp. 727–750, Sept. 1987.
[31] J. W. J. M. Rutten and M. R. C. M. Berkelaar, “Improved state assignments for burst mode finite state machines,” in Proc. 3rd Int. Symp. Advanced Research in Asynchronous Circuits and Systems, 1997.
[32] J. W. J. M. Rutten, M. R. C. M. Berkelaar, C. A. J. van Eijk, and M. A. J. Kolsteren, “An efficient divide and conquer algorithm for exact hazard free logic minimization,” in Proceedings of the Design, Automation and Test in Europe (DATE). New York: IEEE Computer Society Press, 1998.
[33] J. W. J. M. Rutten and M. A. J. Kolsteren, “A divide and conquer strategy for hazard free 2-level logic synthesis,” in Proc. Int. Workshop Logic Synthesis, 1997.
[34] T. Sasao, “An application of multiple-valued logic to a design of programmable logic arrays,” in Proc. Int. Symp. Multiple-Valued Logic, 1978.
[35] R. F. Sproull, I. E. Sutherland, and C. E. Molnar, “The counterflow pipeline processor architecture,” IEEE Design Test Comput. Mag., vol. 11, no. 3, pp. 48–59, 1994.
[36] M. Theobald and S. M. Nowick, “An implicit method for hazard-free two-level logic minimization,” in Proceedings of the International Symposium on Advanced Research in Asynchronous Circuits and Systems. New York: IEEE Computer Society Press, 1998.
[37] M. Theobald, S. M. Nowick, and T. Wu, “Espresso-HF: A heuristic hazard-free minimizer for two-level logic,” in Proc. 33rd Design Automation Conf., ACM, 1996.
[38] S. H. Unger, Asynchronous Sequential Switching Circuits. New York: Wiley-Interscience, 1969.
[39] K. van Berkel, R. Burgess, J. Kessels, M. Roncken, F. Schalij, and A. Peeters, “Asynchronous circuits for low power: A DCC error corrector,” IEEE Design Test Comput. Mag., vol. 11, no. 2, pp. 22–32, Summer 1994.
[40] C. Ykman-Couvreur, B. Lin, and H. de Man, “Assassin: A synthesis system for asynchronous control circuits,” IMEC, Tech. Rep., Sept. 1994.
[41] K. Yun and D. L. Dill, “A high-performance asynchronous SCSI
controller,” in IEEE International Conference on Computer Design.
New York: IEEE Computer Society Press, 1995.
[42] K. Y. Yun, A. E. Dooply, J. Arceo, P. A. Beerel, and V. Vakilotojar, “The
design and verification of a high-performance low-control-overhead
asynchronous differential equation solver,” in Proceedings of the International Symposium on Advanced Research in Asynchronous Circuits
and Systems. New York: IEEE Computer Society Press, 1997.
[43] K. Y. Yun, D. L. Dill, and S. M. Nowick, “Synthesis of 3D asynchronous
state machines,” in IEEE International Conference on Computer Design.
New York: IEEE Computer Society Press, 1992, pp. 346–350.
Michael Theobald received the Diplom degree in
computer science from Johann Wolfgang Goethe-Universität, Frankfurt/Main, Germany, in 1994. He
currently is pursuing the Ph.D. degree in computer
science at Columbia University, NY.
His research interests include synchronous and
asynchronous circuits, computer-aided digital design, logic synthesis, formal verification, efficient
algorithms and data structures, and combinatorial
optimization.
Mr. Theobald received the Honorable Mention
Award at the 1997 International Conference on VLSI Design. He was a Best
Paper Finalist at the 1998 IEEE Async Symposium.
Steven M. Nowick received the B.A. degree from
Yale University, New Haven, CT, and the Ph.D. degree in computer science from Stanford University,
Stanford, CA, in 1993.
His Ph.D. dissertation introduced an automated
synthesis method for locally clocked asynchronous
state machines, and he formalized the asynchronous
specification style called “burst mode.” He currently
is an Associate Professor of computer science at
Columbia University, NY. His research interests include asynchronous circuits, computer-aided digital
design, low-power and high-performance digital systems, logic synthesis, and
formal verification of finite-state concurrent systems.
Dr. Nowick received an NSF Faculty Early Career (CAREER) Award
(1995), an Alfred P. Sloan Research Fellowship (1995), and an NSF Research
Initiation Award (RIA) (1993). He received a Best Paper Award at the 1991
International Conference on Computer Design and was a Best Paper Finalist
at the 1993 Hawaii International Conference on System Sciences and at the
1998 Async Symposium. He was Program Committee Cochair of the IEEE
Async-94 Symposium and is Program Committee Cochair of the forthcoming
IEEE Async-99 Symposium. He is a member of several international program
committees, including ICCAD, ICCD, ARVLSI, IWLS, and Async. He is also
Guest Editor of a forthcoming special issue of the PROCEEDINGS OF THE IEEE
on asynchronous circuits and systems (February 1999).