BISR mem layout

A New Built-In Self-Repair Approach to VLSI
Memory Yield Enhancement by Using
Neural-Type Circuits
Pinaki Mazumder, Member, IEEE, and Yih-Shyr Jih, Member, IEEE
Abstract—As VLSI chip size is rapidly increasing, more and more circuit components are becoming inaccessible for external testing, diagnosis, and repair. Memory arrays are widely used in VLSI chips, and restructuring of partially faulty arrays by the available spare rows and columns is a computationally intractable problem. Conventional software memory-repair algorithms cannot be readily implemented within a VLSI chip to diagnose and repair these faulty memory arrays. Intelligent hardware based on a neural-network model provides an effective solution for such built-in self-repair (BISR) applications. This paper clearly demonstrates how to represent the objective function of the memory repair problem as a neural-network energy function, and how to exploit the neural network's convergence property for deriving optimal repair solutions. Two algorithms have been developed using a neural network, and their performances are compared with the repair most (RM) algorithm that is commonly used by memory chip manufacturers. For randomly generated defect patterns, the proposed algorithm with a hill-climbing (HC) capability has been found to be successful in repairing memory arrays in considerably more cases than RM. The paper also demonstrates that, by using very small silicon overhead, one can implement this algorithm in hardware within a VLSI chip for BISR of memory arrays. The proposed auto-repair approach is shown to improve the VLSI chip yield by a significant factor, and it can also improve the life span of the chip by automatically restructuring its memory arrays in the event of sporadic cell failures during the field use.

Manuscript received May 1, 1990; revised January 10, 1992. This work was supported in part by the National Science Foundation, by the Office of Naval Research, and by the U.S. Army Research Office under the URI Program. This paper was recommended by Associate Editor F. Brekl. P. Mazumder is with the Department of Electrical Engineering and Computer Science, The University of Michigan, Ann Arbor, MI 48109. Y.-S. Jih was with the University of Michigan, Ann Arbor, MI; he is now with the IBM T. J. Watson Research Center, Yorktown Heights, NY.

I. INTRODUCTION

THE VLSI device technological feature width is rapidly decreasing to the range of hundreds of nanometers, and the chip yield tends to reduce progressively due to increased chip area, complex fabrication processes, and shrinking device geometries. In order to salvage partially faulty chips, redundant circuit elements are incorporated in the chips, and an appropriate reconfiguration scheme is employed to bypass and replace the faulty elements. In high-density dynamic random-access memory (DRAM) chips, redundant rows and columns are added to reconfigure the memory subarrays, where the rows or columns in which defective cells appear are eliminated by using techniques such as electrically programmable latches or laser personalization. The problem of optimal reconfiguration and spare allocation has been widely studied by many researchers [1], [9], [10], [14], [15], [29]. A review of these algorithms for memory diagnosis and repair can be seen in [4]. But all of these algorithms cannot be readily applied to embedded arrays, where they are neither controllable by external testers, nor are their responses readily observable. Built-in self-test circuits are commonly used to test such embedded arrays, and bad chips are discarded when they fail to pass the relevant test procedures. In order to salvage the partially faulty chips, new built-in self-repair (BISR) circuits must be developed for optimal repair of the faulty memory arrays.

In this paper, a novel self-repair scheme is proposed that uses a built-in neural network to determine how to automatically repair the faulty memory array by utilizing the spare rows and columns. Two algorithms have been proposed to illustrate how a neural network can solve random defects, and their performances are compared with the conventional software repair algorithms, such as Repair Most [28]. The Repair Most (RM) algorithm is a greedy algorithm that iteratively assigns a spare row (column) to replace the row (column) which currently has the maximum uncovered cells, until all the cells are covered or bypassed (providing a successful solution), or all the spare rows and columns are exhausted (failing to give a repairable solution, if all faulty cells are not bypassed). This algorithm is selected for comparison because it is relatively simple and can be implemented by digital hardware, unlike other software algorithms such as simulated annealing or approximation, which will require complex circuitry. A hardware version of simulated annealing will require a Gaussian white noise generator to generate the random moves, and a logarithmic circuit to control the temperature. The other heuristic-based algorithms (like FLCA, LECA, and others) and branch-and-bound [14] will require microcode-driven complex digital circuits for solving the problem by hardware. The performance of neural algorithms has been compared with
RM by running these algorithms with repairable fault pat­
terns and examining what percentages of these fault pat­
terns can be repaired by them. The simulation studies
made in this paper demonstrate that a neural network us­
ing a hill-climbing (HC) capability provides a fast and
good solution for a repairable array. The technique is im­
plemented within the chip by an electronic neural net­
work, and it promises to be a potential application area
for neural networks.
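A minimal sketch of the RM heuristic as described in the introduction (the function name and the tie-breaking rule, rows before columns, are illustrative assumptions, not the paper's specification):

```python
def repair_most(D, p, q):
    """Greedy Repair Most sketch. D is an m x n 0/1 matrix of the
    compacted faulty subarray; p and q are the spare row/column counts.
    Returns (replaced_rows, replaced_cols) or None if spares run out."""
    m = len(D)
    n = len(D[0]) if D else 0
    rows, cols = set(), set()

    def uncovered(i, j):
        return D[i][j] == 1 and i not in rows and j not in cols

    while any(uncovered(i, j) for i in range(m) for j in range(n)):
        # Count uncovered faults on each row and column.
        row_cnt = [sum(uncovered(i, j) for j in range(n)) for i in range(m)]
        col_cnt = [sum(uncovered(i, j) for i in range(m)) for j in range(n)]
        best_row = max(range(m), key=lambda i: row_cnt[i])
        best_col = max(range(n), key=lambda j: col_cnt[j])
        # Spend a spare on the line with the most uncovered cells.
        if row_cnt[best_row] >= col_cnt[best_col] and len(rows) < p:
            rows.add(best_row)
        elif len(cols) < q:
            cols.add(best_col)
        elif len(rows) < p:
            rows.add(best_row)
        else:
            return None  # spares exhausted, faults remain uncovered
    return rows, cols
```

A diagonal fault pattern illustrates RM's weakness: three diagonal defects with one spare row and one spare column are unrepairable by any scheme, and the greedy loop correctly reports failure.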
Historically, the memory repair problem dates back to
the evolution of 64 Kbit DRAM's in the late 1970's [25],
when, to improve the chip yield, two or three extra rows
and extra columns were added to each quadrant consisting
of a 64 Kbit subarray of memory cells. Simple greedy [2],
[28] and exhaustive search [1] algorithms were developed
to repair the faulty memory subarrays. Since the search
space was very small for such problem sizes, all these
algorithms provided high throughput in memory diagnosis
and repair. But as the memory size has increased to sev­
eral megabits over the last few years, the defect patterns
have become sufficiently complex and the search space
has grown extensively. The problem of memory repair has
been shown to be NP-complete [9], [14] by demonstrating
that the repair problem is transformable in polynomial
time to the Constrained Clique problem in a bipartite
graph. A number of heuristic algorithms, such as branch­
and-bound [14], approximation [14], best-first search
[10], and others [15], [29], recently have been proposed
to solve the memory array repair problem. The two key
limitations of these algorithms are a) that their worst-case
complexities are nearly exponential and b) they are not
readily implementable within the chip for BISR. These
algorithms are generally written in a high-level program­
ming language and are executable on general-purpose
computers. This paper addresses two fundamental aspects
of the memory repair problem: how to devise efficient al­
gorithms so that the overall production throughput im­
proves along with the chip yield, and how to generate such
repair algorithms in hardware so that they can be applied
to repairing memories embedded within the VLSI chips.
The contribution of this paper is to demonstrate how to
solve the array repair problem by using neural networks.
and how to implement a BISR scheme in hardware. A
neural network can produce solutions by the collective
computing of its neuron processors with far faster speed
than the abovementioned sequential algorithms running on
conventional computers. Since these neural processors are
simple threshold devices. the basic computing step of
neural nets is comparable to the on-off switching of a
transistor [7], [8], [22], [24]. Another potential advantage
of using neural networks is that they are robust and fault-
tolerant in the sense that they may compute correctly even
if some components fail and/or the exciting signals are
partially corrupted by noise. Thus the reliability of the
self-repair circuit using electronic neural networks is very
high. This has been experimentally verified in Section VI.
This paper has been organized as follows: Section II
provides a brief overview of the neural network model
and its dynamic behavior. Section III provides a formal
framework for the memory repair problem using the con­
cepts of neural net computation. Two algorithms are de­
veloped by programming the synaptic weight matrix of
the neural net. The first algorithm is greedy, and starting
from any random configuration (i.e., arbitrary neuron
states), it can solve the problem by monotonically reduc­
ing the energy function of the neural net. The second al­
gorithm uses the HC technique to yield a near-optimum
solution. Section IV gives the simulation results of the
neural net algorithms. From the simulation experiments,
it is seen that the neural net algorithms are superior to the
RM algorithm. An electronic implementation of neural
networks is demonstrated in Section V. The intrinsic fault­
tolerant ability of a neural net is studied in Section VI.
The yield improvement and hardware overhead caused by
the neural-net self-repair circuitry are examined in Sec­
tion VII.
II. THE NEURAL NETWORK MODEL

The earliest mathematical formalization of the human nervous system can be found in the work by McCulloch and Pitts [19], where it was modeled as a set of neurons (computation elements) interconnected by synapses (weighted links). Each neuron in the model is capable of receiving impulses as input from and firing impulses as
output to potentially all neurons in the network. The out­
put function of a neuron depends on whether the total
amount of input excitation received exceeds its predeter­
mined threshold value θ_i or not. The state of neuron i is represented by an all-or-none binary value s_i, with s_i = 0 being a nonfiring condition and s_i = 1 denoting an impulse-firing condition. Note that one may represent a nonzero threshold by applying an external impulse bias b_i to neuron i with b_i = -θ_i.
Interactions between two different neurons occur
through a synapse serving as a duplex communication
channel. The signals passing through the synapse in both
directions are considered independent. Moreover, the sig­
nal traveling from neuron i to neuron j is amplified by a
weight factor w_ji, i.e., if the impulses fired by a neuron correspond to a unit influence, then a firing neuron i produces w_ji amount of influence to neuron j.
Now, let u_i denote neuron i's next state value. From the above description one can represent the neural net state transition function as follows:

    u_i = 1,  if Σ_j w_ij s_j > θ_i
    u_i = 0,  if Σ_j w_ij s_j < θ_i
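The transition rule can be sketched directly (a minimal illustration; the behavior at exactly the threshold is not specified in the text, so this sketch keeps the present state there):

```python
def next_state(W, s, theta, i):
    """Next state u_i of neuron i. W[i][j] amplifies the signal
    travelling from neuron j to neuron i (w_ij in the text);
    s is the vector of 0/1 states, theta the threshold vector."""
    influence = sum(W[i][j] * s[j] for j in range(len(s)))
    if influence > theta[i]:
        return 1
    if influence < theta[i]:
        return 0
    return s[i]  # exactly at threshold: keep the present state (assumption)
```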
Actually, in the human nervous system or any other
biological implementations, neural networks make smooth
continuous transitions when the neurons change their
states. Therefore. we assume that neurons which receive
a larger influence in magnitude will have a faster state
transition. Neuron processors are expected to operate in an asynchronous and random fashion.
Following the description of the behavior of an individual neuron, let us look at the overall network state transition behavior. First, a network state is denoted by the vector of neuron state variables. Given a network state vector X = (x_1, x_2, ..., x_N), X is called a fixed point if and only if x_i' = x_i for all i's. That is, of all the possible 2^N states of a neural net, only the fixed points are considered stable. If the current network state is not one of the fixed points, the neural net cannot maintain the present state. One immediate question about the network behavior could concern the state convergence: starting with an arbitrary state, will the network eventually reach a fixed point? To answer the question, an energy function has been formulated. First, let us consider the case of an excitatory or positive input to a neuron. According to the neuron's functional definition, this input will encourage the neuron to fire impulses. If the neuron is already in the firing state, this input can be looked upon as negative energy, since by convention, a system is more stable when the energy level is lower. Likewise, an inhibitory or negative input to a neuron in the nonfiring state should also be considered as negative energy. On the other hand, positive energy is created if a firing neuron receives an inhibitory input or a nonfiring neuron receives an excitatory input, since the network is potentially destabilized by the input.
If w_ij = w_ji for all i's and j's, the total energy E_N, commonly known as the Lyapunov function [11], is given by

    E_N = -(1/2) Σ_i Σ_j w_ij s_i s_j - Σ_i s_i b_i

Note that the change of state by neuron i will result in a decrease of f(s_i)(Σ_j w_ij s_j + b_i) in energy, where f(s_i) = 1 if s_i = 0, and -1 otherwise. Together with the fact that the total energy is bounded, the network is guaranteed to reach a local minimal energy fixed point [12].
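This convergence argument can be checked numerically. The sketch below (illustrative, with the threshold folded into the bias b_i as described earlier) asserts that each asynchronous update never increases E_N when the weight matrix is symmetric with a zero diagonal:

```python
import random

def energy(W, s, b):
    """Lyapunov energy E_N = -1/2 sum_ij w_ij s_i s_j - sum_i s_i b_i."""
    n = len(s)
    quad = sum(W[i][j] * s[i] * s[j] for i in range(n) for j in range(n))
    return -0.5 * quad - sum(s[i] * b[i] for i in range(n))

def relax(W, s, b, steps=200, seed=0):
    """Random asynchronous threshold updates; checks the energy
    never increases along the way (W symmetric, zero diagonal)."""
    rng = random.Random(seed)
    s = list(s)
    for _ in range(steps):
        i = rng.randrange(len(s))
        influence = sum(W[i][j] * s[j] for j in range(len(s))) + b[i]
        new = 1 if influence > 0 else 0 if influence < 0 else s[i]
        # Monotonicity: flipping s_i in the direction of its influence
        # can only lower (or preserve) the energy.
        assert energy(W, s[:i] + [new] + s[i + 1:], b) <= energy(W, s, b)
        s[i] = new
    return s
```

With two mutually excitatory neurons and small positive biases, the network settles in the all-firing fixed point, the bounded minimum of E_N for this instance.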
III. NEURAL NETWORK SOLUTION

The idea of using a neural network to tackle combinatorial optimization problems was first proposed by Hopfield [12], who designed and fabricated a neural network for solving the classic Traveling Salesman Problem [5]. The objective (cost) function of a combinatorial optimization problem can be represented by Lyapunov's energy function of the neural network, and the network's property of convergence from a random initial state to a local minimal energy state can be utilized to reduce the cost function of the combinatorial problem. However, the neural computing approach given in [12] leads only to the design of a greedy gradient descent (GD) algorithm which stops searching at the first local minimal solution encountered; consequently, the neural network solution is generally of low quality.

In this section we study the convergence behavior of neural networks. We demonstrate that more powerful searching strategies can be developed by modifying the GD algorithm. By programming the neural network in an appropriate way, we find that the neural nets are capable of searching the solution by HC or tunneling. Thus, the probability of obtaining a globally optimal solution by using neural networks is very high.

A. Array Repair Problem Representation

Assume a memory array of size N × N with exactly p spare rows and q spare columns that can be utilized to replace the defective rows and columns. Assume an arbitrary fault pattern in which faulty cells randomly occur on m (≤ N) distinct rows and n (≤ N) distinct columns of the memory array, such that the compacted subarray of size m × n contains all relevant rows and columns which have at least one faulty cell. Let the matrix D = {d_ij} of size m × n characterize the status of the memory cells, such that d_ij, which corresponds to the cell at row i and column j, is 1 if the cell is faulty, and d_ij = 0 otherwise. Normally, very few cells in the array are faulty, and the characteristic matrix D is highly sparse. A row repair scheme can be represented as a vector U of m bits, such that u_i = 0 if row i is to be replaced, and u_i = 1 otherwise, 0 ≤ i ≤ m - 1. A column repair scheme is defined in the same fashion, and is denoted by a vector V of n bits. The numbers of 0's in U and V must be less than or equal to p and q, respectively. The memory is said to be repairable if the above constraints are satisfied, and essentially the repair problem is how to determine a pair of U and V such that U^T D V = 0.

In addition to the characteristic matrix D, let the overall status of row i and column j of the array be characterized by r_i and c_j, respectively: r_i = 1 if row i contains defective elements, and c_j = 1 if column j contains defective elements (and 0 otherwise).
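The representation just given can be sketched directly (the function name and conventions are illustrative): a candidate pair (U, V) repairs the array iff it respects the spare budgets and U^T D V = 0.

```python
def is_valid_repair(D, U, V, p, q):
    """U (m bits) and V (n bits): a 0 marks a replaced row/column.
    Valid iff at most p row and q column spares are spent and no
    faulty cell survives in a kept row AND a kept column."""
    if U.count(0) > p or V.count(0) > q:
        return False  # infeasible: more replacements than spares
    utdv = sum(U[i] * D[i][j] * V[j]
               for i in range(len(U)) for j in range(len(V)))
    return utdv == 0  # the bilinear form U^T D V
```

For a 2 × 2 array with faults on the diagonal, replacing row 0 and column 1 (U = [0, 1], V = [1, 0]) is a valid scheme with p = q = 1, while leaving everything in place is not.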
To design a neural net for the memory repair problem, a neural net of size m + n is used, where the states of the first m neurons are denoted by s_i^r's, 1 ≤ i ≤ m, and the states of the remaining n neurons by s_j^c's, 1 ≤ j ≤ n. For ease of interpretation, the first m neurons are addressed as row neurons, and the remaining n neurons as column neurons. The physical meaning of these neuron states is defined as follows: s_i^r = 1 if row i is suggested for replacement, and s_j^c = 1 if column j is suggested for replacement. The cost function C_MR of the memory repair problem must represent both the nonfeasible repairing and incomplete coverage schemes by higher costs. These two aspects can be modeled by the following two expressions:

    C1 = (A/2)(Σ_i s_i^r - p)² + (A/2)(Σ_j s_j^c - q)²
    C2 = (B/2)[Σ_i Σ_j d_ij(1 - s_i^r)(1 - s_j^c) + Σ_j Σ_i d_ij(1 - s_i^r)(1 - s_j^c)]
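A small numerical sketch of the network these cost terms induce (an illustration, not the authors' circuit): rewriting the cost in energy form yields same-group weights -A(1 - δ_ij), cross weights -B·d_ij, and biases (p - 1/2)A + B Σ_j d_ij for row neurons and (q - 1/2)A + B Σ_i d_ij for column neurons. The A = B = 2 setting and all names here are assumptions; the relaxation starts from the all-nonfiring state (the GD-zero variant discussed in Section IV).

```python
def build_network(D, p, q, A=2.0, B=2.0):
    """Build (W, b) for the GD repair net from the fault matrix D."""
    m, n = len(D), len(D[0])
    N = m + n
    W = [[0.0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            if i == j:
                continue  # zero main diagonal, as the text requires
            same_group = (i < m) == (j < m)
            if same_group:
                W[i][j] = -A
            else:
                r, c = (i, j - m) if i < m else (j, i - m)
                W[i][j] = -B * D[r][c]
    b = [(p - 0.5) * A + B * sum(D[i]) for i in range(m)] \
      + [(q - 0.5) * A + B * sum(D[i][j] for i in range(m)) for j in range(n)]
    return W, b

def gradient_descent(W, b, sweeps=50):
    """Deterministic sequential sweeps until a fixed point (threshold 0)."""
    s = [0] * len(b)
    for _ in range(sweeps):
        changed = False
        for i in range(len(s)):
            inf = sum(W[i][j] * s[j] for j in range(len(s))) + b[i]
            new = 1 if inf > 0 else 0 if inf < 0 else s[i]
            changed |= (new != s[i])
            s[i] = new
        if not changed:
            break
    return s
```

On a 2 × 2 pattern whose faults all lie in row 0 with one row spare, the relaxation settles with only the first row neuron firing, i.e., the repair that spends the single spare row.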
so that the overall cost function C_MR is represented as the algebraic sum of C1 and C2. In the above expressions, the values of A and B are empirically decided and, as shown in Section IV, their relative values are critical for obtaining the appropriate percentage of successful repairs.

The expression for C1 encourages exactly p spare rows and q spare columns to be used in the repair scheme. For those nonfeasible schemes that require more than the available spare rows and/or columns, the cost will increase quadratically. If necessary, the optimum usage of spares can be experimentally attempted by repeating the search operation with successively reduced amounts of available spares (i.e., progressively decrementing the value of p and/or q in the expression for C1), until no successful repair scheme can be found.

The expression for C2 monotonically decreases as more and more defective cells are covered by the spare rows and columns. If all defective elements are covered by at least one spare row or column, the above expression is minimized to zero. Unsuccessful repairing schemes which fail to cover all defective elements will yield positive costs. The two double summation terms in C2 are equivalent. As will be seen later in the expression for the synaptic weight matrix, this duplication facilitates the later transformation to neural net energy functions.

Another cost function can be defined by modifying the C1 expression, i.e.,

    C1' = (A/2)(Σ_i s_i^r)² + (A/2)(Σ_j s_j^c)²

Note that C1' is actually a simplified C1 with p and q set equal to zero. Without considering whether all defects will be covered or not, the expression for C1' prefers repair schemes with fewer spares used. Intuitively, in this way more efficient repair schemes may be encouraged. But the new C1' + C2 combination has a potential danger of having a large number of unsuccessful repair schemes with a small number of uncovered defects mapped to local minima, just because they request fewer spares. The differences between the neural networks defined by these two cost functions will be further illustrated later in the simulation results.

B. The Gradient Descent (GD) Approach

It may be recalled that the neural network energy function is E_N = -(1/2) Σ_i Σ_j w_ij s_i s_j - Σ_i s_i b_i. By rewriting C_MR in the form of E_N, the weights of the interconnecting synapses and the intensities of the external biases to the neurons can be determined through simple algebraic manipulation. But in order to keep the main diagonal of the neural net's synaptic weight matrix zero, all A/2 Σ s_i² terms in the array repair cost expression are rewritten as A/2 Σ s_i. Since s_i ∈ {0, 1}, these two terms are mathematically equivalent. However, A/2 Σ s_i² corresponds to self-feedback through a synapse of strength -A for all neurons, whereas A/2 Σ s_i means a bias of -A/2 for every neuron. The resulting neural network is as follows: between two row neurons (or two column neurons), w_ij = -A(1 - δ_ij); between row neuron i and column neuron j, w = -B·d_ij; and the biases are b_i = (p - 1/2)·A + B Σ_j d_ij for the row neurons and b_j = (q - 1/2)·A + B Σ_i d_ij for the column neurons, where δ_ij = 0 if i ≠ j, otherwise 1.

In order to illustrate how to construct the synaptic weight matrix from these values, let us consider a pathological defect pattern that consists of 16 faulty cells as shown in Fig. 1. If there are only four spare rows and four spare columns, a greedy algorithm such as RM cannot produce a successful repair scheme [1]. The only successful allocation of spares for replacement is indicated in the figure by horizontal and vertical stripes. But due to the presence of four defects in both row 1 and column 9, the RM algorithm will be tempted to make the first two repairs on them. The algorithm will thereby fail to repair the memory because, in addition to the available six spare rows and columns, the algorithm will require two more columns or rows to cover all the defects along the diagonal direction. The neural network algorithm described in this section can repair successfully such a defect pattern by finding the unique solution shown in Fig. 1. For such a memory fault pattern, the neural net synaptic weights will be estimated by setting A/B = 1 (the value A = 2 has been chosen empirically and is not critical for the performance of the algorithm) in the above expressions. The resulting neural net synaptic weight matrix and the biases are given in Fig. 2.

Fig. 1. A repairable faulty array with four spare rows and four spare columns.

C. The Hill-Climbing (HC) Approach

We propose here a neural net with simplified HC behavior that provides high-quality solutions similar to the powerful combinatorial technique called Simulated Annealing [13]. First, let us note that the solutions to the memory array repair problem are classified as success and failure. In fact, it is commonly known that all optimization problems can be transformed to a series of yes/no questions [5]. Due to this specific criterion, we do not consider any random move that may increase the energy of the system until the move is absolutely necessary, i.e., the search has reached an unsuccessful local energy minimum. If the current local minimum does not yield a successful repair, moves that increase the system energy are
allowed by providing negative synaptic feedback to the neurons themselves. The basic idea is that when the search has reached an unsuccessful local minimum, we can force the neural net to make a move by turning on a neuron, which appears to enter a lower energy state but, in fact, will increase the system energy due to the negative self-feedback. Next, the system is expected to turn off some other neuron to enter a new lower energy state, thus escaping the local minimum energy trap. Notice that with very low probability, it is possible for a neuron to fall into a loop of alternate on and off states, as it is similarly possible for simulated annealing to cycle through random moves. Consequently, the network behavior is not admissible, in the sense that it is not guaranteed to converge to an optimal solution state. As in simulated annealing, which uses a predefined criterion to break the inner loop of optimization, a suitable timeout mechanism is necessary for HC neural nets to prevent excessively long searches.
We begin the description of the new neural net by giving the synaptic weights and biases as follows: w_ij = -A between any two neurons of the same group, now including the negative self-feedback w_ii = -A; w = -B·d_ij between row neuron i and column neuron j; and biases b_i = p·A + B Σ_j d_ij for the row neurons and b_j = q·A + B Σ_i d_ij for the column neurons.
To examine the neural net, let us assume that A ≠ 0 and B = 0. Thus, each row neuron will receive a positive p·A amount of bias, and each column neuron will receive a positive q·A amount of bias. Also, due to B = 0, the row neurons are independent of the column neurons, i.e., there are no synaptic connections between the two groups. Suppose there are p' < p row neurons that are in firing states; then each row neuron will receive a positive (p - p')·A amount of influence, and all of them will be encouraged to fire. Similarly, if p' > p, all row neurons will receive a negative (p - p')·A amount of influence, and all will tend to turn off.

Next, let us assume that A = 0 and B ≠ 0. In this case, the amount of influence received by row neuron i is given by B times the number of defective elements in row i that would be covered exclusively by a replacement. Those defective elements in row i that are currently covered by some column repairs will not contribute any urgency to force replacement of this row.

Consider the situation where all the spares have been utilized (hypothetically) for repair, but there is still a single defect left uncovered by the current repair scheme. At this point, the number of defects inside a row or column is the only influence on neurons. For the row and column that contain this lone uncovered defect, a positive total influence of B will be received by the two corresponding neurons. After this defect is covered by turning on, say, the row neuron, which causes the use of one more spare row than allowed, all the row neurons will now receive an additional negative influence of A due to the extra spare suggested. If we choose to have B > A, then the neural net will be stuck at the present state, making the repair scheme unsuccessful. On the other hand, if we have A > B, all the current row spares that cover only one faulty element exclusively will now receive a net negative influence equal to |B - A|, thus causing the network to switch
to a new state and give up on the current scheme.
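The negative self-feedback search can be sketched as follows (a hedged illustration, not the authors' hardware: it uses the weights reconstructed above, random asynchronous updates, and a step-count timeout in place of an annealing-style stopping criterion; parameter values and names are assumptions):

```python
import random

def hc_repair(D, p, q, A=1.0, B=2.0, max_steps=2000, seed=1):
    """HC sketch: same-group weights -A (diagonal included, i.e. the
    negative self-feedback), cross weights -B*d_ij, biases p*A + B*row
    fault count for rows and q*A + B*column fault count for columns.
    Returns (rows, cols) of a valid repair, or None on timeout."""
    m, n = len(D), len(D[0])
    N = m + n
    W = [[-A if (i < m) == (j < m) else 0.0 for j in range(N)]
         for i in range(N)]
    for i in range(m):
        for j in range(n):
            W[i][m + j] = W[m + j][i] = -B * D[i][j]
    b = [p * A + B * sum(D[i]) for i in range(m)] \
      + [q * A + B * sum(D[i][j] for i in range(m)) for j in range(n)]

    rng = random.Random(seed)
    s = [0] * N
    for _ in range(max_steps):
        i = rng.randrange(N)  # asynchronous, random-order updates
        inf = sum(W[i][j] * s[j] for j in range(N)) + b[i]
        s[i] = 1 if inf > 0 else 0 if inf < 0 else s[i]
        rows = {i for i in range(m) if s[i]}
        cols = {j for j in range(n) if s[m + j]}
        covered = all(D[i][j] == 0 or i in rows or j in cols
                      for i in range(m) for j in range(n))
        if covered and len(rows) <= p and len(cols) <= q:
            return rows, cols  # first feasible scheme encountered
    return None  # timeout mechanism against excessively long searches
```

Note how the spare budgets steer the same single-fault array toward a row repair when only a row spare exists, and toward a column repair when only a column spare exists.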
IV. SIMULATION RESULTS

In this section, simulation results are provided to demonstrate the superiority of the proposed neural computing approach for the problem of automatic array repair. Six reduced array fault patterns of size 10 × 10 (Case 1) and 20 × 20 (Case 2) randomly generated arrays with about 10, 15, and 20% faulty elements were used in the experiments. As was explained in Section III, these small size arrays represent the actual defective rows and columns in large-size memory arrays, some as large as a few million bits.

Fig. 3. Performance comparisons between GD and GD'.

The performance of two GD neural nets, GD and GD', defined by cost functions C1 + C2 and C1' + C2, respectively, are compared. For both 10 × 10 and 20 × 20 arrays, the probabilities of finding a successful repair scheme versus the ratio B/A are depicted in Fig. 3. By controlling the value of B/A, the importance of the fault coverage over the spare usage can be manipulated.

According to the figure, the effectiveness of GD' is largely affected by how A and B are selected. When B/A is small, no successful repair scheme can be found by GD' for any one of the repairable fault patterns, just as was expected in Section III. For 10 × 10 arrays, the percentage of successful repairs improves to about 50% as B/A increases to 5, and thereafter converges to about 42%. GD' behaves similarly for 20 × 20 arrays, except that the peak performance happens at about B/A = 12. On the other hand, GD shows the advantage of its low sensitivity to how A and B are selected over the range of B/A > 1. In a hardware implementation of a neural network chip, the ratio B/A is likely to vary over a wide range because of processing parameter variations and the wide tolerance of resistive elements. Thus the performance of the neural network will remain uniform from chip to chip if GD is implemented, as opposed to GD', whose behavior changes dramatically with different values of B/A. The percentage of successful repairs obtained by GD appears to have a peak value of about 50%, and declines asymptotically to about 42% as B/A increases. It is not surprising that the percentages of successful repairs obtained by GD' and GD converge to the same value, since when B is much larger than A, the consideration for the spare usage is of little or no effect, compared with the consideration for fault coverage.

Now, let k be the maximum of p and q, where p and q are the maximum numbers of spare rows and columns allowed to form a successful repair scheme, respectively. The B/A ratio for a peak value to occur is found to be equal to k. Note that each firing row (column) neuron will send an A amount of influence to disable all other row (column) neurons, and a row or column neuron is encouraged to fire by as low as a B amount of influence to cover faults in a row or column. In order to allow the neural net to explore solutions using up to k spare rows or spare columns, we must have a B/A ratio of about k to keep up to k row or column neurons firing at the same time.

For the second experiment, the effectiveness of the RM algorithm is compared with GD. In order to provide an equal starting situation, the programmed neural net is started with all neuron processors in nonfiring states instead of other random combinations. We will use GD-zero to denote this special case of the GD convergence. The performances of these two algorithms are compared in Table I. Random defect patterns are generated for both cases, and the algorithms are applied to repair the faulty arrays with three different sets of available spares. From the results it was seen that on average, GD-zero is twice as successful as RM when the defect pattern is not very large (represented by Case 1) and few spare rows and columns are used. But GD-zero is about three to five times more successful when the defect pattern is large (represented by Case 2) and a relatively large number of spare rows and columns are used. As for the number of steps taken to find a repair scheme, it is about the same for both.

Second, tradeoffs between the use of the two neural computing methods are examined, and the results are shown in Table II. For each defect pattern, a number of random trials are performed by each method. Average performance reaches consistency within one hundred random trials. As is expected, the simulation indicates that the HC approach is almost perfect in locating a successful repair scheme for repairable arrays due to its ability to perform global searches. The probability for the GD to reach a successful repair in a random trial is about 0.5. The average number of steps needed by HC is about two to three times that needed by GD. The runtime for the GD algorithm varies very little over a large number of experiments, but the HC algorithm is found to have a wide variance in the number of steps to execute the search.

The chances of getting a successful scheme achieved by the four methods are shown in Fig. 4 as percentages.
Table I. Percent of successful repairs obtained using RM and GD-zero.

Table II. Performance comparison of GD and HC: percent of successful repairs and average number of search steps.

It can be seen that for 10 × 10 arrays, HC achieves almost 100% success in all cases. The GD-zero performs better than GD, where the neurons are randomly turned on at the initiation of the algorithm. Greedy techniques such as RM fail to cover more than 67% of the cases on the average. For 20 × 20 arrays, RM's performance deteriorates, and on average in more than 40% of the cases it fails to repair the memory.
To summarize the simulation results, it may be pointed out that GD is fast in reaching a spare allocation, and has a small variance in the number of search steps. On the other hand, the number of search steps used by the HC can vary widely, depending on the initial random starting states and on how difficult the problem instances are. But on average, the number of search steps needed by HC does not escalate exponentially. For four out of six types of settings, the average number of search steps for HC is limited to about twice the number for GD. For the other two types of settings, the ratios of the average number of search steps are no worse than four. In fact, if we compare the equivalent average number of search steps expected for the GD approach to attain the same level of success as the HC approach does within 1%, the HC approach is found to be more efficient. The actual numbers can be seen in Table III.

V. ELECTRONIC IMPLEMENTATION

In this section, we demonstrate how to implement an electronic neural network that can repair a faulty memory by using the HC searching strategy. Electronic neural nets can be built by both analog and digital techniques. Murray and Smith have discussed their tradeoffs and developed a digital neural net representing the interactions by
pulse streams of different rates [22]. Even though the digital network has better noise immunity, it requires a large silicon area for its pulse generators and digital multipliers. Analog neural nets using current-summing principles were developed by Graf et al. [7] and Sivilotti et al. [24]. Carver Mead's book on analog VLSI presents the design strategies for several types of analog neural systems [20]. These analog neural nets require small area, and their intrinsic robustness and ability to compute correctly even in the presence of component failures are particularly useful features for large-scale VLSI implementation.
In this design, a neuron is realized as a difference amplifier which produces binary output voltages. The firing neuron state is represented by the high voltage output, and the nonfiring state by the low voltage output. The synaptic influences between neurons, as well as the biases to neurons, are represented by electrical currents. Thus, an excitatory influence propagated to a neuron can be realized by injecting current into the corresponding amplifier input. Similarly, an inhibitory influence propagated to a neuron can be realized by sinking current from the corresponding amplifier input. As for the synaptic amplification, it is simulated by the branch resistance regulating the amount of current that passes through.
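As a behavioral sketch, the current-summing neuron described above can be modeled as a thresholded sum of a bias current and the (inhibitory) synaptic currents sunk by firing neighbors. The function names and numeric values below are illustrative assumptions, not taken from the paper.

```python
# Behavioral model of one current-summing neuron: the difference amplifier
# fires (output 1) when the net input current exceeds its threshold.
# All names and constants here are illustrative.

def update_neuron(i, state, weights, bias, threshold=0.0):
    """Recompute neuron i's binary output from the summed input current."""
    # Firing neighbours (state[j] == 1) sink current through their branch
    # resistances; weights[i][j] <= 0, so these terms are inhibitory.
    net_current = bias[i] + sum(w * s for w, s in zip(weights[i], state))
    return 1 if net_current > threshold else 0

# Two mutually inhibiting neurons (synaptic weight -2) with unequal biases:
weights = [[0, -2],
           [-2, 0]]
bias = [3, 1]
state = [0, 0]
state[0] = update_neuron(0, state, weights, bias)  # bias 3 > 0, so it fires
state[1] = update_neuron(1, state, weights, bias)  # 1 - 2 < 0, stays off
```

Repeating such asynchronous updates until no neuron changes corresponds to the network settling into a stable (local minimum energy) state.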
Note that for the array repair spare allocation problem discussed here, all the synaptic weights of the interconnection matrix are negative. Moreover, between two connected neurons, the synaptic weight of the connecting link can only be one of two different values, -A or -B. If we assume that there are equal numbers of row and column neurons, an example electronic neural net with eight neurons can be implemented, as shown in Fig. 5.
According to Fig. 5, a neuron fires a negative influence to another neuron by turning on a transistor which sinks current from the target neuron's amplifier input. The amount of current involved is controlled by the size of the transistor. Note that owing to the assumption that equal numbers of neurons are reserved for row and column repairs, we are able to divide the interconnection matrix into four equal parts. It is not hard to find that only the first and third quadrants, as indicated in Fig. 5, need to be programmed based on the variable memory array defect pattern. For each programmable link, an additional transistor controlled by a memory cell is added to the pull-down transistor. A programmable link can be disconnected (connected) by storing a 0 (1) in the memory cell.
Fig. 6 shows the essential steps in programming the neural net's interconnection network. Let the memory array defect pattern be given in Fig. 6(a), with faulty cells represented by black dots. Next, by deleting fault-free rows and columns, a compressed defect pattern is obtained in Fig. 6(b), with a faulty cell denoted by a 1. Then the compressed defect pattern is used to program the first quadrant of the neural net interconnection matrix, and the third quadrant is programmed by the transposed compressed defect pattern, as is shown in Fig. 6(c).
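The compression and quadrant-programming steps of Fig. 6 can be sketched in software as follows. The helper names are hypothetical, not the authors' implementation, and a square compressed pattern is assumed for brevity.

```python
# Sketch of the Fig. 6 programming flow (illustrative names and data).

def compress(defects):
    """Delete fault-free rows and columns of a 0/1 defect map (Fig. 6(b))."""
    rows = [i for i, row in enumerate(defects) if any(row)]
    cols = [j for j in range(len(defects[0])) if any(row[j] for row in defects)]
    return [[defects[i][j] for j in cols] for i in rows]

def program_interconnect(d):
    """Load quadrant I with the compressed pattern and quadrant III with its
    transpose (row neurons indexed first, then column neurons)."""
    m = len(d)                      # assumes an m x m compressed pattern
    links = [[0] * (2 * m) for _ in range(2 * m)]
    for i in range(m):
        for j in range(m):
            links[i][m + j] = d[i][j]      # first quadrant: D
            links[m + i][j] = d[j][i]      # third quadrant: D transposed
    return links
```

A stored 1 in `links` plays the role of the memory cell that connects a programmable pull-down link in Fig. 5; note that programming the third quadrant with the transpose makes the resulting link matrix symmetric, as a Hopfield-type network requires.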
Finally, the possibility of building the neural network alongside an embedded memory to achieve this self-repair purpose is demonstrated by the schematic diagram shown in Fig. 7.

[Fig. 5. An example electronic neural net for memory repair.]
The design to be discussed assumes that a built-in tester is available to provide fault diagnosis information. First of all, the status of each memory row and column is determined after the testing is done, and this information is stored in the faulty-row-indicator shift-register (FRISR) and the faulty-column-indicator shift-register (FCISR), with a faulty row or column indicated by a 1. Then, to compress the memory array defect pattern, the detailed defect pattern of each faulty row is provided to the row-defect-pattern shift-register (RDPSR) one row at a time. As mentioned in Section III, the characteristic matrix D is highly sparse, and the fault pattern can be stored in the form of a compressed matrix, each row and column of which will contain at least one faulty element. This is obtained by shifting those bits of the RDPSR that correspond to the nonzero bits of the FCISR into the compressed-row-defect-pattern shift-register (CRDPSR). The content of the CRDPSR is then used to program a row (column) in the first (third) quadrant of the neural net's interconnection network. The row (column) address of the quadrant is obtained by counting the nonzero bits in the FRISR. The stimulating biases to the row and column neuron inputs are generated by counting the total number of faults in each row (column) in the row (column) fault-count shift-registers. After a repair scheme is obtained by reading the outputs of the neuron amplifiers, the information is expanded in reverse order to the compression of defect patterns, and passed on to the control logic of the actual reconfiguration circuits. Normally, laser zapping [23], [3], focused ion-beam [21], or electron beam [6] techniques are used to restructure a
faulty memory array, where the laser or charged beams are used for blowing out the programmable fuses in the programmable portion of the array to disconnect the faulty rows and columns. These schemes cannot be employed in automatic restructuring. Two viable techniques for self-restructuring are: i) electronically programmable amorphous silicon fuses, which can be programmed by a 20-V pulse, and ii) programmable electronic reconfiguration switches, which usually impose a small amount of penalties on circuit speed and area.

The circuit behavior was verified through SPICE simulations. Simulation output for a compressed 4 x 4 defect pattern with eight defective memory cells is shown in Fig. 8 as an example. The defects are represented by shaded squares. The initial state of the neural net was zero, i.e., there was no allocation of spares for any faulty rows or columns. Since every neuron represents either a faulty row or a faulty column in the compressed defect pattern, to cover the defects all neurons initially began transition toward the firing state.

[Fig. 6. Example programming of the neural net: (a) memory array defect pattern; (b) compressed defect pattern; (c) neural net interconnection network.]

[Fig. 7. Schematic of the BISR circuitry built alongside the embedded memory array: fault-indicator, defect-pattern, and fault-count shift-registers, address decoder, neural net interconnection network, and the memory array.]
After 3 ns, the neurons representing row 2 and column 4 were successful in competition and remained in the firing state. This can be explained by the fact that row 2 and column 4 each have three defects, which give the corresponding neuron the largest positive bias toward the firing state. All other neurons started to turn off, due to the mutual discouragement propagated through the negative synapses. When the remaining neurons were becoming low in activity, the mutual discouragement factor was also weakening, and the neurons started to move toward the firing state again. This time the neurons representing row 4 and column 1 were successful. The other four neurons then continued toward the off state, since there was no remaining defect to cover and the spares were all used up.

VI. Fault Tolerance of the Neural Network

One unavoidable problem of adding extra hardware for BISR is that the extra hardware itself may be subject to component failures. In this section, we demonstrate the intrinsic fault-tolerant capability of neural networks as a reconfiguration control unit in the memory repair circuit. Three types of component failures have been identified in neural networks, namely synapse-stuck faults, bias fluctuations, and neuron-stuck faults, to serve as the fault model. For each faulty synapse, either of the synaptic weights wij or wji can be assumed to be stuck-at-x, where x is a positive number in the range of synaptic strength values, due to transistor-stuck faults or defective memory cells that control the programmable synapses. Faulty bias generators are modeled to fluctuate within one unit of the predetermined biases, and faulty neurons will have stuck-at-firing or stuck-at-nonfiring states.

Rather than rendering the entire neural network useless, these faults will only reshape the energy (cost) distribution over the set of spare allocation schemes. Allocation schemes, which correspond to complete covers of all defective memory cells, may no longer be mapped to acceptable local energy minima, thus prolonging the search for an acceptable solution. On the other hand, incomplete allocation schemes may be mapped to false acceptable local energy minima, causing the neural net to stop searching without success.

Identical memory defect patterns prepared in Section IV are used here. Random faults are incrementally injected into an HC type of neural net to examine the achievable percentages of repairs for repairable defect patterns. To distinguish the degrees of seriousness among different types of faults, we inject only faults of the same type. The simulation results are shown in Fig. 9.

The results indicate that a small number of synapse-stuck faults, up to five in the 20-neuron network and ten in the 40-neuron network, have almost no effect on the average performance. But the bias fluctuation faults and the neuron-stuck faults cause the average percentage of repairs out of repairable defect patterns to decrease steadily to zero when over one-fourth of total neurons are affected. The difference between a bias fluctuation fault and a neuron-stuck fault is in the amount of influence given to a particular faulty row or column for repair. While a bias fluctuation fault will encourage or discourage the corresponding row or column substitution slightly by one unit of influence, a neuron-stuck fault will actually insist that the substitution be made.

[Fig. 9. Neural net performance in fault conditions: percentage of successful repairs versus the number of injected synapse-stuck faults, bias fluctuations, and neuron-stuck faults.]

VII. Yield Enhancement Analysis

In this section, a quantitative analysis of yield enhancement due to the neural network's self-repair capability is done. Faults are injected into memory arrays, spares, and networks to compute the resulting yield. The area overhead of the self-repair logic is also estimated.

A well-known yield formula due to Stapper [25], [26] is used here to calculate the original yield, Y0, without the neural-net-controlled self-repair:

Y0 = (1 + A d / a)^(-a)

where d is the defect density, A is the memory array area, and a is some clustering factor of the defects. Let Pr be the probability function for a defect pattern to be repairable with respect to the remaining fault-free spares and the fault condition of the self-repair circuitry, and B the area of overhead. Then, the yield, Y, with neural-net-controlled self-repair can be calculated as follows:

Y = Y0' + (1 - Y0') Pr,  where Y0' = [1 + (A + B) d / a]^(-a).

By minor algebraic manipulation, it can be seen that [1 + (A + B) d / a]^(-a) = [Y0^(-1/a) + B d / a]^(-a); then, the new yield formula can be simplified to

Y = Pr + (1 - Pr) [Y0^(-1/a) + B d / a]^(-a).

We used 256 x 256 (64-Kbit) memory arrays for simulation, with A d = 0.25, 0.5, 1, 2, 3, and 4 and a = 1, so that Y0 = 80%, 66.7%, 50%, 33.3%, 25%, and 20%, respectively. Six hundred memory arrays, each with at least one defect, are generated accordingly. For each set of memory arrays, up to four spare rows and four spare columns were provided to examine the neural net repairability with respect to the number of available spares. The spares and the neural net components are all subject to the same degree of defect density as the memory array. The resulting function values of Pr are shown in Fig. 10. Finally, the corresponding calculated yields are given in Fig. 11.
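The yield expressions above can be checked numerically. The parameter names in this sketch are mine, the values are illustrative, and a = 1 is used as in the simulation settings.

```python
# Numerical sketch of the Stapper yield model and the self-repair yield
# formula discussed above. Parameter names are illustrative.

def y0(A_delta, alpha=1.0):
    """Original yield Y0 = (1 + A*d/a)**(-a)."""
    return (1.0 + A_delta / alpha) ** (-alpha)

def y_repaired(A_delta, B_delta, Pr, alpha=1.0):
    """Yield with self-repair: Y = Y0' + (1 - Y0')*Pr, where Y0' also
    counts the overhead area B in the defect-prone silicon."""
    y0_prime = (1.0 + (A_delta + B_delta) / alpha) ** (-alpha)
    return y0_prime + (1.0 - y0_prime) * Pr

# The simulation settings A*d in {0.25, 0.5, 1, 2, 3, 4} with a = 1
# reproduce Y0 = 80%, 66.7%, 50%, 33.3%, 25%, 20%.
yields = [round(100 * y0(x), 1) for x in (0.25, 0.5, 1, 2, 3, 4)]
```

The simplified form Y = Pr + (1 - Pr)[Y0^(-1/a) + B d / a]^(-a) agrees with `y_repaired` term by term, since substituting Y0^(-1/a) = 1 + A d / a recovers Y0'.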
IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN, VOL. 12, NO. 1, JANUARY 1993

[Table IV. Itemized transistor count of the dynamic memory array and the self-repair hardware: faulty-row and faulty-column indicator SR's, row-defect-pattern SR, compressed-row-defect-pattern SR, fault-count SR's, programmable and nonprogrammable synapses, and neurons, for a 256 x 256 (64-K) memory array.]

[Table V. Self-repair hardware overhead (%) for 256 x 256 (64-Kbit), 512 x 512 (256-Kbit), and 1024 x 1024 (1-Mbit) arrays, versus the number of spares and the expected number of defects.]

[Fig. 10. Repairability of a defective 256 x 256 memory array versus the number of spares and the expected number of defects.]
According to the schematic diagram shown in Fig. 7, except for the FC/RISR and the RDPSR, which are proportional to the dimension of the memory array, the complexity of the remaining self-repair logic basically depends on the dimensions of the compressed defect patterns. For instance, an m x n compressed defect pattern will require a neural net of m + n neurons. Since the mean and variance of the number of faulty columns (rows) can be calculated according to the fault distribution function, an estimate on the required neural net size can be easily made to accommodate nearly all possible compressed defect patterns. For the case of 256 x 256 (64-K) memory arrays with A d as high as 4, it is a near-certainty that the compressed array will not be larger than 16 x 16.

Given an N x N DRAM, let the size of the maximum compressed memory defect pattern be n x n. An itemized account of the dynamic memory array and self-repair hardware based on transistor count is given in Table IV, and the percentages of overhead are listed in Table V for various memory and neural net sizes. As indicated by the results, the overhead is insignificant compared with the yield improvement, and the overhead can be even smaller if static RAM's, with 6N^2 transistors in the memory array, are considered. Here the overhead of the BIST circuit is not included. Typically this overhead is very low (a small number of flip-flops and ten gates for a 256-K RAM), as shown in Mazumder's earlier papers [16], [17].

VIII. Conclusion

The biological nervous system's ability to solve perceptual optimization problems is imitated here to tackle the VLSI array repair problem. In contrast to the current sequential repair algorithms, which run slowly on conventional digital computers, the neural network's collective computational property provides very fast solutions. Methods to transform the problem instance into the neural network computation model are demonstrated in detail. Of the two types of neural nets studied in this paper, the GD neural network has been found to be two to four times better than the RM algorithm in obtaining successful repair schemes. The GD minimizes the energy function of the network only in the locality of the starting energy value. The performance of the neural network is further improved by introducing an HC technique that allows the search to escape the traps of local minima. By generating random defect patterns and experimenting with a large number of arrays, it is seen that the HC algorithm finds a solution in a repairable memory array with near certainty (with a probability of 0.98 or more). For the same fault patterns, simple commercial algorithms, like RM, can yield feasible solutions for only 20% of the patterns. On the average, about twice as many search steps are used by HC as opposed to GD.

Both the HC and GD neural networks can be implemented in hardware using very small overhead, typically less than 3% if the memory size is 256 Kbit or more. The payoff of this BISR approach is very high: the VLSI chip yield can increase from 10% without the BISR circuit to about 100% by using the proposed neural nets. The paper also proves that the neural networks are much more robust and fault-tolerant than the conventional logic circuits, and
thereby are ideally suited for self-repair circuits. The proposed designs of neural networks can operate correctly even in the presence of multiple faulty synaptic circuit elements, and as more neurons become permanently stuck at a firing or a nonfiring state, the networks gracefully degrade in their ability to repair faulty arrays.

The paper shows how to solve a vertex-cover problem of graph theory, generally known to be an intractable problem, by using neural networks. A large number of problems in other domains, which can be modeled in similar graph-theoretic terms, can also be solved by the neural-network designs discussed in this paper. In the memory repair problem, an entire row or column is replaced to eliminate a defective cell. Such an approach is easy to implement and is cost-effective in redundant memory designs because the cells are very small in size. But a large class of array networks in systolic and arithmetic logic circuits employ a different strategy, where a defective cell is exactly replaced by a spare or redundant cell. An appropriate graph model for such an array repair problem will be to construct a maximum matching of pairs between the vertices of a bipartite graph representing the set of faulty cells and the set of available spare cells. The technique described in this paper can be extended to solve the maximum matching problem by neural networks [18].
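To make the vertex-cover view concrete: each defect (i, j) is an edge between row-vertex i and column-vertex j, and a repair scheme is valid exactly when the chosen rows and columns form a vertex cover of that graph. The helper and data below are illustrative, not from the paper.

```python
# Each defect (i, j) is an edge of a bipartite graph; a set of replaced
# rows/columns repairs the array iff it is a vertex cover of that graph.

def is_valid_repair(defects, spare_rows, spare_cols):
    """True when every defect edge touches a replaced row or column."""
    return all(i in spare_rows or j in spare_cols for i, j in defects)

defects = {(0, 1), (0, 3), (2, 3)}          # three faulty cells
ok = is_valid_repair(defects, {0}, {3})     # row 0 and column 3 cover all
bad = is_valid_repair(defects, {0}, set())  # defect (2, 3) left uncovered
```

Minimizing the number of chosen vertices under this constraint is exactly the minimum vertex-cover objective that the neural net's energy function encodes.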
The overall goal of the proposed BISR circuits is to improve the chip yield by reconfiguring the faulty components at the time of chip fabrication, and also to extend the life span of the chip by automatically testing and repairing it whenever a failure is detected during the chip's normal operation. In space, avionics, and oceanic applications where manual maintenance, namely field testing and replacement, is not feasible, such an auto-repair technique will be very useful in improving the reliability and survivability of computer and communication equipment.
References

[1] J. R. Day, "A fault-driven comprehensive redundancy algorithm for repair of dynamic RAMs," IEEE Design and Test, pp. 35-44, June 1985.
[2] R. C. Evans, "Testing repairable RAMs and mostly good memories," in Proc. Int. Test Conf., Oct. 1981, pp. 49-55.
[3] E. F. Fitzgerald and E. P. Thoma, "Circuit implementation of fusible redundant addresses of RAMs for productivity enhancement," IBM J. Res. Develop., vol. 24, pp. 291-298, 1980.
[4] W. K. Fuchs and M.-F. Chang, "Diagnosis and repair of large memories: A critical review and recent results," in Defect and Fault Tolerance in VLSI Systems, I. Koren, Ed. New York: Plenum, 1989, pp. 213-225.
[5] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. New York: Freeman, 1979.
[6] P. Girard, F. M. Roche, and B. Pistoulet, "Electron beam effects on VLSI MOS: Conditions for testing and reconfiguration," in Wafer-Scale Integration, G. Saucier and J. Trilhe, Eds. New York: Elsevier, 1986, pp. 301-310.
[7] H. P. Graf and P. deVegvar, "A CMOS implementation of a neural network model," in Advanced Research in VLSI, 1987, pp. 351-367.
[8] H. P. Graf et al., "VLSI implementation of a neural network memory with several hundreds of neurons," in Proc. Conf. Neural Networks for Computing, Amer. Inst. of Physics, 1986, pp. 182-187.
[9] R. W. Haddad and A. T. Dahbura, "Increased throughput for the testing and repair of RAMs with redundancy," in Proc. Int. Conf. Computer-Aided Design, 1987, pp. 230-233.
[10] N. Hasan and C.-L. Liu, "Minimum fault coverage in reconfigurable arrays," in Proc. Int. Symp. Fault-Tolerant Comput., June 1988, pp. 348-353.
[11] J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," in Proc. Nat. Acad. Sci., vol. 79, Apr. 1982, pp. 2554-2558.
[12] J. J. Hopfield and D. W. Tank, "Neural computation of decisions in optimization problems," Biol. Cybern., vol. 52, pp. 141-152, 1985.
[13] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, pp. 671-680, 1983.
[14] S.-Y. Kuo and W. K. Fuchs, "Efficient spare allocation in reconfigurable arrays," IEEE Design and Test, vol. 4, pp. 24-31, 1987.
[15] W.-K. Huang et al., "Approaches for the repair of VLSI/WSI RAMs by row/column deletion," in Proc. Int. Symp. Fault-Tolerant Comput., June 1988, pp. 342-347.
[16] P. Mazumder, J. H. Patel, and W. K. Fuchs, "Methodologies for testing embedded content addressable memories," IEEE Trans. Computer-Aided Design, vol. 7, pp. 11-20, Jan. 1988.
[17] P. Mazumder and J. H. Patel, "An efficient built-in self-testing algorithm for random-access memory," IEEE Trans. Ind. Electron., vol. 36, May 1989.
[18] P. Mazumder and Y. S. Jih, "Processor array self-reconfiguration by neural networks," in Proc. IEEE Wafer-Scale Integration, Jan. 1992.
[19] W. S. McCulloch and W. H. Pitts, "A logical calculus of ideas immanent in nervous activity," Bull. Math. Biophys., vol. 5, pp. 115-133, 1943.
[20] C. Mead, Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley, 1989.
[21] J. Melngailis, "Focused ion beam technology and applications," J. Vac. Sci. Technol. B, vol. 5, pp. 469-495, 1987.
[22] A. F. Murray and A. V. W. Smith, "Asynchronous VLSI neural networks using pulse-stream arithmetic," IEEE J. Solid-State Circuits, vol. 23, pp. 688-697, 1988.
[23] G. Nicholas, "Technical and economical aspects of laser repair of memory," in Wafer-Scale Integration, G. Saucier and J. Trilhe, Eds. New York: Elsevier, 1986, pp. 271-280.
[24] M. A. Sivilotti, M. R. Emerling, and C. A. Mead, "VLSI architectures for implementation of neural networks," in Proc. Conf. Neural Networks for Computing, Amer. Inst. of Physics, 1986, pp. 408-413.
[25] C. H. Stapper, A. N. McLaren, and M. Dreckmann, "Yield model for productivity optimization of VLSI memory chips with redundancy and partially good product," IBM J. Res. Develop., vol. 24, no. 3, May 1980.
[26] C. H. Stapper, "On yields, fault distributions, and clustering of particles," IBM J. Res. Develop., vol. 30, no. 3, pp. 326-338, May 1986.
[27] H. Stopper, "A wafer with electrically programmable interconnections," in Dig. IEEE Int. Solid-State Circuits Conf., 1985, pp. 268-269.
[28] M. Tarr, D. Boudreau, and R. Murphy, "Defect analysis system speeds test and repair of redundant memories," Electronics, pp. 175-179, Jan. 1984.
[29] C.-L. Wey et al., "On the repair of redundant RAMs," IEEE Trans. Computer-Aided Design, vol. CAD-6, pp. 222-231, 1987.

Pinaki Mazumder (S'84-M'87) received the B.S.E.E. degree from the Indian Institute of Science in 1976, the M.Sc. degree in computer science from the University of Alberta, Canada, in 1985, and the Ph.D. degree in electrical and computer engineering from the University of Illinois, Urbana-Champaign, in 1987. Presently he is an Assistant Professor at the Department of Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor. Prior to this, he was a research assistant at the Coordinated Science Laboratory, University of Illinois, and for over six years was with Bharat Electronics Ltd., India (a collaborator of RCA), where he developed several types of analog and digital integrated circuits for consumer electronics. During the summers of 1985 and 1986, he was a Member of the Technical Staff in the Naperville branch of AT&T Bell Laboratories. His research interests include VLSI testing, physical design automation, ultrafast digital circuit design, and neural networks. He is a recipient of Digital's Incentives for Excellence Award, a National Science Foundation Research Initiation Award, and a Bell Northern Research Laboratory Faculty Award. He has published over 50 papers in IEEE archival journals and refereed international conference proceedings. He is a Guest Editor of the IEEE Design & Test Magazine's special issue on memory testing, to be published in 1993. Dr. Mazumder is a member of Sigma Xi, Phi Kappa Phi, and ACM.

Yih-Shyr Jih (S'85-M'91) received the B.S. degree in computer science and information engineering from National Taiwan University, Taipei, Taiwan, Republic of China, in 1981, the M.S. degree in computer science from Michigan State University, East Lansing, in 1985, and the Ph.D. degree in computer science and engineering from the University of Michigan, Ann Arbor, in 1991. He is currently a Research Staff Member at the IBM T. J. Watson Research Center, Yorktown Heights, NY. His research interests include VLSI, fault-tolerant computing, and high-bandwidth computer communication.