Lecture 4: Pseudorandom Generators, Hard-Core Predicates, Goldreich-Levin
0368.4162: Introduction to Cryptography
Ran Canetti
Fall 2008
Scribe: Margarita Vald
24 November 2008
Topics for today - Pseudorandom generators:
• Definition
• Range extension
• Construction from OWF
– Hard-core bits
1 Defining PRGs
Definition 1. Two ensembles D_1 = {D_n^1}_{n∈N} and D_2 = {D_n^2}_{n∈N} are polynomially indistinguishable (written D_1 ≈c D_2) if for every non-uniform polynomial-time algorithm A and all sufficiently large n,

|Pr_{x←D_n^1}[A(1^n, x) = 1] − Pr_{x←D_n^2}[A(1^n, x) = 1]| < ν(n)

where ν(n) is some negligible function.
Definition 2 (Pseudorandom Ensembles). The ensemble D = {D_n}_{n∈N} is called pseudorandom if D ≈c U, where U = {U_n}_{n∈N} is the uniform ensemble.
Theorem 3. There exists a distribution ensemble D = {D_n}_{n∈N} such that D is not statistically close to the uniform ensemble U = {U_n}_{n∈N} and yet D ≈c U.

The theorem is proven without computational assumptions, based on the following diagonalization argument: since there are only exponentially many polynomial-size circuits on n input bits, it is possible to define a distribution that "fools" all such circuits. For a proof see [4].
Theorem 3 highlights the fact that pseudorandomness in itself is not good enough for encryption
purposes. We are interested in pseudorandom ensembles that can be efficiently sampled.
Definition 4. A pseudorandom generator (PRG) is a deterministic function G : {0,1}* → {0,1}* satisfying the following conditions:

• Efficiency: G is computable in polynomial time.
• Expansion: There exists a function l : N → N such that l(n) > n for all n ∈ N, and |G(x)| = l(|x|) for all x ∈ {0,1}*. (l(n) is called the expansion factor of G.)
• Pseudorandomness: The ensemble {G (Un )}n∈N is pseudorandom.
A PRG can be viewed as a formalization of the informal notion of a "stream cipher". That is, a PRG can be thought of as the security specification for stream ciphers.
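To make the stream-cipher view concrete, here is a minimal Python sketch of encryption as "message XOR PRG output". The prg function below is only an illustrative stand-in (SHAKE-256 used as a byte-stretcher), not a construction with a security proof; any G satisfying Definition 4 could be plugged in instead.

# Illustrative only: a "stream cipher" from a PRG, Enc_k(m) = m XOR G(k).
# The prg() stand-in is a placeholder, not a proven PRG.
import hashlib

def prg(seed: bytes, out_len: int) -> bytes:
    # Placeholder expansion function (SHAKE-256 as a byte-stretcher).
    return hashlib.shake_256(seed).digest(out_len)

def stream_xor(key: bytes, data: bytes) -> bytes:
    pad = prg(key, len(data))
    return bytes(d ^ p for d, p in zip(data, pad))

ct = stream_xor(b"short key", b"attack at dawn")
assert stream_xor(b"short key", ct) == b"attack at dawn"  # decryption = encryption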
Figure 1: The construction operating on seed x_0 ∈ {0,1}^n. Each application of G1 maps the current seed x'_{i−1} to a new seed x'_i and an output bit σ_i; the bits σ_1, ..., σ_{p'(n)} accumulate into the final output.
2 Increasing the Expansion Factor
Given a pseudorandom generator G1 with expansion factor l(n) = n + 1, we construct a pseudorandom generator G with an arbitrary polynomial expansion factor as follows.

Construction. Let G1 be a deterministic polynomial-time function mapping strings of length n to strings of length n + 1, and let p(n) be a polynomial. We now formally describe the construction of algorithm G on input x ∈ {0,1}^n:

• Let p'(n) = p(n) − n. Note that this is the amount by which G is supposed to increase the length of its input.
• Set x_0 = x and n = |x|. For i = 1 to p'(n), do:
  – Let x'_{i−1} denote the first n bits of x_{i−1}, and let σ_{i−1} denote the remaining i − 1 bits. (When i = 1, σ_0 is the empty string.)
  – Set x_i := G1(x'_{i−1}) · σ_{i−1}, where · denotes concatenation.
• Output x_{p'(n)}
The construction is depicted in Figure 1.
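The construction translates directly into Python. In this sketch, g1 is assumed to be a PRG stretching n bits to n + 1 bits; the SHAKE-based stand-in below is only a placeholder so the code runs, not a real PRG.

# Sketch of the range-extension construction G from a one-bit-stretching G1.
import hashlib

def g1(x: str) -> str:
    # Placeholder for a PRG mapping n bits to n+1 bits (NOT a real PRG).
    n = len(x)
    digest = hashlib.shake_256(x.encode()).digest((n + 8) // 8)
    bits = "".join(f"{b:08b}" for b in digest)
    return bits[: n + 1]

def G(x: str, p_n: int) -> str:
    # Stretch the n-bit seed x to p(n) = p_n bits: keep the first n output
    # bits of g1 as the next seed and accumulate the leftover bits.
    n = len(x)
    cur = x                              # x_0
    for _ in range(p_n - n):             # p'(n) = p(n) - n iterations
        seed, sigma = cur[:n], cur[n:]   # x'_{i-1} and sigma_1...sigma_{i-1}
        cur = g1(seed) + sigma           # x_i = G1(x'_{i-1}) . sigma_{i-1}
    return cur

assert len(G("1011001010", 32)) == 32    # 10-bit seed stretched to 32 bits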
Theorem 5. Let G1, p(·), and G be as in the construction, with p(n) > n. If G1 is a pseudorandom generator, then so is G.
Intuitively, the pseudorandomness of G follows from that of G1: replace each application of G1 by a random process that, on input a uniformly distributed n-bit string, outputs a uniformly distributed (n + 1)-bit string. Consequently, the indistinguishability of a single application of the random process from a single application of G1 implies that polynomially many applications of the random process are indistinguishable from polynomially many applications of G1.

How do we formalize this idea? Can we use induction to prove security? This is problematic: every application of G1 leaks extra information to the adversary. Since G1 is a PRG the extra information is small, but the error still grows with each application of G1. We thus need a way to bound the growth of the error, and simple induction doesn't give us one.
The actual proof uses the hybrid technique.
The hybrid technique. The hybrid technique is used in many proofs of security and is a basic tool for proving indistinguishability when a basic primitive is applied multiple times. The technique works by defining a series of hybrid distributions that bridge between two "extreme" distributions, these being the distributions that we wish to prove indistinguishable. To apply the proof technique, three conditions must hold: first, the extreme hybrids should match the original cases of interest; second, it must be possible to translate the capability of distinguishing neighboring hybrids into the capability of breaking some underlying assumption; finally, the number of hybrids should be polynomial, so the "distinguishing success" is only reduced by a polynomial factor.
We will prove Theorem 5 for p'(n) = 2. (Homework: prove it for any polynomial number of iterations of G1.)
Proof. Let A′ be an adversary that distinguishes between U_{n+2} and G(U_n) with advantage ε. We construct an A that distinguishes between U_{n+1} and G1(U_n) with advantage ε/2. Define three hybrid distributions:

H_n^2 := U_{n+2} = r_1 ⋯ r_{n+2}
H_n^1 := the hybrid G1(r_1 ⋯ r_n) · r_{n+1}
H_n^0 := G(U_n) = G1(y_1 ⋯ y_n) · y_{n+1}, where y_i = G1(r_1 ⋯ r_n)_i

Construction of A (on input z = z_1 ⋯ z_{n+1}):

• A tosses a coin b.
  – If b = 1 (for the case that A′ distinguishes between H_n^1 and H_n^2):
    apply A′ to z_1 ⋯ z_{n+1} σ_1, where σ_1 ← {0,1}, and output A′'s answer.
  – If b = 0 (for the case that A′ distinguishes between H_n^1 and H_n^0):
    apply A′ to G1(z_1 ⋯ z_n) z_{n+1} and output A′'s answer.

Pr_{z←U_{n+1}}[A(z) = 1] = 1/2 · Pr_{z←U_{n+1}}[A(z) = 1 | b = 1] + 1/2 · Pr_{z←U_{n+1}}[A(z) = 1 | b = 0]
 = 1/2 · (Pr_{z←H_n^2}[A′(z) = 1] + Pr_{z←H_n^1}[A′(z) = 1])    (1)

Pr_{z←G1(U_n)}[A(z) = 1] = 1/2 · Pr_{z←G1(U_n)}[A(z) = 1 | b = 1] + 1/2 · Pr_{z←G1(U_n)}[A(z) = 1 | b = 0]
 = 1/2 · (Pr_{z←H_n^1}[A′(z) = 1] + Pr_{z←H_n^0}[A′(z) = 1])    (2)

Subtracting, the H_n^1 terms cancel:

|(1) − (2)| = 1/2 · |Pr_{z←H_n^2}[A′(z) = 1] − Pr_{z←H_n^0}[A′(z) = 1]| = 1/2 · |Pr_{z←U_{n+2}}[A′(z) = 1] − Pr_{z←G(U_n)}[A′(z) = 1]| = ε/2

3 Constructing PRGs
There are several approaches to constructing PRGs:
1. We can give a specific construction and simply assume that it is a PRG (recall the shrinking
generator described in the last class). We can also try to analyze specific attacks. But we are
looking for something better in terms of security guarantees.
2. Alternatively, we can give a construction whose security can be proved based on the hardness of a well-known computational problem (e.g., one of the problems mentioned in the last class).
Example 6. A PRG based on a strong assumption on the hardness of the discrete logarithm.

So far we have assumed that, given a prime p and two random elements g, h ∈ Z_p^*, it is hard to find an index i such that g^i ≡ h (mod p). For the following assumption we need extra structure: a group G of prime order (in a group of prime order, every non-identity element is a generator).

By choosing p = 2q + 1, where q is also prime, and letting G be the subgroup of order q in Z_p^*, we obtain such a group (the primes q above are called "Sophie Germain" primes). Let {G_n}_{n∈N} be a sequence of such groups, where G_n has prime order q_n with q_n ≈ 2^n.

The DDH assumption for {G_n}_{n∈N}: Let g, h ← G_n and r, s ← [1, ..., q_n]; then

{(g, g^r, h, h^r)_n}_{n∈N} ≈c {(g, g^r, h, h^s)_n}_{n∈N}

Note that h^r on the l.h.s. is deterministically determined by the random elements g, g^r, h. The assumption says that not only is it hard to find the DL of elements, but it is also hard to recognize when two elements have the same DL with respect to two given random generators.

This stronger assumption is called the "Decisional Diffie-Hellman" assumption (for reasons that will become clear later in the course).

Under this assumption, we construct a PRG as follows:

GEN_G(g, h, r) = (g, g^r, h, h^r)

This gives an expansion by log |G_n| bits.
Claim 7. If the DDH assumption holds for {G_n}_{n∈N}, then GEN_G is a PRG.
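A toy Python sketch of GEN_G, using an artificially small Sophie Germain prime so the code runs instantly. Real parameters would be hundreds of digits long; the specific constants here are illustrative assumptions only.

# Toy sketch of GEN_G(g, h, r) = (g, g^r, h, h^r) in the order-q subgroup
# of quadratic residues of Z_p^*, p = 2q + 1. Parameters are far too small
# to be secure; they only illustrate the construction.
import secrets

q = 1019                     # a small Sophie Germain prime (toy size)
p = 2 * q + 1                # p = 2039 is also prime ("safe prime")

def random_group_element() -> int:
    # Squaring a random unit lands in the subgroup of quadratic residues,
    # which has prime order q, so every non-identity element generates it.
    return pow(secrets.randbelow(p - 3) + 2, 2, p)

def gen(g: int, h: int, r: int):
    return (g, pow(g, r, p), h, pow(h, r, p))

g, h = random_group_element(), random_group_element()
r = secrets.randbelow(q - 1) + 1
print(gen(g, h, r))   # under DDH: indistinguishable from (g, g^r, h, h^s)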
3. A third alternative is to construct PRGs based on a general hardness assumption.
Theorem 8. There exist PRGs iff there exist OWFs [3].
You can find a sketch of the proof in [4]. However, the construction that transforms an arbitrary OWF into a PRG, and the proof that this construction indeed yields a PRG, are beyond the scope of this course. In class we'll show a simpler derivation:

Theorem 9. If there exist one-way permutations, then there exist PRGs.
4 Constructing PRGs from One-Way Permutations

Let f : {0,1}^n → {0,1}^n be a length-preserving one-way permutation. How do we construct a PRG from it? We have seen that it suffices to construct a PRG that expands its input by a single bit. Our first attempt might be:

G(x) = f(x) · x_1

Taking the first bit, or any other bit, of x is a bad idea: all we are guaranteed is that f is hard to invert, not that any particular bit of x is "pseudorandom" on its own. Thus we are looking for a way to "extract" pseudorandom bits from OWFs.
Definition 10 (Hard-Core Predicate). A polynomial-time computable predicate B : {0,1}* → {0,1} is called a hard-core predicate for a function f if for every non-uniform polynomial-time adversary A there exists a negligible function ν such that:

Pr_{x←U_n}[A(f(x)) = B(x)] < 1/2 + ν(n)

An equivalent definition is:

Definition 11. B : {0,1}* → {0,1} is a hard-core predicate for f if, for x ← U_n and b ← U_1:

(f(x), B(x)) ≈c (f(x), b)
This notion provides a "bridge" between hardness and pseudorandomness: according to Definition 11, the two distributions have a significant statistical distance, yet they are computationally indistinguishable.
Theorem 12. Let f be a OWP and let B be a hard-core predicate for f. Then the algorithm Gen defined by

Gen(x) := f(x) · B(x)

is a PRG.

Proof. Follows immediately from Definition 11.
5 OWFs with Hard-Core Predicates
Can we construct OWFs with hard-core predicates?

Theorem 13 (Blum-Micali 82 [1]). The following predicate is a hard-core predicate for the DL function:

B(x) = 1 ⟺ x > p/2
This gives a construction of a PRG based on the DL assumption. However, we want to construct more general PRGs. We don't know how to show that every OWF has a hard-core bit, but we will show something almost as good:
Theorem 14 (Goldreich-Levin 87 [2]). Let f be a OWF, and define f′ by f′(x, r) := (f(x), r), where |x| = |r|. Then the predicate

B(x, r) = ⟨x, r⟩ = Σ_{i=1}^n x_i r_i (mod 2)

is a hard-core predicate for f′.
By Theorem 14, given any OWF f we can modify it into an OWF f′ with a hard-core predicate. Furthermore, if f is a OWP, then so is f′.
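In code, the transformation of Theorem 14, combined with the generator of Theorem 12, looks as follows. This is a sketch: the argument f stands for an assumed one-way permutation, and the identity function used in the demo is of course not one-way.

# Sketch: f'(x, r) = (f(x), r) with the Goldreich-Levin predicate
# B(x, r) = <x, r> mod 2, and the one-bit-stretching PRG of Theorem 12.

def inner_product(x: list[int], r: list[int]) -> int:
    # <x, r> = sum_i x_i * r_i (mod 2)
    return sum(a & b for a, b in zip(x, r)) % 2

def f_prime(f, x: list[int], r: list[int]):
    return (f(x), r)

def gen(f, x: list[int], r: list[int]) -> list[int]:
    # Gen(x, r) = f'(x, r) . B(x, r): a PRG when f is a OWP (Theorems 12, 14).
    fx, r = f_prime(f, x, r)
    return fx + r + [inner_product(x, r)]

# Demo with the identity permutation (NOT one-way; for illustration only):
x, r = [1, 0, 1, 1], [0, 1, 1, 0]
print(gen(lambda v: v, x, r))   # -> [1, 0, 1, 1, 0, 1, 1, 0, 1]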
6 Markov, Chernoff, Hoeffding, Chebyshev

The Markov, Chernoff, Hoeffding, and Chebyshev inequalities are fundamental inequalities from probability theory. They bound "tail probabilities", namely the probability that a random variable, or a sum of random variables, turns out to be far from its expectation. We take the opportunity to review them here, since we will use them to prove Theorem 14.
6.1 Markov's inequality

Theorem 15. Let X be a non-negative random variable and let v > 0. Then:

Pr[X ≥ v] ≤ E[X]/v
Proof. We have

E[X] = Σ_{x≥0} Pr[X = x] · x ≥ Σ_{0≤x<v} Pr[X = x] · 0 + Σ_{x≥v} Pr[X = x] · v = Pr[X ≥ v] · v
6.2 Chernoff, Hoeffding inequality

Theorem 16 (Chernoff bound¹). Let X_1, ..., X_n be n independent random variables with expectation µ such that X_j ∈ {0,1} for all j. Then for any ε > 0,

Pr[ |Σ_{i=1}^n X_i / n − µ| > ε ] ≤ 2e^{−ε²n / (2µ(1−µ))}

Remark 17. Chernoff proved this theorem for 0-1 random variables, while Hoeffding's inequality deals with general bounded domains.

¹ A proof is available at: http://people.csail.mit.edu/ronitt/COURSE/S07/lec25.pdf
6.3 Chebyshev's inequality

Sometimes we cannot guarantee full independence of the random variables, but only limited independence.

Definition 18. A sequence of random variables X_1, ..., X_n is pairwise independent if for all i ≠ j and all a, b:

Pr[X_i = a ∧ X_j = b] = Pr[X_i = a] · Pr[X_j = b]

Theorem 19 (Chebyshev bound). Let X_1, ..., X_n be pairwise independent random variables, each with expectation µ and variance σ². Then, for any ε > 0:

Pr[ |Σ_{i=1}^n X_i / n − µ| > ε ] ≤ σ²/(ε²n)
Proof.

Pr[ |Σ_{i=1}^n X_i / n − µ| > ε ] = Pr[ (Σ_{i=1}^n X_i / n − µ)² > ε² ]
 ≤ E[ (Σ_{i=1}^n X_i / n − µ)² ] / ε²    (by Markov)
 = Var[ Σ_{i=1}^n X_i / n ] / ε²
 = (1/n²) · Σ_{i=1}^n Var[X_i] / ε²    (pairwise independence makes the variances add)
 = σ² / (ε²n)
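As a preview of how pairwise independence will be used in the Goldreich-Levin proof below, the following Python sketch builds 2^k − 1 pairwise independent bits out of only k truly random bits by XOR-ing random subsets, and empirically checks that two overlapping subset-XORs still behave like independent coins (the constants are arbitrary):

# Sketch: pairwise independent bits from k truly random bits s_1..s_k.
# For each nonempty w in {1..k}, b_w = XOR of {s_j : j in w}. The b_w's are
# pairwise independent (though far from mutually independent).
import itertools, random

def subset_xor_bits(k: int) -> list[int]:
    s = [random.randrange(2) for _ in range(k)]
    subsets = [w for m in range(1, k + 1)
               for w in itertools.combinations(range(k), m)]
    return [sum(s[j] for j in w) % 2 for w in subsets]

# b[0] uses {s_1} and b[4] uses {s_1, s_2}: they share s_1, yet their XOR
# (= s_2) is an unbiased coin, as pairwise independence demands.
counts = [0, 0]
for _ in range(100_000):
    b = subset_xor_bits(4)
    counts[b[0] ^ b[4]] += 1
print(counts)   # roughly [50000, 50000]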
7 Proof of the Goldreich-Levin Theorem
Proof. By reduction: let A′ be an algorithm that, on input (1^{|x|}, f(x), r), predicts σ = ⟨x, r⟩ with probability ε′. We will construct an algorithm A that, using A′, inverts f with probability ε. That is, given y, A will output an x such that f(x) = y, with probability ε. A will use y only to run A′ on, and will concentrate on reconstructing x from the "noisy information" provided by A′.
Figure 2: The probabilistic (oracle) algorithm GL, which queries h(r) to obtain guesses for σ = ⟨x, r⟩.
Lemma 20 (see Figure 2). There exists a probabilistic oracle algorithm GL such that for every h : {0,1}^n → {0,1}, x ∈ {0,1}^n, and ε > 0, if h(r) agrees with σ = ⟨x, r⟩ on at least a 1/2 + ε fraction of the r's, then:

Pr[GL^h(n, ε) = x] ≥ Ω(ε²/n)

and GL^h(n, ε) runs in time poly(n, 1/ε). (Here GL^h means that GL has oracle access to h.)
Remark 21. Note that x is fixed, and h always relates to the same x.
Proof of Theorem 14 from the lemma:

Intuitively, the main difference between the setting of the lemma and that of the theorem is that in the former the value x is well-defined and unique, whereas in the latter A′ may refer to a different preimage of f(x) in each answer. We will show that this difference can be overcome.

Assume, in contradiction to the conclusion, that ⟨x, r⟩ is not a hard-core predicate, i.e., that there exists an A′ such that:

Pr_{x,r}[A′(1^{|x|}, f′(x, r)) = ⟨x, r⟩] ≥ 1/2 + ε

where x, r ← {0,1}^n, f′(x, r) = (f(x), r), and ε is non-negligible. Now we define an algorithm A(y) as follows:

• Output GL^{A′(1^{|x|}, y, ·)}(n, ε/2)
Claim 22. For at least an ε/2 fraction of the x's we have:

p(x) := Pr_r[A′(1^{|x|}, f(x), r) = ⟨x, r⟩] ≥ 1/2 + ε/2    (3)
Proof. Consider the set

S_n := {x | p(x) ≥ 1/2 + ε/2}

and assume, by way of contradiction, that |S_n|/2^n = δ < ε/2. Then:

Pr_{x,r}[A′(1^{|x|}, f′(x, r)) = ⟨x, r⟩]
 = Pr[x ∈ S_n] · Pr_r[A′(1^{|x|}, f(x), r) = ⟨x, r⟩ | x ∈ S_n] + Pr[x ∉ S_n] · Pr_r[A′(1^{|x|}, f(x), r) = ⟨x, r⟩ | x ∉ S_n]
 ≤ δ · 1 + (1 − δ) · (1/2 + ε/2)
 < δ · 1 + 1 · (1/2 + ε/2) < 1/2 + ε

contradicting our assumption on A′.
Figure 3: h(r) agrees with σ = ⟨x, r⟩ on the entire domain {0,1}^n.

Figure 4: h(r) disagrees with σ = ⟨x, r⟩ with only negligible probability over r ← {0,1}^n.
This means that for at least an ε/2 fraction of the x's, the function h(r) = A′(1^{|x|}, f(x), r) agrees with ⟨x, r⟩ on at least a 1/2 + ε/2 fraction of the inputs. We know that for those x's,

Pr[GL^{A′(1^{|x|}, f(x), ·)}(n, ε/2) = x] ≥ Ω(ε²/n).

Accordingly, we have Pr_{x←U_n}[A(f(x)) = x] ≥ (ε/2) · Ω(ε²/n) = Ω(ε³/n), which is non-negligible because ε is non-negligible. This contradicts the one-wayness of f.
We turn to proving Lemma 20, but first we'll do a number of warmups.
7.1 Warmup 1

Suppose h(r) agrees with ⟨x, r⟩ everywhere (see Figure 3).

Note that ⟨x, e_i⟩ = x_i, where e_i is the vector whose entries are all 0 except for the i-th, which is 1. Now define an algorithm as follows:

• Set z_i = h(e_i) for i ∈ [n].
• Output z_1, ..., z_n.

Clearly, the above algorithm always outputs x.
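In code (a sketch; h stands for the assumed perfect oracle):

# Warmup 1: with an always-correct oracle h(r) = <x, r>, querying the unit
# vectors e_1, ..., e_n reads off x bit by bit.
def recover_exact(h, n: int) -> list[int]:
    def e(i: int) -> list[int]:
        return [1 if j == i else 0 for j in range(n)]
    return [h(e(i)) for i in range(n)]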
7.2 Warmup 2

Consider agreement 1 − negl(n) instead of everywhere. That is, letting E_x = {r | h(r) = ⟨x, r⟩}, we have |E_x|/2^n ≥ 1 − negl(n) (see Figure 4).
Figure 5: h(r) disagrees with σ = ⟨x, r⟩ with probability 1/4 − ε over r ← {0,1}^n.
Now the previous technique no longer works, since h may be mistaken on some or all of the e_i vectors. The key idea is random self-reducibility: we can reduce computing ⟨x, r⟩ at specific points (e.g., e_1, ..., e_n) to computing it at random points (where h is correct with high probability). Specifically, ⟨x, e_i⟩ = ⟨x, r⟩ ⊕ ⟨x, r ⊕ e_i⟩, and if r is random then so is r ⊕ e_i. Now define an algorithm as follows:

• Choose r ← {0,1}^n.
• Set z_i = h(r) ⊕ h(r ⊕ e_i) for i ∈ [n].
• Output z_1, ..., z_n.

Since h(r) (resp., h(r ⊕ e_i)) agrees with ⟨x, r⟩ (resp., ⟨x, r ⊕ e_i⟩) with probability 1 − negl(n), the union bound² over the n + 1 oracle calls gives Pr_r[z ≠ x] ≤ (n + 1) · negl(n) = negl(n).
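A sketch of this self-correction step in code (h again the assumed oracle, now correct on all but a negligible fraction of inputs):

# Warmup 2: query h only at random points, using the identity
# <x, e_i> = <x, r> XOR <x, r XOR e_i>.
import random

def recover_self_corrected(h, n: int) -> list[int]:
    r = [random.randrange(2) for _ in range(n)]
    z = []
    for i in range(n):
        r_flip = r.copy()
        r_flip[i] ^= 1            # r XOR e_i
        z.append(h(r) ^ h(r_flip))
    return z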
7.3 Warmup 3

Consider agreement 3/4 + ε, where ε is non-negligible. That is, |E_x|/2^n ≥ 3/4 + 1/poly(n) (see Figure 5).

Note that the error for each coordinate i under the previous scheme is only bounded by:

Pr_r[h(r) ⊕ h(r ⊕ e_i) ≠ x_i] ≤ Pr_r[h(r) ≠ ⟨x, r⟩] + Pr_r[h(r ⊕ e_i) ≠ ⟨x, r ⊕ e_i⟩] ≤ (1/4 − ε) + (1/4 − ε) = 1/2 − 2ε

Therefore we cannot use a simple union bound as in the previous scheme; we first need to reduce the error for each coordinate i. Now define an algorithm as follows:

• Choose r_1, ..., r_t ← {0,1}^n, with t = O(ε^{−2} log n).
• Set z_i = maj_j{ζ_j} for i ∈ [n], where ζ_j = h(r_j) ⊕ h(r_j ⊕ e_i).
• Output z_1, ..., z_n.

By the Chernoff bound³, Pr_r[z_i ≠ x_i] ≤ 1/(2n). So by the union bound, Pr_r[z ≠ x] ≤ n · 1/(2n) = 1/2.

However, when |E_x|/2^n is 3/4 or below, the above algorithm no longer works and new ideas are needed.
² The union bound states that Pr[A ∨ B] ≤ Pr[A] + Pr[B], for any two (possibly dependent) events A, B.
³ Chernoff's bound states that the probability that the average of t independent experiments is "γ-far" from its expectation is of the order 2^{−O(γ²t)}. More formally, let Z_j be the indicator variable which is 1 if ζ_j = x_i, for j = 1, ..., t. Then all the Z_j are independent and E[Z_j] ≥ 1/2 + 2ε. Hence, if Z = Σ_{j=1}^t Z_j, we have E[Z] ≥ t · (1/2 + 2ε), and Pr[Z < t/2] ≤ 2^{−O(ε²t)}.
Figure 6: General case: h(r) disagrees with σ = ⟨x, r⟩ with probability 1/2 − ε over r ← {0,1}^n.
7.4 Warmup 4

Consider agreement 1/2 + ε, where 0 < ε < 1/4 is a constant.

Here the idea is to choose independent random values r_1, ..., r_t such that we can simultaneously guess ⟨x, r_j⟩ for all j ∈ [t] with significant probability. Details follow:

• Choose r_1, ..., r_t ← {0,1}^n, with t = O(log n).
• Choose σ_1, ..., σ_t ← {0,1} (hoping that σ_j = ⟨x, r_j⟩ for all j ∈ [t], which happens with probability 2^{−t}; since t is logarithmic in n, this is a polynomial fraction).
• Set z_i = maj_j{ζ_j} for i ∈ [n], where ζ_j = σ_j ⊕ h(r_j ⊕ e_i).
• Output z_1, ..., z_n.

There are only O(log n) guesses, so the probability of guessing all the σ's right is good enough (i.e., 1/poly(n)). Conditioned on all guesses being right, the Chernoff bound gives Pr_r[z_i ≠ x_i] ≤ 1/(2n), so by the union bound Pr_r[z ≠ x] ≤ n · 1/(2n) = 1/2.

Overall, Pr[z = x] ≥ (1/poly(n)) · (1/2) = 1/poly(n), where 1/poly(n) is the probability of guessing all the σ's right.
General Case
Proof. Consider agreement 1/2 + , where > 0 is a polynomial fraction. see Figure 6 on page 10.
Here the idea is to choose r1, . . . , rt so that for j ∈ [t] we can simultaneously guess hx, rj i for
2
all j with probability Ω /n (Note that this is much larger than 2−t , which is what random independent guesses would give). Specifically, the strategy is to choose r1 , . . . , rt from a random log
t-dimensional subspace, where t = O (n/2 ) . Details follow:
n
• Choose s1 , . . . , sk ← {0, 1} (k = log (t + 1) , t = O (n/2 )).
• Choose σ1 , . . ., σk ← {0, 1} (hope σi = hx, si i for i ∈ [k], which happens with probability
2
2−k = Ω /n ).
• ∀w ⊆ {1, · · · , k} , w 6= ∅, compute rw = ⊕j∈w sj and ρw = ⊕j∈w σj (if the hope occurs, then
ρw = hx, rw i).
• Set zi = majw {ρw ⊕ h (rw ⊕ ei )} for i ∈ [n] and output z1 , . . . , zn .
Note:
10
1. If for all i, σi = hx, Si i then for all w, ρw = ⊕j∈w σj .
2. The sequence {rw }w⊆{1,··· ,k} is pairwise independent. Consequently, for each i, the sequence
{h (rw ⊕ ei )}w⊆{1,··· ,k} is a sequence of pairwise independent random vectors.
This means we can apply Chebyshev’s inequality: Let Xw be the indicator variable which is 1 if
ρw ⊕ h (rw ⊕ ei ) = xi , where w ⊆ {1, · · · , k}. then all Xw ’s are pairwise independent and
2
P
2
≥ 12 − 2 . Then, if X = w⊆{1,··· ,k} Xw , we have E[X] ≥ t · ( 21 + ). If
E[Xw ] ≥ 21 + and σw
2
( 1 ) −2
the hope occurs, by pairwise independence we know P rob [zi 6= xi ] = P rob[X < t/2] ≤ 2 t·2 ≤
1
1
1
t·2 ≤ 2·n and hence by union bound P r [z 6= x] ≤ /2.
2
2
Overall, we know P r [z = x] ≥ ·Ω /n (1/2) = Ω /n , which establishes the theorem. Therefore, as long as we have polynomially many samples (O (n/2 ) pairwise independent samples), we
are done.
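Putting the pieces together, here is a Python sketch of the whole GL algorithm. The oracle h is assumed to agree with ⟨x, ·⟩ on a 1/2 + ε fraction of inputs; a single run outputs x with probability Ω(ε²/n), so the demo simply repeats it. The constant 2 in the choice of t is an illustrative choice for the O(n/ε²) bound.

# Sketch of the Goldreich-Levin reconstruction algorithm (general case).
import itertools, math, random

def ip(x: list[int], r: list[int]) -> int:
    return sum(a & b for a, b in zip(x, r)) % 2

def gl(h, n: int, eps: float) -> list[int]:
    t = math.ceil(2 * n / eps ** 2)          # t = O(n / eps^2)
    k = math.ceil(math.log2(t + 1))          # k = log(t + 1)
    # Guess <x, s_j> for k random s_j's; all guesses are right w.p. 2^-k.
    s = [[random.randrange(2) for _ in range(n)] for _ in range(k)]
    sigma = [random.randrange(2) for _ in range(k)]
    # Expand into >= t pairwise independent points r_w with predicted rho_w.
    rs, rhos = [], []
    for m in range(1, k + 1):
        for w in itertools.combinations(range(k), m):
            rw = [0] * n
            for j in w:
                rw = [a ^ b for a, b in zip(rw, s[j])]
            rs.append(rw)
            rhos.append(sum(sigma[j] for j in w) % 2)
    # Per coordinate: majority vote of rho_w XOR h(r_w XOR e_i).
    z = []
    for i in range(n):
        ones = 0
        for rw, rho in zip(rs, rhos):
            r_flip = rw.copy()
            r_flip[i] ^= 1
            ones += rho ^ h(r_flip)
        z.append(1 if 2 * ones > len(rs) else 0)
    return z

# Tiny probabilistic demo: an oracle that errs on ~10% of queries.
n, eps = 6, 0.3
x = [random.randrange(2) for _ in range(n)]
noisy_h = lambda r: ip(x, r) ^ (random.random() < 0.1)
print(any(gl(noisy_h, n, eps) == x for _ in range(1000)))  # True w.h.p.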
8 Reflections on GL

We can look at the result of the Goldreich-Levin theorem in a number of ways:

Converting unpredictability into pseudorandomness: This viewpoint corresponds to what we did here. Specifically, if x cannot be predicted from f(x) with probability ≥ ε, then ⟨x, r⟩ cannot be computed, for a random r, with probability > 1/2 + ε.
Learning: We can also view the Goldreich-Levin theorem as learning a noisy linear function. Specifically, consider an oracle for a linear function that may sometimes return wrong answers, and set the goal to be learning the underlying function.

Alternatively, we can view the theorem as learning the large Fourier coefficients of a Boolean function, the underlying (Fourier) transform being a decomposition of a Boolean function into linear functions. This subroutine turns out to be very useful for learning a number of other natural classes of functions (e.g., DNF formulas).
Coding theory: We can also relate the theorem to error-correcting codes, viewing ⟨x, ·⟩ as an encoding of x (of length 2^n; this is the Hadamard code), and viewing the wrong answers of the aforementioned oracle as transmission errors. The Goldreich-Levin theorem then gives a very fast (local) decoding algorithm, running in time polylogarithmic in the length of the codeword.
References

[1] Manuel Blum and Silvio Micali. How to generate cryptographically strong sequences of pseudo-random bits. SIAM J. Comput., 13(4):850-864, 1984.

[2] Oded Goldreich and Leonid A. Levin. A hard-core predicate for all one-way functions. In STOC '89: Proceedings of the 21st Annual ACM Symposium on Theory of Computing, pages 25-32, New York, 1989. ACM Press.

[3] Johan Håstad, Russell Impagliazzo, Leonid A. Levin, and Michael Luby. A pseudorandom generator from any one-way function. SIAM J. Comput., 28(4):1364-1396, 1999.

[4] Oded Goldreich. Foundations of Cryptography, Volume I: Basic Tools. Cambridge University Press, 2006.