Lecture 1 - NYU Computer Science
G22.3220-001/ G63.2180-001 Advanced Cryptography
September 9, 2009
Lecture 1
Lecturer: Yevgeniy Dodis
Scribe: Bianca De Souza
Identification problem
Suppose Alice wants to convince Bob that she is who she says she is. Her twin sister Eve wants to fool Bob by successfully impersonating Alice.
In this situation, the following questions arise:
• Dimension 1: Is Bob’s storage secure from Eve?
• Dimension 2: Can Eve eavesdrop when real Alice is authenticating to Bob?
• Dimension 3: Is Alice’s storage secure from Eve?
Depending on the answers to the above questions, a particular scheme (sometimes more than one) is applicable. Next, we address each of the situations arising from dimensions 1 and 2, returning to dimension 3 along the way.
Secure Communication and Bob’s storage
Note that when both Bob's storage and the communication channel are secure, Alice and Bob may share a common secret, and to verify Alice, Bob simply asks her for the common secret. This is the trivial case.
What about Dimension 3 (Alice’s storage insecure)? Of course, if Eve knows x then Eve will
be able to impersonate her. So how can we hope to have security in this case? We will assume that
Eve knows ≤ ℓ bits of information about x, where ℓ < n = |x|. Formally, for an arbitrary function g : {0,1}^n → {0,1}^ℓ, we assume that Eve can obtain y = g(x).
We can then observe the following simple lemma, establishing dimension 3 of the trivial scheme.
Lemma 1. If Alice and Bob share a random n-bit secret x and Eve can learn y = g(x), where |y| ≤ n − λ, then Eve cannot impersonate Alice with probability greater than 2^{-λ}.
To see this, consider two given random variables X and Y, possibly correlated. We can define the following useful notions:
(a) Entropy:¹ we say H∞(X) ≥ n if for all (unbounded) A, Pr(A → X) ≤ 1/2^n.
(b) Conditional Entropy: more generally, we say H∞(X | Y) ≥ t if for all (unbounded) A, Pr(A(Y) → X) ≤ 1/2^t.
We can then easily show that
Lemma 2. If |Y| ≤ ℓ and H∞(X) ≥ n then H∞(X | Y) ≥ n − ℓ.
To prove this, assume that there is some A such that Pr(A(Y) → X) > 1/2^{n−ℓ}. We can construct A′ who simply guesses the value Y, being right with probability at least 1/2^ℓ, and then runs A(Y). Clearly, Pr(A′ → X) > (1/2^ℓ) · (1/2^{n−ℓ}) = 1/2^n, a contradiction.
Lemma 1 now follows with X = x, Y = g(x) and ℓ = |Y| ≤ n − λ: then H∞(X | Y) ≥ n − ℓ ≥ λ, so Eve's impersonation probability is at most 2^{-λ}.
Password authentication
In the scenario where the communication channel is secure but Bob's storage is not, the following protocol can be used: Alice has a secret x (password) and Bob stores y = f(x) for a certain function f.
• Alice sends x′ = x to Bob, and Bob computes f(x′).
• Bob compares it with the stored value y and accepts Alice's proof if and only if f(x′) = y.
Since Eve may have access to Bob’s storage, what properties of f are needed for this scheme to
work?
(a) f is efficient (easy to compute)
(b) Given y = f (x) for a random x, it is hard to compute any x′ such that f (x′ ) = y.
In other words, f is a one-way function (OWF)!
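To make this concrete, here is a minimal Python sketch of the password scheme, with SHA-256 standing in for the one-way function f (an illustrative assumption; the scheme only requires some OWF). Even if Eve reads y from Bob's storage, producing an accepted x′ requires inverting f.

import hashlib, hmac, secrets

def f(password: bytes) -> bytes:
    # stand-in OWF for this sketch: hash the password
    return hashlib.sha256(password).digest()

# Setup: Alice holds the secret x; Bob stores only y = f(x)
x = secrets.token_bytes(16)
y = f(x)

def bob_verifies(x_prime: bytes) -> bool:
    # Bob recomputes f(x') and accepts iff it matches the stored y
    return hmac.compare_digest(f(x_prime), y)

assert bob_verifies(x)                              # the real Alice is accepted
assert not bob_verifies(secrets.token_bytes(16))    # a random guess is rejected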
Definition 3. Let λ be a security parameter, let n = n(λ) be the input length, and let k = k(λ) be the output length. A function f : {0,1}^n → {0,1}^k is one-way if:
1. ∃ a poly-time algorithm which computes f(x) correctly ∀x.
2. ∀ PPT (probabilistic polynomial-time) attackers A, Adv_OWF(A) ≤ negl(λ), where
Adv_OWF(A) = Pr(f(x′) = y | x ← {0,1}^n; y = f(x); x′ ← A(y, 1^λ)).
¹ This notion is usually called min-entropy, but we will omit “min-” since we will not study other notions.
Remark: More generally, one can have a key generation algorithm Gen such that Gen(1^λ) outputs a public key PK, which implicitly defines the domain D = D(PK) and the range R = R(PK) of a given one-way function f_PK.
Example: Under the Discrete Log Assumption, exponentiation modulo a prime is a OWF.
Gen(1^λ): p is a randomly generated λ-bit prime, and g is a generator of the multiplicative group Z_p^*.
The public key PK = (p, g) determines D_PK = Z_{p−1} and R_PK = Z_p^*.
f_{p,g}(x) = g^x mod p.
Here n ≈ k ≈ λ.
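For concreteness, here is a toy Python sketch of this OWF; the use of sympy for prime generation, the safe-prime restriction, and the tiny security parameter are all assumptions made purely for illustration.

import random
from sympy import isprime, randprime   # assumed available; any prime generator would do

def gen(lam):
    # Gen(1^lambda): sample a lambda-bit safe prime p = 2q + 1 and a generator g of Z_p^*
    while True:
        q = randprime(2**(lam - 2), 2**(lam - 1))
        p = 2 * q + 1
        if isprime(p):
            break
    while True:
        g = random.randrange(2, p - 1)
        # for a safe prime, g generates Z_p^* iff g^2 != 1 and g^q != 1 (mod p)
        if pow(g, 2, p) != 1 and pow(g, q, p) != 1:
            return p, g

p, g = gen(32)                 # toy security parameter; real use needs a large lambda
x = random.randrange(p - 1)    # x <- D_PK = Z_{p-1}
y = pow(g, x, p)               # f_{p,g}(x) = g^x mod p: easy to compute, hard to invert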
What about dimension 3? As before, let us assume that Eve can learn at most ℓ = ℓ(n) bits about Alice's n-bit secret x (ℓ < n). To handle this leakage problem, we need the definition below.
Definition 4. Let λ be a security parameter, and let n = n(λ) and k = k(λ) be the bit lengths of the input and output, respectively. A function f : {0,1}^n → {0,1}^k is an ℓ-leakage-resilient one-way function (ℓ-LR-OWF) if:
1. ∃ a poly-time algorithm which computes f(x) correctly ∀x.
2. ∀ PPT A = (A1, A2), in the experiment
   PK ← Gen(1^λ), x ← D_PK
   A1(PK) → (g, state), where the leakage function g has output length at most ℓ
   y = f_PK(x), z ← g(x)
   A2(y, z, state) → x′
we have Pr(f_PK(x′) = y) ≤ negl(λ).
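The following Python sketch spells out this experiment for a toy instantiation; SHA-256 as f, a leakage function revealing the first ℓ bits of x, and a naive adversary are assumptions chosen only for illustration.

import hashlib, secrets

N, ELL = 128, 16                      # toy input length n and leakage bound l (in bits)

def f(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def leak_first_bits(x: bytes) -> str:
    # leakage function g chosen by A1: reveal the first ELL bits of x
    return bin(int.from_bytes(x, "big"))[2:].zfill(N)[:ELL]

def experiment(a2) -> bool:
    x = secrets.token_bytes(N // 8)   # x <- {0,1}^n
    y = f(x)                          # y = f(x)
    z = leak_first_bits(x)            # z = g(x), |z| <= l
    x_prime = a2(y, z)                # A2 tries to find a preimage given (y, z)
    return f(x_prime) == y            # the adversary wins iff f(x') = y

def dummy_adversary(y, z):
    # a hopeless adversary: the leakage still leaves n - l bits unknown
    return secrets.token_bytes(N // 8)

print(experiment(dummy_adversary))    # almost certainly False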
With this in mind, we can try the following:
Attempt 1: Hope that all OWFs are LR. The good news is that this is true for ℓ(λ) = O(log λ) (since the reduction can simply guess the leakage, which succeeds with probability 1/2^ℓ ≥ 1/poly(λ)). The bad news is that it is unlikely we can say more. As an example, consider f(x1, x2) = f′(x1) where |x1| = λ and |x2| = λ^100: this is a OWF if f′ is, yet leaking just the λ bits of x1 (a tiny fraction of the input) completely destroys one-wayness.
Attempt 2: Try natural OWFs. As examples, for a prime p, consider f′(x) = g^x mod p and f(x1, x2) = g_1^{x_1} g_2^{x_2} mod p, where g, g_1 and g_2 are independent random generators of a large prime-order subgroup G of Z_p^*. It turns out we do not know how to prove leakage-resilience of f′, but we will see below that f is LR under the Discrete Log Assumption. Let us see why...
Definition 5. Let λ be a security parameter, n = n(λ) the input length and k = k(λ) the output length. A function f : {0,1}^n → {0,1}^k is second-preimage resistant (SPR) if:
1. n > k.
2. ∀ PPT attackers B, Adv_SPR(B) ≤ negl(λ), where
Adv_SPR(B) = Pr(x ≠ x′ ∧ f(x′) = f(x) | x ← {0,1}^n; B(x) → x′).
Lemma 6. If f is SPR then f is an (n − k − ω(log λ))-LR-OWF. Further, if f is regular (i.e., every y ∈ {0,1}^k has precisely 2^{n−k} pre-images x), then f is an (n − k − 1)-LR-OWF.
Proof. Assume that f is not an ℓ-LR-OWF, so there exists A such that Adv_LR-OWF(A) ≥ ε for some non-negligible ε. We construct an algorithm B such that Adv_SPR(B) ≥ ε − 2^{ℓ+k}/2^n, which is still non-negligible because ℓ ≤ n − k − ω(log λ).
B(PK, x):   (here x is the random SPR challenge, and f = f_PK is determined by PK)
• run A1(PK) → (g, state)
• compute y = f(x)
• compute z ← g(x)
• run A2(y, z, state) → x′
• output x′
We have that Pr(A succeeds) ≥ ε, where success means f(x′) = f(x). But what are the chances that x ≠ x′? Recall that Pr(X ∧ Y) ≥ Pr(X) − Pr(not Y). Thus,
Pr(B succeeds) ≥ Pr(A succeeds ∧ x ≠ x′)
≥ Pr(A succeeds) − Pr(x = x′)
≥ ε − Pr(x = x′)
All that remains is to show that Pr(x = x′) ≤ 2^{ℓ+k}/2^n = negl(λ). But |(y, z)| = k + ℓ, so Lemma 2 implies that H∞(X | (Y, Z)) ≥ n − (k + ℓ). Therefore, for all A, Pr(x = x′) ≤ 1/2^{n−(ℓ+k)} = negl(λ).
When f is regular, we can give a better analysis. Namely,
Pr(B succeeds) ≥ Pr(A succeeds ∧ x ≠ x′)
= Pr(A succeeds) · Pr(x ≠ x′ | A succeeds)
≥ ε · (1 − 2^{ℓ+k}/2^n)
≥ ε/2
Here we used the fact that every y has 2^{n−k} pre-images, so even though A's success biases the distribution of y, this does not affect the chances that x ≠ x′, and the latter probability is indeed at least (1 − 2^{ℓ+k}/2^n) ≥ 1/2, since ℓ ≤ n − k − 1.
Let’s construct the promised SPR.
Theorem 7 (see Rompel '90). If OWFs exist, then for every polynomial poly(λ) there exist SPR functions with k = λ and n = poly(λ).
Corollary 8. If OWFs exist, then there exist ℓ-LR-OWFs on n bits with ℓ(n) = n − λ.
Observation 9. If f is collision-resistant then f is SPR.
Definition 10. A function f is collision-resistant (CR) if for all PPT A,
Adv_CR(A) ≤ negl(λ), where Adv_CR(A) = Pr(x ≠ x′ ∧ f(x′) = f(x) | (x, x′) ← A(f)).
Note that for CR, the attacker A can choose x itself, while for SPR the value x is random (and given to the attacker).
Example: For our previous example, we show that f(x1, x2) = g_1^{x_1} g_2^{x_2} mod p is collision-resistant. Suppose that f is not CR; then one can find (x1, x2) ≠ (x′1, x′2) such that
g_1^{x_1} g_2^{x_2} = g_1^{x′_1} g_2^{x′_2}.
Thus, g_2 = g_1^{(x′_1 − x_1)(x_2 − x′_2)^{−1} mod q}, where q is the large prime order of the subgroup G of Z_p^*. But this contradicts the fact that computing the discrete log of g_2 base g_1 is hard, since g_2 is random.
Note that this function maps n = 2k bits to k bits. Using the previous lemma and the regularity of f, we can tolerate leakage of ℓ = n − k − 1 ≈ 2k − k = k = n/2 bits. Using more generators g_1, ..., g_t, we get f : tk bits → k bits and can tolerate leakage of ℓ = n − k − 1 ≈ tk − k = (1 − 1/t)n bits.
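A toy Python sketch of this multi-generator construction is below; the tiny parameters (p = 23, q = 11, t = 3) are assumptions chosen only so the example runs instantly.

import random

p, q = 23, 11        # toy safe prime p = 2q + 1; G is the order-q subgroup of Z_p^*
t = 3                # number of generators

def random_subgroup_generator():
    # squaring maps Z_p^* into the order-q subgroup; any non-identity element generates G
    while True:
        g = pow(random.randrange(2, p), 2, p)
        if g != 1:
            return g

gs = [random_subgroup_generator() for _ in range(t)]

def f(xs):
    # f(x_1, ..., x_t) = g_1^{x_1} * ... * g_t^{x_t} mod p, mapping ~t*k bits to ~k bits
    acc = 1
    for g, x in zip(gs, xs):
        acc = (acc * pow(g, x, p)) % p
    return acc

xs = tuple(random.randrange(q) for _ in range(t))
y = f(xs)            # a second preimage would reveal a discrete-log relation among the g_i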
With this, we have covered the cases in which the communication channel is secure.
Symmetric-Key Identification Schemes
Suppose that Alice and Bob share a symmetric key s ∈ {0, 1}λ , which is not known to Eve. However,
Eve can now observe the communication between Alice and Bob.
More specifically, Eve's attack has two stages:
(a) Learning stage, where Eve can see how real Alice proves her identity. There are two options
here.
Passive attack - Eve does not interfere; she only gets transcripts of honest interactions between Alice and Bob.
Active attack - Eve interacts with either Bob or Alice (or both); for instance, as in a 'chosen message attack' or a 'man-in-the-middle' attack.
(b) Impersonation stage, where Alice disappears and Eve tries to impersonate Alice to Bob. We require that Pr(Eve impersonates Alice) ≤ negl(λ); that is, after Alice leaves the communication channel, Eve is not able to pretend to be Alice.
Below are two natural schemes, using symmetric-key encryption and authentication, respectively.
Scheme Using Symmetric-Key Encryption (E_s, D_s):
• Bob picks m ← {0,1}^λ, computes c = E_s(m), and sends c to Alice.
• Alice then computes m̃ = D_s(c) and sends it to Bob.
• If m = m̃ then Bob accepts Alice's ID.
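As an illustration, here is a minimal Python sketch of the SK-ENC scheme, using Fernet from the third-party cryptography package as a stand-in symmetric scheme (an assumption made for the sketch; the theorem below only asks for one-wayness properties of (E, D)).

import secrets
from cryptography.fernet import Fernet   # stand-in for the shared-key scheme (E_s, D_s)

s = Fernet.generate_key()                # shared symmetric key s
alice, bob = Fernet(s), Fernet(s)

m = secrets.token_bytes(16)              # Bob picks m <- {0,1}^lambda
c = bob.encrypt(m)                       # c = E_s(m), sent to Alice

m_tilde = alice.decrypt(c)               # Alice computes m~ = D_s(c) and sends it back

assert m_tilde == m                      # Bob accepts Alice's ID iff m~ = m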
Scheme Using Symmetric-Key Authentication (MAC_s):
• Bob picks m ← {0,1}^λ and sends m to Alice.
• Alice then computes σ = MAC_s(m) and sends it to Bob.
• Bob uses s to verify that σ is a valid tag on m, and accepts Alice's ID if so.
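Similarly, here is a minimal Python sketch of the SK-MAC scheme, with HMAC-SHA256 from the standard library standing in for MAC_s (an assumption; by the theorem below, any UUF-CMA MAC suffices).

import hashlib, hmac, secrets

s = secrets.token_bytes(32)                      # shared key s

def mac(key: bytes, m: bytes) -> bytes:
    return hmac.new(key, m, hashlib.sha256).digest()

m = secrets.token_bytes(16)                      # Bob picks m <- {0,1}^lambda and sends it
sigma = mac(s, m)                                # Alice returns sigma = MAC_s(m)

assert hmac.compare_digest(sigma, mac(s, m))     # Bob accepts iff the tag verifies under s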
Next time we’ll prove the following theorems:
Theorem 11. The SK-ENC scheme above is:
(i) passive secure if (E, D) is one-way under chosen plaintext attack (OW-CPA).
(ii) active secure if (E, D) is one-way against pre-challenge chosen ciphertext attack (OW-CCA1).
Theorem 12. The SK-MAC scheme above is active secure if MAC is universally unforgeable under chosen message attack (UUF-CMA).