People have been dreaming about heavier-than-air flight since at least the days of Leonardo da Vinci (not to mention Icarus from Greek mythology). Jules Verne wrote in 1865, with rather insightful detail, about traveling to the moon. But, as far as I know, in all the thousands of years people have been using secret writing, until about 50 years ago no one had considered the possibility of communicating securely without first exchanging a shared secret key. However, in the late 1960's and early 1970's, several people started to question this "common wisdom".
Perhaps the most surprising of these visionaries was an undergraduate student at Berkeley named Ralph Merkle. In the fall of 1974 he wrote a project proposal for his computer security course in which he argued that while "it might seem intuitively obvious that if two people have never had the opportunity to prearrange an encryption method, then they will be unable to communicate securely over an insecure channel... I believe it is false". The project proposal was rejected by his professor as "not good enough". Merkle later submitted a paper to the Communications of the ACM in which he apologized for the lack of references, since he was unable to find any mention of the problem in the scientific literature, and the only source where he had even seen the problem raised was a science fiction story.
The paper was rejected with the comment that "Experience shows that it is extremely dangerous to transmit key information in the clear."
Merkle showed that one can design a protocol where Alice and Bob can agree on a shared secret key by investing some $T$ amount of computation, while an eavesdropper cannot learn the key without investing roughly $T^2$ computation. This quadratic gap (achieved by what are now known as "Merkle puzzles") is far from the exponential gap we would like, but it demonstrated that secure communication without a pre-shared secret is not impossible.
We only found out much later that in the late 1960's, a few years before Merkle, James Ellis of the British Intelligence agency GCHQ was having similar thoughts.
His curiosity was spurred by an old World War II manuscript from Bell Labs that suggested the following way for two people to communicate securely over a phone line: Alice would inject noise into the line, Bob would relay his messages, and then Alice would subtract the noise to get the signal. The idea is that an adversary listening on the line sees only the sum of Alice's and Bob's signals and doesn't know which part came from whom. This got James Ellis thinking about whether something like that could be achieved digitally. As he later recollected, in 1970 he realized that in principle this should be possible, since he could think of a hypothetical black box that anyone could use to encrypt a message, but that only the intended recipient, holding some extra secret, could use to decrypt it.
But among all those thinking of public key cryptography, probably the people who saw the furthest were two researchers at Stanford, Whit Diffie and Martin Hellman. They realized that with the advent of electronic communication, cryptography would find new applications beyond the military domain of spies and submarines. And they understood that in this new world of many users and point-to-point communication, cryptography would need to scale up. They envisioned an object which we now call a "trapdoor permutation", though they called it a "one way trapdoor function" or sometimes simply "public key encryption". This is a collection of permutations ${ p_k }$ such that $p_k$ is easy to compute for anyone who knows the index $k$, but inverting $p_k$ is infeasible unless one knows a corresponding secret "trapdoor".
However, Diffie and Hellman were in a position not unlike physicists who predict that a certain particle should exist, but without any experimental verification. Luckily they met Ralph Merkle, and his ideas about a probabilistic key exchange protocol, together with a suggestion from their Stanford colleague John Gill, inspired them to come up with what is today known as the Diffie-Hellman Key Exchange (which, unbeknownst to them, had been found two years earlier at GCHQ by Malcolm Williamson). They published their paper "New Directions in Cryptography" in 1976, and it is considered to have brought about the birth of modern cryptography. However, they still hadn't found their elusive trapdoor function. This was done the next year by Rivest, Shamir and Adleman, who came up with the RSA trapdoor function, which through the framework of Diffie and Hellman yielded not just encryption but also signatures (this was essentially the same function discovered earlier by Clifford Cocks at GCHQ, though as far as I can tell Cocks, Ellis and Williamson did not realize the application to digital signatures). From this point on began a flurry of advances in cryptography which hasn't really died down to this day.
Before we embark on the wonderful journey to public key cryptography, let's briefly look back and see what we learned about private key cryptography. This material is mostly covered in Chapters 1 to 9 of the Katz-Lindell (KL) book and Part I (Chapters 1-9) of the Boneh-Shoup (BS) book. Now would be a good time for you to read the corresponding proofs in one or both of these books. It is often helpful to see the same proof presented in a slightly different way. Below is a review of some of the various reductions we saw in class, with pointers to the corresponding sections in these books.
- Pseudorandom generator (PRG) length extension (from an $n+1$ output PRG to a $poly(n)$ output PRG): KL 7.4.2, BS 3.4.2
- PRG's to pseudorandom functions (PRF's): KL 7.5, BS 4.6
- PRF's to Chosen Plaintext Attack (CPA) secure encryption: KL 3.5.2, BS 5.5
- PRF's to secure Message Authentication Codes (MAC's): KL 4.3, BS 6.3
- MAC's + CPA secure encryption to chosen ciphertext attack (CCA) secure encryption: BS 4.5.4, BS 9.4
- Pseudorandom permutation (PRP's) to CPA secure encryption / block cipher modes: KL 3.5.2, KL 3.6.2, BS 4.1, 4.4, 5.4
- Hash function applications: fingerprinting, Merkle trees, passwords: KL 5.6, BS Chapter 8
- Coin tossing over the phone: we saw a construction in class that used a commitment scheme built out of a pseudorandom generator. This is shown in BS 3.12; KL 5.6.5 shows an alternative construction using random oracles.
- PRP's from PRF's: we only sketched the construction which can be found in KL 7.6 or BS 4.5
One major point we did not talk about in this course was one way functions. The definition of a one way function is quite simple:
A function $f:{0,1}^* \rightarrow {0,1}^*$ is a one way function if it is efficiently computable and for every efficient algorithm $A$, the probability over $x \leftarrow_R {0,1}^n$ that $A(f(x))$ outputs some $x'$ with $f(x')=f(x)$ is negligible.
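To make this concrete, here is a toy illustration (our own sketch, not part of the formal definition) of the classic candidate one-way function given by modular exponentiation:

```python
# Candidate one-way function: x -> g^x mod p. The forward direction is
# fast (repeated squaring via Python's three-argument pow), while
# inverting it is the discrete logarithm problem, for which no
# polynomial-time algorithm is known. Toy parameters for illustration.
p = 2**127 - 1   # a Mersenne prime (far too small for real security)
g = 3

def f(x: int) -> int:
    """The 'easy' direction: O(log x) modular multiplications."""
    return pow(g, x, p)

y = f(2**100 + 12345)   # computing f is instantaneous even for huge x
```

Brute-force inversion of `f` would require searching the exponent space, which is why functions of this shape are conjectured (but not proven) to be one way.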
The "OWF conjecture" is the conjecture that one way functions exist. It turns out to be a necessary and sufficient condition for much of private key cryptography. That is, the following theorem is known (by combining works of many people):
The following are equivalent: \
- One way functions exist \
- Pseudorandom generators (with non-trivial stretch) exist \
- Pseudorandom functions exist \
- CPA secure private key encryptions exist \
- CCA secure private key encryptions exist \
- Message Authentication Codes exist \
- Commitment schemes exist
The key result in the proof of this theorem is the result of Håstad, Impagliazzo, Levin and Luby that if one way functions exist then pseudorandom generators exist.
If you are interested in finding out more, Sections 7.2-7.4 in the KL book cover a special case of this theorem for the case that the one way function is a permutation on ${0,1}^n$ for every $n$.
Another topic we did not discuss in depth is attacks on private key cryptosystems. These attacks often work by "opening the black box" and looking at the internal operation of block ciphers or hash functions. One then assigns variables to various internal registers and looks for collections of inputs that satisfy some non-trivial relation among those variables. This is a rather vague description, but you can read KL Section 6.2.6 on linear and differential cryptanalysis, and BS Sections 3.7-3.9 and 4.3, for more information. See also this course by Adi Shamir. There is also the fascinating area of side channel attacks on both public and private key crypto.
In this lecture we will also discuss digital signatures, which are the public key analog of message authentication codes. Surprisingly, despite being a "public key" object, it is possible to base digital signatures on one-way functions alone (this is obtained using ideas of Lamport, Merkle, Goldwasser-Goldreich-Micali, Naor-Yung, and Rompel). However, these constructions are not very efficient (and this may be inherent), and so in practice people use digital signatures that are built using techniques similar to those used for public key encryption.
We now discuss how we define security for public key encryption. As mentioned above, it took quite a while for cryptographers to arrive at the "right" definition, but in the interest of time we will skip ahead to what by now is the standard basic notion (see also PKCfig{.ref}):
{#PKCfig .class width=300px height=300px}
A triple of efficient algorithms $(G,E,D)$ is a public key encryption scheme if it satisfies the following:

- $G$ is a probabilistic algorithm known as the key generation algorithm that on input $1^n$ outputs a distribution over pairs of keys $(e,d)$.

- $E$ is the encryption algorithm that takes a pair of inputs $e,m$ with $m\in {0,1}^n$ and outputs $c=E_e(m)$.

- $D$ is the decryption algorithm that takes a pair of inputs $d,c$ and outputs $m'=D_d(c)$.

- For every $m\in{0,1}^n$, with probability $1-negl(n)$ over the choice of $(e,d)$ output from $G(1^n)$ and the coins of $E$ and $D$, $D_d(E_e(m))=m$.
We say that a public key encryption scheme is CPA secure if every efficient adversary $A$ wins the following game with probability at most $1/2 + negl(n)$:

- $(e,d) \leftarrow_R G(1^n)$

- $A$ is given $e$ and outputs a pair of messages $m_0,m_1 \in {0,1}^n$.

- $A$ is given $c=E_e(m_b)$ for $b\leftarrow_R{0,1}$.

- $A$ outputs $b'\in{0,1}$ and wins if $b'=b$.
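The game above is easy to express in code. Below is a minimal sketch (the harness and names such as `cpa_game` are ours, not part of the formal definition), demonstrated against a trivially insecure "scheme" where encryption is the identity map, so the adversary wins every round:

```python
import secrets

def cpa_game(G, E, adversary, n=16):
    """One run of the public key CPA indistinguishability game.
    Returns True iff the adversary guesses the challenge bit b."""
    e, d = G(n)                      # the adversary only ever sees e
    m0, m1 = adversary.choose(e)     # adversary picks two messages
    b = secrets.randbits(1)
    c = E(e, m1 if b else m0)        # challenge ciphertext
    return adversary.guess(c) == b

# A trivially insecure "scheme": the ciphertext equals the plaintext.
G = lambda n: (None, None)
E = lambda e, m: m

class Distinguisher:
    def choose(self, e):
        self.m1 = b"one"
        return b"zero", self.m1
    def guess(self, c):
        return 1 if c == self.m1 else 0

# The distinguisher wins all runs, so this scheme is not CPA secure.
wins = sum(cpa_game(G, E, Distinguisher()) for _ in range(100))
```

A CPA secure scheme is exactly one for which every efficient adversary's win rate in this experiment stays negligibly close to $1/2$.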
Despite it being a "chosen plaintext attack", we don't explicitly give $A$ access to an encryption oracle: since $A$ knows the public key $e$, it can encrypt any plaintexts of its choice on its own.
One metaphor for a public key encryption is a "self-locking lock", where you don't need the key to lock it (you simply push the shackle until it clicks and locks), but you do need the key to unlock it.
So, if Alice generates a pair of keys $(e,d)$ and publishes the encryption key $e$, then anyone can use $e$ to send her encrypted messages, which only she, holding the decryption key $d$, can read.
Why would someone imagine that such a magical object could exist?
The writings of James Ellis as well as Diffie and Hellman suggest that their thought process was roughly as follows.
You imagine a "magic black box" $B$ such that, if all parties had access to $B$, then we could get public key encryption; you then try to find a way to replace the magical box with a concrete object. Here is one way to do so:
"Obfuscation based public key encryption":
Ingredients: (i) A pseudorandom permutation collection ${ p_k }_{k\in {0,1}^*}$ where for every $k\in {0,1}^n$, $p_k:{0,1}^n \rightarrow {0,1}^n$; (ii) an "obfuscating compiler", i.e., a polynomial-time computable $O:{0,1}^* \rightarrow {0,1}^*$ such that for every circuit $C$, $O(C)$ is a circuit that computes the same function as $C$ but reveals nothing else about it.
- Key generation: The private key is $k \leftarrow_R {0,1}^n$; the public key is $E=O(C_k)$ where $C_k$ is the circuit that maps $x\in {0,1}^n$ to $p_k(x)$.

- Encryption: To encrypt $m\in {0,1}^n$ with public key $E$, choose $IV \leftarrow_R {0,1}^n$ and output $(IV, E(m \oplus IV))$.

- Decryption: To decrypt $(IV,y)$ with key $k$, output $IV \oplus p_k^{-1}(y)$.
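To see the "shape" of this construction, here is a toy sketch in Python. Everything here is an assumption for illustration only: the "PRP" is a trivially insecure keyed permutation (multiplication by an odd number mod $2^N$), and the "obfuscated" public key is just a Python closure, which of course hides nothing:

```python
import secrets

N = 64  # block size in bits (toy)

def make_prp(k: int):
    """A *toy* keyed permutation p_k(x) = (a * x) mod 2^N with a odd
    (hence invertible mod 2^N). It is NOT a secure PRP; it only gives
    us an invertible keyed map to act out the construction above."""
    a = (k | 1) % 2**N
    a_inv = pow(a, -1, 2**N)
    return (lambda x: (a * x) % 2**N,        # forward direction p_k
            lambda y: (a_inv * y) % 2**N)    # inverse p_k^{-1}

def keygen():
    k = secrets.randbits(N)
    fwd, inv = make_prp(k)
    # The public key is meant to be O(C_k), an *inscrutable* program
    # for the forward direction. A Python closure is not inscrutable,
    # so this only illustrates the interface, not the security.
    return fwd, inv                  # (public "obfuscated" E, secret)

def encrypt(public_E, m: int):
    iv = secrets.randbits(N)
    return iv, public_E(m ^ iv)

def decrypt(secret_inv, ciphertext):
    iv, y = ciphertext
    return iv ^ secret_inv(y)
```

The point of the thought experiment is that if `keygen` could hand out a truly inscrutable program for the forward direction, anyone could encrypt while only the key holder could invert.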
Diffie and Hellman couldn't really find a way to make this work, but it convinced them that the notion of public key cryptography is not inherently impossible.
This concept of compiling a program into a functionally equivalent but "inscrutable" form is known as software obfuscation. It has turned out to be quite a tricky object to both define formally and achieve, but it serves as a very good intuition for what can be achieved, even if, as with the random oracle, this intuition can sometimes be too optimistic. (Indeed, if software obfuscation were possible then we could obtain a "random oracle like" hash function by taking the code of a function $f_k$ chosen from a pseudorandom function family and obfuscating it.)
We will not formally define obfuscators yet, but on an intuitive level an obfuscator would be a compiler that takes a program $P$ and produces a program $P'$ such that:
- $P'$ is not much slower/bigger than $P$ (e.g., as a Boolean circuit it would be at most polynomially larger).

- $P'$ is functionally equivalent to $P$, i.e., $P'(x)=P(x)$ for every input $x$.1

- $P'$ is "inscrutable" in the sense that seeing the code of $P'$ is not more informative than getting black box access to $P$.
Let me stress again that there is no known construction of obfuscators achieving something similar to this definition. In fact, the most natural formalization of this definition is impossible to achieve (as we might see later in this course). Only very recently has exciting progress finally been made towards obfuscation-like notions strong enough to achieve these and other applications, and it comes with significant caveats (see my survey on this topic).
However, when trying to stretch your imagination to consider the amazing possibilities that could be achieved in cryptography, it is not a bad heuristic to first ask yourself what could be possible if only everyone involved had access to a magic black box. It certainly worked well for Diffie and Hellman.
We would have loved to prove a theorem of the form:
"Theorem": If the PRG conjecture is true then there exists a CPA-secure public key encryption.
This would have meant that we do not need to assume anything more than the already minimal notion of pseudorandom generators (or equivalently, one way functions) to obtain public key cryptography. Unfortunately, no such result is known (and this may be inherent). The kind of results we know have the following form:
Theorem: If problem $X$ is hard then there exists a CPA-secure public key encryption.

Here $X$ is some concrete computational problem, such as factoring large integers or computing discrete logarithms in certain groups, that is conjectured to be hard. The known candidate problems broadly fall into two families: problems arising from the algebraic structure of certain groups (such as factoring and discrete logarithm), and problems arising from noisy linear equations over lattices or codes.2
We will start by describing cryptosystems based on the first family (which was discovered first and is also much more widely implemented), and in future lectures talk about the second family.
The Diffie-Hellman public key system is built on the presumed difficulty of the discrete logarithm problem:
For any prime $p$ and numbers $g \in \Z_p$ and $a \in {0,\ldots,p-1}$, the map $(g,a) \mapsto g^a \mod p$ is efficiently computable (in $poly(\log p)$ time, e.g., via repeated squaring). The discrete logarithm problem is the problem of computing, given $p$, $g$, and $h = g^a \mod p$, the exponent $a$ (or any $a'$ such that $g^{a'}=h \pmod{p}$). No polynomial-time algorithm for this problem is known.3
John Gill suggested to Diffie and Hellman that modular exponentiation could be a good source for the kind of "easy-to-compute but hard-to-invert" functions they were looking for. Diffie and Hellman based a public key encryption scheme on it as follows:
- The key generation algorithm, on input $n$, samples an $n$-bit prime number $p$ (i.e., between $2^{n-1}$ and $2^n$), a number $g\leftarrow_R \Z_p$, and $a \leftarrow_R {0,\ldots,p-1}$. We also sample a hash function $H:{0,1}^n\rightarrow{0,1}^\ell$. The public key $e$ is $(p,g,g^a,H)$ while the secret key $d$ is $a$.4

- The encryption algorithm, on input a message $m \in {0,1}^\ell$ and a public key $e=(p,g,h,H)$, will choose a random $b\leftarrow_R {0,\ldots,p-1}$ and output $(g^b,H(h^b)\oplus m)$.

- The decryption algorithm, on input a ciphertext $(f,y)$ and the secret key $a$, will output $H(f^a) \oplus y$.
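Here is a toy Python rendition of the scheme just described, with SHA-256 standing in for the random oracle $H$ and a prime far too small for real security (all parameter choices here are illustrative assumptions):

```python
import hashlib
import secrets

# Toy parameters: a Mersenne prime much too small for real use, and
# SHA-256 truncated to ELL bits standing in for the random oracle H.
p = 2**89 - 1
g = 3
ELL = 128   # message length in bits

def H(x: int) -> int:
    digest = hashlib.sha256(x.to_bytes(16, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - ELL)

def keygen():
    a = secrets.randbelow(p - 1)
    return (p, g, pow(g, a, p)), a      # public (p, g, g^a), secret a

def encrypt(pub, m: int):
    p_, g_, h = pub
    b = secrets.randbelow(p_ - 1)
    return pow(g_, b, p_), H(pow(h, b, p_)) ^ m   # (g^b, H(h^b) xor m)

def decrypt(a: int, ct):
    f, y = ct
    return H(pow(f, a, p)) ^ y          # H(f^a) = H(g^{ab}) = H(h^b)
```

Decryption works because both sides compute the same mask $H(g^{ab})$.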
The correctness of the decryption algorithm follows from the fact that $(g^a)^b = (g^b)^a = g^{ab} \pmod{p}$: the value $h^b$ hashed by the encryptor equals the value $f^a$ hashed by the decryptor, so the same mask $H(g^{ab})$ is XORed in and out.
If there is a polynomial time algorithm for the discrete logarithm problem then the Diffie-Hellman system is insecure.
Using a discrete logarithm algorithm, we can compute the private key $a$ from the public key $(p,g,g^a,H)$, and then decrypt any ciphertext exactly as the intended receiver would.
Unfortunately, no such result is known in the other direction. However, in the random oracle model, we can prove that this protocol is secure assuming that the task of computing $g^{ab}$ from $g^a$ and $g^b$ is hard.5
Computational Diffie-Hellman Assumption: Let $\mathbb{G}$ be a group whose elements can be described in $n$ bits, with an associative and commutative multiplication operation that can be computed in $poly(n)$ time. The Computational Diffie-Hellman (CDH) assumption holds with respect to the group $\mathbb{G}$ if for every generator (see below) $g$ of $\mathbb{G}$ and efficient algorithm $A$, the probability that on input $g,g^a,g^b$ (for random $a,b$), $A$ outputs the element $g^{ab}$ is negligible as a function of $n$.^[Formally, since it is an asymptotic statement, the CDH assumption needs to be defined with a sequence of groups. However, to make notation simpler we will ignore this issue, and use it only for groups (such as the numbers modulo some $n$ bit primes) where we can easily increase the "security parameter" $n$.]
In particular we can make the following conjecture:
Computational Diffie-Hellman Conjecture for mod prime groups: For a random $n$-bit prime $p$ and random $g \in \mathbb{Z}_p$, the CDH assumption holds with respect to the group $\mathbb{G} = { g^a \mod p ;|; a\in \mathbb{Z} }$.

That is, for every polynomial $q:\N\rightarrow\N$, if $n$ is large enough, then the probability that an algorithm running in time $q(n)$ outputs $g^{ab}$ on input $g,g^a,g^b$ (for random $a,b$) is smaller than $1/q(n)$.
Please take your time to re-read the above conjecture until you are sure you understand what it means. Victor Shoup's excellent book A Computational Introduction to Number Theory and Algebra, which is freely available online, has an in-depth treatment of groups, generators, and the discrete log and Diffie-Hellman problems. See also Chapters 10.4 and 10.5 in the Boneh-Shoup book, and Chapters 8.3 and 11.4 in the Katz-Lindell book.
Suppose that the Computational Diffie-Hellman Conjecture for mod prime groups is true. Then, the Diffie-Hellman public key encryption is CPA secure in the random oracle model.
For CPA security we need to prove that for every pair of messages $m,m' \in {0,1}^\ell$ (and a fixed public key and random oracle $H$), the following two distributions are computationally indistinguishable:
- $(g^a,g^b,H(g^{ab})\oplus m)$ for $a,b$ chosen uniformly and independently in $\Z_{p}$.

- $(g^a,g^b,H(g^{ab})\oplus m')$ for $a,b$ chosen uniformly and independently in $\Z_{p}$.
(can you see why this implies CPA security? you should pause here and verify this!)
We make the following claim:
CLAIM: For a fixed $g,p$ and random oracle $H$, under the CDH assumption, the distribution $(g^a,g^b,H(g^{ab}))$ (for random $a,b$) is computationally indistinguishable from $(g^a,g^b,U_\ell)$, where $U_\ell$ denotes the uniform distribution over ${0,1}^\ell$.
Proof of claim: The proof is simple. We claim that under the assumptions above, an efficient adversary given $(g^a,g^b)$ queries the random oracle $H$ on the input $g^{ab}$ with only negligible probability: otherwise, we could use that adversary to compute $g^{ab}$ from $g^a,g^b$, violating the CDH assumption. But if the adversary never queries $H$ on $g^{ab}$, then the value $H(g^{ab})$ is uniform and independent of everything else in the adversary's view, and hence indistinguishable from $U_\ell$.
Now given the claim, we can complete the proof of security via the following hybrids. Define the following "hybrid" distributions (where in all cases $a,b$ are chosen uniformly and independently in $\Z_p$):
- $H_0$: $(g^a,g^b,H(g^{ab}) \oplus m)$

- $H_1$: $(g^a,g^b,U_\ell \oplus m)$

- $H_2$: $(g^a,g^b,U_\ell \oplus m')$

- $H_3$: $(g^a,g^b,H(g^{ab}) \oplus m')$
The claim implies that $H_0$ is computationally indistinguishable from $H_1$: the claim lets us replace $(g^a,g^b,H(g^{ab}))$ with $(g^a,g^b,U_\ell)$, and XORing the fixed string $m$ into the third component preserves indistinguishability.

The distributions $H_1$ and $H_2$ are identical, since XORing a fixed string into a uniform string yields a uniform string.

The distributions $H_2$ and $H_3$ are computationally indistinguishable, again by the claim (this time applied with the message $m'$).

Together these imply that $H_0$ is computationally indistinguishable from $H_3$, which completes the proof.
As mentioned, the Diffie-Hellman system can be instantiated with many variants of Abelian groups. Of course, for some of those groups the discrete logarithm problem might be easy, and so they would be inappropriate to use for this system. One variant that has been proposed is elliptic curve cryptography. This uses a group consisting of points of the form $(x,y)$ that satisfy an equation of the form $y^2 = x^3 + ax + b$ over a finite field, with an appropriately defined addition operation; for such groups the best known discrete logarithm algorithms are much worse than those for $\Z_p^*$, which allows significantly smaller keys.
In most of the cryptography literature the protocol above is called the Diffie-Hellman Key Exchange protocol, and when considered as a public key system it is sometimes known as ElGamal encryption.6 The reason for this mostly stems from the early confusion about the right security definitions. Diffie and Hellman thought of encryption as a deterministic process, and so they called their scheme a "key exchange protocol". The work of Goldwasser and Micali showed that encryption must be probabilistic for security. Also, because of efficiency considerations, these days public key encryption is mostly used as a mechanism to exchange a key for a private key encryption scheme, which is then used for the bulk of the communication. Together this means that there is not much point in distinguishing between a two-message key exchange algorithm and a public key encryption scheme.
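As a key exchange, the protocol consists of a single group element sent in each direction; a minimal sketch with toy parameters (far too small for real security):

```python
import secrets

p, g = 2**89 - 1, 3   # toy group parameters

# Alice and Bob each send one group element over the public channel.
a = secrets.randbelow(p - 1)
A = pow(g, a, p)                 # Alice -> Bob: A = g^a
b = secrets.randbelow(p - 1)
B = pow(g, b, p)                 # Bob -> Alice: B = g^b

# Both sides derive the same secret g^{ab}, which an eavesdropper who
# only sees (g, A, B) cannot compute under the CDH assumption.
key_alice = pow(B, a, p)
key_bob = pow(A, b, p)
```

In practice the shared value would be hashed into a key for a private key encryption scheme.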
To sample a random $n$-bit prime, we can simply repeat the following process: pick a random $n$-bit number $p$, test whether it is prime, and output it if so. For this to be efficient we need two facts:

- Efficient testing: That there is a $poly(n)$-time algorithm to test whether an $n$-bit number is prime. Such algorithms exist: probabilistic tests such as Miller-Rabin run in polynomial time with negligible error, and the AKS algorithm is even deterministic.

- Prime density: That the probability that a random $n$-bit number is prime is at least $1/poly(n)$ (in fact it is $\Omega(1/n)$), so we expect to succeed after polynomially many attempts. This follows from the following bound:

The number of primes between $1$ and $N$ is $\Omega(N/\log N)$.
Recall that the least common multiple (LCM) of two or more integers $a_1,\ldots,a_t$ is the smallest number that is a multiple of all of them. One way to prove the bound above is via the following two claims:

CLAIM 1: If there are $k$ primes between $1$ and $N$, then $lcm(1,\ldots,N) \leq N^k$.

CLAIM 2: If $N$ is sufficiently large, then $lcm(1,\ldots,N) \geq 2^{N-2}$.

The two claims immediately imply the result, since together they give $N^k \geq 2^{N-2}$, and taking logarithms yields $k \geq (N-2)/\log N = \Omega(N/\log N)$.

Proof of CLAIM 1: Let $p_1,\ldots,p_k$ be the primes between $1$ and $N$, and for each $i$ let $e_i$ be the largest integer with $p_i^{e_i} \leq N$. Then $lcm(1,\ldots,N) = \prod_{i=1}^k p_i^{e_i} \leq N^k$.

Proof of CLAIM 2: Consider the integral $I = \int_0^1 x^n(1-x)^n dx$ where $n = \lfloor (N-1)/2 \rfloor$. Expanding $(1-x)^n$ by the binomial theorem and integrating term by term shows that $I$ is a sum of fractions of the form $\pm\binom{n}{k}/(n+k+1)$ with $n+k+1 \leq N$, and hence $I \cdot lcm(1,\ldots,N)$ is a positive integer; in particular $I \geq 1/lcm(1,\ldots,N)$. On the other hand, since $x(1-x) \leq 1/4$ for all $x\in[0,1]$, we have $I \leq 4^{-n}$. Combining the two bounds, $lcm(1,\ldots,N) \geq 4^n \geq 2^{N-2}$.
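The two facts above give a simple rejection-sampling procedure for generating primes. Here is a sketch (Miller-Rabin is the standard probabilistic test; the structure and parameter choices are ours):

```python
import secrets

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin primality test; error probability <= 4**(-rounds)."""
    if n < 2:
        return False
    for q in (2, 3, 5, 7, 11, 13):
        if n % q == 0:
            return n == q
    # Write n - 1 = d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2          # witness in [2, n-2]
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                          # a witnesses compositeness
    return True

def random_prime(n_bits: int) -> int:
    """Rejection sampling: by the prime density bound we expect to
    succeed after O(n_bits) attempts."""
    while True:
        # Force the top bit (exactly n_bits bits) and the low bit (odd).
        candidate = secrets.randbits(n_bits) | (1 << (n_bits - 1)) | 1
        if is_probable_prime(candidate):
            return candidate
```

This is how libraries generate primes for Diffie-Hellman and RSA parameters, though production implementations add further checks.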
If you haven't seen group theory, it might be useful for you to do a quick review. We will not use much group theory and mostly use the theory of finite commutative (also known as Abelian) groups (in fact often cyclic) which are such a baby version that it might not be considered true "group theory" by many group theorists. Shoup's excellent book contains everything we need to know (and much more than that). What you need to remember is the following:
- A finite commutative group $\mathbb{G}$ is a finite set together with a multiplication operation that satisfies $a\cdot b = b\cdot a$ and $(a\cdot b)\cdot c = a\cdot (b\cdot c)$.

- $\mathbb{G}$ has a special element known as $1$, where $g1=1g=g$ for every $g\in\mathbb{G}$, and for every $g\in \mathbb{G}$ there exists an element $g^{-1}\in \mathbb{G}$ such that $gg^{-1}=1$.

- For every $g\in \mathbb{G}$, the order of $g$, denoted $order(g)$, is the smallest positive integer $a$ such that $g^a=1$.
The following basic facts are all not too hard to prove and would be useful exercises:
- For every $g\in \mathbb{G}$, the map $a \mapsto g^a$ is a $k$-to-$1$ map from ${0,\ldots,|\mathbb{G}|-1}$ to $\mathbb{G}$ where $k=|\mathbb{G}|/order(g)$. See footnote for a hint.^[For every $f\in \mathbb{G}$, you can show a one-to-one and onto mapping between the set ${ a : g^a = 1 }$ and the set ${b : g^b= f }$ by choosing some element $b$ from the latter set and looking at the map $a \mapsto a+b \mod |\mathbb{G}|$.]

- As a corollary, the order of $g$ is always a divisor of $|\mathbb{G}|$. This is a special case of a more general phenomenon: the set ${ g^a ;|; a\in\mathbb{Z} }$ is a subset of the group $\mathbb{G}$ that is closed under multiplication, and such subsets are known as subgroups of $\mathbb{G}$. It is not hard to show (using the same approach as above) that for every group $\mathbb{G}$ and subgroup $\mathbb{H}$, the size of $\mathbb{H}$ divides the size of $\mathbb{G}$. This is known as Lagrange's Theorem in group theory.

- An element $g$ of $\mathbb{G}$ is called a generator if $order(g)=|\mathbb{G}|$. A group is called cyclic if it has a generator. If $\mathbb{G}$ is cyclic then there is a (not necessarily efficiently computable) isomorphism $\phi:\mathbb{G}\rightarrow\Z_{|\mathbb{G}|}$, which is a one-to-one and onto map satisfying $\phi(g\cdot h)=\phi(g)+\phi(h)$ for every $g,h\in\mathbb{G}$.
When using a group $\mathbb{G}$ for the Diffie-Hellman protocol, we will want $g$ to be a generator of $\mathbb{G}$ (or at least of a large subgroup of it), so that the map $a \mapsto g^a$ takes many possible values.
It is not hard to show that a random element of a cyclic group $\mathbb{G}$ is a generator with probability $\varphi(|\mathbb{G}|)/|\mathbb{G}|$, which is at least $\Omega(1/\log\log|\mathbb{G}|)$.
Try to stop here and verify all the facts on groups mentioned above.
Public key encryption solves the confidentiality problem, but we still need to solve the authenticity or integrity problem, which might be even more important in practice. That is, suppose Alice wants to endorse a message $m$ so that everyone can verify it came from her, but no one can forge her endorsement on a message she did not approve. This is exactly the problem solved by digital signatures.
A triple of algorithms $(G,S,V)$ is a digital signature scheme if it satisfies the following:
- On input $1^n$, the probabilistic key generation algorithm $G$ outputs a pair $(s,v)$ of keys, where $s$ is the private signing key and $v$ is the public verification key.

- On input a message $m$ and the signing key $s$, the signing algorithm $S$ outputs a string $\sigma = S_{s}(m)$ such that with probability $1-negl(n)$, $V_v(m,S_s(m))=1$.

- Every efficient adversary $A$ wins the following game with at most negligible probability:

  - The keys $(s,v)$ are chosen by the key generation algorithm.

  - The adversary gets the inputs $1^n$, $v$, and black box access to the signing algorithm $S_s(\cdot)$.

  - The adversary wins if they output a pair $(m^*,\sigma^*)$ such that $m^*$ was not queried before to the signing algorithm and $V_v(m^*,\sigma^*)=1$.
Just like for MACs (see MACdef{.ref}), our definition of security for digital signatures with respect to a chosen message attack does not preclude the adversary's ability to produce a new signature for a message whose signature it has already seen. Just like for MACs, people sometimes consider the notion of strong unforgeability, which requires that it be infeasible for the adversary to produce any new message-signature pair (even if the message itself was queried before). Some signature schemes (such as the full domain hash and the DSA scheme) satisfy this stronger notion while others do not. However, just like for MACs, it is possible to transform any signature scheme with standard security into one that satisfies this stronger unforgeability condition.
The Diffie-Hellman protocol can be turned into a signature scheme. This was first done by ElGamal, and a variant of his scheme was developed by the NSA and standardized by NIST as the Digital Signature Algorithm (DSA) standard. When based on an elliptic curve this is known as ECDSA. The starting point is the following generic idea of how to turn an encryption scheme into an identification protocol.
If Alice published a public encryption key $e$, then she can use it to prove her identity to Bob as follows: Bob picks a random string $x$, encrypts it as $c=E_e(x)$, and sends $c$ to Alice; Alice decrypts and sends back $x'=D_d(c)$. If $x'=x$, then Bob is convinced he is talking to someone who knows the secret key corresponding to $e$.
However, this falls short of a signature scheme in two aspects:
- This is only an identification protocol and does not allow Alice to endorse a particular message $m$.

- This is an interactive protocol, and so Alice cannot generate a static signature based on $m$ that can be verified by any party without further interaction.
The first issue is not so significant, since we can always make the challenge depend on the particular message $m$ that Alice wishes to endorse (for example, by deriving the challenge from a hash of $m$).
The second issue is more serious.
We could imagine Alice trying to run this protocol on her own by generating the ciphertext and then decrypting it, and then sending over the transcript to Bob.
But this does not really prove that she knows the corresponding private key.
After all, even without knowing $d$, anyone can produce a perfectly valid-looking transcript of this protocol by working backwards: first choose the response $x$, and then compute the "challenge" $c=E_e(x)$ using the public key.
DSA Signatures: The DSA signature algorithm works as follows: (See also Section 12.5.2 in the KL book)
- Key generation: Pick a generator $g$ for $\mathbb{G}$ and $a\in {0,\ldots,|\mathbb{G}|-1}$, and let $h=g^a$. Pick $H:{0,1}^\ell\rightarrow\mathbb{G}$ and $F:\mathbb{G}\rightarrow\mathbb{G}$ to be some functions that can be thought of as "hash functions".7 The public key is $(g,h)$ (as well as the functions $H,F$) and the secret key is $a$.

- Signature: To sign a message $m$ with the key $a$, pick $b$ at random, let $f=g^b$, and then let $\sigma = b^{-1}[H(m)+a\cdot F(f)]$, where all computation is done modulo $|\mathbb{G}|$. The signature is $(f,\sigma)$.

- Verification: To verify a signature $(f,\sigma)$ on a message $m$, check that $\sigma\neq 0$ and $f^\sigma=g^{H(m)}h^{F(f)}$.
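Here is a toy Python rendition of the scheme. The parameters are our own illustrative choices: $g=4$ generates the subgroup of prime order $q=1019$ inside $\Z_p^*$ for $p=2039=2q+1$, SHA-256 stands in for $H$, and $F$ is a trivial reduction mod $q$ (real DSA uses far larger groups and standardized functions):

```python
import hashlib
import secrets

# Toy prime-order group: g = 4 generates the order-q subgroup of Z_p^*
# with p = 2039 = 2q + 1 and q = 1019 prime.
p, q, g = 2039, 1019, 4

def H(m: bytes) -> int:
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % q

def F(f: int) -> int:
    return f % q        # a simple stand-in "conversion" function

def keygen():
    a = secrets.randbelow(q - 1) + 1
    return pow(g, a, p), a            # public h = g^a, secret a

def sign(a: int, m: bytes):
    while True:
        b = secrets.randbelow(q - 1) + 1
        f = pow(g, b, p)
        sigma = pow(b, -1, q) * (H(m) + a * F(f)) % q
        if sigma != 0:                # retry in the degenerate case
            return f, sigma

def verify(h: int, m: bytes, sig) -> bool:
    f, sigma = sig
    lhs = pow(f, sigma, p)
    rhs = pow(g, H(m), p) * pow(h, F(f), p) % p
    return sigma != 0 and lhs == rhs
```

Correctness follows since $f^\sigma = g^{b\sigma} = g^{H(m)+a\cdot F(f)} = g^{H(m)}h^{F(f)}$, with exponents taken modulo $q$.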
You should pause here and verify that this is indeed a valid signature scheme, in the sense that an honestly generated signature always passes verification: if $f=g^b$ and $\sigma=b^{-1}[H(m)+a\cdot F(f)]$, then $f^\sigma = g^{b\sigma} = g^{H(m)+a\cdot F(f)} = g^{H(m)}h^{F(f)}$.
Very roughly speaking, the idea behind security is that on one hand $\sigma$ does not reveal the values of $a$ and $b$, since these are "masked" by the effectively random values $H(m)$ and $F(f)$; on the other hand, if an adversary manages to produce a valid signature on a new message, then (treating $H$ and $F$ as random oracles) the verification equation yields linear equations modulo $|\mathbb{G}|$ in the unknowns $a$ and $b$, which we can hope to solve, thereby recovering the discrete logarithm $a$.
Before seeing the actual proof, it is a very good exercise to try to see how to convert the intuition above into a formal proof.
Suppose that the discrete logarithm assumption holds for the group $\mathbb{G}$: that is, no efficient algorithm can recover a random $a$ from $g^a$ except with negligible probability. Then the DSA signature scheme is secure against chosen message attacks when $H$ and $F$ are modeled as random oracles.
Suppose, for the sake of contradiction, that there was an efficient adversary $A'$ that wins the chosen message attack game against the DSA scheme with non-negligible probability. We will use $A'$ to construct an efficient algorithm for the discrete logarithm problem.
Recall that in a chosen message attack in the random oracle model, the adversary interacts with a signature oracle, as well as with oracles that compute the functions $H$ and $F$.
Note that we can simulate the result of the experiment without knowing the secret key $a$: to answer a signing query for a message $m$, we pick $\tau,\rho,\sigma$ at random, set $f=g^\tau h^\rho$, and "program" the random oracles so that $H(m)=\sigma\tau$ and $F(f)=\sigma\rho$ (all modulo $|\mathbb{G}|$). Then $f^\sigma = g^{\sigma\tau}h^{\sigma\rho} = g^{H(m)}h^{F(f)}$, so $(f,\sigma)$ passes verification and is distributed as in the real attack.
We let $(m^*,f^*,\sigma^*)$ be the message and signature that the adversary $A'$ outputs at the end of a successful attack.
We can assume without loss of generality that $f^*$ is queried to the $F$ oracle at some point during the attack (otherwise $F(f^*)$ is a fresh random value, and the verification equation would hold with only negligible probability). We consider two cases:
Case I: The value $f^*$ was first queried to $F$ in the course of producing a signature $(f^*,\sigma)$ on some message $m\neq m^*$.

Case II: The value $f^*$ was first queried to $F$ by the adversary itself.
If Case I happens with non-negligible probability, then we know that the value $f^*$ is queried when producing the signature $(f^*,\sigma)$ for some message $m \neq m^*$, and so we know the following two equations hold:
$$ g^{H(m)}h^{F(f^*)} = (f^*)^{\sigma}$$
and
$$ g^{H(m^*)}h^{F(f^*)}= (f^*)^{\sigma^*}$$
Taking logarithms to base $g$, we get the following equations modulo $|\mathbb{G}|$ on $a = \log_g h$ and $b=\log_g f^*$:

$$H(m) + a\cdot F(f^*) = b\sigma$$

$$H(m^*) + a\cdot F(f^*) = b\sigma^*$$

Subtracting, $H(m)-H(m^*) = b(\sigma-\sigma^*)$. Since $H$ is a random oracle, $H(m)\neq H(m^*)$ (and hence $\sigma\neq\sigma^*$) except with negligible probability, so we can solve for $b$, and then for $a$, recovering the discrete logarithm of $h$.
If Case II happens, then we split it into two cases as well. Case IIa is that this happens and $F(f^*)$ is queried before $H(m^*)$ is queried, and Case IIb is that this happens and $F(f^*)$ is queried after $H(m^*)$ is queried.
We start by considering the setting that Case IIa happens with non-negligible probability. In this case, at the time the adversary queries $H(m^*)$, the values $f^*$ and $F(f^*)$ have already been fixed, and the random choice of $H(m^*)$ is made afterwards; one can then embed the discrete logarithm challenge into the oracle answers and extract $a$ from the adversary's valid forgery.
If Case IIb happens with non-negligible probability, then a similar (if slightly more involved) argument applies, with the roles of the two oracle queries reversed.
The bottom line is that we obtain a probabilistic polynomial time algorithm that on input a generator $g$ and $h=g^a$ recovers the discrete logarithm $a$ with non-negligible probability, contradicting the discrete logarithm assumption.
In this lecture both our encryption scheme and digital signature schemes were not proven secure under a well stated computational assumption, but rather used the random oracle model heuristic. However, it is known how to obtain schemes that do not rely on this heuristic, and we will see such schemes later on in this course.
Let us discuss briefly how public key cryptography is used to secure web traffic through the SSL/TLS protocol that we all use when we browse https:// URLs.
The security this achieves is quite amazing. No matter what wired or wireless network you are using, no matter what country you are in, as long as your device (e.g., phone/laptop/etc.) and the server you are talking to (e.g., Google, Amazon, Microsoft, etc.) are functioning properly, you can communicate securely without any party in the middle being able to either learn or modify the contents of your interaction.8
In the web setting, there are servers who have public keys, and users who generally don't have such keys. Ideally, as a user, you should already know the public keys of all the entities you communicate with, e.g., amazon.com
, google.com
, etc. However, how are you going to learn those public keys?
The traditional answer was that, because these keys are public, they are much easier to communicate, and the servers could even post them as ads in the New York Times. Of course these days everyone reads the Times through nytimes.com
and so this seems like a chicken-and-egg type of problem.
The solution goes back again to Archimedes' quote: "Give me a fulcrum, and I shall move the world." The idea is that trust can be transitive. Suppose you have a Mac. Then you have already trusted Apple with quite a bit of your personal information, and so you might be fine if this Mac came pre-installed with the Apple public key, which you trust to be authentic. Now, suppose that you want to communicate with Amazon.com
. Now, you might not know the correct public key for Amazon, but Apple surely does. So Apple can supply Amazon with a signed message to the effect of
"I Apple certify that the public key of Amazon.com is
30 82 01 0a 02 82 01 01 00 94 9f 2e fd 07 63 33 53 b1 be e5 d4 21 9d 86 43 70 0e b5 7c 45 bb ab d1 ff 1f b1 48 7b a3 4f be c7 9d 0f 5c 0b f1 dc 13 15 b0 10 e3 e3 b6 21 0b 40 b0 a3 ca af cc bf 69 fb 99 b8 7b 22 32 bc 1b 17 72 5b e5 e5 77 2b bd 65 d0 03 00 10 e7 09 04 e5 f2 f5 36 e3 1b 0a 09 fd 4e 1b 5a 1e d7 da 3c 20 18 93 92 e3 a1 bd 0d 03 7c b6 4f 3a a4 e5 e5 ed 19 97 f1 dc ec 9e 9f 0a 5e 2c ae f1 3a e5 5a d4 ca f6 06 cf 24 37 34 d6 fa c4 4c 7e 0e 12 08 a5 c9 dc cd a0 84 89 35 1b ca c6 9e 3c 65 04 32 36 c7 21 07 f4 55 32 75 62 a6 b3 d6 ba e4 63 dc 01 3a 09 18 f5 c7 49 bc 36 37 52 60 23 c2 10 82 7a 60 ec 9d 21 a6 b4 da 44 d7 52 ac c4 2e 3d fe 89 93 d1 ba 7e dc 25 55 46 50 56 3e e0 f0 8e c3 0a aa 68 70 af ec 90 25 2b 56 f6 fb f7 49 15 60 50 c8 b4 c4 78 7a 6b 97 ec cd 27 2e 88 98 92 db 02 03 01 00 01
"
Such a message is known as a certificate, and it allows you to extend your trust in Apple to a trust in Amazon. Now when your browser communicates with Amazon, it can request this message, and if it is not present, it will not continue with the interaction, or will at least display some warning. Clearly a person in the middle can stop this message from travelling and hence prevent the interaction from continuing, but they cannot spoof the message and send a certificate for their own public key, unless they know Apple's secret key. (In today's actual implementation, for various business and other reasons, the trusted keys that come pre-installed in browsers and devices do not belong to Apple or Microsoft but rather to particular companies known as certificate authorities, such as Verisign. The security of these certificate authorities' private keys is crucial to the security of the whole protocol, and it has been attacked before.)
Using certificates, we can assume that Bob the user has the public verification key $v$ of Alice the server. Now Alice can send Bob a public encryption key $e$ signed under $v$ (and hence guaranteed to be authentic), and Bob can use $e$ to encrypt a session key for a private key encryption scheme, which then protects the bulk of the communication.9
This is, at a very high level, the SSL/TLS protocol, but there are many details inside it, including the exact security notions needed from the encryption, how the two parties negotiate which cryptographic algorithm to use, and more. All these issues can be, and have been, used for attacks on this protocol. For two recent discussions see this blog post and this website.
Example: Here is the list of certificate authorities that were trusted by default (as of spring 2016) by Mozilla products: Actalis, Amazon, AS Sertifitseerimiskeskuse (SK), Atos, Autoridad de Certificacion Firmaprofesional, Buypass, CA Disig a.s., Camerfirma, Certicámara S.A., Certigna, Certinomis, certSIGN, China Financial Certification Authority (CFCA), China Internet Network Information Center (CNNIC), Chunghwa Telecom Corporation, Comodo, ComSign, Consorci Administració Oberta de Catalunya (Consorci AOC, CATCert), Cybertrust Japan / JCSI, D-TRUST, Deutscher Sparkassen Verlag GmbH (S-TRUST, DSV-Gruppe), DigiCert, DocuSign (OpenTrust/Keynectis), e-tugra, EDICOM, Entrust, GlobalSign, GoDaddy, Government of France (ANSSI, DCSSI), Government of Hong Kong (SAR), Hongkong Post, Certizen, Government of Japan, Ministry of Internal Affairs and Communications, Government of Spain, Autoritat de Certificació de la Comunitat Valenciana (ACCV), Government of Taiwan, Government Root Certification Authority (GRCA), Government of The Netherlands, PKIoverheid, Government of Turkey, Kamu Sertifikasyon Merkezi (Kamu SM), HARICA, IdenTrust, Izenpe S.A., Microsec e-Szignó CA, NetLock Ltd., PROCERT, QuoVadis, RSA the Security Division of EMC, SECOM Trust Systems Co. Ltd., Start Commercial (StartCom) Ltd., Swisscom (Switzerland) Ltd, SwissSign AG, Symantec / GeoTrust, Symantec / Thawte, Symantec / VeriSign, T-Systems International GmbH (Deutsche Telekom), Taiwan-CA Inc. (TWCA), TeliaSonera, Trend Micro, Trustis, Trustwave, TurkTrust, Unizeto Certum, Visa, Web.com, Wells Fargo Bank N.A., WISeKey, WoSign CA Limited
I record here an alternative way to show that the fraction of primes in $[2^n]$ is $\Omega(1/n)$.

The probability that a random $n$-bit number is prime is at least $\Omega(1/n)$.
Let $N=2^n$ and consider the binomial coefficient $\binom{2N}{N}$. On one hand, $\binom{2N}{N} \geq 2^{2N}/(2N+1)$, since it is the largest of the $2N+1$ coefficients in the binomial expansion of $2^{2N}=(1+1)^{2N}$. On the other hand, every prime power $p^e$ dividing $\binom{2N}{N}$ satisfies $p^e \leq 2N$, and hence $\binom{2N}{N} \leq (2N)^{\pi(2N)}$, where $\pi(M)$ denotes the number of primes between $1$ and $M$.

Thus, taking logarithms, $\pi(2N) \geq (2N - \log(2N+1))/\log(2N) = \Omega(N/n)$, so a random $n$-bit number is prime with probability $\Omega(1/n)$.
Footnotes
-
For simplicity, assume that the program $P$ is side effect free and hence it simply computes some function, say from ${0,1}^n$ to ${0,1}^\ell$ for some $n,\ell$. ↩
-
There have been some other more exotic suggestions for public key encryption (including some by yours truly as well as suggestions such as the isogeny star problem , though see also this), but they have not yet received wide scrutiny. ↩
-
The running time of the best known algorithms for computing the discrete logarithm modulo $n$ bit primes is $2^{f(n)\cdot n^{1/3}}$ where $f(n)$ is a function that depends polylogarithmically on $n$. If $f(n)$ were equal to $1$ then we'd need numbers of $128^3 \approx 2\cdot 10^6$ bits to get $128$ bits of security, but because $f(n)$ is larger than one, the current estimates are that we need an $n=3072$ bit key to get $128$ bits of security. Still, the existence of such a non-trivial algorithm means that we need much larger keys than those used for private key systems to get the same level of security. In particular, to double the estimated security to $256$ bits, NIST recommends that we multiply the RSA keysize five-fold to $15,360$. (The same document also says that SHA-256 gives $256$ bits of security as a pseudorandom generator but only $128$ bits when used to hash documents for digital signatures; can you see why?) ↩
-
Formally the secret key should contain all the information in the public key plus the extra secret information, but we omit the public information for simplicity of notation. ↩
-
One can get security results for this protocol without a random oracle if we assume a stronger variant known as the Decisional Diffie-Hellman (DDH) assumption. ↩
-
ElGamal's actual contribution was to design a signature scheme based on the Diffie-Hellman problem, a variant of which is the Digital Signature Algorithm (DSA) described below. ↩
-
It is a bit cumbersome, but not so hard, to transform functions that map strings to strings to functions whose domain or range are group elements. As noted in the KL book, in the actual DSA protocol $F$ is not a cryptographic hash function but rather some very simple function that is still assumed to be "good enough" for security. ↩
-
They are still able to learn that such an interaction took place and the number of bits exchanged. Preventing these kinds of attacks is more subtle; approaches for solutions are known as steganography and anonymous routing. ↩
-
If this key is ephemeral (generated on the spot for this interaction and deleted afterward), then this has the benefit of ensuring the forward secrecy property: even if some entity that is in the habit of recording all communication later finds out Alice's private signing key, it still will not be able to decrypt the information. In applied crypto circles this property is somewhat misnamed "perfect forward secrecy" and associated with the Diffie-Hellman key exchange (or its elliptic curve variants), since in those protocols there is not much additional overhead for implementing it (see this blog post). The importance of forward secrecy was emphasized by the discovery of the Heartbleed vulnerability (see this paper), which allowed an attacker to learn the private key of the server via a buffer-overflow attack in OpenSSL. ↩