<h1>SSL/TLS and HTTPS</h1>
<p><strong>Note:</strong> These lecture notes were slightly modified from the ones posted on the 6.858 <a href="http://css.csail.mit.edu/6.858/2014/schedule.html">course website</a> from 2014.</p>
<p>This lecture is about two related topics:</p>
<ul>
<li>How to cryptographically protect network communications,
at a larger scale than Kerberos?
<ul>
<li>Technique: certificates</li>
</ul></li>
<li>How to integrate cryptographic protection of network traffic
into the web security model?
<ul>
<li>HTTPS, Secure cookies, etc.</li>
</ul></li>
</ul>
<h2>Symmetric vs. asymmetric encryption</h2>
<p><strong>Recall:</strong> two kinds of encryption schemes.</p>
<ul>
<li><code>E</code> is encrypt, <code>D</code> is decrypt</li>
<li>Symmetric key cryptography means <em>same key</em> is used to encrypt & decrypt
<ul>
<li><code>ciphertext = E_k(plaintext)</code></li>
<li><code>plaintext = D_k(ciphertext)</code></li>
</ul></li>
<li>Asymmetric key (public-key) cryptography: encrypt & decrypt keys differ
<ul>
<li><code>ciphertext = E_PK(plaintext)</code></li>
<li><code>plaintext = D_SK(ciphertext)</code></li>
<li><code>PK</code> and <code>SK</code> are called public and secret (private) key, respectively</li>
</ul></li>
<li>Public-key cryptography is orders of magnitude slower than symmetric</li>
<li>Encryption provides data <em>secrecy</em>, however, we often also want <em>integrity</em>.</li>
<li><em>Message authentication code (MAC)</em> with symmetric keys can provide integrity.
<ul>
<li>Look up <em>HMAC</em> if you're interested in more details.</li>
</ul></li>
<li>Can use public-key crypto to sign and verify, almost the opposite:
<ul>
<li>Use secret key to generate signature (compute <code>D_SK</code>)</li>
<li>Use public key to check signature (compute <code>E_PK</code>)</li>
</ul></li>
</ul>
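<p>As a concrete illustration of the MAC idea above, here is a minimal sketch using Python's standard <code>hmac</code> module (HMAC-SHA256); the key and messages are made up for the example:</p>

```python
import hmac
import hashlib

def mac(key: bytes, message: bytes) -> bytes:
    # Tag = HMAC-SHA256(key, message); only holders of the key can compute it.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids leaking the tag via timing.
    return hmac.compare_digest(mac(key, message), tag)

key = b"shared-secret-key"
msg = b"transfer $100 to alice"
tag = mac(key, msg)

assert verify(key, msg, tag)                              # intact message verifies
assert not verify(key, b"transfer $100 to mallory", tag)  # tampering is detected
```

<p>Note that a MAC gives integrity but not secrecy; in practice the message would also be encrypted.</p>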
<h2>Kerberos</h2>
<ul>
<li>Central KDC knows all principals and their keys.</li>
<li>When <code>A</code> wants to talk to <code>B</code>, <code>A</code> asks the KDC to issue a ticket.</li>
<li>Ticket contains a session key for <code>A</code> to talk to <code>B</code>, generated by KDC.</li>
</ul>
<h3><em>Why is Kerberos not enough?</em></h3>
<ul>
<li>E.g., why isn't the web based on Kerberos?</li>
<li>Might not have a single KDC trusted to generate session keys.</li>
<li>Not everyone might have an account on this single KDC.</li>
<li>KDC might not scale if users contact it every time they visit a web site.</li>
<li>Unfortunate that KDC knows what service each user is connecting to.
<ul>
<li>These limitations are largely inevitable with symmetric encryption.</li>
</ul></li>
</ul>
<h3><em>Alternative plan:</em> Use public key encryption</h3>
<ul>
<li>Suppose <code>A</code> knows the public key of <code>B</code>.</li>
<li>Don't want to use public-key encryption all the time (slow).</li>
<li>Strawman protocol for establishing a secure connection between <code>A</code> and <code>B</code>:
<ul>
<li><code>A</code> generates a random symmetric session key <code>S</code>.</li>
<li><code>A</code> encrypts <code>S</code> for <code>PK_B</code>, sends to <code>B</code>.</li>
<li>Now we have a secret key <code>S</code> shared between <code>A</code> and <code>B</code>, and can encrypt and
authenticate messages using symmetric encryption, much like Kerberos.</li>
</ul></li>
</ul>
<h3>Good properties of this strawman protocol:</h3>
<ul>
<li><code>A</code>'s data seen only by <code>B</code>.
<ul>
<li>Only <code>B</code> (with <code>SK_B</code>) can decrypt <code>S</code>.</li>
<li>Only <code>B</code> can thus decrypt data encrypted under <code>S</code>.</li>
</ul></li>
<li>No need for a KDC-like central authority to hand out session keys.</li>
</ul>
<h3>What goes wrong with this strawman?</h3>
<ul>
<li>Adversary can record and later replay <code>A</code>'s traffic; <code>B</code> would not notice.
<ul>
<li>Solution: have <code>B</code> send a nonce (random value).</li>
<li>Incorporate the nonce into the final master secret <code>S' = f(S, nonce)</code>.</li>
<li>Often, <code>S</code> is called the <em>pre-master secret</em>, and <code>S'</code> is the <em>master secret</em>.</li>
<li>This process to establish <code>S'</code> is called the "handshake".</li>
</ul></li>
<li>Adversary can impersonate <code>A</code>, by sending another symmetric key to <code>B</code>.
<ul>
<li>Many possible solutions, if <code>B</code> cares who <code>A</code> is.</li>
<li>E.g., <code>B</code> also chooses and sends a symmetric key to <code>A</code>, encrypted with <code>PK_A</code>.</li>
<li>Then both <code>A</code> and <code>B</code> use a hash of the two keys combined.</li>
<li>This is roughly how TLS client certificates work.</li>
</ul></li>
<li>Adversary can later obtain <code>SK_B</code>, decrypt symmetric key and all messages.
<ul>
<li>Solution: use a key exchange protocol like Diffie-Hellman,
which provides <em>forward secrecy</em>, as discussed in the last lecture.</li>
</ul></li>
</ul>
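<p>The nonce fix above can be sketched in a few lines; HMAC-SHA256 stands in for the key-derivation function <code>f</code> (real TLS uses its own PRF):</p>

```python
import os
import hmac
import hashlib

def derive_master_secret(pre_master: bytes, nonce: bytes) -> bytes:
    # S' = f(S, nonce); HMAC-SHA256 is a stand-in for the TLS PRF here.
    return hmac.new(pre_master, b"master secret" + nonce, hashlib.sha256).digest()

pre_master = os.urandom(32)   # S: chosen by A, sent to B encrypted under PK_B
nonce = os.urandom(16)        # fresh random value chosen by B
master = derive_master_secret(pre_master, nonce)

# A replayed handshake reuses the old S but meets a fresh nonce from B,
# so it derives a different S' and the replayed traffic fails to verify.
assert derive_master_secret(pre_master, os.urandom(16)) != master
```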
<h3><em>Hard problem:</em> what if neither computer knows each other's public key?</h3>
<ul>
<li>Common approach: use a trusted third party to generate certificates.</li>
<li>Certificate is tuple (name, pubkey), signed by certificate authority.</li>
<li>Meaning: certificate authority claims that name's public key is pubkey.</li>
<li><code>B</code> sends <code>A</code> a pubkey along with a certificate.</li>
<li>If <code>A</code> trusts certificate authority, continue as above.</li>
</ul>
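<p>A toy model of this certificate scheme, to make the data flow concrete: the "signature" below is an HMAC under a key only the CA holds, standing in for a real public-key signature (which anyone could verify with the CA's <em>public</em> key, without the CA's secret); all names and key bytes are illustrative:</p>

```python
import hmac
import hashlib

CA_KEY = b"only-the-CA-knows-this"  # stand-in for the CA's signing key

def issue_cert(name: bytes, pubkey: bytes) -> tuple:
    # Certificate = (name, pubkey) plus the CA's signature over both.
    blob = name + b"|" + pubkey
    return (name, pubkey, hmac.new(CA_KEY, blob, hashlib.sha256).digest())

def verify_cert(cert: tuple) -> bool:
    # In reality this check uses the CA's public key, known to everyone.
    name, pubkey, sig = cert
    expected = hmac.new(CA_KEY, name + b"|" + pubkey, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

cert = issue_cert(b"www.paypal.com", b"PK_B-bytes")
assert verify_cert(cert)

# Swapping in the adversary's key without the CA invalidates the signature.
name, _, sig = cert
assert not verify_cert((name, b"adversary-key", sig))
```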
<h3>Why might certificates be better than Kerberos?</h3>
<ul>
<li>No need to talk to KDC each time client connects to a new server.</li>
<li>Server can present certificate to client; client can verify signature.</li>
<li>KDC not involved in generating session keys.</li>
<li>Can support "anonymous" clients that have no long-lived key / certificate.</li>
</ul>
<h2>Plan for securing web browsers: HTTPS</h2>
<ul>
<li>New protocol: <em>https</em> instead of <em>http</em> (e.g., <a href="https://www.paypal.com/">https://www.paypal.com/</a>).</li>
<li>Need to protect several things:
<ul>
<li><strong>(A)</strong> Data sent over the network.</li>
<li><strong>(B)</strong> Code/data in user's browser.</li>
<li><strong>(C)</strong> UI seen by the user.</li>
</ul></li>
<li><strong>(A)</strong> <em>How to ensure data is not sniffed or tampered with on the network?</em>
<ul>
<li>Use TLS (a cryptographic protocol that uses certificates).</li>
<li>TLS encrypts and authenticates network traffic.</li>
<li>Negotiate ciphers (and other features: compression, extensions).</li>
<li>Negotiation is done in clear.</li>
<li>Include a MAC of all handshake messages to authenticate.</li>
</ul></li>
<li><strong>(B)</strong> <em>How to protect data and code in the user's browser?</em>
<ul>
<li><strong>Goal:</strong> connect browser security mechanisms to whatever TLS provides.</li>
<li>Recall that the browser has two main security mechanisms:
<ul>
<li>Same-origin policy.</li>
<li>Cookie policy (slightly different).</li>
</ul></li>
<li>Same-origin policy with HTTPS/TLS.</li>
<li>TLS certificate name must match hostname in the URL
<ul>
<li>In our example, certificate name must be www.paypal.com.</li>
<li>One level of wildcard is also allowed (*.paypal.com)</li>
<li>Browsers trust a number of certificate authorities.</li>
</ul></li>
<li>Origin (from the same-origin policy) includes the protocol.
<ul>
<li>http://www.paypal.com/ is different from https://www.paypal.com/</li>
<li>Here, we care about integrity of data (e.g., JavaScript code).</li>
<li><em>Result:</em> non-HTTPS pages cannot tamper with HTTPS pages.</li>
<li><em>Rationale:</em> non-HTTPS pages could have been modified by adversary.</li>
</ul></li>
<li>Cookies with HTTPS/TLS.
<ul>
<li>Server certificates help clients differentiate between servers.</li>
<li>Cookies (common form of user credentials) have a "Secure" flag.</li>
<li>Secure cookies can only be sent with HTTPS requests.</li>
<li>Non-Secure cookies can be sent with HTTP and HTTPS requests.</li>
</ul></li>
<li>What happens if adversary tampers with DNS records?
<ul>
<li>Good news: security doesn't depend on DNS.</li>
<li>We already assumed adversary can tamper with network packets.</li>
<li>Wrong server will not know correct private key matching certificate.</li>
</ul></li>
</ul></li>
<li><strong>(C)</strong> <em>Finally, users can enter credentials directly. How to secure that?</em>
<ul>
<li>Lock icon in the browser tells user they're interacting with HTTPS site.</li>
<li>Browser should indicate to the user the name in the site's certificate.</li>
<li>User should verify site name they intend to give credentials to.</li>
</ul></li>
</ul>
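<p>The origin rules in (B) can be sketched as a small function: an origin is the (scheme, host, port) triple, so the protocol alone separates HTTP and HTTPS pages. A sketch using Python's standard <code>urllib.parse</code>:</p>

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url: str) -> tuple:
    # Same-origin policy compares the (scheme, host, port) triple.
    parts = urlsplit(url)
    port = parts.port or DEFAULT_PORTS[parts.scheme]
    return (parts.scheme, parts.hostname, port)

# The protocol is part of the origin, so HTTP pages can't touch HTTPS pages.
assert origin("http://www.paypal.com/") != origin("https://www.paypal.com/")
# The path doesn't matter, and the default port is filled in.
assert origin("https://www.paypal.com/a") == origin("https://www.paypal.com:443/b")
```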
<p>How can this plan go wrong?</p>
<ul>
<li>As you might expect, every step above can go wrong.</li>
<li>Not an exhaustive list, but gets at problems that ForceHTTPS wants to solve.</li>
</ul>
<h3><strong>1 (A)</strong> Cryptography</h3>
<p>There have been some attacks on the cryptographic parts of SSL/TLS.</p>
<ul>
<li>Attack by Rizzo and Duong can allow adversary to learn some plaintext by
issuing many carefully-chosen requests over a single connection.
<a href="http://www.educatedguesswork.org/2011/09/security_impact_of_the_rizzodu.html">Ref</a></li>
<li>Recent attack by the same people using compression, mentioned in iSEC lecture.
<a href="http://en.wikipedia.org/wiki/CRIME">Ref</a></li>
<li>Most recently, more padding oracle attacks.
<a href="https://www.openssl.org/~bodo/ssl-poodle.pdf">Ref</a></li>
<li>Some servers/CAs use weak crypto, e.g. certificates using MD5.</li>
<li>Some clients choose weak crypto (e.g., SSL/TLS on Android).
<a href="http://op-co.de/blog/posts/android_ssl_downgrade/">Ref</a></li>
<li>But, cryptography is rarely the weakest part of a system.</li>
</ul>
<h3><strong>2 (B)</strong> Authenticating the server</h3>
<p>Adversary may be able to obtain a certificate for someone else's name.</p>
<ul>
<li>Used to require a faxed request on company letterhead (but how to check it?)</li>
<li>Now often requires receiving a secret token at root@domain.com or similar.</li>
<li>Security depends on the policy of the least secure certificate authority.</li>
<li>There are hundreds of trusted certificate authorities in most browsers.</li>
<li>Several CA compromises in 2011 (certs for gmail, facebook, ..)
<a href="http://dankaminsky.com/2011/08/31/notnotar/">Ref</a> </li>
<li>Servers may be compromised and the corresponding private key stolen.</li>
</ul>
<p>How to deal with compromised certificate (e.g., invalid cert or stolen key)?</p>
<ul>
<li>Certificates have expiration dates.</li>
<li>Checking certificate status with CA on every request is hard to scale.</li>
<li>Certificate Revocation List (CRL) published by some CAs, but relatively
few certificates in them (spot-checking: most have zero revoked certs).</li>
<li>CRL must be periodically downloaded by client.
<ul>
<li>Could be slow, if many certs are revoked.</li>
<li>Not a problem if few or zero certs are revoked, but not too useful.</li>
</ul></li>
<li>OCSP: online certificate status protocol.
<ul>
<li>Query whether a certificate is valid or not.</li>
<li>One issue: OCSP protocol didn't require signing "try later" messages.
<a href="http://www.thoughtcrime.org/papers/ocsp-attack.pdf">Ref</a></li>
</ul></li>
<li>Various heuristics for guessing whether certificate is OK or not.
<ul>
<li>CertPatrol, EFF's SSL Observatory, ..</li>
<li>Not as easy as "did the cert change?". Websites sometimes test new CAs.</li>
</ul></li>
<li>Problem: online revocation checks are soft-fail.
<ul>
<li>An active network attacker can just make the checks unavailable.</li>
<li>Browsers don't like blocking on a side channel.</li>
<li>Performance, single point of failure, captive portals, etc.
[ Ref: https://www.imperialviolet.org/2011/03/18/revocation.html ]</li>
</ul></li>
<li>In practice, browsers push updates with a blacklist after major breaches.
<a href="https://www.imperialviolet.org/2012/02/05/crlsets.html">Ref</a></li>
</ul>
<p>Users ignore certificate mismatch errors.</p>
<ul>
<li>Despite certificates being easy to obtain, many sites misconfigure them.</li>
<li>Some don't want to deal with (non-zero) cost of getting certificates.</li>
<li>Others forget to renew them (certificates have expiration dates).</li>
<li>End result: browsers allow users to override mismatched certificates.
<ul>
<li>Problematic: human is now part of the process in deciding if cert is valid.</li>
<li>Hard for developers to know exactly which certs browsers will accept.</li>
</ul></li>
<li>Empirically, about 60% of bypass buttons shown by Chrome are clicked through.
(Though this data might be stale at this point..)</li>
</ul>
<p>What's the risk of a user accepting an invalid certificate?</p>
<ul>
<li>Might be benign (expired cert, server operator forgot to renew).</li>
<li>Might be a man-in-the-middle attack, connecting to adversary's server.</li>
<li>Why is this bad?
<ul>
<li>User's browser will send user's cookies to the adversary.</li>
<li>User might enter sensitive data into adversary's website.</li>
<li>User might assume data on the page is coming from the right site.</li>
</ul></li>
</ul>
<h3><strong>3 (B)</strong> Mixing HTTP and HTTPS content</h3>
<ul>
<li>Web page origin is determined by the URL of the page itself.</li>
<li>Page can have many embedded elements:
<ul>
<li>Javascript via <code>&lt;SCRIPT&gt;</code> tags</li>
<li>CSS style sheets via <code>&lt;STYLE&gt;</code> tags</li>
<li>Flash code via <code>&lt;EMBED&gt;</code> tags</li>
<li>Images via <code>&lt;IMG&gt;</code> tags</li>
</ul></li>
<li>If adversary can tamper with these elements, could control the page.
<ul>
<li>In particular, Javascript and Flash code give control over page.</li>
<li>CSS: less control, but still abusable, esp w/ complex attribute selectors.</li>
</ul></li>
<li>Probably the developer wouldn't include Javascript from attacker's site.</li>
<li>But, if the URL is non-HTTPS, adversary can tamper with HTTP response.</li>
<li>Alternative approach: explicitly authenticate embedded elements.</li>
<li>E.g., could include a hash of the Javascript code being loaded.
<ul>
<li>Prevents an adversary from tampering with response.</li>
<li>Does not require full HTTPS.</li>
</ul></li>
<li>Might be deployed in browsers in the near future.
<a href="http://www.w3.org/TR/SRI/">Ref</a></li>
</ul>
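<p>The hash-of-the-script idea can be sketched with the standard library; the digest format below (base64 of a SHA-256 hash, prefixed with the algorithm name) follows the W3C Subresource Integrity draft linked above:</p>

```python
import base64
import hashlib

def sri_digest(resource: bytes) -> str:
    # Integrity value the page author embeds next to the script URL.
    return "sha256-" + base64.b64encode(hashlib.sha256(resource).digest()).decode()

script = b"alert('hello');"
value = sri_digest(script)

# The page would embed something like:
#   <script src="app.js" integrity="sha256-..."></script>
# and the browser refuses to run the script if the fetched bytes hash differently.
assert sri_digest(b"alert('evil');") != value
```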
<h3><strong>4 (B)</strong> Protecting cookies</h3>
<ul>
<li>Web application developer could make a mistake and forget the Secure flag.</li>
<li>User visits http://bank.com/ instead of https://bank.com/, leaks cookie.</li>
<li>Suppose the user only visits https://bank.com/.
<ul>
<li>Why is this still a problem?
<ul>
<li>Adversary can cause another HTTP site to redirect to http://bank.com/.</li>
<li>Even if user never visits any HTTP site, application code might be buggy.</li>
</ul></li>
<li>Some sites serve login forms over HTTPS and serve other content over HTTP.</li>
<li>Be careful when serving over both HTTP and HTTPS.
<ul>
<li>E.g., Google's login service creates new cookies on request.</li>
<li>Login service has its own (Secure) cookie.</li>
<li>Can request login to a Google site by loading login's HTTPS URL.</li>
<li>It used to be possible to also log in via a cookie that wasn't Secure.</li>
<li>ForceHTTPS solves problem by redirecting HTTP URLs to HTTPS.
<a href="http://blog.icir.org/2008/02/sidejacking-forced-sidejacking-and.html">Ref</a></li>
</ul></li>
</ul></li>
<li>Cookie integrity problems.
<ul>
<li>Non-Secure cookies set on http://bank.com still sent to https://bank.com.</li>
<li>No way to determine who set the cookie.</li>
</ul></li>
</ul>
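<p>Setting the Secure (and HttpOnly) flags is a one-line fix on the server; a sketch with Python's standard <code>http.cookies</code> module (the cookie name and value are made up):</p>

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "0123456789abcdef"
cookie["session"]["secure"] = True    # only ever sent over HTTPS
cookie["session"]["httponly"] = True  # not readable from JavaScript

header = cookie["session"].OutputString()
assert "Secure" in header and "HttpOnly" in header
print("Set-Cookie:", header)
```

<p>Per the integrity discussion above, Secure protects confidentiality only: an active attacker can still <em>set</em> cookies from http://bank.com.</p>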
<h3><strong>5 (C)</strong> Users directly entering credentials</h3>
<ul>
<li>Phishing attacks.</li>
<li>Users don't check for lock icon.</li>
<li>Users don't carefully check domain name, don't know what to look for.
<ul>
<li>E.g., typo domains (paypa1.com), unicode lookalike characters.</li>
</ul></li>
<li>Web developers put login forms on HTTP pages (target login script is HTTPS).
<ul>
<li>Adversary can modify login form to point to another URL.</li>
<li>Login form not protected from tampering, user has no way to tell.</li>
</ul></li>
</ul>
<h2>ForceHTTPS</h2>
<h3>How does ForceHTTPS (this paper) address some of these problems?</h3>
<ul>
<li>Server can set a flag for its own hostname in the user's browser.
<ul>
<li>Makes SSL/TLS certificate misconfigurations into a fatal error.</li>
<li>Redirects HTTP requests to HTTPS.</li>
<li>Prohibits non-HTTPS embedding (+ performs ForceHTTPS for them).</li>
</ul></li>
</ul>
<h3>What problems does ForceHTTPS solve?</h3>
<ul>
<li>Mostly 2, 3, and to some extent 4 (see list above)
<ul>
<li>Users accepting invalid certificates.</li>
<li>Developer mistakes: embedding insecure content.</li>
<li>Developer mistakes: forgetting to flag cookie as Secure.</li>
<li>Adversary injecting cookies via HTTP.</li>
</ul></li>
</ul>
<h3>Is this really necessary? Can't we just use HTTPS, set Secure cookies, etc.?</h3>
<ul>
<li>Users can still click through errors, so it still helps for #2.</li>
<li>Not necessary for #3 assuming the web developer never makes a mistake.</li>
<li>Still helpful for #4.
<ul>
<li>Marking cookies as Secure gives confidentiality, but not integrity.</li>
<li>Active attacker can serve a fake site at http://bank.com and set cookies
for https://bank.com (https://bank.com cannot distinguish them).</li>
</ul></li>
</ul>
<h3>Why not just turn on ForceHTTPS for everyone?</h3>
<ul>
<li>HTTPS site might not exist.</li>
<li>If it does, it might not be the same site (https://web.mit.edu is
authenticated, but http://web.mit.edu isn't).</li>
<li>HTTPS page may expect users to click through (self-signed certs).</li>
</ul>
<h3>Implementing ForceHTTPS</h3>
<ul>
<li>The ForceHTTPS bit is stored in a cookie.</li>
<li>Interesting issues:
<ul>
<li>State exhaustion (the ForceHTTPS cookie getting evicted).</li>
<li>Denial of service (force entire domain; force via JS; force via HTTP).
<ul>
<li>Why does ForceHTTPS only allow specific hosts, instead of entire domain?</li>
<li>Why does ForceHTTPS require cookie to be set via headers and not via JS?</li>
<li>Why does ForceHTTPS require cookie to be set via HTTPS, not HTTP?</li>
</ul></li>
<li>Bootstrapping (how to get ForceHTTPS bit; how to avoid privacy leaks).
<ul>
<li>Possible solution 1: DNSSEC.</li>
<li>Possible solution 2: embed ForceHTTPS bit in URL name (if possible).</li>
<li>If there's a way to get some authenticated bits from server owner
(DNSSEC, URL name, etc), should we just get the public key directly?</li>
<li>Difficulties: users have unreliable networks. Browsers are unwilling
to block the handshake on a side-channel request.</li>
</ul></li>
</ul></li>
</ul>
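<p>The browser-side enforcement logic can be sketched as follows; the table and function names are hypothetical, purely to illustrate the rules ForceHTTPS applies:</p>

```python
# Hosts that have set the ForceHTTPS bit (stored in a cookie in the paper).
force_https = {"bank.com"}

def process_request(scheme: str, host: str, cert_ok: bool):
    if host in force_https:
        if scheme == "http":
            return ("redirect", f"https://{host}/")  # upgrade before sending
        if not cert_ok:
            return ("fatal-error", None)             # no click-through allowed
    if scheme == "https" and not cert_ok:
        return ("warn-user", None)                   # ordinary overridable warning
    return ("proceed", None)

assert process_request("http", "bank.com", True) == ("redirect", "https://bank.com/")
assert process_request("https", "bank.com", False) == ("fatal-error", None)
assert process_request("https", "other.com", False) == ("warn-user", None)
```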
<h3>Current status of ForceHTTPS</h3>
<ul>
<li>Some ideas from ForceHTTPS have been adopted into standards.</li>
<li>HTTP Strict-Transport-Security header is similar to a ForceHTTPS cookie.
<a href="http://tools.ietf.org/html/rfc6797">Ref: RFC6797</a>,
<a href="http://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security">Ref: HTTP Strict Transport Security</a>
<ul>
<li>Uses header instead of magic cookie:
<ul>
<li><code>Strict-Transport-Security: max-age=7884000; includeSubDomains</code></li>
</ul></li>
<li>Turns HTTP links into HTTPS links.</li>
<li>Prohibits user from overriding SSL/TLS errors (e.g., bad certificate).</li>
<li>Optionally applies to all subdomains.
<ul>
<li>Why is this useful? Non-secure and forged cookies can be leaked or set on subdomains.</li>
</ul></li>
<li>Optionally provides an interface for users to manually enable it.</li>
<li>Implemented in Chrome, Firefox, and Opera.</li>
<li>Bootstrapping largely unsolved.</li>
<li>Chrome has a hard-coded list of preloads.</li>
</ul></li>
<li>IE9, Firefox 23, and Chrome now block mixed scripting by default.
<ul>
<li><a href="http://blog.chromium.org/2012/08/ending-mixed-scripting-vulnerabilities.html">Ref: Ending mixed scripting vulnerabilities</a>,</li>
<li><a href="https://blog.mozilla.org/tanvi/2013/04/10/mixed-content-blocking-enabled-in-firefox-23/">Ref: Mixed content blocking enabled in Firefox</a>,</li>
<li><a href="http://blogs.msdn.com/b/ie/archive/2011/06/23/internet-explorer-9-security-part-4-protecting-consumers-from-malicious-mixed-content.aspx">Ref: Protecting consumers from malicious mixed content</a></li>
</ul></li>
</ul>
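<p>A sketch of parsing the header shown above into a policy, to make the directives concrete (a minimal parser that ignores quoting and unknown directives):</p>

```python
def parse_hsts(value: str) -> dict:
    # Directives are separated by ';'; names are case-insensitive.
    policy = {"max_age": None, "include_subdomains": False}
    for directive in value.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            policy["max_age"] = int(directive[len("max-age="):])
        elif directive == "includesubdomains":
            policy["include_subdomains"] = True
    return policy

parsed = parse_hsts("max-age=7884000; includeSubDomains")
assert parsed == {"max_age": 7884000, "include_subdomains": True}
```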
<h3>Another recent experiment in this space: HTTPS-Everywhere.</h3>
<ul>
<li>Focuses on the "power user" aspect of ForceHTTPS.</li>
<li>Allows users to force the use of HTTPS for some domains.</li>
<li>Collaboration between Tor and EFF.</li>
<li>Add-on for Firefox and Chrome.</li>
<li>Comes with rules to rewrite URLs for popular web sites.</li>
</ul>
<h3>Other ways to address problems in SSL/TLS</h3>
<ul>
<li>Better tools / better developers to avoid programming mistakes.
<ul>
<li>Mark all sensitive cookies as Secure (#4).</li>
<li>Avoid any insecure embedding (#3).</li>
<li>Unfortunately, seems error-prone..</li>
<li>Does not help end-users (requires developer involvement).</li>
</ul></li>
<li>EV certificates.
<ul>
<li>Trying to address problem 5: users don't know what to look for in cert.</li>
<li>In addition to URL, embed the company name (e.g., "PayPal, Inc.")</li>
<li>Typically shows up as a green box next to the URL bar.</li>
<li>Why would this be more secure?</li>
<li>When would it actually improve security?</li>
<li>Might indirectly help solve #2, if users come to expect EV certificates.</li>
</ul></li>
<li>Blacklist weak crypto.
<ul>
<li>Browsers are starting to reject MD5 signatures on certificates
(iOS 5, Chrome 18, Firefox 16)</li>
<li>and RSA keys with <code>&lt; 1024</code> bits.
(Chrome 18, OS X 10.7.4, Windows XP+ after a recent update)</li>
<li>and even SHA-1 by Chrome.
<a href="http://googleonlinesecurity.blogspot.com/2014/09/gradually-sunsetting-sha-1.html">Ref: Gradually sunsetting SHA1</a></li>
</ul></li>
<li>OCSP stapling.
<ul>
<li>OCSP responses are signed by CA.</li>
<li>Server sends OCSP response in handshake instead of querying online (#2).</li>
<li>Effectively a short-lived certificate.</li>
<li>Problems:
<ul>
<li>Not widely deployed.</li>
<li>Only possible to staple one OCSP response.</li>
</ul></li>
</ul></li>
<li>Key pinning.
<ul>
<li>Only accept certificates signed by per-site whitelist of CAs.</li>
<li>Remove reliance on least secure CA (#2).</li>
<li>Currently a hard-coded list of sites in Chrome.</li>
<li>Diginotar compromise caught in 2011 because of key pinning.</li>
<li>Plans to add mechanism for sites to advertise pins.
<ul>
<li><a href="http://tools.ietf.org/html/draft-ietf-websec-key-pinning-21">Ref: IETF draft on websec key pinning</a></li>
<li><a href="http://tack.io/">Ref: tack.io</a></li>
</ul></li>
<li>Same bootstrapping difficulty as in ForceHTTPS.</li>
</ul></li>
</ul>
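<p>The pinning check reduces to comparing a hash of the server's public key against a per-site whitelist; a sketch (the pin store and key bytes are invented for illustration):</p>

```python
import hashlib

# Hypothetical pin store: host -> set of acceptable SHA-256 key hashes.
PINS = {"gmail.com": {hashlib.sha256(b"google-key-bytes").hexdigest()}}

def pin_ok(host: str, pubkey: bytes) -> bool:
    pins = PINS.get(host)
    if pins is None:
        return True  # unpinned host: fall back to ordinary CA validation
    return hashlib.sha256(pubkey).hexdigest() in pins

assert pin_ok("gmail.com", b"google-key-bytes")
# A cert from a compromised CA certifying a different key is rejected,
# even though its CA signature would check out.
assert not pin_ok("gmail.com", b"rogue-key-bytes")
assert pin_ok("unpinned.example", b"anything")
```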
<h2>Other references</h2>
<ul>
<li><a href="http://www.imperialviolet.org/2012/07/19/hope9talk.html">http://www.imperialviolet.org/2012/07/19/hope9talk.html</a></li>
</ul>