Abstract
How to deal with large tightness gaps in security proofs is a vexing issue in cryptography. Even when analyzing protocols that are of practical importance, leading researchers often fail to treat this question with the seriousness that it deserves. We discuss nontightness in connection with complexity leveraging, HMAC, lattice-based cryptography, identity-based encryption, and hybrid encryption.
Notes
- 1.
By “k bits of security” we mean that there is good reason to believe that, if a successful attack (of a specified type) takes time T and has success probability \(\epsilon \), then \(T/\epsilon > 2^k\).
- 2.
We are not accounting for recent progress by Kim and Barbulescu [51] in algorithms for computing discrete logarithms in \({\mathbb G}_T\). This will lead to working with even larger parameters.
- 3.
In iterated hash functions one also appends a “length block” to the message M before hashing. We are omitting the length block for simplicity.
- 4.
The early posted versions of [54] contained a serious error that was pointed out to the authors by Pietrzak, namely, the theorem is given assuming only the PRF property rather than the strong PRF property that is needed in the proof. This error was explained and corrected in the posted versions and the published version. Soon after the corrected version was posted, Pietrzak posted a paper [70] containing a different proof of essentially the same result as in Corollary 10.3 of Theorem 10.1 of [54] (see also [40]).
- 5.
The abstract to [40] also erroneously states that “NMAC was introduced by Bellare, Canetti and Krawczyk [Crypto96], who proved it to be a secure pseudorandom function (PRF), and thus also a MAC, assuming that (1) f is a PRF and (2) the function we get when cascading f is weakly collision-resistant”.
- 6.
In this quotation Bellare uses the word “blackbox” in a non-standard way. Later in his paper he defines a “blackbox” reduction to be one that is constructible and a “non-blackbox” reduction to be one that is non-constructible. However, when comparing a proof as in [12] that uses “coin-fixing” with more recent proofs that do not, the standard terms are nonuniform/uniform rather than non-blackbox/blackbox.
- 7.
The section “Our Contributions” in [40] starts out: “Our first contribution is a simpler, uniform, and as we will show, basically tight proof for the PRF-security of NMAC\({}^f\) assuming only that f is a PRF.” The authors apparently meant to say that their tightness gap is best possible, i.e., cannot be improved. Their proof is not tight, however—far from it. Their tightness gap is nq, essentially the same as in Corollary 10.3 of [54].
- 8.
Regev’s theorem can also be stated with the \(\text {GapSVP}_{\gamma }\) problem instead of \(\text {SIVP}_{\gamma }\). Given an n-dimensional lattice L and a number \(r > 0\), GapSVP\({}_\gamma \) requires that one output “yes” if \(\lambda _1(L) \le r\) and “no” if \(\lambda _1(L) > \gamma r\) (either “yes” or “no” is allowed if \(r <\lambda _1\le \gamma r\)).
- 9.
In the IBE setting “CP” does not stand for chosen plaintext but rather for clave pedida, which means “requested key” in Spanish.
- 10.
- 11.
We can suppose that the \(2^a\) ciphertexts are sorted according to their first two blocks (or perhaps stored using a conventional hash function). Then one iteration of the attack takes essentially unit time, since it just requires computing \((C''_1,C''_2)\) and looking for it in the sorted table. Since the expected number of iterations is \(2^{128-a}\), the running time of the attack is \(T=2^{128-a}\) (and the success probability is essentially 1).
References
Aggarwal, D., Dadush, D., Regev, O., Stephens-Davidowitz, N.: Solving the shortest vector problem in \(2^n\) time via discrete Gaussian sampling. In: Proceedings of the 47th Annual Symposium on Foundations of Computer Science, pp. 733–742 (2015)
Ajtai, M.: Generating hard instances of lattice problems. In: Proceedings of the 28th Annual ACM Symposium on Theory of Computing, pp. 99–108. ACM (1996)
Ajtai, M., Dwork, C.: A public-key cryptosystem with worst-case/average-case equivalence. In: Proceedings of the 29th Annual ACM Symposium on Theory of Computing, pp. 284–293. ACM (1997)
Albrecht, M., Player, R., Scott, S.: On the concrete hardness of learning with errors. J. Math. Cryptol. 9, 169–203 (2015)
Alkim, E., Ducas, L., Pöppelmann, T., Schwabe, P.: Post-quantum key exchange - a new hope. In: Proceedings of the 25th USENIX Security Symposium, pp. 327–343 (2016)
ANSI X9.98: Lattice-Based Polynomial Public Key Establishment Algorithm for the Financial Services Industry, Part 1: Key Establishment, Part 2: Data Encryption (2010)
Arora, S., Ge, R.: New algorithms for learning in presence of errors. In: Aceto, L., Henzinger, M., Sgall, J. (eds.) ICALP 2011. LNCS, vol. 6755, pp. 403–415. Springer, Heidelberg (2011). doi:10.1007/978-3-642-22006-7_34
Attrapadung, N., Furukawa, J., Gomi, T., Hanaoka, G., Imai, H., Zhang, R.: Efficient identity-based encryption with tight security reduction. In: Pointcheval, D., Mu, Y., Chen, K. (eds.) CANS 2006. LNCS, vol. 4301, pp. 19–36. Springer, Heidelberg (2006). doi:10.1007/11935070_2
Barbulescu, R., Gaudry, P., Joux, A., Thomé, E.: A heuristic quasi-polynomial algorithm for discrete logarithm in finite fields of small characteristic. In: Nguyen, P.Q., Oswald, E. (eds.) EUROCRYPT 2014. LNCS, vol. 8441, pp. 1–16. Springer, Heidelberg (2014). doi:10.1007/978-3-642-55220-5_1
Barreto, P.S.L.M., Naehrig, M.: Pairing-friendly elliptic curves of prime order. In: Preneel, B., Tavares, S. (eds.) SAC 2005. LNCS, vol. 3897, pp. 319–331. Springer, Heidelberg (2006). doi:10.1007/11693383_22
Bellare, M.: Practice-oriented provable-security. In: Damgård, I.B. (ed.) EEF School 1998. LNCS, vol. 1561, pp. 1–15. Springer, Heidelberg (1999). doi:10.1007/3-540-48969-X_1
Bellare, M.: New proofs for NMAC and HMAC: security without collision-resistance. In: Dwork, C. (ed.) CRYPTO 2006. LNCS, vol. 4117, pp. 602–619. Springer, Heidelberg (2006). doi:10.1007/11818175_36
Bellare, M.: email to N. Koblitz, 24 February 2012
Bellare, M.: New proofs for NMAC and HMAC: security without collision-resistance. J. Cryptol. 28, 844–878 (2015)
Bellare, M., Bernstein, D.J., Tessaro, S.: Hash-function based PRFs: AMAC and its multi-user security. In: Fischlin, M., Coron, J.-S. (eds.) EUROCRYPT 2016. LNCS, vol. 9665, pp. 566–595. Springer, Heidelberg (2016). doi:10.1007/978-3-662-49890-3_22
Bellare, M., Boldyreva, A., Micali, S.: Public-key encryption in a multi-user setting: security proofs and improvements. In: Preneel, B. (ed.) EUROCRYPT 2000. LNCS, vol. 1807, pp. 259–274. Springer, Heidelberg (2000). doi:10.1007/3-540-45539-6_18. https://cseweb.ucsd.edu/~mihir/papers/musu.html
Bellare, M., Canetti, R., Krawczyk, H.: Keying hash functions for message authentication. In: Koblitz, N. (ed.) CRYPTO 1996. LNCS, vol. 1109, pp. 1–15. Springer, Heidelberg (1996). doi:10.1007/3-540-68697-5_1
Bellare, M., Canetti, R., Krawczyk, H.: Pseudorandom functions revisited: the cascade construction and its concrete security. In: Proceedings of the 37th Annual Symposium on Foundations of Computer Science, pp. 514–523 (1996). http://cseweb.ucsd.edu/users/mihir/papers/cascade.pdf
Bellare, M., Canetti, R., Krawczyk, H.: HMAC: keyed-hashing for message authentication, Internet RFC 2104 (1997)
Bernstein, D.: Multi-user Schnorr security, revisited. http://eprint.iacr.org/2015/996.pdf
Blömer, J., Seifert, J.: On the complexity of computing short linearly independent vectors and short bases in a lattice. In: Proceedings of the 31st Annual ACM Symposium on Theory of Computing, pp. 711–720. ACM (1999)
Boldyreva, A.: Strengthening security of RSA-OAEP. In: Fischlin, M. (ed.) CT-RSA 2009. LNCS, vol. 5473, pp. 399–413. Springer, Heidelberg (2009). doi:10.1007/978-3-642-00862-7_27
Boneh, D., Boyen, X.: Efficient selective-ID secure identity based encryption without random oracles. http://eprint.iacr.org/2004/172.pdf
Boneh, D., Franklin, M.: Identity-based encryption from the Weil pairing. SIAM J. Comput. 32, 586–615 (2003)
Boneh, D., Waters, B.: Constrained pseudorandom functions and their applications. In: Sako, K., Sarkar, P. (eds.) ASIACRYPT 2013. LNCS, vol. 8270, pp. 280–300. Springer, Heidelberg (2013). doi:10.1007/978-3-642-42045-0_15
Bos, J., Costello, C., Naehrig, M., Stebila, D.: Post-quantum key exchange for the TLS protocol from the ring learning with errors problem. In: Proceedings of the 2015 IEEE Symposium on Security and Privacy, pp. 553–570 (2015)
Chatterjee, S., Menezes, A., Sarkar, P.: Another look at tightness. In: Miri, A., Vaudenay, S. (eds.) SAC 2011. LNCS, vol. 7118, pp. 293–319. Springer, Heidelberg (2012). doi:10.1007/978-3-642-28496-0_18
Chen, L.: Recommendation for key derivation using pseudorandom functions (revised), NIST SP 800–108 (2009)
Chen, L.: Recommendation for key derivation through extraction-then-expansion, NIST SP 800–56C (2011)
Coppersmith, D.: Fast evaluation of logarithms in fields of characteristic two. IEEE Trans. Inf. Theory 30, 587–594 (1984)
Cramer, R., Shoup, V.: A practical public key cryptosystem provably secure against adaptive chosen ciphertext attack. In: Krawczyk, H. (ed.) CRYPTO 1998. LNCS, vol. 1462, pp. 13–25. Springer, Heidelberg (1998). doi:10.1007/BFb0055717
Cramer, R., Shoup, V.: Design and analysis of practical public-key encryption schemes secure against adaptive chosen ciphertext attack. SIAM J. Comput. 33, 167–226 (2003)
Dang, Q.: Recommendation for applications using approved hash algorithms, NIST SP 800–107 (2012)
Dierks, T., Allen, C.: The TLS protocol, Internet RFC 2246 (1999)
Fuchsbauer, G.: Constrained verifiable random functions. In: Abdalla, M., Prisco, R. (eds.) SCN 2014. LNCS, vol. 8642, pp. 95–114. Springer, Cham (2014). doi:10.1007/978-3-319-10879-7_7
Fujisaki, E., Okamoto, T.: Secure integration of asymmetric and symmetric encryption schemes. In: Wiener, M. (ed.) CRYPTO 1999. LNCS, vol. 1666, pp. 537–554. Springer, Heidelberg (1999). doi:10.1007/3-540-48405-1_34
Galbraith, S., Malone-Lee, J., Smart, N.: Public key signatures in the multi-user setting. Inf. Process. Lett. 83, 263–266 (2002)
Galindo, D.: Boneh-Franklin identity based encryption revisited. In: Caires, L., Italiano, G.F., Monteiro, L., Palamidessi, C., Yung, M. (eds.) ICALP 2005. LNCS, vol. 3580, pp. 791–802. Springer, Heidelberg (2005). doi:10.1007/11523468_64
Garg, S., Gentry, C., Halevi, S., Raykova, M., Sahai, A., Waters, B.: Candidate indistinguishability obfuscation and functional encryption for all circuits. http://eprint.iacr.org/2013/451.pdf
Gaži, P., Pietrzak, K., Rybár, M.: The exact PRF-security of NMAC and HMAC. In: Garay, J.A., Gennaro, R. (eds.) CRYPTO 2014. LNCS, vol. 8616, pp. 113–130. Springer, Heidelberg (2014). doi:10.1007/978-3-662-44371-2_7
Goldreich, O., Goldwasser, S.: On the limits of nonapproximability of lattice problems. J. Comput. Syst. Sci. 60, 540–563 (2000)
Goldwasser, S., Bellare, M.: Lecture Notes on Cryptography, July 2008. http://cseweb.ucsd.edu/mihir/papers/gb.pdf
Goldwasser, S., Kalai, Y.: Cryptographic assumptions: a position paper. http://eprint.iacr.org/2015/907.pdf
Goldwasser, S., Micali, S.: Probabilistic encryption. J. Comput. Syst. Sci. 28, 270–299 (1984)
Harkins, D., Carrel, D.: The internet key exchange (IKE), Internet RFC 2409 (1998)
Hoffstein, J., Howgrave-Graham, N., Pipher, J., Whyte, W.: Practical lattice-based cryptography: NTRUEncrypt and NTRUSign. In: Vallée, B., Nguyen, P.Q. (eds.) The LLL Algorithm, pp. 349–390. Springer, Heidelberg (2010). doi:10.1007/978-3-642-02295-1_11
Hoffstein, J., Pipher, J., Silverman, J.H.: NTRU: a ring-based public key cryptosystem. In: Buhler, J.P. (ed.) ANTS 1998. LNCS, vol. 1423, pp. 267–288. Springer, Heidelberg (1998). doi:10.1007/BFb0054868
IEEE 1363.1: Standard Specification for Public Key Cryptographic Techniques Based on Hard Problems over Lattices (2008)
Katz, J., Lindell, Y.: Introduction to Modern Cryptography. Chapman and Hall/CRC, London (2007)
Kiltz, E., Masny, D., Pan, J.: Optimal security proofs for signatures from identification schemes. In: Robshaw, M., Katz, J. (eds.) CRYPTO 2016. LNCS, vol. 9815, pp. 33–61. Springer, Heidelberg (2016). doi:10.1007/978-3-662-53008-5_2
Kim, T., Barbulescu, R.: Extended tower number field sieve: a new complexity for the medium prime case. In: Robshaw, M., Katz, J. (eds.) CRYPTO 2016. LNCS, vol. 9814, pp. 543–571. Springer, Heidelberg (2016). doi:10.1007/978-3-662-53018-4_20
Koblitz, N., Menezes, A.: Another look at “provable security”. II. In: Barua, R., Lange, T. (eds.) INDOCRYPT 2006. LNCS, vol. 4329, pp. 148–175. Springer, Heidelberg (2006). doi:10.1007/11941378_12
Koblitz, N., Menezes, A.: Another look at ‘provable security’. J. Cryptol. 20, 3–37 (2007)
Koblitz, N., Menezes, A.: Another look at HMAC. J. Math. Cryptol. 7, 225–251 (2013)
Koblitz, N., Menezes, A.: Another look at non-uniformity. Groups Complex. Cryptol. 5, 117–139 (2013)
Koblitz, N., Menezes, A.: Another look at security definitions. Adv. Math. Commun. 7, 1–38 (2013)
Koblitz, N., Menezes, A.: Another look at security theorems for 1-key nested MACs. In: Koç, Ç.K. (ed.) Open Problems in Mathematics and Computational Science, pp. 69–89. Springer, Cham (2014). doi:10.1007/978-3-319-10683-0_4
Koblitz, N., Menezes, A.: A riddle wrapped in an enigma. IEEE Secur. Priv. 14, 34–42 (2016)
Krawczyk, H., Eronen, P.: HMAC-based extract-and-expand key derivation function (HKDF), Internet RFC 5869 (2010)
Krawczyk, H.: Cryptographic extraction and key derivation: the HKDF scheme. In: Rabin, T. (ed.) CRYPTO 2010. LNCS, vol. 6223, pp. 631–648. Springer, Heidelberg (2010). doi:10.1007/978-3-642-14623-7_34
Laarhoven, T., Mosca, M., van de Pol, J.: Finding shortest lattice vectors faster using quantum search. Des. Codes Crypt. 77, 375–400 (2015)
Lyubashevsky, V., Micciancio, D., Peikert, C., Rosen, A.: SWIFFT: a modest proposal for FFT hashing. In: Nyberg, K. (ed.) FSE 2008. LNCS, vol. 5086, pp. 54–72. Springer, Heidelberg (2008). doi:10.1007/978-3-540-71039-4_4
Lyubashevsky, V., Peikert, C., Regev, O.: On ideal lattices and learning with errors over rings. J. ACM 60, 43:1–43:35 (2013)
Menezes, A.: Another look at provable security, Invited talk at Eurocrypt 2012. http://www.cs.bris.ac.uk/eurocrypt2012/Program/Weds/Menezes.pdf
Micciancio, D., Goldwasser, S.: Complexity of Lattice Problems: A Cryptographic Perspective. Springer, New York (2002). doi:10.1007/978-1-4615-0897-7
M’Raihi, D., Bellare, M., Hoornaert, F., Naccache, D., Ranen, O.: HOTP: an HMAC-based one time password algorithm, Internet RFC 4226 (2005)
Peikert, C.: Lattice cryptography for the internet. In: Mosca, M. (ed.) PQCrypto 2014. LNCS, vol. 8772, pp. 197–219. Springer, Cham (2014). doi:10.1007/978-3-319-11659-4_12
Peikert, C.: 19 February 2015 blog posting. http://web.eecs.umich.edu/~cpeikert/soliloquy.html
Peikert, C.: A decade of lattice cryptography. http://eprint.iacr.org/2015/939
Pietrzak, K.: A closer look at HMAC. http://eprint.iacr.org/2013/212.pdf
Regev, O.: On lattices, learning with errors, random linear codes, and cryptography. J. ACM 56, 34:1–34:40 (2009)
Schnorr, C.P.: Efficient identification and signatures for smart cards. In: Quisquater, J.-J., Vandewalle, J. (eds.) EUROCRYPT 1989. LNCS, vol. 434, pp. 688–689. Springer, Heidelberg (1990). doi:10.1007/3-540-46885-4_68
Shoup, V.: Sequences of games: a tool for taming complexity in security proofs. http://eprint.iacr.org/2004/332.pdf
Shoup, V.: ISO/IEC 18033–2:2006, Information Technology – Security Techniques – Encryption Algorithms – Part 2: Asymmetric Ciphers (2006). http://www.shoup.net/iso/std6.pdf
Stehlé, D., Steinfeld, R.: Making NTRU as secure as worst-case problems over ideal lattices. In: Paterson, K.G. (ed.) EUROCRYPT 2011. LNCS, vol. 6632, pp. 27–47. Springer, Heidelberg (2011). doi:10.1007/978-3-642-20465-4_4
Stern, J., Pointcheval, D., Malone-Lee, J., Smart, N.P.: Flaws in applying proof methodologies to signature schemes. In: Yung, M. (ed.) CRYPTO 2002. LNCS, vol. 2442, pp. 93–110. Springer, Heidelberg (2002). doi:10.1007/3-540-45708-9_7
Zaverucha, G.M.: Hybrid encryption in the multi-user setting. http://eprint.iacr.org/2012/159.pdf
Zhang, R., Imai, H.: Improvements on security proofs of some identity based encryption schemes. In: Feng, D., Lin, D., Yung, M. (eds.) CISC 2005. LNCS, vol. 3822, pp. 28–41. Springer, Heidelberg (2005). doi:10.1007/11599548_3
Acknowledgments
We wish to thank Greg Zaverucha for extensive help with Appendix B as well as useful comments on the other sections, Michael Naehrig for reviewing and commenting on Sect. 5, Somindu C. Ramanna for providing helpful comments on an earlier draft of Sect. 6, Ann Hibner Koblitz for editorial suggestions, and Ian Blake, Eike Kiltz, and Chris Peikert for helpful feedback and suggestions. Of course, none of them is responsible for any of the opinions expressed in this article.
Appendices
A Concrete Analysis of Regev’s Worst-Case/Average-Case Reduction
Let \(q=q(n)\) and \(m=m(n)\) be integers, and let \(\alpha = \alpha (n) \in (0,1)\) be such that \(\alpha q > 2\sqrt{n}\). Let \(\chi \) be the probability distribution on \({\mathbb Z}_q\) obtained by sampling from a Gaussian distribution with mean 0 and variance \(\alpha ^2/2\pi \), and then multiplying by q and rounding to the closest integer modulo q. Then the (search version of the) LWE problem is the following: Let s be a secret vector selected uniformly at random from \({\mathbb Z}_q^n\). Given m samples \((a_i,a_i\cdot s +e_i)\), where each \(a_i\) is selected independently and uniformly at random from \({\mathbb Z}_q^n\), and where each \(e_i\) is selected independently from \({\mathbb Z}_q\) according to \(\chi \), determine s. The decisional version of LWE, called DLWE, asks us to determine whether we have been given m LWE samples \((a_i,a_i\cdot s + e_i)\) or m random samples \((a_i,u_i)\), where each \(u_i\) is selected independently and uniformly at random from \({\mathbb Z}_q\).
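To make the sample distribution concrete, here is a short Python sketch (the function name and parameters are our own illustrative choices, not from [71]) that generates m LWE samples for a random secret, with the error drawn from the rounded Gaussian \(\chi \) described above:

```python
import math
import random

def lwe_samples(n, q, alpha, m, seed=0):
    """Generate m LWE samples (a_i, <a_i, s> + e_i mod q) for a random secret s.

    The error e_i follows the rounded Gaussian chi described above: sample from
    a mean-0 Gaussian of variance (alpha*q)^2 / (2*pi) and round mod q.
    """
    rng = random.Random(seed)
    s = [rng.randrange(q) for _ in range(n)]
    sigma = alpha * q / math.sqrt(2 * math.pi)  # std. dev. after scaling by q
    samples = []
    for _ in range(m):
        a = [rng.randrange(q) for _ in range(n)]
        e = round(rng.gauss(0, sigma)) % q
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        samples.append((a, b))
    return s, samples
```

For instance, \(n=16\), \(q=257\), \(\alpha =0.05\) satisfies \(\alpha q \approx 12.9 > 2\sqrt{n} = 8\). The decisional problem DLWE is then to distinguish such samples from pairs \((a_i,u_i)\) with \(u_i\) uniform.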
Regev [71] proved that the existence of an efficient algorithm that solves DLWE in the average case implies the existence of an efficient quantum algorithm that solves \(\text {SIVP}_{\gamma }\) in the worst case where \(\gamma = \tilde{O}(n/\alpha )\). In the remainder of this section we provide justification for the following refinement of Regev’s theorem:
Claim
Let \(q=n^2\) and \(\alpha = 1/(\sqrt{n} \log ^2 n)\), whence \(\gamma = \tilde{O}(n^{1.5})\). Suppose that there is an algorithm W that, given \(m=n^c\) samples, solves DLWE for a fraction \(1/n^{d_1}\) of all \(s \in {\mathbb Z}_q^n\) with advantage at least \(1/n^{d_2}\). Then there is a polynomial-time algorithm \(W'\) for solving \(\text {SIVP}_{\gamma }\) that calls the W oracle a total of \(O(n^{11+c+d_1+2d_2})\) times.
1.1 A.1 Gaussian Distributions
Recall that the Gaussian distribution with mean 0 and variance \(\sigma ^2\) is the distribution on \({\mathbb R}\) given by the probability density function
\(\frac{1}{\sqrt{2\pi }\,\sigma }\exp \left( -\frac{x^2}{2\sigma ^2}\right) .\)
For \(x \in {\mathbb R}^n\) and \(s > 0\), define the Gaussian function scaled by s:
\(\rho _s(x) = \exp \left( -\pi \Vert x\Vert ^2/s^2\right) .\)
The Gaussian distribution \(D_s\) of parameter s over \({\mathbb R}^n\) is given by the probability density function
\(D_s(x) = \rho _s(x)/s^n.\)
Note that \(D_s\) is indeed a probability distribution since \(\int _{x \in {\mathbb R}^n} \rho _s(x) \, dx = s^n\).
If L is a (full-rank integer) lattice of dimension n, we can define
\(\rho _s(L) = \sum _{x \in L} \rho _s(x).\)
Then the discrete Gaussian probability distribution \(D_{L,s}\) of width s for \(x \in L\) is
\(D_{L,s}(x) = \frac{\rho _s(x)}{\rho _s(L)}.\)
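For the special case \(L={\mathbb Z}^n\) the discrete Gaussian \(D_{L,s}\) factors into n independent one-dimensional discrete Gaussians, which the following toy sketch samples by direct inversion over a truncated support (the truncation bound and function names are our own choices, not part of [71]):

```python
import math
import random

def sample_dgauss_z(s, rng, tail=10):
    """Sample the one-dimensional discrete Gaussian on Z with weights
    proportional to exp(-pi x^2 / s^2), by inversion over the truncated
    support |x| <= tail*s (the truncation bound is an illustrative choice)."""
    bound = int(math.ceil(tail * s))
    support = range(-bound, bound + 1)
    weights = [math.exp(-math.pi * x * x / (s * s)) for x in support]
    u = rng.random() * sum(weights)
    acc = 0.0
    for x, w in zip(support, weights):
        acc += w
        if u <= acc:
            return x
    return bound  # guard against floating-point rounding

def sample_dgauss_zn(n, s, seed=0):
    """For L = Z^n the lattice distribution D_{L,s} factors coordinate-wise."""
    rng = random.Random(seed)
    return [sample_dgauss_z(s, rng) for _ in range(n)]
```

For general lattices no such coordinate-wise shortcut exists; producing samples from \(D_{L,s}\) for small s is exactly the DGS problem that drives Regev's reduction.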
1.2 A.2 Concrete Analysis
In this section the tightness gap of a reduction algorithm from problem A to problem B is the number of calls to the oracle for B that are made by the reduction algorithm.
Regev’s worst-case/average-case reduction has two main components:
1. The reduction of (search-)LWE to average-case DLWE (denoted \(\text {DLWE}_{ac}\)).
2. The reduction of worst-case \(\text {SIVP}_{\gamma }\) to LWE.
Reduction of LWE to \(\mathbf {DLWE}_{ac}\). This reduction has three parts.
Part I. Worst-Case to Average-Case. \(\text {DLWE}_{wc}\) denotes the worst-case DLWE problem. Lemma 4.1 in [71] shows that an algorithm \(W_1\) that solves \(\text {DLWE}_{ac}\) for a fraction \(\frac{1}{n^{d_1}}\) of all \(s \in {\mathbb Z}_q^n\) with acceptance probabilities differing by at least \(\frac{1}{n^{d_2}}\) can be used to construct an algorithm \(W_2\) that solves \(\text {DLWE}_{wc}\) with probability essentially 1 for all \(s \in {\mathbb Z}_q^n\). The algorithm \(W_2\) invokes \(W_1\) a total of \(O(n^{d_1+2d_2+2})\) times.
Part II. Search to Decision. Lemma 4.2 in [71] shows that an algorithm \(W_2\) which solves \(\text {DLWE}_{wc}\) for all \(s \in {\mathbb Z}_q^n\) with probability essentially 1 can be used to construct an algorithm \(W_3\) that solves (search-)LWE for all \(s \in {\mathbb Z}_q^n\) with probability essentially 1. Algorithm \(W_3\) invokes \(W_2\) a total of nq times, so this reduction has a tightness gap of nq.
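The call pattern behind this nq gap can be illustrated with a toy sketch. The oracle passed in below is assumed to be a perfect \(\text {DLWE}_{wc}\) solver (in the demonstration it simply "cheats" by knowing the secret); for each of the n coordinates the reduction re-randomizes the samples once per guess \(k \in {\mathbb Z}_q\), for at most nq oracle calls:

```python
import random

def search_from_decision(n, q, samples, dlwe_oracle):
    """Recover s coordinate by coordinate from a worst-case DLWE oracle,
    following the shape of Lemma 4.2: for coordinate j and guess k, add
    r to each a_j and r*k to each b.  The transformed pairs remain LWE
    samples exactly when k = s_j; otherwise they become random.  At most
    n*q oracle calls are made, which is the nq tightness gap."""
    rng = random.Random(1)
    s, calls = [], 0
    for j in range(n):
        found = None
        for k in range(q):
            transformed = []
            for a, b in samples:
                r = rng.randrange(q)
                a2 = list(a)
                a2[j] = (a2[j] + r) % q
                transformed.append((a2, (b + r * k) % q))
            calls += 1
            if dlwe_oracle(transformed):
                found = k
                break
        s.append(found)
    return s, calls
```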
Part III. Continuous to Discrete. Lemma 4.3 in [71] shows that an algorithm \(W_3\) that solves LWE can be used to construct an algorithm \(W_4\) that solves \(\text {LWE}_{q,\Psi _{\alpha }}\). (See [71] for the definition of the \(\text {LWE}_{q,\Psi _{\alpha }}\) problem.) This reduction is tight.
Reduction of \(\mathbf {SIVP}_{\gamma }\) to LWE. This reduction has tightness gap \(6n^{6+c}\). The reduction has two parts.
Part I. DGS to LWE. Let \(\epsilon = \epsilon (n)\) be some negligible function of n. Theorem 3.1 of [71] shows that an algorithm \(W_4\) that solves \(\text {LWE}_{q,\Psi _{\alpha }}\) given m samples can be used to construct a quantum algorithm \(W_{9}\) for \(DGS_{\sqrt{2n} \cdot \eta _{\epsilon }(L)/\alpha }\). Here, \(\eta _{\epsilon }(L)\) is the “smoothing parameter with accuracy \(\epsilon \)”, and \(\text {DGS}_{r'}\) (discrete Gaussian sampling problem) is the problem of computing a sample from the discrete Gaussian probability distribution \(D_{L,r'}\) where \(r' \ge \sqrt{2n} \cdot \eta _{\epsilon }(L)/\alpha \).
Let \(r = \sqrt{2n} \cdot \eta _{\epsilon }(L)/\alpha \). Let \(r_i = r \cdot (\alpha q/\sqrt{n})^i\) for \(i \in [0,3n]\). Algorithm \(W_{9}\) begins by producing \(n^c\) samples from \(D_{L,r_{3n}}\) (Lemma 3.2 in [71]); the \(W_4\) oracle is not used in this step. Next, by repeatedly applying the ‘iterative step,’ it uses the \(n^c\) samples from \(D_{L,r_i}\) to produce \(n^c\) samples from \(D_{L,r_{i-1}}\) for \(i=3n, 3n-1, \ldots , 1\). Since \(r_0=r\), the last step produces the desired sample from \(D_{L,r}\).
The iterative step (Lemma 3.3 in [71]) uses \(n^c\) samples from \(D_{L,r_i}\) to produce one sample from \(D_{L,r_{i-1}}\); this step is then repeated to produce \(n^c\) samples from \(D_{L,r_{i-1}}\). Thus, the iterative step is executed a total of \(3n \cdot n^c = 3n^{1+c}\) times.
Each iterative step has two parts.
1. The first part invokes \(W_4\) a total of \(n^2\) times:
   - Lemma 3.7 in [71] uses \(W_4\) to construct an algorithm \(W_5\) that solves \(\text {LWE}_{q,\Psi _{\beta }}\); \(W_5\) invokes \(W_4\) n times.
   - Lemma 3.11 in [71] uses \(W_5\) and the \(n^c\) samples from \(D_{L,r_i}\) to construct an algorithm \(W_6\) that solves the \(\text {CVP}^{(q)}_{L^*, \alpha q/\sqrt{2}r_i}\) problem. This reduction is tight.
   - Lemma 3.5 in [71] uses \(W_6\) to construct an algorithm \(W_7\) that solves the \(\text {CVP}_{L^*,\alpha q/\sqrt{2}r_i}\) problem. Algorithm \(W_7\) invokes \(W_6\) n times.
2. The second part (Lemma 3.14 in [71]) uses \(W_7\) to construct a quantum algorithm \(W_8\) that produces a sample from \(D_{L,r_{i-1}}\). This reduction is tight.
Since each iterative step has tightness gap \(n^2\), the total tightness gap for the reduction of DGS to LWE is \(3n^{3+c}\).
Part II. \(\text {SIVP}_{\gamma }\) to DGS. Lemma 3.17 in [71] uses \(W_{9}\) to construct an algorithm \(W_{10}\) that solves \(\text {SIVP}_{2\sqrt{2} n \eta _{\epsilon }(L)/\alpha }\). Algorithm \(W_{10}\) invokes \(W_{9}\) \(2n^3\) times.
Lemma 2.12 in [71] states that \(\eta _{\epsilon }(L) \le \sqrt{\omega (\log n)} \cdot \lambda _n(L)\) for some negligible function \(\epsilon (n)\). Thus
\(2\sqrt{2}\, n\, \eta _{\epsilon }(L)/\alpha \le \tilde{O}(n/\alpha ) \cdot \lambda _n(L),\)
so \(W_{10}\) solves \(\text {SIVP}_{\gamma }\) with \(\gamma = \tilde{O}(n/\alpha )\).
Summary. Regev’s reduction of \(\text {SIVP}_{\gamma }\) to \(\text {DLWE}_{ac}\) has tightness gap
\(O(n^{d_1+2d_2+2}) \cdot nq \cdot 3n^{3+c} \cdot 2n^3 = O(n^{11+c+d_1+2d_2}),\)
since \(q=n^2\). This is the count of W oracle calls stated in the Claim.
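As a bookkeeping check, the exponents of the component gaps (as powers of n, with \(q=n^2\)) can be added symbolically; the helper below is our own aid, not part of [71], and the constant factors are absorbed into the O-notation:

```python
def total_tightness_gap_exponent(c, d1, d2):
    """Add the exponents (powers of n, with q = n^2 so that nq = n^3) of the
    component tightness gaps in the reduction chain."""
    wc_to_ac = d1 + 2 * d2 + 2   # DLWE_ac to DLWE_wc: O(n^{d1+2d2+2}) calls
    search_to_dec = 3            # DLWE_wc to LWE: nq = n^3 calls
    dgs_to_lwe = 3 + c           # 3n^{1+c} iterative steps, n^2 calls each
    sivp_to_dgs = 3              # SIVP to DGS: 2n^3 calls
    return wc_to_ac + search_to_dec + dgs_to_lwe + sivp_to_dgs
```

For any c, d1, d2 this sums to \(11+c+d_1+2d_2\), matching the Claim.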
B Nontightness and Multi-user Attacks
In an important paper that has been all but ignored by the cryptographic research community, Zaverucha [77] showed that “provably secure” hybrid encryption, as described in several standards, is insecure in the multi-user setting if certain permitted (and even recommended) choices are made in the implementation. Because this work should be much better known than it is, we shall devote this section to explaining and summarizing [77]. We shall focus on hybrid encryption schemes in the comprehensive ISO/IEC 18033-2 standard [74].
We first recall the definition in [16] of IND-CCA security (Indistinguishability under Chosen-Ciphertext Attack) of encryption in the multi-user setting. Suppose there are n users. The adversary is given n public keys, a decryption oracle for each public key, and an LR (left-or-right encryption) oracle for each public key. The adversary can query each decryption oracle up to \(q_D\) times and each LR oracle up to \(q_\mathrm{LR}\) times. A decryption query simply asks for a chosen ciphertext to be decrypted under the corresponding public key. An LR query works differently. The n LR-oracles all have a hidden random bit b in common. The adversary chooses two equal-length messages \(M_0\) and \(M_1\) to query to one of the LR-oracles, which then returns an encryption \(C^*\) of \(M_b\). The adversary is not permitted to query \(C^*\) to the decryption oracle for the same public key. The adversary’s task is to guess b with success probability significantly greater than 1/2.
Remark 12
This “multi-challenge” security model (that is, \(q_\mathrm{LR}>1\)) can also be used in the single-user setting, but almost never is ([22] is a rare exception); in the standard IND-CCA security model \(q_\mathrm{LR}=1\). We shall later give a simple attack showing that the standard IND-CCA model is deficient and should be replaced by the multi-challenge model.
Remark 13
In [16] the authors give a generic reduction with tightness gap \(n\cdot q_\mathrm{LR}\) between the multi-user and single-user settings. In the full version of [16] they also give a construction that shows that this tightness bound is optimal; that is, they describe a protocol that can be attacked with \(n \cdot q_\mathrm{LR}\) times greater advantage in the multi-user setting than in the single-challenge single-user setting. Their construction is contrived and impractical; later we shall describe a simple attack on hybrid encryption which shows that, in practice as well as in theory, the generic tightness bound in [16] is best possible. That is, the attack described below reduces security by a factor equal to n times the number of messages sent to each user (see Remark 16). (In specific cases tighter reductions are sometimes possible—for example, the paper [16] contains a reduction with tightness gap \(q_\mathrm{LR}\) in the case of the Cramer–Shoup public-key encryption scheme [31].)
We now recall the setup and terminology of hybrid encryption. The encryption has two stages: a key-encapsulation mechanism (KEM) using a public-key cryptosystem (with the recipient’s public/secret key pair denoted PK/SK), and a data-encapsulation mechanism (DEM) using a symmetric-key cryptosystem that encrypts the data by means of the shared key K that is produced by the KEM. The KEM takes PK as input and produces both the key material K by means of a key-derivation function (KDF) and also a ciphertext \(C_1\) that will enable the recipient to compute K; the DEM takes K and the message M as input and produces a ciphertext \(C_2\). The recipient decrypts by first using \(C_1\) and SK to find K and then using \(C_2\) and the symmetric key K to find M.
Among the public-key systems commonly used for KEM are Cramer–Shoup [31] and ECIES (ElGamal encryption using elliptic curves, see [74]); symmetric-key systems commonly used for DEM are AES in cipher block chaining (CBC) mode and XOR-Encrypt using a hash function with a counter. (We will describe this in more detail shortly.) The KDF is a publicly known way to produce key material of a desired length L from a shared secret that is computed using the public-key system.
Suppose, following [74], that we use 128-bit AES in CBC-mode with zero initialization vector for DEM. Let MAC denote a message authentication code that depends on a 128-bit key. Our KDF produces two 128-bit keys \(K=(k_1,k_2)\). To send a 128m-bit message M, we set \(C_2\) equal to a pair \((C',t)\), where \(C'\) is the 128m-bit ciphertext computed below and \(t=\mathrm{MAC}_{k_2}(C')\) is its tag. The ciphertext \(C'=(C'_1,\ldots ,C'_m)\) is given by \(C'_1=\mathrm{AES}_{k_1}(M_1)\) and \(C'_i=\mathrm{AES}_{k_1}(C'_{i-1}\oplus M_i)\) for \(i=2,\ldots ,m\).
After receiving \((C_1,C_2)=(C_1,C',t)\), the recipient first uses \(C_1\), SK, and the KDF to find \((k_1,k_2)\), then uses the shared key \(k_2\) to verify that t is in fact the tag of \(C'\), rejecting the message if it is not. Finally she decrypts using \(k_1\).
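The deterministic zero-IV CBC DEM just described can be sketched as follows. The block cipher here is a hash-based toy stand-in for AES-128 (it is not invertible and in no way secure; it serves only to exhibit the deterministic structure), and the MAC is HMAC-SHA256 rather than the standard's MAC:

```python
import hashlib
import hmac

BLOCK = 16  # 128-bit blocks

def toy_blockcipher(key: bytes, block: bytes) -> bytes:
    """Hash-based toy stand-in for AES-128 (not invertible, not secure)."""
    return hashlib.sha256(key + block).digest()[:BLOCK]

def dem_cbc_encrypt(k1: bytes, k2: bytes, message: bytes):
    """Deterministic CBC with zero IV, then a tag over the ciphertext:
    C'_1 = E_{k1}(M_1), C'_i = E_{k1}(C'_{i-1} XOR M_i), t = MAC_{k2}(C')."""
    assert len(message) % BLOCK == 0
    prev = bytes(BLOCK)  # zero initialization vector
    out = []
    for i in range(0, len(message), BLOCK):
        m_i = message[i:i + BLOCK]
        prev = toy_blockcipher(k1, bytes(x ^ y for x, y in zip(prev, m_i)))
        out.append(prev)
    c = b"".join(out)
    return c, hmac.new(k2, c, hashlib.sha256).digest()
```

Because there is no randomization, two messages encrypted under the same \(k_1\) with identical leading blocks produce identical leading ciphertext blocks, which is exactly what the attack in Appendix B.1 exploits.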
Alternatively, for DEM we could use XOR-Encrypt with a hash function \({\mathcal H}\) as follows. To send a message M consisting of m 256-bit blocks, we have the KDF generate a 256m-bit key \(k_1=(k_{1,1},\ldots ,k_{1,m})\) by setting \(k_{1,i}={\mathcal H}(z_0\Vert i)\), where \(z_0\) is a shared secret produced by KEM, and also a MAC-ing key \(k_2\). The MAC works as before, but now \(C'\) is determined by setting \(C'_i=M_i\oplus k_{1,i}\). This is the hash function with counter (CTR) mode mentioned above.
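A corresponding sketch of this XOR-Encrypt (hash-with-counter) DEM; the 4-byte counter encoding is our own illustrative choice:

```python
import hashlib
import hmac

def xor_encrypt(z0: bytes, k2: bytes, message: bytes):
    """XOR-Encrypt DEM as described above: k_{1,i} = H(z0 || i),
    C'_i = M_i XOR k_{1,i}, t = MAC_{k2}(C'), with 256-bit blocks
    via SHA-256."""
    BLOCK = 32
    assert len(message) % BLOCK == 0
    out = []
    for i in range(len(message) // BLOCK):
        pad = hashlib.sha256(z0 + i.to_bytes(4, "big")).digest()
        m_i = message[i * BLOCK:(i + 1) * BLOCK]
        out.append(bytes(x ^ y for x, y in zip(m_i, pad)))
    c = b"".join(out)
    return c, hmac.new(k2, c, hashlib.sha256).digest()
```

Since XOR is an involution, applying the function to its own output with the same \(z_0\) recovers the message; like the CBC variant, the encryption is fully deterministic given the KEM output.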
In [32] Cramer and Shoup gave a tight proof that hybrid encryption has IND-CCA security under quite weak assumptions. The MAC-scheme need only be “one-time secure” (because it receives a new key \(k_2\) for each message), and the symmetric encryption function need only be one-time secure against passive adversaries—in particular, there is no need for randomization (again the reason is that it gets a new key \(k_1\) for each message). In accordance with the general principle that standards should not require extra features that are not needed in the security reductions, the standards for hybrid encryption [74] do not require randomization in the symmetric encryption; nor do they impose very stringent conditions on the KDF. In addition, in [74] Shoup comments that if KEM is implemented using the Cramer–Shoup construction [31], which has a security proof without random oracles, and if DEM is implemented using AES-CBC, then it is possible to prove a tight security reduction for the hybrid encryption scheme without the random oracle assumption. Thus, anyone who mistrusts random oracle proofs should use AES-CBC rather than XOR-Encrypt. All of these security proofs are given in the single-user setting.
1.1 B.1 Attacks in the Multi-user Setting
We now describe some of the attacks of Zaverucha [77] in the multi-user setting, which of course is the most common setting in practice. Let \(n=2^a\) be the number of users. First suppose that the DEM is implemented using AES128 in CBC-mode. Suppose that Bob sends all of the users messages that all have the same first two blocks \((M_1,M_2)\) (that is, they start with the same 256-bit header). The rest of the message blocks may be the same (i.e., broadcast encryption), or they may be different. The adversary Cynthia’s goal is to read at least one of the \(2^a\) messages. She guesses a key k that she hopes is the \(k_1\)-key for one of the messages. She computes \(C''_1=\mathrm{AES}_k(M_1)\) and \(C''_2=\mathrm{AES}_k(C''_1\oplus M_2)\) and compares the pair \((C''_1,C''_2)\) with the first two blocks of ciphertext sent to the different users (see Footnote 11). If there’s a match, then it is almost certain that she has guessed the key \(k_1=k\) for the corresponding message. That is because there are \(2^{128}\) possible keys \(k_1\) and \(2^{256}\) possible pairs \((C'_1,C'_2)\), so it is highly unlikely that distinct keys would give the same \((C'_1,C'_2)\). Once Cynthia knows \(k_1\)—each guess has a \(2^{-(128-a)}\) chance of producing a match—she can quickly compute the rest of the plaintext. This means that even though the hybrid encryption scheme might have a tight security reduction in the single-user setting that proves 128 bits of security, in the multi-user setting it has only \(128-a\) bits of security. Commenting on how dropping randomization in DEM made his attack possible, Zaverucha [77] calls this “an example of a provable security analysis leading to decreased practical security.”
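The attack can be run end-to-end at toy scale. Here the key space is shrunk to \(2^{12}\) keys with \(2^6\) users so the loop finishes instantly; with AES-128 keys and \(2^a\) users the identical loop is the time-\(2^{128-a}\) attack just described (the block cipher is again a hash-based stand-in of our own, not AES):

```python
import hashlib

BLOCK = 16
KEYBITS = 12  # toy key space of 2^12 keys so the demo runs instantly;
              # with AES-128 keys and 2^a users the expected work is 2^{128-a}

def toy_blockcipher(key: int, block: bytes) -> bytes:
    return hashlib.sha256(key.to_bytes(4, "big") + block).digest()[:BLOCK]

def cbc_first_two(key: int, m1: bytes, m2: bytes) -> bytes:
    c1 = toy_blockcipher(key, m1)  # zero IV
    return c1 + toy_blockcipher(key, bytes(x ^ y for x, y in zip(c1, m2)))

def multi_user_attack(m1: bytes, m2: bytes, seen_prefixes):
    """Guess keys until one explains some user's first two ciphertext
    blocks; with 2^a users a match is expected after ~2^{KEYBITS-a} guesses."""
    for guesses, k in enumerate(range(2 ** KEYBITS), start=1):
        if cbc_first_two(k, m1, m2) in seen_prefixes:
            return k, guesses
    return None, 2 ** KEYBITS
```

Storing the observed prefixes in a set (or sorted table, as in Footnote 11) makes each guess essentially unit cost.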
Remark 14
In modern cryptography—ever since the seminal Goldwasser-Micali paper [44]—it has been assumed that encryption must always be probabilistic. In [74] this principle is violated in the interest of greater efficiency because the security proof in [32] does not require randomization. This decision was bold, but also rash, as Zaverucha’s attack shows.
Remark 15
A time–memory–data tradeoff can be applied to speed up the on-line portion of the attack; see Remark 7 in [27]. Namely, at a cost of precomputation time \(2^{128-a}\) and storage size \(2^{2(128-a)/3}\), the secret key k of one of the \(2^a\) users can be determined in time \(2^{2(128-a)/3}\).
Remark 16
The above attack can also be carried out in the single-user setting if we suppose that Bob is sending Alice \(2^{a'}\) different messages that all have the same header \((M_1,M_2)\). Since different keys are generated for different messages (even to the same user), there is no need for the recipients to be different. This reduces the number of bits of security by \(a'\), and it shows that the multi-challenge security model is needed even in the single-user setting: the standard security model for encryption is deficient because it fails to account for the very realistic possibility that Bob uses hybrid encryption as standardized in [74] to send Alice many messages that have the same header.
Remark 17
Note that if the \(2^{a'}\) messages are broadcast to \(2^a\) users, then obviously the reduction in security is by \(a'+a\) bits. In some circumstances \(a'+a\) could be large enough to reduce the security well below acceptable levels. For example, if \(a'+a>32\), it follows that what was thought to have 128 bits of security now has fewer than 96, which, as remarked in Sect. 1, is not enough. It should be emphasized that the security is reduced because of actual practical attacks, not because of a tightness gap that could conceivably be removed if one finds a different proof.
We note that the above attack does not in general work if DEM is implemented using XOR-Encrypt. (Of course, someone who does not trust security proofs that use random oracles would not be using XOR-Encrypt, and so would be vulnerable.) But Zaverucha has a different attack on hybrid encryption with XOR-Encrypt that works for certain KDF constructions.
B.2 Attacks on Extract-then-Expand with XOR-Encrypt
The most commonly used KDF takes the shared secret \(z_0\) produced in KEM and derives a key of the desired length by concatenating \({\mathcal H}(z_0\Vert i)\) for \(i=1,\ldots \). However, at Crypto 2010, as Zaverucha [77] explains,
Krawczyk argues that cryptographic applications should move to a single, well-studied, rigorously analyzed family of KDFs. To this end, he formally defines security for KDFs, presents a general construction that uses any keyed pseudorandom function (PRF), and proves the security of his construction in the new model. The approach espoused by the construction is called extract-then-expand. [...] The HKDF scheme is a concrete instantiation of this general construction when HMAC is used for both extraction and expansion.
The Extract-then-Expand key derivation mechanism was soon standardized [28, 29, 59]. In particular, RFC 5869 describes HKDF, which instantiates the Extract-then-Expand mechanism with HMAC, and states that HKDF is intended for use in a variety of KDF applications including hybrid encryption.
Extract-then-Expand works in hybrid encryption as follows. Suppose that \(z_0\) is the shared secret produced in KEM. The Extract phase produces a bitstring \(z_1=\text{Extract}(z_0)\), perhaps of only 128 bits, which is much shorter than \(z_0\). (The Extract phase may also depend on a “salt,” but this is optional, and we shall omit it.) Then the key material K is obtained by a function that expands \(z_1\), i.e., \(K=\text{Expand}(z_1,L)\), where L as before is the bitlength of K. (There is also the option of putting some contextual information inside the Expand-function, but we shall not do this.)
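Under these simplifications (no salt, no contextual information), the mechanism can be sketched with Python’s standard hmac module. This is a minimal HKDF-style sketch in the spirit of RFC 5869, not a vetted implementation; note that with SHA-256 the Extract output \(z_1\) is 256 bits, whereas the text also considers 128-bit \(z_1\) values.

```python
import hmac
import hashlib

HASH = hashlib.sha256

def extract(z0: bytes) -> bytes:
    # Extract phase with the optional salt omitted: when no salt is
    # supplied, RFC 5869 uses a string of zero bytes as the HMAC key.
    return hmac.new(b"\x00" * HASH().digest_size, z0, HASH).digest()

def expand(z1: bytes, L: int) -> bytes:
    # Expand phase with the optional contextual info omitted:
    # K = T(1) || T(2) || ...,  T(i) = HMAC(z1, T(i-1) || i).
    okm, t, i = b"", b"", 1
    while len(okm) < L:
        t = hmac.new(z1, t + bytes([i]), HASH).digest()
        okm += t
        i += 1
    return okm[:L]

# Key material for the DEM, K = k_1 || k_2, derived from the
# KEM shared secret z_0 (L here is in bytes for simplicity).
z0 = b"shared secret produced by KEM"
K = expand(extract(z0), 64)
assert len(K) == 64
```

The crucial structural feature for what follows is that K is a deterministic public function of the short intermediate value \(z_1\) alone.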
We now describe Zaverucha’s attack on hybrid encryption when Extract-then-Expand with 128-bit \(z_1\) values is used as the KDF and XOR-Encrypt is used for message encryption. Suppose that Bob sends messages to \(2^a\) users that all have the same header \((M_1,M_2)\) and the same bitlength L. Cynthia’s goal is to recover at least one of the plaintexts. Rather than guessing a key, she now guesses the bitstring \(z_1\). For each guess she computes \(K=\text{Expand}(z_1,L)\) and \(C''_i=M_i\oplus k_{1,i}\), \(i=1,2\), where \(k_{1,i}\) denotes the \(i\)th block of the keystream key \(k_1\) contained in K. When she gets a match with \((C'_1,C'_2)\) for one of the users, she can then recover the rest of the plaintext sent to that user: \(M_i=C'_i\oplus k_{1,i}\), \(i>2\).
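A sketch of this attack follows. The Expand function below is a hypothetical stand-in (any public deterministic function of \(z_1\) alone exhibits the same weakness), and, as before, the guess is drawn from the real \(z_1\) list so the demo runs instantly; a real sweep over 128-bit \(z_1\) values would take about \(2^{128-a}\) guesses.

```python
import hashlib
import os

def expand(z1: bytes, L: int) -> bytes:
    # Hypothetical stand-in for Expand(z1, L): a public,
    # deterministic expansion of the short value z1.
    out, i = b"", 1
    while len(out) < L:
        out += hashlib.sha256(z1 + bytes([i])).digest()
        i += 1
    return out[:L]

header = b"A" * 32   # the shared header (M1, M2)
L = 64               # total keystream length in bytes

# Each of the 2^a users (a = 3 here) has a short Extract output z1.
z1s = [os.urandom(16) for _ in range(8)]
ciphertexts = {}
for i, z1 in enumerate(z1s):
    K = expand(z1, L)
    body = header + os.urandom(L - len(header))   # same header, rest varies
    ciphertexts[i] = bytes(a ^ b for a, b in zip(body, K))

# Cynthia guesses z1, expands it, XORs the known header into the
# first blocks, and checks the result against every user at once.
guess = z1s[3]
K = expand(guess, L)
probe = bytes(a ^ b for a, b in zip(header, K))
hits = [i for i, c in ciphertexts.items() if c[:32] == probe]
assert hits == [3]
# Knowing z1 yields the full keystream K, hence the whole plaintext.
```

As with the CBC attack, one guess is tested against all \(2^a\) users simultaneously, so the effective security is again \(128-a\) bits.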
Note that this attack does not work for XOR-Encrypt with the KDF using \({\mathcal H}(z_0\Vert i)\) described above. Once again the “provably secure” choice of Extract-then-Expand turns out to be vulnerable, whereas the traditional choice of KDF is not. Zaverucha comments that “In this example, replacing a commonly used KDF in favor of a provably secure one causes a decrease in practical security.”
As discussed in [77], Zaverucha’s attacks can be avoided in practice by putting in features that are not required in the standard single-user single-challenge security proofs. It would be worthwhile to give proofs of this.
Open Problem. Give a tight security reduction for hybrid encryption in the multi-user multi-challenge security model (random oracles are permitted) if DEM uses any of the following: (1) randomized encryption rather than one-time-secure encryption (for example, AES-CBC with a random IV that is different for each message and each recipient); (2) XOR-Encrypt using \({\mathcal H}(z_0\Vert i)\) for the KDF; (3) XOR-Encrypt using HKDF with a recipient- and message-dependent salt in the Extract phase and/or recipient- and message-dependent contextual information in the Expand phase.
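The intuition behind option (3) can be sketched as follows. This is an illustration of the salting idea, not a security proof, and the salt encoding and names are purely illustrative: folding a recipient- and message-dependent salt into Extract makes the derived key material differ per recipient even for the same \(z_0\)-distribution, so a single \(z_1\) guess can no longer be tested against all users at once.

```python
import hmac
import hashlib

def extract(salt: bytes, z0: bytes) -> bytes:
    # HKDF-style Extract with an explicit, per-recipient salt.
    return hmac.new(salt, z0, hashlib.sha256).digest()

def expand(prk: bytes, info: bytes, L: int) -> bytes:
    # HKDF-style Expand with optional contextual info.
    okm, t, i = b"", b"", 1
    while len(okm) < L:
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        okm += t
        i += 1
    return okm[:L]

# Illustrative salts binding recipient identity and a message counter.
z0 = b"shared secret"
k_alice = expand(extract(b"alice|msg-001", z0), b"", 64)
k_bob   = expand(extract(b"bob|msg-001", z0), b"", 64)

# Different salts separate the users' key material, so an attacker
# must mount a separate guessing effort per recipient.
assert k_alice != k_bob
```

Whether such domain separation actually yields a tight multi-user multi-challenge reduction is exactly what the open problem asks.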
We conclude this section by noting a curious irony. As we remarked in Sect. 1, it is very rare for a standards body to pay much attention to tightness gaps in the security reductions that are used to support a proposed standard or to whether those security reductions were proved in the multi-user or single-user setting. However, recently the IETF decided that the standard for Schnorr signatures [72] should require that the public key be included in the hash function. The reason was that Bernstein [20] had found a flaw in the tight reduction from an adversary in the single-user setting to an adversary in the multi-user setting that had been given by Galbraith et al. [37], and he had proved that a tight security reduction could be restored if the public key is included in the hash function. (Later Kiltz et al. [50] gave a tight security reduction without needing to include the public key in the hash function; however, their assumptions are stronger than in [37], and it is not yet clear whether their result will cause the IETF to go back to dropping the public key from the hash input.)
The peculiar thing is that the tightness gap between single-user and multi-user settings is only a small part of the tightness problem for Schnorr signatures.
Lemma 5.7 in [50] gives a security proof in the random oracle model for the Schnorr signature scheme in the single-user setting. The proof has a tightness gap equal to the number of random oracle queries, which can be very large—in particular, much larger than the number of users in the multi-user setting. Even a tight single-user/multi-user equivalence leaves untouched the large tightness gap between Schnorr security and hardness of the underlying Discrete Log Problem. It should also be noted that the IETF was responding to the error Bernstein found in a proof, not to any actual attack that exploited the tightness gap (we now know that such an attack is probably impossible, because of the recent proof in [50] that under a certain reasonable assumption there is no single-user/multi-user tightness gap).
In the meantime, standards bodies have done nothing to address Zaverucha’s critique of the standardized version [74] of hybrid encryption, which allows implementations that have far less security than previously thought, as shown by actual attacks.
© 2017 Springer International Publishing AG
Chatterjee, S., Koblitz, N., Menezes, A., Sarkar, P. (2017). Another Look at Tightness II: Practical Issues in Cryptography. In: Phan, R.C.-W., Yung, M. (eds) Paradigms in Cryptology – Mycrypt 2016. Malicious and Exploratory Cryptology. Mycrypt 2016. Lecture Notes in Computer Science, vol 10311. Springer, Cham. https://doi.org/10.1007/978-3-319-61273-7_3
Print ISBN: 978-3-319-61272-0
Online ISBN: 978-3-319-61273-7