ROCA: Return Of the Coppersmith Attack
On October 30, 2017, a group of Czech researchers from Masaryk University presented the ROCA paper at the ACM CCS Conference, where it earned the Real-World Impact Award. We briefly mentioned ROCA when it was first reported but hadn't dug into the details of the vulnerability until now. Because of its far-ranging impact, it seems important to review the vulnerability in light of the new results published recently.
ROCA and its impacts
As we all probably know, most modern cryptography is based on the fundamental assumption that finding the prime factors of a large number is much harder than generating that number from two large primes. For this assumption to hold, however, the prime numbers need to be randomly chosen, otherwise it becomes possible to guess those numbers more easily. The prime generation process can take time, especially on embedded devices like cryptographic tokens.
The ROCA vulnerability occurred because Infineon, a popular smartcard manufacturer, developed its own proprietary algorithm based on the fast prime technique. If used correctly, fast prime allows the creation of randomly distributed primes faster than traditional methods, which generally consist of generating random numbers and testing for primality. The ROCA paper shows that Infineon goofed on the implementation and the resulting primes were not evenly distributed. This opened the possibility of recovering private keys from public key material in a reasonable time frame.
Private key recovery is one of the worst security vulnerabilities (short of direct cleartext recovery) that can happen in asymmetric crypto systems. It turns out that Infineon is one of only three secure chip manufacturers and therefore its chips are everywhere: in cryptographic keycards (e.g. the popular Yubikey 4) but also Trusted Platform Modules (TPMs) that live in most modern computers; even some official identity cards, which are used in everything from banking to voting, have Infineon chips. So the impact of this vulnerability is broad: medical records, voting, OpenPGP signatures and encryption, Digital Rights Management (DRM), full disk encryption, and secure boot; all become vulnerable if the wrong keycard generated the private key material.
Hacking the Estonian elections
Let's take an extreme example of identity theft to illustrate the severity of the ROCA vulnerability. Estonia used Infineon chips in its state-issued identity cards. Because those cards are used in its electronic voting system, it was speculated that votes could be forged in an election. Indeed, there was a parliamentary election in 2015, at a time when vulnerable cards were in the wild.
The Estonian government claims that leveraging the attack to commit electoral fraud in Estonia is "complicated and not cheap". This seems to rely on an unnamed expert as saying it would "cost approximately €60 billion" to mount such an attack, a number found in this article published by the Estonian state media (ERR). Indeed, estimates from the ROCA paper show that cracking a single key would cost €80,000 in cloud-computing costs. Since then, however, the vulnerability was also reviewed by cryptographers Daniel J. Bernstein and Tanja Lange, who found that it was possible to improve the performance of the attack: they stopped after a four-fold improvement, but suspect even further speed-ups are possible. So let's see what effect those numbers would have on an election now.
There are 750,000 vulnerable cards, according to ERR, out of the 1.3 million cards currently in circulation, according to National Electoral Committee. There were around 900,000 eligible voters in the 2015 parliamentary elections, about 600,000 of those voted and about 30% of voters cast their votes electronically. So, assuming the distribution of compromised cards is uniform among voters, we can use the percentage of compromised cards (roughly 60%) to estimate that vulnerable cards could have been used in about 17% of the total votes.
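Spelling out that back-of-the-envelope estimate, using only the figures cited above:

```python
# Figures cited from ERR and the National Electoral Committee.
vulnerable_cards = 750_000
total_cards = 1_300_000
evote_share = 0.30  # ~30% of voters cast their votes electronically

compromised_fraction = vulnerable_cards / total_cards  # ~58% of cards
vulnerable_vote_share = compromised_fraction * evote_share  # ~17% of all votes
```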
The 2015 election was pretty close: the Estonian Reform Party beat its closest rival, the Estonian Centre Party, by only 5%. So it could have been possible to affect the result of that election, even without compromising all the cards. Bernstein and Lange were actually generous when they said that "'large-scale vote fraud' does not require breaking all of the ID cards; 10% already buys massive influence".
In fact, according to their numbers, the €80,000 required to break one card can be reduced by a factor of four thanks to their various performance improvements, and by another factor of ten using dedicated hardware, which means we can expect targeted attacks to be 40 times cheaper than in the original paper, bringing the cost of one vote down to around €2,000.
An Ars Technica article quoted another researcher as saying: "I'm not sure whether someone can slash the cost of one key below $1,000 as of today, but I certainly see it as a possibility."
So, being generous with the exchange rates for the sake of convenience, we could use a price per vote range of about €1,000-2,000. This means that buying the 5% of the votes necessary to win that 2015 election (or around 30,000 votes) would cost €30-60 million, which is a much lower number than originally announced and definitely affordable for a hostile foreign-state actor.
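The cost estimates reduce to a couple of multiplications; here is a quick sanity check, using the figures from the discussion above:

```python
cost_per_key_eur = 80_000   # ROCA paper's cloud-computing estimate per key
speedup = 4 * 10            # Bernstein/Lange improvements times dedicated hardware

cost_per_vote_high = cost_per_key_eur / speedup  # 2,000 EUR per key/vote
cost_per_vote_low = 1_000   # optimistic figure from the Ars Technica quote

margin_votes = 30_000       # ~5% of the ~600,000 votes cast in 2015
budget_low = margin_votes * cost_per_vote_low    # 30 million EUR
budget_high = margin_votes * cost_per_vote_high  # 60 million EUR
```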
Now, I am not saying that this happened during the 2015 Estonian election. I am merely pointing out that the resources required to buy votes (one way or another) are much cheaper than previously thought. And while the Electoral Committee released the server-side source code in 2013, that is not where the ROCA problem lies. The vulnerability, this time, is on the client side: identity cards based on proprietary hardware. Fortunately, the Estonian authorities took action to block the Infineon cards on November 3 and issue new certificates for the vulnerable cards in a state-wide replacement program. So far, there is no evidence of foul play or identity theft, according to state officials.
The attack and disclosure
ROCA is an acronym for "Return Of the Coppersmith Attack" which, in turn, refers to a class of attacks on RSA that exploit partial knowledge of the secret key material to recover the key in less than brute-force time. The actual mathematical details are available in the full paper [PDF] and go beyond my modest mathematical knowledge, but it certainly seems like Infineon took shortcuts in designing its random number generator. What the Czech researchers found is that the primes generated by the Infineon algorithm had a specific structure that made them easier to guess than randomly distributed primes. Indeed, all primes generated by RSALib, the library Infineon created for those chips, followed this pattern:
p = k ∗ M + (65537^a mod M)
Here p is one of the secret primes used to construct the public key. The secrets in the equation are k and a, while M varies depending on the chosen key size. The problem is that M is disproportionately large, close enough to the size of p that k and a are too small. And this is where the Coppersmith method comes in: it relies on small integers being part of the equation generating the prime. It is also where I get lost in the math; interested readers can read the papers to find out more.
Notably, Bernstein and Lange were able to reproduce the results before the paper was publicly released, using only information disclosed as part of the public advisory. As such, they raise the interesting question of whether delaying publication of the actual paper helped in containing the security vulnerability, since they were able to construct an attack within about a week of part-time work. They noted that publicizing the keyword "Coppersmith" in the title of the research paper (which was public at the time of disclosure) was especially useful in confirming they were on the right path during their research. In that sense, Bernstein and Lange imply that delaying the publication of the paper had no benefit to the security community, and may have actually benefited attackers in a critical period, while impeding the work of honest security researchers.
They also questioned the unusually long delay given to Infineon (eight months) before disclosure. Usually, "responsible disclosure" ranges from a few days to several months. For example, Google's Project Zero team gives projects 90 days to fix problems before it publicizes the results. The long delay was presumably given to provide ample time for companies and governments to organize mitigation measures. Yet, when the paper was finally released, some companies (e.g. Gemalto) still weren't clear whether their products were affected. According to Ars Technica, Gemalto's customers, including Microsoft, which uses the products for two-factor authentication, are still left wondering if their products are secure. Similarly, Estonia recently scrambled to suspend all affected cards, earlier than originally planned, even though it was notified of the vulnerability back in August. It seems the long delay didn't work: stakeholders did not proceed with those changes promptly, and thousands if not millions of devices are still vulnerable at the time of writing.
Part of the problem is that a lot of those devices simply cannot be upgraded in the field. For example, Yubico forbids firmware changes to the Yubikey 4, so the company had to set up a free replacement program for the popular keycards. TPM modules, fortunately, are often flashable, but updates need to come through the manufacturer, which may be hard to arrange. Most often, fixes are just not deployed at all, because supply chains make it difficult to find the original manufacturer that can provide them. In fact, the ROCA paper described the impact evaluation as "far from straightforward", partly because secure hardware is often designed as an opaque system that is deliberately hard to evaluate. In particular, the researchers stated that the "prevalence of the RSALib in the area of programmable smartcards is notoriously difficult to estimate". The researchers further explained that we may deal with the fallout of ROCA for a long time:
The ROCA paper also noted that medical records identification cards and electronic payment cards (EMV) may be vulnerable to this issue as well, although they were not aware of publicly available sources for the public keys for those devices—a necessary condition to mount an attack.
Yet the researchers are confident their disclosure led to good results. In an email interview, Petr Svenda said:
Svenda also believes the security vulnerability is "more likely to be unintentional" because the prime structure leak was "so obvious" when compared to other cryptographic vulnerabilities like Dual_EC_DRBG. He pointed to some of the recommendations from the paper:
Don't keep the design secret and have a verifiable implementation
Use some kind of secure multiparty computation to remove single points of failure
The latter idea is novel for me, but was proposed in academic literature back in 1997. The idea is to not rely solely on one algorithm or device to generate primes but collaboratively generate them among multiple parties. This makes vulnerabilities like ROCA much less likely because the same mistake would need to happen among all the components for the attack to be effective.
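The multiparty protocols in that literature are considerably more involved, but the single-point-of-failure argument can be illustrated with a much simpler, related construction (a hypothetical helper of my own, not code from the paper): mix entropy shares from independent sources before generating the primes, so that one flawed or malicious generator cannot impose a structure like ROCA's on its own. Note that this only protects the randomness; the multiparty schemes go further and generate the key collaboratively.

```python
import hashlib

def mix_entropy(shares):
    """Combine entropy shares from independent parties into one seed.

    As long as at least one share is unpredictable, the SHA-256 output
    is unpredictable, so no single broken RNG controls the seed that
    drives prime generation.
    """
    h = hashlib.sha256()
    for share in shares:
        # Length-prefix each share so share boundaries are unambiguous.
        h.update(len(share).to_bytes(4, "big"))
        h.update(share)
    return h.digest()

# Example: party 1's RNG is completely broken (all zeros), but the
# combined seed is still unpredictable thanks to parties 2 and 3.
shares = [b"\x00" * 32, b"entropy-from-party-2", b"entropy-from-party-3"]
seed = mix_entropy(shares)
```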
Obviously, you can also generate private keys on another computer and move them to the keycard later. But, as the Debian OpenSSL debacle showed, desktop computers are not immune to this type of vulnerability either.
The ROCA paper highlights the "dangers of keeping the design secret and the implementation closed-source, even if both are thoroughly analyzed and certified by experts". They were also critical of the certification process, which "counter-intuitively 'rewards' the secrecy of design by additional certification 'points' when an implementation is difficult for potential attackers to obtain - thus favoring security by obscurity."
The paper concludes by stating that "relevant certification bodies might want to reconsider such an approach in favor of open implementations and specifications." This is a similar conclusion to the one I reached in my comparison of cryptographic tokens: we should be able to design secure, yet open, hardware and hopefully these kinds of vulnerabilities will serve as a lesson for the future.
| Index entries for this article | |
|---|---|
| Security | Cryptography |
| Security | Encryption/Vulnerabilities |
| GuestArticles | Beaupré, Antoine |
Posted Nov 14, 2017 17:37 UTC (Tue) by tialaramex (subscriber, #21167)
In theory such keys could exist by chance, but at billions-to-one odds, so that we might be surprised to see one, and should be astonished to see more. In practice there seem to be at least a handful and perhaps dozens in certificates for the Web PKI.
It remains on my TODO list to figure out if the public keys I found are especially _more_ vulnerable than the general case discussed in the ROCA paper. Their properties are even weirder, and my intuition is that weird is bad, but my mathematics isn't good enough to turn that intuition instantly into anything concrete. At the very least their existence suggests either:
Infineon weren't the only people to have whatever bad idea is embodied by RSAlib, someone else independently made the same error, OR Infineon also shipped other devices that pretend not to be using RSAlib but are anyway.
When I saw that these keys were mentioned in the paper at all my heart leapt - but they dismiss them, which might mean they are much smarter than me and saw there's nothing very interesting going on (just more of the same) or perhaps it just means that life is short and they, unlike me, moved on to other things. Still, at least they were mentioned, so I know I'm not hallucinating.
Posted Nov 14, 2017 23:52 UTC (Tue) by ballombe (subscriber, #9523)
Posted Nov 17, 2017 17:34 UTC (Fri) by flussence (guest, #85566)
Posted Nov 19, 2017 21:42 UTC (Sun) by nix (subscriber, #2304)
Posted Nov 20, 2017 14:12 UTC (Mon) by tialaramex (subscriber, #21167)
Unfortunately all these types of solutions are also vulnerable to a problem where somebody nicks your Yubikey, field upgrades it to a version that works against you, then gives it back. Being obliged to send the device away to the manufacturer partly averts this attack. Of course a _very_ sophisticated adversary might be able to produce a look-alike device that suits their purpose and can be substituted quickly, for example by pick-pocketing. For example if you're Bill Browder, then sure, even the current arrangement isn't going to keep you safe from the type of forces able to have your associates murdered with impunity and then blame you for their deaths. But most of us aren't Bill Browder.
Posted Nov 14, 2017 20:18 UTC (Tue) by smoogen (subscriber, #97)
I do not understand that wording. If the firmware can't be 'touched' by outside forces without physically altering the key.. that isn't forbidding in the general English sense as much as "working as designed". Forbidding is normally used for "Yubico could change the firmware but it refuses to allow anyone to do so"
Posted Nov 14, 2017 20:27 UTC (Tue) by coolhandluke (guest, #114151)
Yubico removed that ability (in the name of security). They also moved to closed source applets -- for OpenPGP, at least. I'd guess that *they* have the ability to upgrade the firmware if they wanted to but, yes, they have "forbidden" end users from doing so.
Posted Nov 14, 2017 20:54 UTC (Tue) by smoogen (subscriber, #97)
The closed-source applets I don't have much say on; I never used the keys for OpenPGP but just for different types of OTP.
Posted Nov 15, 2017 7:27 UTC (Wed) by andrewsh (subscriber, #71043)
Posted Nov 15, 2017 14:41 UTC (Wed) by arnout (subscriber, #94240)
It is of course not impossible to do this. E.g. you could set up a honeypot shop that gives a discount when you log in with eID. But I feel that acquiring the eID certificate will easily reach the €80,000 cost of breaking the private key.
Posted Nov 15, 2017 17:00 UTC (Wed) by dottedmag (subscriber, #18590)
Can't find the relevant links right now.
Posted Nov 16, 2017 23:50 UTC (Thu) by giraffedata (guest, #1954)
Posted Nov 18, 2017 21:19 UTC (Sat) by JanC_ (guest, #34940)
Posted Nov 19, 2017 18:53 UTC (Sun) by jem (subscriber, #24231)
Posted Nov 20, 2017 17:17 UTC (Mon) by ScottMinster (subscriber, #67541)
Posted Nov 20, 2017 20:04 UTC (Mon) by jem (subscriber, #24231)
Posted Nov 17, 2017 6:45 UTC (Fri) by marcH (subscriber, #57642)
The question "is closed-source more or less secure than open-source?" is often debated. I think the answer is "it depends on many other things" and that this question alone is not very interesting when taken out of context.
There's, on the other hand, a well-known "Defect Cost Increase" theory (and picture) that explains why the later a defect is found, the more it costs. I think this theory demonstrates in a much better and simpler way why obscurity is terrible for security: because it *delays* the discovery of vulnerabilities - in this instance until *after* millions of vulnerable products have been sold and used.
Posted Nov 17, 2017 7:33 UTC (Fri) by marcH (subscriber, #57642)
DCI typically refers to the _time_ between the moment a bug is introduced and the moment it is discovered and root-caused. However it also applies to _spatial_ distances like the one between the source code location of the bug and the other, distant part of the code that pays the price and crashes. So concepts like isolation, modularity, and the "problem of too many layers of indirection" are DCI examples/applications.
Similar to source code, the distance can also be "organizational", think distant teams in a very large company. This is why open-source is more efficient: it removes all the various shades of red tape between the guy who creates the bug and the other who finds it.
Going back to _time_ between bug and discovery, Continuous Integration is a super obvious application of DCI. So are static and compile time checks. Test Driven Development is yet another one where an initially failing test is implemented so defects are "found" even *before* they would be introduced.
Of course we know that engineers get into software development to abstract themselves away from the boring *space and especially time* constraints of the real world (and from their evil project manager incarnations), so it's not so surprising after all that DCI is not their primary concern :-)
Forging votes: how to get the public keys?
What good is eID if the public keys are not publicly available in a registry that associates each public key with a meaningful identity?
"Public keys" and "certificates" are essentially the same for this discussion. The certificate contains the public key together with personal information linking the public key with the identity of the key holder.
I guess the question boils down to: If every transaction involving a signature performed with a private key also conveys the corresponding certificate, what is the public registry containing the certificates of the whole population of a nation needed for?
If you have all the CA certificates up to a trust point, then you can verify that the certificate is valid and belongs to the subject. The registry does not provide any additional information.