Security in an error-prone world
Konstantin Ryabitsev opened his 2015 Korea Linux Forum talk with some automotive history. In 1955, the Journal of the American Medical Association said that the interiors of contemporary cars were so poorly designed that it was amazing when anybody escaped an accident without serious injury. Ten years later, Ralph Nader's seminal Unsafe at Any Speed was published. In response, automotive engineers said that they designed their cars to operate safely — they were designed to drive, not to crash. Crashes were the result of bad driving, so the proper response was better driver education. Adding safety features would come at a cost in style, comfort, and price; customers, they said, did not want them.
Fifty years later, though, cars are designed around crumple zones surrounding a crush-resistant passenger compartment. They have airbags, seat belts with pre-tensioners, collision sensors, and more. Modern cars, Konstantin said, are designed with driver errors in mind; as a result, automotive fatalities are a fraction of their peak of nearly 50 years ago.
Computers and their software, though, are still designed like 1960s cars: they are all about power and comfort. Engineers will say that these systems have been designed to run safely, and that things fail when humans make mistakes. Protecting users from their own mistakes is expensive, safety features can hurt the usability of software, customers are not asking for more safety features, and so on. The problem is best solved, they say, with more user education.
Konstantin faced this problem head-on in 2011, when he was hired in the aftermath of the kernel.org compromise. The approach he found was to design security infrastructure like a medieval fortress — or like Helm's Deep from The Lord of the Rings. There is a big wall with archers to defend it, a second line of defense (consisting solely of Gimli the dwarf), an inner keep, and a final line made up of two old white guys. Plus a disaster-recovery plan called "Gandalf."
The thing is, we design systems like this, but then somebody gets in anyway. The forensics consultants are called in; they find that the back door used was there all along — the administrators used it to get their work done. Or the attacker used an internal PHP application that should never have been there; it has a comment saying "temporary for Bob," but nobody even remembers who Bob is. People make mistakes, and they always will; we need, he said, to think more about how we can prevent those mistakes from becoming security problems. We need, in other words, to equip our systems with airbags to prevent serious harm when things do go wrong.
Airbags for systems
Konstantin then went through the various levels of a system to talk about what those airbags might look like.
At the networking level, we are already deploying firewalls, using virtual LANs, zoning, and virtual private networks, and performing routine nmap scans. But there are a number of things we are not doing, starting with payload inspection to see what is happening on our network; that is hard, especially when encrypted traffic is involved. There are mechanisms for tracking the reputation of IP addresses for spam blocking, but reputations are not used for other kinds of traffic. We are not, in general, bothering with DNSSEC or checking TLS certificates; in many cases, we are not even using TLS at all.
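As an illustration of how small that missing check is, here is a minimal sketch, not from the talk, of TLS certificate verification in Python; the host name is a placeholder:

```python
# Minimal sketch of the TLS certificate check we so often skip.
# create_default_context() loads the system CA store and enables
# host-name verification; a bad certificate raises ssl.SSLError.
import socket
import ssl

def check_certificate(host, port=443):
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            print("%s: certificate verified, expires %s"
                  % (host, cert["notAfter"]))

check_certificate("example.org")  # placeholder host
```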
For servers, we are using virtualization for isolation, applying security updates, doing centralized logging, and using SSH keys for logging in. But we should be able to do far better than that. We should stop disabling SELinux (or AppArmor or whatever); they are there for when something goes wrong. SELinux can keep an eye on that PHP application that has no business connecting to other sites, digging through /proc, looking at /etc/passwd, scanning the network, or sending email. Running a system with SELinux enabled can be learned, Konstantin said; we need to stop turning it off.
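For the curious, a hedged sketch of what "stop turning it off" might look like in practice, using the kernel's real /sys/fs/selinux interface; the refuse-to-start policy is this example's assumption, not a recommendation from the talk:

```python
# Refuse to start a service on a host where SELinux has been disabled
# or put into permissive mode. /sys/fs/selinux/enforce reads "1" only
# when the policy is actually being enforced.
from pathlib import Path

def selinux_enforcing():
    node = Path("/sys/fs/selinux/enforce")
    try:
        return node.read_text().strip() == "1"
    except OSError:          # SELinux not built in or not mounted
        return False

if not selinux_enforcing():
    raise SystemExit("SELinux is off; this system is driving without airbags")
```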
We should also be using two-factor authentication on all of our servers. Hardware tokens (he favors YubiKeys) are better than software tokens running on phones, but either is better than nothing at all. SSH keys should be stored on smart cards (or a YubiKey NEO) rather than on laptops. The Linux Foundation team has put up a set of documents on how to make this combination work well with Linux.
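For readers who have never looked inside a software token: a one-time password is just an HMAC over the current time window (RFC 6238). This sketch, using only the Python standard library, computes the same six digits a phone app would; a hardware token performs the same computation but never lets the host read the secret. The secret shown is an example value, not a real credential:

```python
# TOTP (RFC 6238) with only the standard library.
import base64
import hmac
import struct
import time

def totp(secret_b32, now=None, period=30, digits=6):
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(now or time.time()) // period)
    mac = hmac.new(key, counter, "sha1").digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret, not a real credential
```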
Containers, he said, will not, contrary to some claims, make system administrators obsolete. But they can help isolate software stacks from each other. Think of a container, he said, as a sort of whole-OS static linking mechanism. They are a type of airbag: they allow a complex software stack to be bundled away and hidden from the world; the whole thing can then be further confined with SELinux. Containers also make good crash-test dummies — they can be used to test software in production-like environments. In such a setting, it's easy to check for open ports, improper log files, etc. This, he said, "is what DevOps is about."
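A crash-test-dummy check can be as simple as sweeping a test container for unexpected listening ports; in this sketch, the container address and the expected-port set are invented for illustration:

```python
# Probe a freshly deployed test container for listening ports that
# have no business being open.
import socket

EXPECTED = {22, 443}                     # what the service should expose

def open_ports(host, candidates=range(1, 1025)):
    found = set()
    for port in candidates:
        with socket.socket() as s:
            s.settimeout(0.2)
            if s.connect_ex((host, port)) == 0:
                found.add(port)
    return found

unexpected = open_ports("10.0.3.2") - EXPECTED   # hypothetical container IP
if unexpected:
    raise SystemExit("unexpected open ports: %s" % sorted(unexpected))
```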
On workstations things are a bit more difficult; confining applications is not an easy task. SELinux is nearly useless on a desktop. The X Window System is the essence of pre-safety design; there is only security between users, and applications have full access to everything. So a single vulnerable X application means the compromise of the entire desktop. X, Konstantin said, must die. Unfortunately, that won't happen for a long time.
Then there is the issue of web browsers. They run code from untrusted outside users, they have a huge attack surface, and they have proprietary plugins. And we can't live without them. So we end up with issues like CVE-2015-4495, which was actively exploited to search systems for SSH keys and passwords.
As a rule, the most unprotected system on the entire net is the system administrator's desktop. It sits on the VPN with full access to the net; it has access to the password vault and is full of privileged SSH keys. There is also often a handy list of other systems that those keys will grant access to. Such a system sees a steady stream of dangerous keystrokes, disclosing passwords to any attacker who happens by.
How does one address this threat? Requiring the use of one-time passwords — preferably not supplied by a phone app — is the first basic step. SSH keys should be stored on smart cards, and never in home directories. Proper security policies need to be written and administrators educated, forcefully if need be, to follow them. Konstantin also suggested looking into Qubes, which, he said, is the only serious attempt at workstation security out there. Qubes sidesteps most X vulnerabilities and can minimize the impact of things going wrong. Its safety orientation makes it "the Volvo of distributions."
When it comes to the functioning of administrative teams, there is no alternative to relying on education, so it is necessary to be prepared for failures. A team should establish secure communications so that its members can talk to each other when the network cannot be trusted. Email communications should employ PGP signatures, and instant messaging should be via a trusted mechanism as well. There need to be firm policies about what can be sent in clear text; important stuff should always be encrypted. Sites need workstation security policies, hardening checklists, and examples of secure workflows. Separate, dedicated browsers should be used for work and play. The system administrative team should use code review like any other development project, and changes should require signoffs.
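One way to make the "important stuff should always be signed" policy mechanical is to verify signatures before acting on anything. Here is a sketch that shells out to the real gpg --verify; the file names are placeholders:

```python
# Enforce the "always signed" policy: refuse to act on a message whose
# detached PGP signature does not verify against the team keyring.
import subprocess

def signature_ok(message_path, signature_path):
    result = subprocess.run(
        ["gpg", "--verify", signature_path, message_path],
        capture_output=True,
    )
    return result.returncode == 0

if not signature_ok("announcement.txt", "announcement.txt.asc"):
    raise SystemExit("signature check failed; policy says do not trust it")
```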
Checklists should be created and used for everything: deployment of a new system, code review, staff onboarding, staff departure, etc. There should be a procedure to quickly lock out an administrator — a tricky task. Checklists can be the most powerful tool available to avoid bad experiences.
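A checklist need not be paper; here is a sketch of one expressed as a small program, so that following it and running it are the same act. Every step command below is an invented placeholder:

```python
# An offboarding checklist as a small program: each step is a
# description plus the command that performs it, run in order and
# stopping loudly on the first failure.
import subprocess

OFFBOARDING = [
    ("disable the LDAP account", ["./disable-account.sh", "departing-user"]),
    ("revoke SSH keys",          ["./revoke-keys.sh", "departing-user"]),
    ("rotate shared secrets",    ["./rotate-secrets.sh"]),
]

for description, command in OFFBOARDING:
    print("[ ] " + description)
    subprocess.run(command, check=True)   # CalledProcessError aborts the list
    print("[x] " + description)
```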
In closing, Konstantin reiterated that mistakes are going to happen; the important thing is to make sure that these accidents are not fatal. Our current systems are great to drive, but they do not forgive mistakes; we are at our "unsafe at any speed" moment. We have the technology to make things safer, but we're not using it; that needs to change. Konstantin and his team are putting together a set of recommended policies and are looking for help to improve them.
[Your editor would like to thank the Linux Foundation for supporting his travel to KLF.]
Index entries for this article
Security: Best practices
Conference: Korea Linux Forum/2015
Posted Nov 3, 2015 19:49 UTC (Tue)
by dlang (guest, #313)
[Link] (12 responses)
FUD.
It was built to resist damage from the impact, not to absorb it and sacrifice itself to protect the passengers (like every other car of its era).
It would survive and protect you quite well in a low-speed collision. Different ways of designing things work better in larger collisions, but take far more damage in low-speed collisions.
Posted Nov 3, 2015 20:06 UTC (Tue)
by bbockelm (subscriber, #71069)
[Link] (11 responses)
https://www.youtube.com/watch?v=C_r5UJrxcck
I guess I would have called 40 mph "low speed," as I exceed that every day -- although I'm sure the official definition is different!
More on-topic: as you point out, making vehicles safer often has the side effect of making repairs more costly (or impossible). There's probably a lesson here for computer security as well: security often has unwanted (and perhaps unnecessary?) side effects on usability and cost.
Posted Nov 3, 2015 21:25 UTC (Tue)
by smoogen (subscriber, #97)
[Link] (10 responses)
Also, talking about cars is dangerous, at least in the US. People will come up with all kinds of stories about how the XYZ car was safer than any other car, and anyone who ever said anything against it was a commie-loving son of .... So I expect that one person will have heated words, someone else will come up with how the Bel Air was a poor man's Cadillac, and then the knives come out until they both agree that it was better than driving a Ford. [Or vice versa... and they might all agree that they would trade them all for a Studebaker Silver Hawk.]
Three weeks later, someone will ask them about the security talk, and all they will remember is that someone dissed their car, and it will all start over...
Posted Nov 3, 2015 21:29 UTC (Tue)
by corbet (editor, #1)
[Link] (9 responses)
One can argue about the details, but the decline in automotive deaths, as shown in Konstantin's slides (page 19), is pretty clear. Some of that is surely due to safer roads, the drunk-driving crackdown, and more. But the reduction in the fatality rate by over half, despite a huge increase in miles driven, must also be due to safer cars.
Posted Nov 3, 2015 21:33 UTC (Tue)
by smoogen (subscriber, #97)
[Link]
Posted Nov 3, 2015 21:50 UTC (Tue)
by bronson (subscriber, #4806)
[Link]
Posted Nov 3, 2015 22:08 UTC (Tue)
by dlang (guest, #313)
[Link] (6 responses)
I was disputing the statement that the car would kill you because it wasn't designed with the thought that anything would go wrong.
It was very much designed with the thought that you would bump into someone in a parking lot, back into a pole, and various other things like that.
bringing this back to IT Security, it all depends on what threads you are defending against.
Any organization can be shown to be insecure, if you assume that the threat is the NSA/KGB with no distractions targeting you.
No car is safe in a collision with a bullet train at full speed.
security (and safety) are not binary secure/insecure. Just because something isn't secure against one threat doesn't mean that it's horrible to use under all conditions.
If you don't use something because it's not perfectly secure, you won't have a business to protect because you will never actually do anything.
I'm also not saying that IT Security shouldn't be improved. I'm ranting against the common trap that Security people fall into of treating it as a binary thing and spreading FUD.
Posted Nov 3, 2015 22:17 UTC (Tue)
by smoogen (subscriber, #97)
[Link]
Posted Nov 3, 2015 22:52 UTC (Tue)
by bronson (subscriber, #4806)
[Link]
"But, if I spend another 4 mil on PA Networks, will we be MORE secure?"
"Technically yes, but you need to understand that the chances of..."
"Forget it. Here's a PO. Now, what if we spent even more money?"
Posted Nov 3, 2015 23:03 UTC (Tue)
by bronson (subscriber, #4806)
[Link] (1 responses)
"The Barracuda 600 got a 2.5 on the latest crash test. See the YouTube video here -- not for the faint of heart!"
Posted Nov 3, 2015 23:08 UTC (Tue)
by dlang (guest, #313)
[Link]
Posted Nov 5, 2015 14:44 UTC (Thu)
by droundy (subscriber, #4559)
[Link] (1 responses)
I have to say, if you're defending against threads, your best bet is either dragons or grubs...
Posted Nov 6, 2015 21:34 UTC (Fri)
by k8to (guest, #15413)
[Link]
Since I did, I now feel very old.
(Cheatsheet: pern.)
Posted Nov 3, 2015 19:55 UTC (Tue)
by dlang (guest, #313)
[Link] (3 responses)
If you can actually create a checklist for everything, you will spend so much time trying to find the right checklist that you won't actually do anything.
There is a place for checklists, but trying to reduce everything to a checklist is a disaster.
Major items (hiring/firing) need to be mostly automated so that the 'checklist' boils down to running a handful of processes.
Posted Nov 4, 2015 9:16 UTC (Wed)
by jezuch (subscriber, #52988)
[Link] (1 responses)
Airline pilots have checklists for everything[1]. Somehow they don't have any problem finding the one appropriate for the task at hand (and they don't do anything without a checklist in hand!).
[1] Well, "everything". There are freak accidents nobody imagined, after all.
Posted Nov 4, 2015 17:42 UTC (Wed)
by dlang (guest, #313)
[Link]
Posted Nov 5, 2015 18:20 UTC (Thu)
by WolfWings (subscriber, #56790)
[Link]
https://scholar.google.com/scholar?q=checklists+reduce+ho...
https://en.wikipedia.org/wiki/Pilot_error#Checklists
I don't think anyone is saying each organization should be writing all of their own checklists; many of them could be assembled by publishers and become a commodity, and in the case of running commands those can be fully automated with any number of scripting approaches.
But outside of distinct or unique diagnostic utilities, no SysAdmin should be typing 'apt-get install' or 'yum install' on the CLI of servers anymore, or the equivalent. Document what the standard is, and implement it, but let computers do what computers are good at: Automation. And for everything else? Yes, it boils down to a checklist. :)
Posted Nov 3, 2015 20:50 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link] (29 responses)
> SSH keys should be stored on smart cards, and never in home directories.
Are there any that support more than a few keys at a time? I have 22 SSH keys and 16 TOTP keys to keep track of. A YubiKey only holds 2 TOTP keys, so that's 9 YubiKeys I need to carry around. Instead, I have an encrypted, automounted USB key with symlinks in $HOME. Now if only GPG would resolve symlinks when writing back to its files rather than clobbering the symlink…
Posted Nov 3, 2015 22:46 UTC (Tue)
by nybble41 (subscriber, #55106)
[Link] (20 responses)
Why would you need more than one SSH key? The key is to prove who you are. It doesn't have to be specific to the host you're connecting to. I generally use a separate key for each system I connect *from*, so that I can deactivate them individually if compromised, but with a smartcard you wouldn't need to store the key locally; you carry it around with you from system to system.
Posted Nov 3, 2015 23:09 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link]
It came in handy when Fedora forced SSH key changes a couple of years ago: I only had to update that one key rather than choosing between reissuing pubkeys to a bunch of machines or having one oddball key.
Posted Nov 4, 2015 13:34 UTC (Wed)
by nix (subscriber, #2304)
[Link] (2 responses)
For most uses, though, one smartcarded key would do (if they worked! :( ), combined with a bit of care about where you forward the agent to, so the agent doesn't follow you into the role accounts (so an attacker who gets in there cannot command your agent to carry out operations using your key).
Posted Nov 4, 2015 22:40 UTC (Wed)
by nybble41 (subscriber, #55106)
[Link] (1 responses)
Why would you ever share keys with others? Besides the increased risk of key exposure, that makes it impossible to revoke just one user from the role account, or properly log which user accessed the account. There is no reason why you can't have a single role account which can be accessed with multiple keys, one per unique user.
(Obviously, make sure you configure sshd to locate the authorized_keys file somewhere outside the role user's home directory, or use one of the other mechanisms available to supply authorized keys. Only an admin should be able to add or remove authorized keys for a role account.)
Posted Nov 10, 2015 16:22 UTC (Tue)
by nix (subscriber, #2304)
[Link]
Posted Nov 5, 2015 9:46 UTC (Thu)
by madhatter (subscriber, #4665)
[Link] (15 responses)
Posted Nov 5, 2015 16:16 UTC (Thu)
by nybble41 (subscriber, #55106)
[Link] (14 responses)
Posted Nov 5, 2015 17:14 UTC (Thu)
by madhatter (subscriber, #4665)
[Link] (13 responses)
Posted Nov 5, 2015 18:36 UTC (Thu)
by mathstuf (subscriber, #69389)
[Link] (10 responses)
Posted Nov 5, 2015 21:22 UTC (Thu)
by madhatter (subscriber, #4665)
[Link] (9 responses)
The tokens produced by tapping on the Yubikey's "button" are HOTP tokens (either HOTP OATH, or Yubico's own (open) method of generating HOTP tokens).
The TOTP tokens are accessed via NFC, using (in my case) a free (newBSD-licensed, available on f-droid.org) Android app; some other device has to be involved, as the Yubikey has no internal clock. The external app provides a timestamp via NFC, and the Yubikey seals that using each of the secrets it has in NFC storage. As for identifying which TOTP code is for which external service, when each secret is loaded into the 'key, a text snippet goes with it, and this is returned by the 'key over NFC, along with each associated TOTP code.
So when I fire up the app and bring it close to my 'key, three different TOTP codes appear on my phone's screen, each with a small text snippet (usually one that I chose) reminding me which particular remote service that TOTP code is intended for.
In case anyone's wondering, I have loaded the app from scratch onto someone else's phone, and verified that (as long as it's done in the same 30-second window) the same TOTP codes appear, with the same text snippets; all the service-specific stuff is on the Yubikey, the phone provides only communications, display, and a timestamp.
Posted Nov 5, 2015 23:35 UTC (Thu)
by mathstuf (subscriber, #69389)
[Link]
Posted Nov 12, 2015 14:52 UTC (Thu)
by itvirta (guest, #49997)
[Link] (7 responses)
Because I started thinking about someone walking past one of them and asking it to sign a timestamp for, say, tomorrow, giving plenty of time to walk away and use the codes to log in later.
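The attack is easy to demonstrate: TOTP is a pure function of the secret and a caller-supplied timestamp, so lying about the time yields codes that only become valid later. A standard-library sketch (with an example secret, not a real one):

```python
# Ask for tomorrow's code today: the token cannot tell an honest
# timestamp from a future one, because it has no clock of its own.
import base64
import hmac
import struct
import time

def totp_at(secret_b32, timestamp, period=30, digits=6):
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(timestamp) // period)
    mac = hmac.new(key, counter, "sha1").digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp_at("JBSWY3DPEHPK3PXP", time.time() + 86400))  # valid in 24 hours
```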
Posted Nov 12, 2015 18:51 UTC (Thu)
by flussence (guest, #85566)
[Link] (2 responses)
Posted Nov 12, 2015 22:05 UTC (Thu)
by johill (subscriber, #25196)
[Link] (1 responses)
Posted Nov 16, 2015 14:00 UTC (Mon)
by itvirta (guest, #49997)
[Link]
shows the app displaying the OTPs on the smartphone.
Though it also mentions a possibility of password-protecting the credentials.
Can't tell why I didn't find this the first time, though...
Posted Nov 18, 2015 21:43 UTC (Wed)
by nix (subscriber, #2304)
[Link] (3 responses)
(This is not ideal -- backward-compatibility concerns limit the session counter to 7 bits, and obviously the protocol requires it to saturate rather than wrapping, so overflows are well within the bounds of possibility. But it's not *bad*, and you can reset the session counter by resetting the underlying AES key and sending the new one to your authentication servers.)
Posted Nov 20, 2015 10:01 UTC (Fri)
by tao (subscriber, #17563)
[Link] (2 responses)
Posted Nov 23, 2015 23:30 UTC (Mon)
by nix (subscriber, #2304)
[Link] (1 responses)
Posted Nov 24, 2015 17:38 UTC (Tue)
by tao (subscriber, #17563)
[Link]
Posted Nov 5, 2015 21:42 UTC (Thu)
by nybble41 (subscriber, #55106)
[Link] (1 responses)
And that's perfectly fine, but the comment you were replying to (mine) was questioning the need to store 22 *SSH* keys. Perhaps you meant to reply to mathstuf instead?
Posted Nov 5, 2015 21:45 UTC (Thu)
by madhatter (subscriber, #4665)
[Link]
You're completely right, but that wasn't what I was saying. If you scroll back up, you'll see that mathstuf asked if there were devices that stored more than a few keys because "A YubiKey only holds 2 TOTP keys, so that's 9 yubikeys I need to carry around". My point was merely that a Yubikey Neo can definitely store more than 2 TOTP keys, possibly quite a lot more.
Posted Nov 3, 2015 23:05 UTC (Tue)
by wahern (subscriber, #37304)
[Link] (4 responses)
16 HOTP/TOTP keys I can understand. It's why I was so psyched when the NEO came out with OpenPGP support, and why I really, really, really hope that Google's U2F project will see widespread adoption. U2F puts native smartcard support in the browser, making the entire stack--from the driver up to the JavaScript API--hassle free. Passwords, even HOTP- and TOTP-based systems, whether generated from a token or not, just don't scale from an individual perspective. I really wish Mozilla[1] would finally finish their U2F support. Microsoft _claims_ to be committed to supporting it, too. So there's hope.
[1] Of course, Mozilla has always had PKCS#11 support. But the ecosystem is too fractured and proprietary, particularly when it comes to card management. U2F specifies the things that matter to maximize interoperability, and works around driver hassles by abusing USB HID. If anything U2F is too complicated and flexible, but it's the closest thing yet which stands any chance of bringing widespread pubkey authentication to the masses.
Posted Nov 4, 2015 21:41 UTC (Wed)
by Lennie (subscriber, #49641)
[Link] (2 responses)
So far, Mozilla seems to not be interested in doing that right now.
My guess would be: security, of course. Stacks like the USB stack are prone to problems and have been used to crash systems or, worse, to install malware.
For example:
At a security conference, there was a talk about using fuzzing to find vulnerabilities in the Windows HID stack.
The security researcher found a bug in the Bluetooth or USB stack that could at least crash Windows (possibly a buffer overflow, and thus possibly a security bug) and reported it to Microsoft, which did nothing. They said: this requires local access, so it can't be exploited remotely.
So what he demonstrated was: remote desktop supports HID devices like USB, and he used remote desktop to crash Windows servers.
Anyway, this was a couple of years ago, and I believe Microsoft still hasn't fixed it.
All you have to do is fuzz USB device names.
Now think about browsers and how many machines they got deployed to. You want to be pretty sure it's safe. ;-)
There is an add-on:
https://addons.mozilla.org/pl/firefox/addon/u2f-support-a...
https://github.com/prefiks/u2f4moz
There is a bug bounty:
https://www.bountysource.com/issues/10401143-implement-th...
Anyway, see bugzilla for the progress:
https://bugzilla.mozilla.org/show_bug.cgi?id=1065729
Posted Nov 5, 2015 11:43 UTC (Thu)
by raven667 (subscriber, #5198)
[Link] (1 responses)
This doesn't seem wrong: can an unprivileged user set up virtual USB devices such that this interface could be attacked remotely? I understand the remote-desktop case, but doesn't that have to be enabled by a privileged user before it can be used as an attack vector? If you have to be either locally present or have administrative access before you can reach this attack vector, it's not really all that interesting.
Posted Nov 7, 2015 18:45 UTC (Sat)
by Lennie (subscriber, #49641)
[Link]
You need to have an account on the server.
But Microsoft sells products like 'terminal server'.
So an exploit could be used to do privilege escalation.
Posted Nov 5, 2015 14:50 UTC (Thu)
by kpfleming (subscriber, #23250)
[Link]
[1]: https://w3c.github.io/websec/web-authentication-charter
Posted Nov 6, 2015 0:31 UTC (Fri)
by pedrocr (guest, #57415)
[Link] (2 responses)
Can't you just symlink the key dir instead?
Posted Nov 6, 2015 0:58 UTC (Fri)
by mathstuf (subscriber, #69389)
[Link] (1 responses)
Posted Nov 6, 2015 1:03 UTC (Fri)
by pedrocr (guest, #57415)
[Link]
Posted Nov 3, 2015 22:26 UTC (Tue)
by nix (subscriber, #2304)
[Link] (15 responses)
To make things worse, the YubiKey is a multifunction device, but it can only do one thing at once: so if you use it for OTP passwords or U2F, every time you touch the key's touchpad or ask for a U2F password it'll disconnect from the gpg-agent, and neither pcscd nor the builtin ccid driver in GnuPG 2.1 is remotely expecting this. (pcscd is a whole other mass of crawling horror, running right down to the fact that the author has recently removed all autospawning support for non-systemd configurations for poorly-stated reasons, meaning that if you don't run systemd you are forced to keep a systemwide pcscd running just in case a user plugs a smartcard in: if you want this configuration to be remotely secure, you have to use a very recent JS-infested PolicyKit. What a mess.)
PIV support using the yubico-piv-tool does work much better -- but it relies, again, on pcscd to do the heavy lifting, and this has no idea that slot 9a allows reauthentication without a password. Since nothing anywhere stores the PIN and there is nothing like automatic reconnection, whenever you ask for an OTP, your SSH key vanishes until you do an ssh-add -e and -a again, and type in the PIN (again). Half the time you'll get stuck in some unclear state where doing *either* of these just tells you 'agent refused operation' while the logs scream 'process_remove_smartcard_key: pkcs11_del_provider failed'.
There is a debugging interface for pcscd: it involves *renaming the shared library* and making a new symlink. I've got some debug logs out of this ridiculous system but haven't yet analyzed them to see what's going wrong (I need to learn about the protocol it's trying to talk, first).
This whole thing is not remotely ready for prime-time. I'd love to recommend SSH and GPG key storage on the Neo and its use everywhere, but I just can't. I use PIV keys myself and am frequently bitten by the smartcard connection being lost because I had the temerity to use the Neo like a Yubikey and get an OTP out of it, and much of the time I can't get it back again.
Posted Nov 3, 2015 23:11 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link] (9 responses)
Well, my zshrc sets it and it isn't in systemctl --user show-environment, so…yes? :)
> if you ssh or su to another user
Eh, not clearing your environment in such situations is probably not the best idea anyways.
> and want it to use your smartcard
Why would you want to use an SSH or GPG key as another user? Genuinely curious (though I disable agent forwarding; no need to have a malicious server slurp keys when I connect to it).
> very recent JS-infested PolicyKit
To be fair, this happened a while ago. Sad as it is :( .
So it seems that my automounted usb key is still the most flexible setup for me if this is the state of things.
Posted Nov 4, 2015 0:23 UTC (Wed)
by wahern (subscriber, #37304)
[Link] (6 responses)
> Why would you want to use an SSH or GPG key as another user? Genuinely curious (though I disable agent forwarding; no need to have a malicious server slurp keys when I connect to it).
1) A server would normally only be able to use the key, not read it. Maybe you meant something else. 2) If the client SSH program is buggy, yes, a malicious server could read the key, but that's because you're not using a smartcard. Personally, I never understood the appeal of putting a private key on a USB device. It's not much different from using a password-encrypted key stored on your computer. It makes it easier to use on other computers, but that seems even more risky, because now the security of the key is a function of the least secure computer you use it on. Though I don't know your precise reasons, and am not trying to judge them specifically.
Even though I use a smartcard, I only enable authentication forwarding on a case-by-case basis. Still, I've always thought it would be useful to configure a card to require a physical key press before performing the signing operation. This seems like a more useful feature than a PIN, IMO, especially for contexts like banking. Somebody can hack my computer and steal my PIN, then use the card without my knowledge--it's plugged in throughout most of the day. Whereas if it required a physical confirmation, they couldn't. PINs address the wrong attack scenario--the biggest threat is somebody hacking my computer, not stealing my smartcard. If somebody steals my smartcard I'll know about it, or at least the damage will be circumscribed. And maintaining physical custody and security of a key is much easier as a practical matter, especially in terms of the threats I and most people face. Whereas, much like a password, if they steal my PIN I'll have no idea. And the universe of people that could access my PIN is, as a practical matter, any sufficiently knowledgeable hacker on the face of the planet.
This is why when people use the term 2-factor, I cringe. Even supposed professionals are enamored with this phrase, without giving much consideration to the _real_ threat scenarios, and to the relative costs and benefits of these factors. Yes, a coworker stealing your smartcard for 5 minutes when you're at lunch is a real threat. But the state of computer security is _so_ utterly abysmal that the threat absolutely pales in comparison to remote threats. Combined with the implementation and interoperability problems that something as simple as a PIN can cause (it's 2015 and, with all its problems, the Yubikey NEO is as good as it gets), this and similar features shouldn't be considered a requirement if you want to improve organizational security.
I have it on my TODO list to hack the Yubikey NEO OpenPGP applet to support personal data objects. Perhaps I should look into support for requiring a key press before signing.
Posted Nov 4, 2015 0:46 UTC (Wed)
by dlang (guest, #313)
[Link] (2 responses)
I don't care how complex your PIN is; if someone can see your keystrokes, they can get your PIN.
I agree that you really want your two factor authentication to be something that requires affirmative action to use (either typing in the result, or at least hitting a button on the key itself)
Posted Nov 4, 2015 13:39 UTC (Wed)
by nix (subscriber, #2304)
[Link] (1 responses)
In fact, capturing it is more or less useless anyway: it only serves to *unlock* the smartcard, and if you plug it in, you're probably going to do that anyway. An attacker doesn't need to capture your PIN: it just needs to wait for you to type it in yourself, then query the unlocked smartcard as usual. Getting your key remains impossible, as is doing authentication operations when the smartcard is not plugged in. So you've basically restricted a successful attacker to only attacking when you're around and can potentially spot attacks. (You probably won't, but still.)
Posted Dec 1, 2015 13:37 UTC (Tue)
by dany (guest, #18902)
[Link]
Posted Nov 4, 2015 1:32 UTC (Wed)
by mricon (subscriber, #59252)
[Link]
"Touch to sign" is part of the OpenPGP Card v3 spec and I expect it will be supported by upcoming versions of yubikey NEO.
Posted Nov 4, 2015 2:27 UTC (Wed)
by mathstuf (subscriber, #69389)
[Link]
The setup is a key with what I need day-to-day (SSH keys, keepass database, SSL client certs) with a passphrase I can actually type. There are other copies with everything I need (main GPG private key, SSL cert backups, TOTP recovery keys, etc.) without the daily typeable passphrase and instead a much longer passphrase.
> It makes it easier to use on other computer, but that seems even more risky, because now the security of the key is a function of the least secure computer you use it on.
Well, no different than other setups, really. Just don't use it on machines I don't trust.
Posted Nov 4, 2015 13:37 UTC (Wed)
by nix (subscriber, #2304)
[Link]
> Why would you want to use an SSH or GPG key as another user? Genuinely curious
I consider a smartcarded SSH key to be 'something I have' combined with 'something I know': proof that I have physical access to the smartcard (though not quite as much proof as a touch-to-generate one-time password) and proof that I know the PIN. As such, it's quite safe to use it for multiple users, if what you're using those users for is separation of concerns and to stop programs running as one from accidentally smashing programs running as the other. Identities are not the same as Unix uids!
My local setup has agent forwarding turned on to the clustered machines (which also share $HOME filesystems via NFS), but the firewall host does not have any of that, so an attacker will be stuck there, unable to ssh into the cluster even if I'm sshed into the firewall: that SSH connection is *not* accompanied by agent forwarding, so the smartcarded key I used to get into the firewall and across the cluster is still inaccessible.
Posted Nov 11, 2015 21:20 UTC (Wed)
by nix (subscriber, #2304)
[Link]
Alas, it's not related to that. The problem is that the connection to the agent is forwarded over ssh back to the gpg-agent, and it communicates the value of $GPG_TTY back over that (Assuan) connection: gpg-agent then tries to kick up a keychain on your TTY, and oh look it's a different user and I bloody hope the gpg-agent isn't running as root, so it can't do it. Now you're in trouble.
As far as I can tell, gpg-agent is only designed to work in a situation in which each Unix user has his own keychain, and only one Unix user has a connected smartcard, and no other user ever wants to use it. It actively militates against a scheme where you use multiple uids to separate your concerns (I use a different uid for work and non-work, for instance), and it always will. The problem is that the keychain is forked by the gpg-agent to ensure that nothing else can spy on the passphrase as it passes through -- but the keychain *cannot* be forked by the gpg-agent without causing the problem above!
I see no way to fix this :(
Posted Nov 11, 2015 21:31 UTC (Wed)
by nix (subscriber, #2304)
[Link]
Posted Nov 3, 2015 23:40 UTC (Tue)
by wahern (subscriber, #37304)
[Link] (1 responses)
SSH authentication forwarding shouldn't be problematic--it's the same ssh instance talking to the agent when authenticating the first hop as when you're authenticating the next hop. Certainly I've never had a problem. I presume you're talking about GnuPG protocol-specific forwarding, which does seem to be buggy. But that's irrelevant for SSH authentication, it's just a bummer when you use mutt to read your e-mail on a remote server (as I do) and would like to be able to use PGP for e-mail.
MacPGP2 is using GnuPG 2.0, FWIW. But MacGPG's secret sauce is their GUI PIN entry program, so maybe they've fixed other problems as well.
Finally, my Yubikey NEO will work in OpenPGP mode _and_ HOTP mode just fine, although I do have to re-enter my PIN after generating an HOTP. Multifunction worked both before and after Apple's pcscd problems, but notably Apple's bug made pcscd lose track of the card state. Something similar (though I doubt identical, because it's forked) seems to be the problem here.
I agree things could be better. But compared to the way things were just a few years ago, particularly with SSH it's like night & day.
Ideally somebody will devise a scheme to use U2F keys for SSH, and OpenSSH will gain native U2F support, removing the need for all the middleware.[1]
[1] Ludovic Rousseau is one of the hardest working and most capable FOSS developers out there, but the deck is simply stacked against him. There's only so much one person (or a whole team of people) can do to wrangle the horrendously complex state of smartcard interfacing and management. The OpenPGP smartcard spec works because it simplifies many things, leaves less room for optional crap, and specifies basic management capabilities. U2F simplifies things even further. Heck, they could've probably just ditched the PIN requirement altogether. A "1-factor" pubkey smartcard without a PIN is still an unfathomably better state of affairs than using passwords when it comes to remote authentication, and even better than password-in-all-but-name schemes like HOTP, TOTP, and biometrics.
Posted Nov 4, 2015 13:56 UTC (Wed)
by nix (subscriber, #2304)
[Link]
> I've been using a Yubikey NEO with the OpenPGP applet on OS X with MacPGP2 for over a year, and I have no such bad experiences.
Excellent! That means I'm probably just doing something stupid wrong -- though the fact remains that there are lots of ways to get it wrong, and the way described on the LF site is one way to do it (because that's what I tried, and it didn't work).
> SSH authentication forwarding shouldn't be problematic--it's the same ssh instance talking to the agent when authenticating the first hop as when you're authenticating the next hop. Certainly I've never had a problem.
That works until you use your Yubikey to do anything else (e.g. OTP). If you were using native SSH, you could use ssh-add -e / -s to sever the smartcard connection and restart it, and everything would mostly be fine -- but using GPG, well, as soon as the connection is severed, the gpg-agent (and, if you're using it, pcscd) hang, hard. You have to kill -9 and restart them, and as soon as you do that the authentication forwarding is severed: you have to restart all your ssh sessions too! This is very far from optimal.
> notably Apple's bug made pcscd lose track of the card state. Something similar (though I doubt identical, because it's forked) seems to be the problem here.
Almost certainly. Possibly this is the ill-defined 'issues' which caused pcscd autostarting without systemd to be removed, but I doubt it: it was only half a dozen lines, and in particular nothing changed about smartcard state tracking: pcscd still exits when idle in both cases, presumably losing track of card state as it does so.
> But compared to the way things were just a few years ago, particularly with SSH it's like night & day.
Good God, that's horrifying. :)
> Ideally somebody will devise a scheme to use U2F keys for SSH, and OpenSSH will gain native U2F support, removing the need for all the middleware.[1]
Agreed! I'd be oh so very happy with that. U2F looks much easier to wrangle than PKCS#11, enough so that adding support is something that does not fill me with horror... oh no I haven't just given myself another spare-time project that'll never get done, have I? ... you'd think I'd learn.
In particular, it's stateless, so if U2F stops working for a second while we do an OTP authentication, nothing bad happens (and it's physically impossible to do both at once, since both involve a button press).
Presumably it would be done similarly to how PKCS#11/PIV support already is, only rather than a PKCS11Provider, you'd specify an URL to an authentication server (obviously in some new ssh_config option), and if you wanted to forward things, you'd use an SSH agent and have ssh-add and agent forwarding do the work of getting to where your smartcard is actually plugged in.
Posted Nov 5, 2015 15:18 UTC (Thu)
by apoelstra (subscriber, #75205)
[Link] (2 responses)
Thanks for this. For years now I've thought gpg-agent just "usually doesn't work", but never had an idea of what was going wrong or how to look into it. This paragraph provides many hints.
Posted Nov 5, 2015 22:15 UTC (Thu)
by flussence (guest, #85566)
[Link]
It's completely FUBAR. I wish OpenBSD's alternative would catch on.
Posted Nov 10, 2015 16:25 UTC (Tue)
by nix (subscriber, #2304)
[Link]
Sadly, this golden ideal remains unattainable :(
Posted Nov 4, 2015 16:34 UTC (Wed)
by ibukanov (subscriber, #3942)
[Link]
Posted Nov 5, 2015 10:32 UTC (Thu)
by ortalo (guest, #4654)
[Link] (4 responses)
There was a pretty clear incentive for improving car passenger safety (which only becomes car security when you think of the driver as a potential opponent) in the 60s. Everyone was a potential victim. And pain is common knowledge.
We do not have such a simple incentive in our case, so I do not see as many reasons for a potential inversion of the vulnerability-increase phenomenon. Or, more precisely, I can see some reasons myself, but I suspect most computer users are not at all aware of them, and I am not sure we are yet at the maximum of the insecurity problem.
Contrary to cars, we have to actively uncover why insecurity harms us and what its actual failures are (easy spying by governmental agencies is one failure that powerful people still call a success...).
Posted Nov 5, 2015 21:38 UTC (Thu)
by smoogen (subscriber, #97)
[Link] (3 responses)
The part that is hard to understand is that it took decades for those changes to work their way through various systems, and a similar "forcing" of changes upon computer users, providers, etc. will take an equivalent time. My guess for how it will occur is:
Banks/transaction companies are tired of paying for lost/stolen identities. They will then push for mandatory computer insurance. The insurance companies will then push for changes in both what a user must know before they can use a computer, what they do if they want to keep their insurance and also what businesses must do to make the computer "safe". And in 30-40 years we will have "affordable" computer insurance and "safer" computers. [yay us.]
Posted Nov 6, 2015 7:00 UTC (Fri)
by ibukanov (subscriber, #3942)
[Link] (1 responses)
I suspect what happens is that government regulation will lead to a race to the bottom in insurance prices, and those with the cheapest rates and fewest strings attached will simply wipe out the companies that try to insist on real security. Then comes judgment day.
Posted Nov 6, 2015 17:00 UTC (Fri)
by smoogen (subscriber, #97)
[Link]
Posted Nov 6, 2015 17:01 UTC (Fri)
by kleptog (subscriber, #1183)
[Link]
Except this isn't really happening. Firstly, the losses due to transaction fraud just aren't that big. The banks insure themselves and take the costs out of the fees they charge. The 1.5% they charge on credit cards probably more than makes up for any losses they might incur due to fraud. And since the losses are going down rather than up, there's no reason for this policy to change.
AFAIK there is no insurance policy for lost/stolen identities, primarily, I think, because the costs are highly variable and hard to quantify, and it just doesn't happen very often. Also, an insurance policy would pay out money, whereas what you really need is to get records cleaned up and removed. I'm not sure any insurance company is interested in doing that kind of work.