
Security in an error-prone world

By Jonathan Corbet
November 3, 2015
Korea Linux Forum
The 1957 Chevrolet Bel Air was a beautiful car, kernel.org administrator Konstantin Ryabitsev said at the beginning of his Korea Linux Forum talk. It had roomy seats, lots of features, and a smooth ride; it was all about power and comfort. But if you got into an accident with this car, it would kill you; it was not designed around the idea that things might go wrong. Our computer systems in 2015 mirror the Bel Air of 1957; they are not designed around humans and the mistakes they make. Konstantin had a simple message for the audience: take a cue from the automotive industry and design and build systems that do not fail catastrophically when errors are made.

In 1955, the Journal of the American Medical Association said that the interiors of contemporary cars were so poorly designed that it was amazing when anybody escaped an accident without serious injury. Ten years later, Ralph Nader's seminal Unsafe at Any Speed was published. In response, automotive engineers said that they designed their cars to operate safely — they were designed to drive, not to crash. Crashes were the result of bad driving, so the proper response was better driver education. The addition of safety features would come at a cost in style and comfort; it would also cost more. Customers, they said, did not want those safety features.

Fifty years later, though, cars are designed around crumple zones and crumple-resistant passenger areas. They have airbags, seat belts with pre-tensioners, collision sensors, and more. Modern cars, Konstantin said, are designed with driver errors in mind; as a result, automotive fatalities are a fraction of their peak nearly 50 years ago.

Computers and their software, though, are still designed like 1960s cars. They are all about power and comfort. Engineers will say that these systems have been designed to run safely, that things fail when humans make mistakes. Protecting users from their own mistakes is expensive, safety features can hurt the usability of software, customers are not asking for more safety features, and so on. The problem is best solved, they say, with more user education.

Konstantin faced this problem full-on in 2011, when he was hired in the aftermath of the kernel.org compromise. The approach he found was to design security infrastructure like a medieval fortress — or like Helm's Deep from the Lord of the Rings. There is a big wall with archers to defend it, a second line of defense (consisting solely of Gimli the dwarf), an inner keep, and a final line made up of two old white guys. Plus a disaster-recovery plan called "Gandalf."

The thing is, we design systems like this, but then somebody gets in anyway. The forensics consultants are called in; they find out that the back door used was always there — the administrators used it to get their work done. Or the attacker used an internal PHP application that should have never been there; it has a comment saying "temporary for Bob," but nobody even remembers who Bob is. People make mistakes, and they always will; we need, he said, to think more about how we can prevent these mistakes from becoming security problems. We need, in other words, to equip our systems with airbags to prevent serious problems when things do go wrong.

Airbags for systems

Konstantin then went through the various levels of a system to talk about what those airbags might look like.

At the networking level, we are already deploying firewalls, using virtual LANs, zoning, and virtual private networks, and performing routine nmap scans. But there are a number of things we are not doing, starting with payload inspection to see what is happening on our network; that is hard, especially when encrypted traffic is involved. There are mechanisms for tracking the reputation of IP addresses for spam blocking, but reputations are not used for other kinds of traffic. We are not, in general, bothering with DNSSEC or checking TLS certificates; in many cases we are not even using TLS certificates at all.

For servers, we are using virtualization for isolation, applying security updates, doing centralized logging, and using SSH keys for logging in. But we should be able to do far better than that. We should stop disabling SELinux (or AppArmor or whatever); they are there for when something goes wrong. SELinux can keep an eye on that PHP application that has no business connecting to other sites, digging through /proc, looking at /etc/passwd, scanning the network, or sending email. Running a system with SELinux enabled can be learned, Konstantin said; we need to stop turning it off.

We should also be using two-factor authentication on all of our servers. Hardware tokens (he favors YubiKeys) are better than software tokens running on phones, but either is better than nothing at all. SSH keys should be stored on smart cards (or a YubiKey NEO) rather than on laptops. The Linux Foundation team has put up a set of documents on how to make this combination work well with Linux.
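The documents mentioned above go into detail, but the core of the smart-card-plus-gpg-agent setup is small. A sketch of one common arrangement (option names are from stock GnuPG and OpenSSH; the shell line and paths may need adjusting per distribution):

```
# ~/.gnupg/gpg-agent.conf: have gpg-agent answer SSH agent requests,
# so the private key never has to leave the smart card
enable-ssh-support

# In the shell profile, point SSH clients at gpg-agent's socket:
#   export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
```

With this in place, `ssh` asks gpg-agent for signatures, and gpg-agent forwards the operation to the card, prompting for the card's PIN as needed.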

Containers, he said, will not, contrary to some claims, make system administrators obsolete. But they can help isolate software stacks from each other. Think of a container, he said, as a sort of whole-OS static linking mechanism. They are a type of airbag: they allow a complex software stack to be bundled away and hidden from the world; the whole thing can then be further confined with SELinux. Containers also make good crash-test dummies — they can be used to test software in production-like environments. In such a setting, it's easy to check for open ports, improper log files, etc. This, he said, "is what DevOps is about."

On workstations things are a bit more difficult; confining applications is not an easy task. SELinux is nearly useless on a desktop. The X Window System is the essence of pre-safety design; there is only security between users, and applications have full access to everything. So a single vulnerable X application means the compromise of the entire desktop. X, Konstantin said, must die. Unfortunately, that won't happen for a long time.

Then there is the issue of web browsers. They run code from untrusted outside users, they have a huge attack surface, and they have proprietary plugins. And we can't live without them. So we end up with issues like CVE-2015-4495, which was actively exploited to search systems for SSH keys and passwords.

As a rule, the most unprotected system on the entire net is the system administrator's desktop. It sits on the VPN with full access to the net; it has access to the password vault and is full of privileged SSH keys. There is also often a handy list of other systems that those keys will grant access to. The system is full of dangerous keystrokes, disclosing passwords to any attacker that happens by.

How does one address this threat? Requiring the use of one-time passwords — preferably not supplied by a phone app — is the first basic step. SSH keys should be stored on smart cards, and never in home directories. Proper security policies need to be written and administrators educated, forcefully if need be, to follow them. Konstantin also suggested looking into Qubes, which, he said, is the only serious attempt at workstation security out there. Qubes sidesteps most X vulnerabilities and can minimize the impact of things going wrong. Its safety orientation makes it "the Volvo of distributions."

When it comes to the functioning of administrative teams, there is no alternative to relying on education, so it is necessary to be prepared for failures. A team should establish secure communications so that its members can talk to each other when the network cannot be trusted. Email communications should employ PGP signatures, and instant messaging should be via a trusted mechanism as well. There need to be firm policies about what can be sent in clear text; important stuff should always be encrypted. Sites need workstation security policies, hardening checklists, and examples of secure workflows. Separate, dedicated browsers should be used for work and play. The system administrative team should use code review like any other development project, and changes should require signoffs.

Checklists should be created and used for everything: deployment of a new system, code review, staff onboarding, staff departure, etc. There should be a procedure to quickly lock out an administrator — a tricky task. Checklists can be the most powerful tool available to avoid bad experiences.

In closing, Konstantin reiterated that mistakes are going to happen; the important thing is to make sure that these accidents are not fatal. Our current systems are great to drive, but they do not forgive mistakes; we are at our "unsafe at any speed" moment. We have the technology to make things safer, but we're not using it; that needs to change. Konstantin and his team are putting together a set of recommended policies and are looking for help to improve them.

[Your editor would like to thank the Linux Foundation for supporting his travel to KLF].

Index entries for this article
Security: Best practices
Conference: Korea Linux Forum/2015



Security in an error-prone world

Posted Nov 3, 2015 19:49 UTC (Tue) by dlang (guest, #313) [Link] (12 responses)

> But if you got into an accident with this car, it would kill you; it was not designed around the idea that things might go wrong.

FUD.

It was built to resist damage from the impact, not absorb it and sacrifice itself to protect the passengers (like every other car of its era).

it would survive and protect you quite well in a low speed collision. Different ways of designing things work better in larger collisions, but take far more damage in low speed collisions.

Security in an error-prone world

Posted Nov 3, 2015 20:06 UTC (Tue) by bbockelm (subscriber, #71069) [Link] (11 responses)

Out of curiosity (because the safety of a 1957 Bel Air is a bit off-topic), do you know of any citations indicating that the Bel Air is safer at a lower speed? This article makes me recall this particular YouTube video:

https://www.youtube.com/watch?v=C_r5UJrxcck

I guess I would have called 40mph "low speed" as I exceed that every day -- although I'm sure the official definition is different!

More on-topic, as you point out, making vehicles safer often has the side-effect of making repairs more costly (or impossible). There's probably a lesson here in computer security also: security often has unwanted side-effects (perhaps unnecessary?) on usability and cost.

Security in an error-prone world

Posted Nov 3, 2015 21:25 UTC (Tue) by smoogen (subscriber, #97) [Link] (10 responses)

Low speed definitions have changed over time (and per manufacturer, before standardization occurred). Low speed was considered 5 mph [e.g. 5 mph bumpers]. The fact that most people don't drive only at 5 mph makes the survival numbers hard to gauge, because some manufacturers rated 40 mph as a very high speed while others considered it routine.

Also talking about cars is dangerous at least in the US. People will come up with all kinds of stories about how XYZ car was safer than any other car and the people who have ever said anything against it were commie loving sons of .... So I expect that one person will have heated words, someone else will come up with how the Bel Air was a poor-mans Cadillac and then the knives come out until they both agree that it was better than driving a Ford. [Or vice versa.. and they might all agree that they would trade them all for a Silver hawk Studebaker].

Three weeks later someone will ask them about the security talk and all they will remember is that someone dissed their car, and it will all start over..

Security in an error-prone world

Posted Nov 3, 2015 21:29 UTC (Tue) by corbet (editor, #1) [Link] (9 responses)

One can argue about the details, but the decline in automotive deaths, as shown in Konstantin's slides (page 19) is pretty clear. Some of that is surely due to safer roads, the drunk-driving crackdown, and more. But the reduction in the fatality rate by over half, despite a huge increase in miles driven, must also be due to safer cars.

Security in an error-prone world

Posted Nov 3, 2015 21:33 UTC (Tue) by smoogen (subscriber, #97) [Link]

I agree with you. I just have sat in so many garages growing up listening to car enthusiasts get into fights over THEIR car not being the one that was the awful one when that wasn't really said.. [which parallels most computer discussions of this instruction set or that motherboard or that computer language.]

Security in an error-prone world

Posted Nov 3, 2015 21:50 UTC (Tue) by bronson (subscriber, #4806) [Link]

It's also clear in the video that bbockelm mentioned. Watch the Bel Air's A-pillar disintegrate and roof cave in (0:23 and 0:46), while the Malibu's passenger cabin doesn't even visibly deform (1:10). Very impressive. It's surprising they're made out of the same material.

Security in an error-prone world

Posted Nov 3, 2015 22:08 UTC (Tue) by dlang (guest, #313) [Link] (6 responses)

I never said that the newer cars weren't safer overall

I was disputing the statement that the car would kill you because it wasn't designed with the thought that anything would go wrong.

It was very much designed with the thought that you would bump into someone in a parking lot, back into a pole, and various other things like that.

bringing this back to IT Security, it all depends on what threads you are defending against.

Any organization can be shown to be insecure, if you assume that the threat is the NSA/KGB with no distractions targeting you.

No car is safe in a collision with a bullet train at full speed.

security (and safety) are not binary secure/insecure. Just because something isn't secure against one threat doesn't mean that it's horrible to use under all conditions.

If you don't use something because it's not perfectly secure, you won't have a business to protect because you will never actually do anything.

I'm also not saying that IT Security shouldn't be improved. I'm ranting against the common trap that Security people fall into of treating it as a binary thing and spreading FUD.

Security in an error-prone world

Posted Nov 3, 2015 22:17 UTC (Tue) by smoogen (subscriber, #97) [Link]

OK never mind then.

Security in an error-prone world

Posted Nov 3, 2015 22:52 UTC (Tue) by bronson (subscriber, #4806) [Link]

Alas, that binary trap is what dumps such stupid money into the security industry.

"But, if I spend another 4 mil on PA Networks, will we be MORE secure?"

"Technically yes, but you need to understand that the chances of..."

"Forget it. Here's a PO. Now, what if we spent even more money?"

Security in an error-prone world

Posted Nov 3, 2015 23:03 UTC (Tue) by bronson (subscriber, #4806) [Link] (1 responses)

Also bringing it back to IT security, it might help if we had an NHTSA for networking equipment...

"The Barracuda 600 got a 2.5 on the latest crash test. See the YouTube video here -- not for the faint of heart!"

Security in an error-prone world

Posted Nov 3, 2015 23:08 UTC (Tue) by dlang (guest, #313) [Link]

I agree, it would be great to have security equipment/software rated on how much stuff it blocked rather than how fast it will pass traffic with no rules.

Security in an error-prone world

Posted Nov 5, 2015 14:44 UTC (Thu) by droundy (subscriber, #4559) [Link] (1 responses)

> bringing this back to IT Security, it all depends on what threads you are defending against.

I have to say, if you're defending against threads, your best bet is either dragons or grubs...

Security in an error-prone world

Posted Nov 6, 2015 21:34 UTC (Fri) by k8to (guest, #15413) [Link]

This took me a while to figure out.

Since I did, I now feel very old.

(Cheatsheet: pern.)

Security in an error-prone world

Posted Nov 3, 2015 19:55 UTC (Tue) by dlang (guest, #313) [Link] (3 responses)

> Checklists should be created and used for everything

If you can actually create a checklist for everything, you will spend so much time trying to find the right checklist that you won't actually do anything.

There is a place for checklists, but trying to reduce everything to a checklist is a disaster.

Major items (hiring/firing) need to be mostly automated so that the 'checklist' boils down to running a handful of processes.
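What "mostly automated" might look like can be sketched in a few lines; the step names below are made up for illustration, and a real runner would wrap actual commands (disable the directory account, revoke the VPN certificate, and so on) rather than stub callables:

```python
def run_checklist(steps):
    """Run each (name, action) step in order, stopping at the first failure.

    Returns (names of completed steps, name of failed step or None),
    so a human can see exactly where manual intervention is needed.
    """
    completed = []
    for name, action in steps:
        if not action():
            return completed, name
        completed.append(name)
    return completed, None

# Hypothetical offboarding checklist; each lambda stands in for a real
# automated action.
offboarding = [
    ("disable directory account", lambda: True),
    ("revoke VPN certificate", lambda: True),
    ("remove SSH authorized keys", lambda: True),
]
```

Run against the list above, `run_checklist(offboarding)` reports all three steps completed and no failure; the checklist itself reduces to "run this and read the report".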

Security in an error-prone world

Posted Nov 4, 2015 9:16 UTC (Wed) by jezuch (subscriber, #52988) [Link] (1 responses)

> If you can actually create a checklist for everything, you will spend so much time trying to find the right checklist that you won't actually do anything.

Airline pilots have checklists for everything[1]. Somehow they don't have any problem finding the one appropriate for the task at hand (and they don't do anything without a checklist in hand!).

[1] Well, "everything". There are freak accidents nobody imagined, after all.

Security in an error-prone world

Posted Nov 4, 2015 17:42 UTC (Wed) by dlang (guest, #313) [Link]

no, airline pilots don't have checklists for "everything", they have checklists for lots of things, but they don't pull out a checklist to turn, climb, etc.

Security in an error-prone world

Posted Nov 5, 2015 18:20 UTC (Thu) by WolfWings (subscriber, #56790) [Link]

Considering how much more complex medical practices are, or airplane operations, than setting up a set of Linux servers? I disagree with your statement that there can't be checklists for everything important.

https://scholar.google.com/scholar?q=checklists+reduce+ho...

https://en.wikipedia.org/wiki/Pilot_error#Checklists

I don't think anyone is saying each organization should be writing all of their own checklists; many of them could be assembled by publishers and become a commodity, and in the case of running commands those can be fully automated with any number of scripting approaches.

But outside of distinct or unique diagnostic utilities, no SysAdmin should be typing 'apt-get install' or 'yum install' on the CLI of servers anymore, or the equivalent. Document what the standard is, and implement it, but let computers do what computers are good at: Automation. And for everything else? Yes, it boils down to a checklist. :)

Security in an error-prone world

Posted Nov 3, 2015 20:50 UTC (Tue) by mathstuf (subscriber, #69389) [Link] (29 responses)

> Hardware tokens (he favors YubiKeys) are better than software tokens running on phones, but either is better than nothing at all. SSH keys should be stored on smart cards (or a YubiKey NEO) rather than on laptops.

> SSH keys should be stored on smart cards, and never in home directories.

Are there any that support more than a few keys at a time? I have 22 SSH keys and 16 TOTP keys to keep track of. A YubiKey only holds 2 TOTP keys, so that's 9 yubikeys I need to carry around. Instead, I have an encrypted, automounted USB key with symlinks in $HOME. Now if only GPG would resolve symlinks when writing back to its files rather than clobbering the symlink…

Security in an error-prone world

Posted Nov 3, 2015 22:46 UTC (Tue) by nybble41 (subscriber, #55106) [Link] (20 responses)

> Are there any that support more than a few keys at a time? I have 22 SSH keys and 16 TOTP keys to keep track of.

Why would you need more than one SSH key? The key is to prove who you are. It doesn't have to be specific to the host you're connecting to. I generally use a separate key for each system I connect *from*, so that I can deactivate them individually if compromised, but with a smartcard you wouldn't need to store the key locally; you carry it around with you from system to system.

Security in an error-prone world

Posted Nov 3, 2015 23:09 UTC (Tue) by mathstuf (subscriber, #69389) [Link]

For work I have one general key plus 4 for connecting to an rrsync host (each key is granted access to a single directory). For the rest, they are per-service. The idea was to place keys on hosts which need the keys (so a phone would have no need of my github key for instance, but it would need access to my git-annex repositories), but that never really happened. It allows me to be flexible in the machines rather than copying a new pubkey to umpteen servers when a new machine comes online in my network. There are also keys for backup access (per machine).

It came in handy when Fedora forced SSH key changes a couple years ago: I only had to update that one key and not choose between reissuing pubkeys to a bunch of machines or having one oddball key.

Security in an error-prone world

Posted Nov 4, 2015 13:34 UTC (Wed) by nix (subscriber, #2304) [Link] (2 responses)

You do need one key per security boundary, e.g. if you're sharing keys with others for role accounts -- but, in practice, if you have lots of those you should be using certificates, so you'd still need only one key (the others would share a cert with you).

For most uses, though, one smartcarded key would do (if they worked! :( ), combined with a bit of care about where you forward the agent to, so the agent doesn't follow you into the role accounts (so an attacker who gets in there cannot command your agent to carry out operations using your key).

Security in an error-prone world

Posted Nov 4, 2015 22:40 UTC (Wed) by nybble41 (subscriber, #55106) [Link] (1 responses)

> You do need one key per security boundary, e.g. if you're sharing keys with others for role accounts...

Why would you ever share keys with others? Besides the increased risk of key exposure, that makes it impossible to revoke just one user from the role account, or properly log which user accessed the account. There is no reason why you can't have a single role account which can be accessed with multiple keys, one per unique user.

(Obviously, make sure you configure sshd to locate the authorized_keys file somewhere outside the role user's home directory, or use one of the other mechanisms available to supply authorized keys. Only an admin should be able to add or remove authorized keys for a role account.)
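One way to do that in stock OpenSSH is a Match block in sshd_config; the role account name here is hypothetical:

```
# /etc/ssh/sshd_config: keys for the role account live under /etc,
# writable only by root, never by the role user itself
Match User deploy
    AuthorizedKeysFile /etc/ssh/authorized_keys/%u
```

The %u token expands to the user name, so one directory can hold a root-managed key file per role account, and each individual's personal key can be added or revoked there without touching the others.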

Security in an error-prone world

Posted Nov 10, 2015 16:22 UTC (Tue) by nix (subscriber, #2304) [Link]

I don't know why *I'd* share keys with others, but alas everywhere I have ever worked there has been sharing of keys, usually mandated by some corporate honcho so that they have only one thing to revoke to kill off that thing at end-of-life. Certs should be used for this instead, and everyone should have their own key, but not everything supports that yet (e.g. it's a relatively recent addition to OpenSSH).

Security in an error-prone world

Posted Nov 5, 2015 9:46 UTC (Thu) by madhatter (subscriber, #4665) [Link] (15 responses)

My Yubikey Neo already has three TOTP tokens on it accessed via RFID (plus two HOTP, via USB), and I believe the RFID token ceiling is quite a lot higher than that.

Security in an error-prone world

Posted Nov 5, 2015 16:16 UTC (Thu) by nybble41 (subscriber, #55106) [Link] (14 responses)

Sure, but that's TOTP/HOTP. They depend on shared secrets, so you need a different one for each host, or else they each could impersonate you to the other hosts. SSH keys use public-key cryptography, so you can use the same key to authenticate to many different hosts.
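A minimal RFC 6238 sketch makes the shared-secret point concrete: the server holds the very same secret and runs the very same computation, so any service holding that secret could mint valid codes for every other service that shares it.

```python
import hmac
import hashlib
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the big-endian counter, then
    # "dynamic truncation" down to a short decimal code
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30) -> str:
    # RFC 6238: TOTP is just HOTP keyed by the current 30-second window
    t = int((time.time() if for_time is None else for_time) // step)
    return hotp(secret, t)
```

Note that `secret` appears symmetrically on both sides; contrast that with an SSH key pair, where the server stores only the public half and learning it gains an attacker nothing.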

Security in an error-prone world

Posted Nov 5, 2015 17:14 UTC (Thu) by madhatter (subscriber, #4665) [Link] (13 responses)

You're completely right, but that wasn't what I was saying. If you scroll back up, you'll see that mathstuf asked if there were devices that stored more than a few keys because "A YubiKey only holds 2 TOTP keys, so that's 9 yubikeys I need to carry around". My point was merely that a Yubikey Neo can definitely store more than 2 TOTP keys, possibly quite a lot more.

Security in an error-prone world

Posted Nov 5, 2015 18:36 UTC (Thu) by mathstuf (subscriber, #69389) [Link] (10 responses)

That's helpful. How do you determine which one to use? I read that it's one tap for one and two for the other. How is one expected to remember whether service frobnitz is 5 clicks or 6?

Security in an error-prone world

Posted Nov 5, 2015 21:22 UTC (Thu) by madhatter (subscriber, #4665) [Link] (9 responses)

The tokens produced by tapping on the Yubikey's "button" are HOTP tokens (either HOTP OATH, or Yubico's own (open) method of generating HOTP tokens).

The TOTP tokens are accessed via NFC, using (in my case) a free (newBSD-licensed, available on f-droid.org) Android app; some other device has to be involved, as the Yubikey has no internal clock. The external app provides a timestamp via NFC, and the Yubikey seals that using each of the secrets it has in NFC storage. As for identifying which TOTP code is for which external service, when each secret is loaded into the 'key a text snippet goes with it, and this is returned by the 'key over NFC, along with each associated TOTP code.

So when I fire up the app and bring it close to my 'key, three different TOTP codes appear on my phone's screen, each with a small text snippet (usually one that I chose) reminding me which particular remote service that TOTP code is intended for.

In case anyone's wondering, I have loaded the app from scratch onto someone else's phone, and verified that (as long as it's done in the same 30-second window) the same TOTP codes appear, with the same text snippets; all the service-specific stuff is on the Yubikey, the phone provides only communications, display, and a timestamp.

Security in an error-prone world

Posted Nov 5, 2015 23:35 UTC (Thu) by mathstuf (subscriber, #69389) [Link]

That does sound better. I'll have to look into getting one of the new yubikeys then.

Security in an error-prone world

Posted Nov 12, 2015 14:52 UTC (Thu) by itvirta (guest, #49997) [Link] (7 responses)

Does the NEO authenticate the device requesting it to sign a timestamp?

Because I started thinking about someone walking past one of them and asking it to sign
a timestamp for say, tomorrow, giving plenty of time to walk away and use the codes to login later.

Security in an error-prone world

Posted Nov 12, 2015 18:51 UTC (Thu) by flussence (guest, #85566) [Link] (2 responses)

It's not possible to siphon codes off the key inconspicuously like that. The phone only provides UI, the key outputs the codes via USB HID to the computer it's plugged into.

Security in an error-prone world

Posted Nov 12, 2015 22:05 UTC (Thu) by johill (subscriber, #25196) [Link] (1 responses)

I'm pretty sure the response goes via NFC as well - it has two modes, NFC or USB, but I don't think it combines them like that.

Security in an error-prone world

Posted Nov 16, 2015 14:00 UTC (Mon) by itvirta (guest, #49997) [Link]

At least, the video about the Android app (https://www.yubico.com/tag/android/)
shows the app displaying the OTPs on the smartphone.
Though it also mentions the possibility of password-protecting the credentials.

Can't tell why I didn't find this the first time, though...

Security in an error-prone world

Posted Nov 18, 2015 21:43 UTC (Wed) by nix (subscriber, #2304) [Link] (3 responses)

It has no clock so cannot do that. What the Yubico OTP protocol does have is a counter which increments whenever a password is requested, and another counter which increments whenever power to the key is cut (how this interacts with NFC I'm not sure because I don't have any devices that can do NFC to test it with). An authentication server verifies that any password it receives has a higher session counter than the last password it saw from that key, or, if the same, a higher password-requested counter. So replay attacks are impossible, and if you want to reuse a password you acquired you'd better do it before the legitimate user logs in again even once: as soon as he does that the password you snarfed is useless.

(This is not ideal -- backward-compatibility concerns limit the session counter to 7 bits, and obviously the protocol requires it to saturate rather than wrapping, so overflows are well within the bounds of possibility. But it's not *bad*, and you can reset the session counter by resetting the underlying AES key and sending the new one to your authentication servers.)
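The validation rule described above fits in a few lines; the names below are made up, and a real validation server also decrypts and verifies the AES-encrypted OTP before ever comparing counters:

```python
def counters_advance(last, received):
    """Accept an OTP only if its (session, use) counter pair strictly
    advances past the last pair seen for this key. Anything else is a
    replay, or a previously snarfed password made stale by a later login.
    """
    last_session, last_use = last
    session, use = received
    if session != last_session:
        return session > last_session
    return use > last_use
```

The session counter dominates: a snarfed OTP from session 5 is dead the moment the legitimate user authenticates in session 6, no matter how high its per-session use counter was.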

Security in an error-prone world

Posted Nov 20, 2015 10:01 UTC (Fri) by tao (subscriber, #17563) [Link] (2 responses)

Uhm. Maybe I'm missing something here -- are you saying that you can only have 128 sessions (7 bits) before you need to generate a new AES key? I'm fairly sure I'd burn through that in no-time. Or is session defined in some other way than I imagine?

Security in an error-prone world

Posted Nov 23, 2015 23:30 UTC (Mon) by nix (subscriber, #2304) [Link] (1 responses)

No, I can't count. It's 15 bits, not 7. 32767 sessions. A 'session' is a plug/unplug with at least one key generation in between, and even with my dodgy USB hub causing several replugs a day I'm not burning through them very fast.

Security in an error-prone world

Posted Nov 24, 2015 17:38 UTC (Tue) by tao (subscriber, #17563) [Link]

Ahhh, right. That sounds more reasonable :)

Security in an error-prone world

Posted Nov 5, 2015 21:42 UTC (Thu) by nybble41 (subscriber, #55106) [Link] (1 responses)

> You're completely right, but that wasn't what I was saying. ... My point was merely that a Yubikey Neo can definitely store more than 2 TOTP keys, possibly quite a lot more.

And that's perfectly fine, but the comment you were replying to (mine) was questioning the need to store 22 *SSH* keys. Perhaps you meant to reply to mathstuf instead?

Security in an error-prone world

Posted Nov 5, 2015 21:45 UTC (Thu) by madhatter (subscriber, #4665) [Link]

You're dead right, and I apologise for being confusing.

Security in an error-prone world

Posted Nov 3, 2015 23:05 UTC (Tue) by wahern (subscriber, #37304) [Link] (4 responses)

22 SSH keys? Why not one SSH key, with a backup stored in a cabinet? (Or if you're really paranoid, 2 or 3 for daily use.) The Yubikey NEO has an OpenPGP smartcard applet. It'll work natively with GnuPG (and thus OpenSSH via gpg-agent) on Linux, OS X, and Windows, without any fscking around with OpenSC. It's the closest thing to plug-and-play you'll find in the smartcard world, and it works beautifully.

16 HOTP/TOTP keys I can understand. It's why I was so psyched when the NEO came out with OpenPGP support, and why I really, really, really hope that Google's U2F project will see widespread adoption. U2F puts native smartcard support in the browser, making the entire stack--from the driver up to the JavaScript API--hassle free. Passwords, even HOTP- and TOTP-based systems, whether generated from a token or not, just don't scale from an individual perspective. I really wish Mozilla[1] would finally finish their U2F support. Microsoft _claims_ to be committed to supporting it, too. So there's hope.

[1] Of course, Mozilla has always had PKCS#11 support. But the ecosystem is too fractured and proprietary, particularly when it comes to card management. U2F specifies the things that matter to maximize interoperability, and works around driver hassles by abusing USB HID. If anything U2F is too complicated and flexible, but it's the closest thing yet which stands any chance of bringing widespread pubkey authentication to the masses.

Security in an error-prone world

Posted Nov 4, 2015 21:41 UTC (Wed) by Lennie (subscriber, #49641) [Link] (2 responses)

U2F means you need to add a USB-HID stack to your browser.

So far, Mozilla seems to not be interested in doing that right now.

My guess would be: security, of course. Stacks like the USB stack are prone to problems and have been used to crash systems or, worse, to install malware.

For example:
__

At a security conference there was a talk about vulnerabilities in Windows using fuzzing tests to find problems in the HID-stack.

The security researcher found a bug in the Bluetooth or USB stack that could at least crash Windows (possibly via a buffer overflow, and thus possibly a security bug) and reported it to Microsoft, who did nothing. They said: this is only reachable with local access; it can't be exploited remotely.

So what he demonstrated was: remote desktop supports HID devices like USB, and he used remote desktop to crash Windows servers.

Anyway, this was a couple of years ago, and I believe Microsoft still hasn't fixed it.

All you have to do is fuzz USB device names.
__
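As a purely illustrative sketch of what "fuzzing device names" means (a toy generator, not targeting any real HID stack), a fuzzer of this kind just feeds randomized strings to whatever parses the name:

```python
import random
import string

def fuzz_device_name(max_len=256):
    """Generate a randomized 'device name': printable characters mixed
    with NULs and high bytes, at varying lengths, to probe length
    handling and string-termination bugs in the parser."""
    n = random.randint(0, max_len)
    pool = string.printable + "\x00\x7f\xff"
    return "".join(random.choice(pool) for _ in range(n))

# A real harness would register a virtual device under each generated
# name and watch the target for crashes; here we only emit candidates.
for _ in range(3):
    print(repr(fuzz_device_name(32)))
```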

Now think about browsers and how many machines they got deployed to. You want to be pretty sure it's safe. ;-)

There is an add-on:
https://addons.mozilla.org/pl/firefox/addon/u2f-support-a...
https://github.com/prefiks/u2f4moz

There is a bug bounty:
https://www.bountysource.com/issues/10401143-implement-th...

Anyway, see bugzilla for the progress:
https://bugzilla.mozilla.org/show_bug.cgi?id=1065729

Security in an error-prone world

Posted Nov 5, 2015 11:43 UTC (Thu) by raven667 (subscriber, #5198) [Link] (1 responses)

> They said: this is only with local access, that can't be exploited remotely.

This doesn't seem wrong: can an unprivileged user set up virtual USB devices such that this interface could be attacked remotely? I understand the remote desktop case, but doesn't that have to be enabled by a privileged user before it could be used as an attack vector? If you have to be either locally present or have administrative access before you can reach this attack vector, it's not really all that interesting.

Security in an error-prone world

Posted Nov 7, 2015 18:45 UTC (Sat) by Lennie (subscriber, #49641) [Link]

As I understand it, it's enabled by default.

You need to have an account on the server.

But Microsoft sells products like 'terminal server'.

So an exploit could be used to do privilege escalation.

Security in an error-prone world

Posted Nov 5, 2015 14:50 UTC (Thu) by kpfleming (subscriber, #23250) [Link]

The W3C is moving[1] towards forming a Working Group to standardize support for U2F and UAF in browsers and other Web platform agents.

[1]: https://w3c.github.io/websec/web-authentication-charter

Security in an error-prone world

Posted Nov 6, 2015 0:31 UTC (Fri) by pedrocr (guest, #57415) [Link] (2 responses)

>Now if only GPG would resolve symlinks when writing back to its files rather than clobbering the symlink…

Can't you just symlink the key dir instead?

Security in an error-prone world

Posted Nov 6, 2015 0:58 UTC (Fri) by mathstuf (subscriber, #69389) [Link] (1 responses)

No, things like the gpg-agent socket go under there, as well as the configuration file. It seems I haven't done this yet, but the secret keyring should be on the key, while the public keys and trust information should be on the machine itself so I can always verify/decrypt things without the key.

Security in an error-prone world

Posted Nov 6, 2015 1:03 UTC (Fri) by pedrocr (guest, #57415) [Link]

Are any of those things useful if you don't have the keys? Isn't a completely broken gpg just fine when you don't have the USB key mounted? If not, an overlay filesystem is probably your best bet.

Security in an error-prone world

Posted Nov 3, 2015 22:26 UTC (Tue) by nix (subscriber, #2304) [Link] (15 responses)

Oh God they recommend using gpg-agent as an ssh-agent replacement? I tried to get this working, but eventually gave up: it's devastatingly buggy. Among other problems, forwarding of the agent to other machines is only just barely kinda sorta working in GnuPG 2.1 and has no chance of working in 2.0; the eternal GPG_TTY bugs (are you sure you're resetting that in *every single* startup script? Without exception?) routinely lead to you being asked to unlock your key using a pinentry running on the wrong console, causing an apparent hang; even if that's fixed, if you ssh or su to another user and want it to use your smartcard, you suddenly find that you can't because the pinentry can't appear on that tty since it's owned by another user and not writable by the one the gpg-agent is running on... the list goes on and on. It's a mass of barely functional components held together with baling wire and twine.

To make things worse, the YubiKey is a multifunction device, but it can only do one thing at once: so if you use it for OTP passwords or U2F, every time you touch the key's touchpad or ask for a U2F password it'll disconnect from the gpg-agent, and neither pcscd nor the builtin ccid driver in GnuPG 2.1 is remotely expecting this. (pcscd is a whole other mass of crawling horror, running right down to the fact that the author has recently removed all autospawning support for non-systemd configurations for poorly-stated reasons, meaning that if you don't run systemd you are forced to keep a systemwide pcscd running just in case a user plugs a smartcard in: if you want this configuration to be remotely secure, you have to use a very recent JS-infested PolicyKit. What a mess.)

PIV support using the yubico-piv-tool does work much better -- but it relies, again, on pcscd to do the heavy lifting, and this has no idea that slot 9a allows reauthentication without a password. Since nothing anywhere stores the PIN and there is nothing like automatic reconnection, whenever you ask for an OTP, your SSH key vanishes until you do an ssh-add -e and -a again, and type in the PIN (again). Half the time you'll get stuck in some unclear state where doing *either* of these just tells you 'agent refused operation' while the logs scream 'process_remove_smartcard_key: pkcs11_del_provider failed'.

There is a debugging interface for pcscd: it involves *renaming the shared library* and making a new symlink. I've got some debug logs out of this ridiculous system but haven't yet analyzed them to see what's going wrong (I need to learn about the protocol it's trying to talk, first).

This whole thing is not remotely ready for prime-time. I'd love to recommend SSH and GPG key storage on the Neo and its use everywhere, but I just can't. I use PIV keys myself and am frequently bitten by the smartcard connection being lost because I had the temerity to use the Neo like a Yubikey and get an OTP out of it, and much of the time I can't get it back again.
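For context, the recipe being criticized here is the stock GnuPG 2.1 gpg-agent-as-ssh-agent arrangement, which looks roughly like this (option and command names as of GnuPG 2.1; a sketch of the setup, not an endorsement):

```sh
# ~/.gnupg/gpg-agent.conf -- have gpg-agent also speak the ssh-agent protocol
enable-ssh-support

# In *every* shell startup script (this is the GPG_TTY pitfall above):
export GPG_TTY=$(tty)
export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)
# Tell an already-running agent which terminal pinentry should appear on:
gpg-connect-agent updatestartuptty /bye >/dev/null
```

Missing any one of these pieces in any one login path produces exactly the wrong-console pinentry hangs described above.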

Security in an error-prone world

Posted Nov 3, 2015 23:11 UTC (Tue) by mathstuf (subscriber, #69389) [Link] (9 responses)

> (are you sure you're resetting that in *every single* startup script? Without exception?)

Well, my zshrc sets it and it isn't in systemctl --user show-environment, so…yes? :)

> if you ssh or su to another user

Eh, not clearing your environment in such situations is probably not the best idea anyways.

> and want it to use your smartcard

Why would you want to use an SSH or GPG key as another user? Genuinely curious (though I disable agent forwarding; no need to have a malicious server slurp keys when I connect to it).

> very recent JS-infested PolicyKit

To be fair, this happened a while ago. Sad as it is :( .

So it seems that my automounted usb key is still the most flexible setup for me if this is the state of things.

Security in an error-prone world

Posted Nov 4, 2015 0:23 UTC (Wed) by wahern (subscriber, #37304) [Link] (6 responses)

Why would you want to use an SSH or GPG key as another user? Genuinely curious (though I disable agent forwarding; no need to have a malicious server slurp keys when I connect to it).

1) A server would normally only be able to use the key, not read it. Maybe you meant something else. 2) If the client SSH program is buggy, yes, a malicious server could read the key, but that's because you're not using a smartcard. Personally, I never understood the appeal of putting a private key on a USB device. It's not much different than using a password encrypted key stored on your computer. It makes it easier to use on other computer, but that seems even more risky, because now the security of the key is a function of the least secure computer you use it on. Though I don't know your precise reasons, and am not trying to judge them specifically.

Even though I use a smartcard, I only enable authentication forwarding on a case-by-case basis. Still, I've always thought it would be useful to configure a card to require a physical key press before performing the signing operation. This seems like a more useful feature than a PIN, IMO, especially for contexts like banking. Somebody can hack my computer and steal my PIN, then use the card without my knowledge--it's plugged in throughout most of the day. Whereas if it required a physical confirmation, they couldn't. PINs address the wrong attack scenario--the biggest threat is somebody hacking my computer, not stealing my smartcard. If somebody steals my smartcard I'll know about it, or at least the damage will be circumscribed. And maintaining physical custody and security of a key is much easier as a practical matter, especially in terms of the threats I and most people face. Whereas, much like a password, if they steal my PIN I'll have no idea. And the universe of people that could access my PIN is, as a practical matter, any sufficiently knowledgeable hacker on the face of the planet.

This is why when people use the term 2-factor, I cringe. Even supposed professionals are enamored with this phrase, without giving much consideration to the _real_ threat scenarios, and to the relative costs and benefits of these factors. Yes, a coworker stealing your smartcard for 5 minutes when you're at lunch is a real threat. But the state of computer security is _so_ utterly abysmal that the threat absolutely pales in comparison to remote threats. Combined with the implementation and interoperability problems that something as simple as a PIN can cause (it's 2015 and, with all its problems, the Yubikey NEO is as good as it gets), this and similar features shouldn't be considered a requirement if you want to improve organizational security.

I have it on my TODO list to hack the Yubikey NEO OpenPGP applet to support personal data objects. Perhaps I should look into support for requiring a key press before signing.

Security in an error-prone world

Posted Nov 4, 2015 0:46 UTC (Wed) by dlang (guest, #313) [Link] (2 responses)

far too many systems have been hacked by getting a copy of someone's ssh key and hopping from machine to machine as that user (and picking up other privileges along the way)

I don't care how complex your pin is, if someone can see your keystrokes they can get your pin.

I agree that you really want your two factor authentication to be something that requires affirmative action to use (either typing in the result, or at least hitting a button on the key itself)

Security in an error-prone world

Posted Nov 4, 2015 13:39 UTC (Wed) by nix (subscriber, #2304) [Link] (1 responses)

They can get your pin, but they cannot get your identity, and as soon as you take your smartcard out (which you should do whenever you step away from the machine for more than a few minutes) that pin is useless.

In fact, capturing it is more or less useless anyway: it only serves to *unlock* the smartcard, and if you plug it in, you're probably going to do that anyway. An attacker doesn't need to capture your PIN: it just needs to wait for you to type it in yourself, then query the unlocked smartcard as usual. Getting your key remains impossible, as is doing authentication operations when the smartcard is not plugged in. So you've basically restricted a successful attacker to only attacking when you're around and can potentially spot attacks. (You probably won't, but still.)

Security in an error-prone world

Posted Dec 1, 2015 13:37 UTC (Tue) by dany (guest, #18902) [Link]

As an update: the new YubiKey 4 has a "touch" feature which, if enabled, requires a touch of the button before any interesting operation with the GPG key. You can enable it for any key (sign, auth, decryption), and you can force the feature so that it cannot be disabled until a new private key is placed on the card. So even if an attacker knows your PIN, they cannot use your key in any meaningful way.

Security in an error-prone world

Posted Nov 4, 2015 1:32 UTC (Wed) by mricon (subscriber, #59252) [Link]

> Perhaps I should look into support for requiring a key press before signing.

"Touch to sign" is part of the OpenPGP Card v3 spec and I expect it will be supported by upcoming versions of yubikey NEO.

Security in an error-prone world

Posted Nov 4, 2015 2:27 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

> Personally, I never understood the appeal of putting a private key on a USB device. It's not much different than using a password encrypted key stored on your computer.

The setup is a key with what I need day-to-day (SSH keys, keepass database, SSL client certs) with a passphrase I can actually type. There are other copies with everything I need (main GPG private key, SSL cert backups, TOTP recovery keys, etc.) without the daily typeable passphrase and instead a much longer passphrase.

> It makes it easier to use on other computer, but that seems even more risky, because now the security of the key is a function of the least secure computer you use it on.

Well, no different than other setups, really. Just don't use it on machines I don't trust.

Security in an error-prone world

Posted Nov 4, 2015 13:37 UTC (Wed) by nix (subscriber, #2304) [Link]

Quite. You use agent forwarding in clustered environments, where you have lots of systems within the same security boundary but you still don't want someone who gets into one of them to be able to ssh completely freely around in there. A good clue if you have a setup like that is if you're using networked filesystems across them.

My local setup has agent forwarding turned on to the clustered machines (which also share $HOME filesystems via NFS) but the firewall host does not have any of that, so an attacker will be stuck on there, unable to ssh in to the cluster even if I'm sshed into the firewall, because that SSH connection is *not* accompanied by an agent forwarding, so the smartcarded key I used to get into the firewall and across the cluster is still inaccessible.
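The policy described above maps directly onto ~/.ssh/config; the host names here are placeholders for a setup of this shape:

```
# Forward the agent only within the cluster's security boundary...
Host node01 node02 node03
    ForwardAgent yes

# ...but never to the exposed firewall host.
Host firewall
    ForwardAgent no
```

With ForwardAgent off for the firewall, a compromise of that host yields no usable credential for hopping onward into the cluster.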

Security in an error-prone world

Posted Nov 11, 2015 21:20 UTC (Wed) by nix (subscriber, #2304) [Link]

> Eh, not clearing your environment in such situations is probably not the best idea anyways.

Alas, it's not related to that. The problem is that the connection to the agent is forwarded over ssh back to the gpg-agent, and it communicates the value of $GPG_TTY back over that (Assuan) connection: gpg-agent then tries to kick up a keychain on your TTY, and oh look it's a different user and I bloody hope the gpg-agent isn't running as root, so it can't do it. Now you're in trouble.

As far as I can tell, gpg-agent is only designed to work in a situation in which each Unix user has his own keychain, and only one Unix user has a connected smartcard, and no other user ever wants to use it. It actively militates against a scheme where you use multiple uids to separate your concerns (I use a different uid for work and non-work, for instance), and it always will. The problem is that the keychain is forked by the gpg-agent to ensure that nothing else can spy on the passphrase as it passes through -- but the keychain *cannot* be forked by the gpg-agent without causing the problem above!

I see no way to fix this :(

Security in an error-prone world

Posted Nov 11, 2015 21:31 UTC (Wed) by nix (subscriber, #2304) [Link]

> Why would you want to use an SSH or GPG key as another user? Genuinely curious

I consider a smartcarded SSH key to be 'something I have' combined with 'something I know': proof that I have physical access to the smartcard (though not quite as much proof as a touch-to-generate one-time password) and proof that I know the PIN. As such, it's quite safe to use it for multiple users, if what you're using those users for is separation of concerns and to stop programs running as one from accidentally smashing programs running as the other. Identities are not the same as Unix uids!

Security in an error-prone world

Posted Nov 3, 2015 23:40 UTC (Tue) by wahern (subscriber, #37304) [Link] (1 responses)

I've been using a Yubikey NEO with the OpenPGP applet on OS X with MacPGP2 for over a year, and I have no such bad experiences. Except for a brief stint when Apple borked their pcscd fork, it's been smooth sailing.

SSH authentication forwarding shouldn't be problematic--it's the same ssh instance talking to the agent when authenticating the first hop as when you're authenticating the next hop. Certainly I've never had a problem. I presume you're talking about GnuPG protocol-specific forwarding, which does seem to be buggy. But that's irrelevant for SSH authentication, it's just a bummer when you use mutt to read your e-mail on a remote server (as I do) and would like to be able to use PGP for e-mail.

MacPGP2 is using GnuPG 2.0, FWIW. But MacGPG's secret sauce is their GUI PIN entry program, so maybe they've fixed other problems as well.

Finally, my Yubikey NEO will work in OpenPGP mode _and_ HOTP mode just fine, although I do have to re-enter my PIN after generating an HOTP. Multifunction worked both before and after Apple's pcscd problems, but notably Apple's bug made pcscd lose track of the card state. Something similar (though I doubt identical, because it's forked) seems to be the problem here.

I agree things could be better. But compared to the way things were just a few years ago, particularly with SSH it's like night & day.

Ideally somebody will devise a scheme to use U2F keys for SSH, and OpenSSH will gain native U2F support, removing the need for all the middleware.[1]

[1] Ludovic Rousseau is one of the hardest working and most capable FOSS developers out there, but the deck is simply stacked against him. There's only so much one person (or a whole team of people) can do to wrangle the horrendously complex state of smartcard interfacing and management. The OpenPGP smartcard spec works because it simplifies many things, leaves less room for optional crap, and specifies basic management capabilities. U2F simplifies things even further. Heck, they could've probably just ditched the PIN requirement altogether. A "1-factor" pubkey smartcard without a PIN is still an unfathomably better state of affairs than using passwords when it comes to remote authentication, and even better than password-in-all-but-name schemes like HOTP, TOTP, and biometrics.

Security in an error-prone world

Posted Nov 4, 2015 13:56 UTC (Wed) by nix (subscriber, #2304) [Link]

> I've been using a Yubikey NEO with the OpenPGP applet on OS X with MacPGP2 for over a year, and I have no such bad experiences.

Excellent! That means I'm probably just doing something stupid wrong -- though the fact remains that there are lots of ways to get it wrong, and the way described on the LF site is one way to do it (because that's what I tried, and it didn't work).

> SSH authentication forwarding shouldn't be problematic--it's the same ssh instance talking to the agent when authenticating the first hop as when you're authenticating the next hop. Certainly I've never had a problem.

That works until you use your Yubikey to do anything else (e.g. OTP). If you were using native SSH, you could use ssh-add -e / -s to sever the smartcard connection and restart it, and everything would mostly be fine -- but using GPG, well, as soon as the connection is severed, the gpg-agent (and, if you're using it, pcscd) hang, hard. You have to kill -9 and restart them, and as soon as you do that the authentication forwarding is severed: you have to restart all your ssh sessions too! This is very far from optimal.

> notably Apple's bug made pcscd lose track of the card state. Something similar (though I doubt identical, because it's forked) seems to be the problem here.

Almost certainly. Possibly this is the ill-defined 'issues' which caused pcscd autostarting without systemd to be removed, but I doubt it: it was only half a dozen lines, and in particular nothing changed about smartcard state tracking: pcscd still exits when idle in both cases, presumably losing track of card state as it does so.

> But compared to the way things were just a few years ago, particularly with SSH it's like night & day.

Good God that's horrifying. :)

> Ideally somebody will devise a scheme to use U2F keys for SSH, and OpenSSH will gain native U2F support, removing the need for all the middleware.[1]

Agreed! I'd be oh so very happy with that. U2F looks much easier to wrangle than PKCS#11, enough so that adding support is something that does not fill me with horror... oh no I haven't just given myself another spare-time project that'll never get done, have I? ... you'd think I'd learn.

In particular, it's stateless, so if U2F stops working for a second while we do an OTP authentication, nothing bad happens (and it's physically impossible to do both at once, since both involve a button press).

Presumably it would be done similarly to how PKCS#11/PIV support already is, only rather than a PKCS11Provider, you'd specify a URL to an authentication server (obviously in some new ssh_config option), and if you wanted to forward things, you'd use an SSH agent and have ssh-add and agent forwarding do the work of getting to where your smartcard is actually plugged in.

Security in an error-prone world

Posted Nov 5, 2015 15:18 UTC (Thu) by apoelstra (subscriber, #75205) [Link] (2 responses)

> Oh God they recommend using gpg-agent as an ssh-agent replacement? I tried to get this working, but eventually gave up: it's devastatingly buggy. Among other problems, forwarding of the agent to other machines is only just barely kinda sorta working in GnuPG 2.1 and has no chance of working in 2.0; the eternal GPG_TTY bugs (are you sure you're resetting that in *every single* startup script? Without exception?) routinely lead to you being asked to unlock your key using a pinentry running on the wrong console, causing an apparent hang; even if that's fixed, if you ssh or su to another user and want it to use your smartcard, you suddenly find that you can't because the pinentry can't appear on that tty since it's owned by another user and not writable by the one the gpg-agent is running on... the list goes on and on. It's a mass of barely functional components held together with baling wire and twine.

Thanks for this. For years now I've thought gpg-agent just "usually doesn't work", but I never had an idea of what was going wrong or how to look into it. This paragraph provides many hints.

Security in an error-prone world

Posted Nov 5, 2015 22:15 UTC (Thu) by flussence (guest, #85566) [Link]

gpg-agent has *one* useful function in my daily routine: to work around gpg itself being broken out of the box. I run `gpg -s -o /dev/null /dev/null` once, then thereafter I can actually make git commits without gpg barfing with TTY errors.

It's completely FUBAR. I wish OpenBSD's alternative would catch on.

Security in an error-prone world

Posted Nov 10, 2015 16:25 UTC (Tue) by nix (subscriber, #2304) [Link]

I dug into this because, surely, if I fixed all the problems, I could get to the golden ideal of having all my SSH and GPG auth, on whatever machine, all handled via the same set of smartcarded keys, derived from the same offline master!

Sadly, this golden ideal remains unattainable :(

Security in an error-prone world

Posted Nov 4, 2015 16:34 UTC (Wed) by ibukanov (subscriber, #3942) [Link]

In my experience SELinux is useless at protecting custom applications. If you can afford to hire one of the very few people who actually get SELinux, then by all means go for it. Otherwise, put the application into a container, running as a non-root user inside it. If SELinux prevents that, just disable it, but file a bug against your distribution.

Security in an error-prone world

Posted Nov 5, 2015 10:32 UTC (Thu) by ortalo (guest, #4654) [Link] (4 responses)

After much thinking about this (undeniably high-quality) talk, I can't help feeling that Konstantin Ryabitsev is pretty optimistic.
There was a pretty clear incentive for improving car passenger safety (which only becomes car security when you think of the driver as a potential opponent) in the 60s. Everyone was a potential victim, and pain is common knowledge.
We do not have such a simple incentive in our case, so I do not see many reasons for a reversal of the trend of increasing vulnerability. Or, more precisely, I can see some reasons myself, but I suspect most computer users are not at all aware of them, and I am not sure we have yet reached the peak of the insecurity problem.
Contrary to cars, we have to actively uncover why insecurity harms us and what its actual failures are (easy spying by governmental agencies is one failure that powerful people still call a success...).

Security in an error-prone world

Posted Nov 5, 2015 21:38 UTC (Thu) by smoogen (subscriber, #97) [Link] (3 responses)

From what I have read, the largest incentive wasn't that everyone was a potential victim; most people blissfully think they are more likely to die in a lightning strike than in a car accident (even though the odds still favor the latter). The push was that the costs for all this were being shifted onto governments, which started requiring proof of insurance to own an automobile. The insurance industry, which also sold business insurance to the car companies, saw that the rates of death/injury/etc. were unsustainable (they would go out of business quickly because either rates would be too high or payouts too big) and threatened to stop covering the car companies' business insurance until changes were made. Car companies began rolling out changes when this happened, and slowly but surely cars got 'safer' and insurance remained 'affordable' (for particular definitions of safe and affordable).

The part that is hard to understand is that it took decades for those changes to work their way through the various systems involved. A similar "forcing" of changes upon computer users, providers, etc. will take a comparable time. My guess for how it will occur is:

Banks/transaction companies are tired of paying for lost/stolen identities. They will then push for mandatory computer insurance. The insurance companies will then push for changes in both what a user must know before they can use a computer, what they do if they want to keep their insurance and also what businesses must do to make the computer "safe". And in 30-40 years we will have "affordable" computer insurance and "safer" computers. [yay us.]

Security in an error-prone world

Posted Nov 6, 2015 7:00 UTC (Fri) by ibukanov (subscriber, #3942) [Link] (1 responses)

There is a big difference between insurance for car accidents and for computer crimes. Car accidents follow a nice thin-tailed distribution where the law of large numbers works and insurance claims are predictable. Computer crimes, on the other hand, are very fat-tailed: most of the damage comes from a few rare frauds. It may take many decades before the real cost of the crimes is known, which is well beyond the time horizon of typical companies.

I suspect what will happen is that government regulation will lead to a race to the bottom in insurance prices, and those with the cheapest rates and fewest strings attached will simply wipe out companies that try to insist on real security. Then comes judgement day.
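The thin-tailed versus fat-tailed point is easy to see numerically. In this quick sketch (distribution parameters are purely illustrative), sample means of exponential "claims" settle down quickly, while Pareto claims with a tail index near 1 stay dominated by a handful of huge losses:

```python
import random

def sample_mean(samples):
    return sum(samples) / len(samples)

random.seed(42)
n = 100_000
# Thin-tailed claims: exponential with mean 1.0; the law of large
# numbers bites quickly and the sample mean is a reliable estimate.
thin = [random.expovariate(1.0) for _ in range(n)]
# Fat-tailed claims: Pareto with tail index 1.1 (barely finite mean);
# a few enormous losses dominate, so the sample mean converges
# painfully slowly and premiums set from history are guesswork.
fat = [random.paretovariate(1.1) for _ in range(n)]

print("thin-tailed: mean %.3f, largest %.1f" % (sample_mean(thin), max(thin)))
print("fat-tailed:  mean %.3f, largest %.1f" % (sample_mean(fat), max(fat)))
```

An insurer pricing from the "thin" history will do fine; one pricing from the "fat" history is betting that the next rare catastrophe looks like the last one.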

Security in an error-prone world

Posted Nov 6, 2015 17:00 UTC (Fri) by smoogen (subscriber, #97) [Link]

Yeah. I was realizing that insurance wouldn't work, in that it is more like hurricane/earthquake insurance than car insurance. It is going to happen, and it is going to hit more people than it is possible to cover, so you end up with massive scams about how to be better covered that never actually pay out.

Security in an error-prone world

Posted Nov 6, 2015 17:01 UTC (Fri) by kleptog (subscriber, #1183) [Link]

> Banks/transaction companies are tired of paying for lost/stolen identities.

Except this isn't really happening. Firstly, the losses due to transaction fraud just aren't that big. The banks insure themselves and take the costs out of the fees they charge; the 1.5% they charge on credit cards probably more than makes up for any losses to fraud. And since the losses are going down rather than up, there's no reason for this policy to change.

AFAIK there is no insurance policy for lost/stolen identities. Primarily, I think, because costs are highly variable and hard to quantify, and it just doesn't happen very often. And an insurance policy would pay money, whereas what you really need is to get records cleaned up and removed. I'm not sure any insurance company is interested in doing that kind of work.


Copyright © 2015, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds