Kernel.org's road to recovery
On October 3, a basic kernel.org returned to the net. Git hosting is back, but only for a very small number of trees: mainline, stable, and linux-next. The return of the other trees is waiting for the relevant developers to reestablish their access to the site - a process that involves developers verifying the integrity of their own systems, then generating a new PGP/GPG key, integrating it into the web of trust, and forwarding the public key to the kernel.org maintainers. This procedure could take a while; it is not clear how many developers will be able to regain their access to kernel.org before the 3.2 merge window opens.
The front-page web interface is back, though, as of this writing, it is not being updated to reflect the state of the git trees. Most other kernel.org services remain down; some could stay that way for some time. It is worth remembering that kernel.org has only one full-time system administrator, a position that has been funded by the Linux Foundation since 2008. That administrator, along with a number of volunteers, is likely to be quite busy; some of the less-important services may not return anytime soon.
A full understanding of what happened is also likely to take some time. Even in the absence of a report on this intrusion, though, there are some conclusions that can be made. The first is obvious: the threat is real. There are attackers out there with time, resources, motivation, and skills. Given the potential value of either putting a back door into the kernel or adding a trojan that would run on developers' machines, we have to assume that there will be more attacks in the future. If the restored kernel.org is not run in a more secure manner, it will be compromised again in short order.
The site's administrators have already announced that shell accounts will not be returning to the systems where git trees are hosted. Prior to the break-in, there were on the order of 450 of those accounts; that is a lot of keys to the front door to have handed out. No matter how careful all those developers may be - and some are more careful than others - the chances of one of them having a compromised machine approach 100%. Keeping all those shell accounts off the system is clearly an important step toward a higher level of security.
Kernel.org has its roots in the community and was run the way kernel developers often run their machines. So, for example, kernel.org tended to run mainline -rc kernels - a good exercise in dogfooding, perhaps, but it also exposed the system to bleeding-edge bugs, and, perhaps more importantly, obscured the real cause of kernel panics experienced last August, delaying the realization that the system had been compromised. The kernel currently running on the new systems has not been announced; one assumes it is something a little better tested, better supported, and more stable. (No criticism is intended by pointing this out, incidentally. Kernel.org has been run very well for a long time; the point here is that the environment has changed, so practices need to change too.)
At this point it seems clear that a single administrator for such a high-profile site is not an adequate level of staffing. Given the resources available in our community, it seems like it should be possible to increase the amount of support available to kernel.org. There are rumors that this is being worked on, but nothing has been announced.
Developers are going to have to learn to pay more attention to the security of their systems. There are scattered reports of kernel developers turning up compromised systems; in some cases, they may have been infected as the result of excessive trust in kernel.org. Certain practices will have to change; for that reason, the Fedora project's announcement of a zero-tolerance policy toward private keys on Fedora systems is welcome. Developers are on the front line here: everybody is depending on them to keep their code - and the infrastructure that distributes that code - secure.
There is an interesting question related to that: will kernel developers move back to kernel.org? These developers have had to find new homes for their git repositories during the outage; some of them are likely to decide that leaving those repositories in their new location is easier than establishing identities in the web of trust and getting back into kernel.org. Linus has said in the past that he sees the presence of a kernel.org-hosted tree in a pull request as a sign that the request is more likely to be genuine. Requiring that repositories be hosted at kernel.org seems like an unlikely step for this community, though. It is not entirely clear whether trees distributed around the net increase the security risk to the kernel, or whether putting all the eggs into the kernel.org basket would be worse.
One other conclusion would seem to jump out at this point: kernel.org got hit
this time, but there are a lot of other important projects and hosting
sites out there. Any of those projects is just as likely to be a target as
the kernel. If we are not to have a long series of embarrassing compromises,
some with seriously unfortunate consequences, we're going to have to take
security more seriously everywhere. Doing so without ruining our
community's openness is going to be a challenge, to say the least, but it
is one we need to take on. Security is a pain, but being broken into and
used to attack your users and developers is even more so.
| Index entries for this article | |
|---|---|
| Kernel | Development tools/Infrastructure |
| Kernel | Kernel.org |
| Security | Free software infrastructure |
| Security | Kernel.org |
Posted Oct 6, 2011 2:27 UTC (Thu)
by malor (guest, #2973)
[Link] (114 responses)
It's much, much more important than it was twenty years ago; governments have gone actively hostile toward their own citizens all over the world, including the United States (which is now executing its own citizens without bothering with the judicial system). Open source has gotten so prevalent that lives are now literally dependent on the security features of the Linux kernel.
Twenty years ago, if Linux got something wrong, about the worst that would happen was maybe some corporate espionage. But these days, like it or not, reject it or not, if you guys blow it badly, people can die. For real.
If the global shift toward violent authoritarianism continues, the life you save could someday be your own.
Posted Oct 6, 2011 2:49 UTC (Thu)
by dlang (guest, #313)
[Link]
Everything is a risk; the only way to really secure your computer is to turn it off, unplug it, wrap it in a Faraday cage, and then start working on physical security. Since such a machine provides very little value to people, everything is a matter of what level of risk you are willing to take.
running an 'allyes' kernel publicly exposed to attackers (i.e. on the Internet) is a very bad idea. You want your Internet-exposed devices to have as small an attack surface as possible, and this means disabling features that you don't need. The distro kernels tend to be marginal in this area: they enable just about everything, but do so as a module, so it's not always loaded, but some action can cause the kernel to think it's needed and then the module will be auto-loaded.
you need to understand the risks, and then evaluate the risks, not just think "risk == BAD"
Posted Oct 6, 2011 3:33 UTC (Thu)
by ebiederm (subscriber, #35028)
[Link] (108 responses)
From what I can see, given the current state of the art of identifying and fixing bugs, it must be assumed that all software is buggy and that ultimately the bad guys will find those bugs.
Security on the internet seems to be a race between software developers deploying new versions of high quality code and hostile developers finding the bugs that have been overlooked.
Eric
Posted Oct 6, 2011 9:10 UTC (Thu)
by PaXTeam (guest, #24616)
[Link] (107 responses)
nothing of the sort was asked, rather, we asked kernel devs to document with a few greppable words what they already know about the security impact of a given commit (*if* they already know, no need to spend time on figuring it out otherwise). that surely doesn't take up more than a few seconds of typing (or actually, as Linus made it clear that he actively censors such commits, it'd even speed things up).
Posted Oct 6, 2011 10:48 UTC (Thu)
by abacus (guest, #49001)
[Link] (18 responses)
Posted Oct 6, 2011 15:16 UTC (Thu)
by NAR (subscriber, #1313)
[Link]
Posted Oct 6, 2011 15:40 UTC (Thu)
by mpr22 (subscriber, #60784)
[Link] (16 responses)
And? I should note here that I'm in the "all kernel fixes not provably security-irrelevant are security fixes" camp, on the grounds that there are too many people who lie in the mutual intersection of the following sets: Yes, these people need to be debugged. However, adequate lawfully and morally acceptable techniques for such debugging do not come readily to mind.
Posted Oct 6, 2011 15:51 UTC (Thu)
by dlang (guest, #313)
[Link] (9 responses)
the real problem with the idea of tagging all security relevant patches is the outcry that will come when patches that are _not_ tagged as being security patches end up being found to be security related at some later time (including possibly before the kernel is even released)
Posted Oct 6, 2011 21:01 UTC (Thu)
by PaXTeam (guest, #24616)
[Link] (8 responses)
what logic?
> the real problem with the idea of tagging all security relevant patches
why would there be an outcry for not disclosing something one didn't know about at the time of disclosure? let me guess, it's just another strawman 'logic' of yours trying to digress from the actual problem: if a developer knows he's fixing a bug with security impact, he must not cover up that fact, simple as that. what he doesn't know is and has always been utterly irrelevant for this discussion.
Posted Oct 6, 2011 21:23 UTC (Thu)
by dlang (guest, #313)
[Link] (7 responses)
Posted Oct 6, 2011 23:24 UTC (Thu)
by PaXTeam (guest, #24616)
[Link] (6 responses)
Posted Oct 7, 2011 18:32 UTC (Fri)
by vonbrand (guest, #4458)
[Link] (5 responses)
Au contraire. Show that there is no miscreant grepping for such stuff in the kernel (and other changelogs) in order to find out if they can put their foot in the door, and we might reconsider.
Posted Oct 7, 2011 21:22 UTC (Fri)
by PaXTeam (guest, #24616)
[Link] (4 responses)
Posted Oct 9, 2011 16:05 UTC (Sun)
by vonbrand (guest, #4458)
[Link] (3 responses)
Honesty is all about intentions.
Posted Oct 10, 2011 7:57 UTC (Mon)
by PaXTeam (guest, #24616)
[Link] (2 responses)
Posted Oct 11, 2011 1:10 UTC (Tue)
by vonbrand (guest, #4458)
[Link] (1 responses)
He asked not to indulge in a theater of flagging commits with useless (and probably misleading) comments. That is a very far cry from dishonesty. The contention that such commit messages will make Linux look bad is nonsense, if somebody wants to get data on security problems there are lots of other sources, very much more accurate than self-selected comments on patches.
Posted Oct 11, 2011 7:36 UTC (Tue)
by PaXTeam (guest, #24616)
[Link]
no, he didn't *ask* anything. he *declared* that he does *not* want to see greppable words that'd identify a commit as fixing a security bug. no ifs and buts there. in less euphemistic words it's also called a coverup. second, if identifying security fixes was 'useless (and probably misleading)' then 1. why does he still let through such commits sometimes, 2. why does the rest of the world do this? something doesn't add up here if your theory holds ;).
Posted Oct 7, 2011 15:30 UTC (Fri)
by vonbrand (guest, #4458)
[Link] (2 responses)
Count me in the camp with "any kernel bug that can't be shown to be absolutely neutral with respect to results is a security bug."
Posted Oct 10, 2011 0:16 UTC (Mon)
by malor (guest, #2973)
[Link] (1 responses)
Posted Oct 10, 2011 1:41 UTC (Mon)
by raven667 (subscriber, #5198)
[Link]
Posted Oct 10, 2011 3:27 UTC (Mon)
by jamesh (guest, #1159)
[Link] (2 responses)
If P does imply Q, then not-Q really does imply not-P. Did you instead mean that it doesn't follow that "not-P implies not-Q"?
That would make more sense in this case since "commits marked with a CVE number fix security vulnerabilities" does not imply that "commits without a CVE number do not fix security vulnerabilities".
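For reference, the two inference patterns at issue can be written side by side. Contraposition is valid; the second pattern (denying the antecedent) is not:

```latex
% Contraposition (always valid):
(P \Rightarrow Q) \;\equiv\; (\lnot Q \Rightarrow \lnot P)

% Denying the antecedent (invalid in general):
(P \Rightarrow Q) \;\not\vdash\; (\lnot P \Rightarrow \lnot Q)
```

With P = "the commit carries a CVE number" and Q = "the commit fixes a security bug", the second line is exactly why the absence of a CVE tag tells you nothing about a commit's security relevance.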
Posted Oct 10, 2011 4:44 UTC (Mon)
by vonbrand (guest, #4458)
[Link]
Presumably he meant "P implies Q" is not the same as "not P implies not Q."
Posted Oct 10, 2011 9:24 UTC (Mon)
by mpr22 (subscriber, #60784)
[Link]
Posted Oct 6, 2011 17:08 UTC (Thu)
by clugstj (subscriber, #4020)
[Link] (87 responses)
All bugs are security risks - therefore all of them are implicitly annotated with the "greppable words".
Allowing developers untrained in security to add security annotations to changes would only add more noise to the commit messages.
Posted Oct 6, 2011 20:50 UTC (Thu)
by PaXTeam (guest, #24616)
[Link] (86 responses)
not all bugs are security risks as not all bugs result in violating a security boundary (i.e., break some information flow control). but let's go with your thought, what would be the greppable words then? for extra bones, explain the security risk of commit 976d167615b64e14bc1491ca51d424e2ba9a5e84.
> Allowing developers untrained in security to add security annotations to
so on one hand we have supposedly security conscious developers who do care about the security of the code they produce and/or sign off on, and on the other hand they're untrained in security. as they say, you can't have it both ways ;). second, developers don't need to be trained in security to be able to understand when a PoC exploit demonstrates, say, code execution. and i sure as hell want to know about such fixes.
Posted Oct 6, 2011 21:22 UTC (Thu)
by dlang (guest, #313)
[Link] (80 responses)
most of the time the developers are interested in fixing bugs for the sake of fixing bugs.
Analysing the fix to tell if there are security implications of the fix is a separate step that requires a very different mindset than just fixing the problem in the first place. There are many, many cases where an exploit has been published and many good security people have the reaction "they were able to exploit _that_ bug???". This means that the accuracy of any evaluation by the developer is low (and tends towards false negatives, as the developer doesn't see a way to exploit the bug even though it is actually possible).
This results in kernel developers (among others) considering the value of spending the time to try to figure out whether there are any security implications of a bugfix for any random bug to be very low.
In addition to this, many of the same people consider anything that tags only some of the real security fixes as being security fixes to have a negative value, and this pushes the net value of tagging commits clearly to be a net loss.
Posted Oct 6, 2011 23:24 UTC (Thu)
by fuhchee (guest, #40059)
[Link]
If truly this is beyond the talented engineers, perhaps they could familiarize themselves with the CWE labeling system [1], which merely classifies the bug being fixed, and does not require the very different mindset and skill of actually exploiting the problem.
[1] http://nvd.nist.gov/cwe.cfm
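As a sketch of what such labeling might look like - the subsystem, function name, and details below are invented for illustration - a single CWE reference in a commit message would already be greppable without requiring any exploit analysis:

```
netfoo: fix bounds check in netfoo_recvmsg()

The length check used a signed comparison, so a negative value
from userspace bypassed it (CWE-190: Integer Overflow or
Wraparound). No exploit is known, but the copy size was
attacker-controlled.
```

CWE-190 is a real entry in MITRE's classification; the one-line parenthetical is the entire proposed cost.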
Posted Oct 6, 2011 23:34 UTC (Thu)
by PaXTeam (guest, #24616)
[Link] (78 responses)
fine by me, also completely irrelevant for covering up security fixes.
> Analysing the fix to tell if there are security implications of the fix [...]
you can stop right there. no one asked them to do such a job. they're not even qualified for such a job. what we did ask them is to be honest. if i find a security bug and provide a PoC exploit for it, i *want* to see the commit of the fix mention the fact that it's fixing a security bug. this is not negotiable. the kernel policy is diametrically opposite to this, Linus explicitly stated that he would even *censor* any such mention of security related info in commit messages. no wonder i stopped submitting such fixes upstream and keep them in PaX instead. as a security professional yourself, i'm sure you appreciate my covering up said fixes though (see, who said i can't accommodate stupid policies ;), i expect a pat on the back at least ;).
> In addition to this, many of the same people consider anything that tags
define many. i only recall Ingo and perhaps Linus ever saying something stupid like that and when i asked for the *reasons* behind such an opinion, i got nothing but BS. maybe you've got better ones?
Posted Oct 7, 2011 0:02 UTC (Fri)
by nix (subscriber, #2304)
[Link] (77 responses)
You're a loony.
Posted Oct 7, 2011 0:26 UTC (Fri)
by PaXTeam (guest, #24616)
[Link] (76 responses)
Posted Oct 7, 2011 0:52 UTC (Fri)
by dlang (guest, #313)
[Link] (75 responses)
I have never heard of a case where the kernel team has refused to accept a patch because it claimed to be a security fix; what the kernel team has refused is to start tagging fixes as being security or non-security fixes.
Posted Oct 7, 2011 1:13 UTC (Fri)
by malor (guest, #2973)
[Link] (71 responses)
If you know it's a security issue, the ONLY reason to hide that fact is to try to juke stats about how (in)secure Linux is. Security fixes are embarrassing, and the kernel team is trying to hide just how bad their code is.
That's all it really is. It's not 'security theater' to blame. It's poor programmers trying to shovel bad code under the rug.
Nobody is asking for security analysis, just that, if a bug is KNOWN to be security-related, that they pass that info along, not deliberately hide it.
Hiding information of that type is shameful in and of itself, and it's quite possible someone might end up dead because they didn't realize that a hole had been patched, and that they'd already been exploited. Not knowing that a hole was plugged means they might not think to look.
That's the stakes in the modern networked world, and fucking around with goddamn semantic games when people's lives are at risk is selfish bullshit of the highest order. Pass along all the information you have about the impact of a bug. Hiding it is putting people at risk for zero real benefit.
Posted Oct 7, 2011 18:43 UTC (Fri)
by vonbrand (guest, #4458)
[Link] (70 responses)
If they know it is a security risk, they'll probably say so. The problem is that (as has been said many, many times) finding out if a particular glitch has any actual impact ("sure, this could lead to an integer overflow if <add longish list of conditions on variable values>, in which case maybe..."), let alone can be exploited as a security hole, is hard work and requires a mindset and training that not many kernel developers share. Any such assessment they do will miss an order of magnitude more exploitable flaws than the ones flagged, and flag many that are completely irrelevant. Pure noise, a complete waste of effort.
Posted Oct 7, 2011 18:52 UTC (Fri)
by malor (guest, #2973)
[Link] (1 responses)
In other words, he lies to make his team look better.
Nobody is asking for extra work to be done, just that if it's known to be a security issue, that that information be passed along, instead of being actively hidden.
Posted Oct 7, 2011 19:29 UTC (Fri)
by dlang (guest, #313)
[Link]
Linus may ask people to change something before he pulls it, and he may avoid saying things in his changelogs, but the tools do not allow him to edit other people's changelogs.
Posted Oct 7, 2011 18:59 UTC (Fri)
by malor (guest, #2973)
[Link] (3 responses)
You don't have to do anything else to make us happy, just stop lying.
Posted Oct 7, 2011 19:31 UTC (Fri)
by dlang (guest, #313)
[Link] (2 responses)
not saying that it has a security impact is not direct lying. at most it's lying by implication or by omission, but to make a case that it's lying by these criteria you would need to establish that it's normal to have such data in there to start with, and it's not.
Posted Oct 7, 2011 20:34 UTC (Fri)
by malor (guest, #2973)
[Link]
You can fucking dance around that all you want, trying to justify behavior that simply can't be justified, but it remains true. It is unethical behavior, probably the second-worst thing you can do as a coder.
Posted Oct 7, 2011 21:49 UTC (Fri)
by PaXTeam (guest, #24616)
[Link]
as a security professional you must know cve.mitre.org and all the links they have to various resources that disclose this kind of information. you were saying...?
Posted Oct 7, 2011 21:39 UTC (Fri)
by PaXTeam (guest, #24616)
[Link] (63 responses)
read http://lkml.org/lkml/2008/7/15/699 again. Linus *directly* contradicts you.
> The problem is that...
...you haven't been paying attention to what we were saying. nobody, parse that again, *nobody* asked the kernel devs to evaluate the security impact of bugs themselves. what we did ask them is that if someone else does that work then they pretty please disclose that information instead of keeping it for themselves.
> Pure noise, a complete waste of effort.
so are you claiming that all the effort that goes into the CVE database is 'a complete waste of effort'? you might as well tell the LWN folks to stop wasting your precious subscriber fee on reporting such wasted efforts.
Posted Oct 7, 2011 22:21 UTC (Fri)
by malor (guest, #2973)
[Link] (47 responses)
Notice how, over and over and over and over, no matter how many times anyone tells them, they insist on mischaracterizing what is being asked of them?
What we actually ask: reveal security implications you already know of. That's it. The entire request, in two words, is "be honest". You wouldn't think that would be a big deal.
What they constantly insist is being asked for: original security research and impact analysis.
At this point, after years of this going back and forth, I don't think it's reasonable to presume that this is an innocent misunderstanding any longer. It's been repeated too many times, on too many fronts. The REAL objection is that the Linux kernel is absolutely terrible from a security perspective. They want to work on speed, not correctness, and will savagely misrepresent opposing requests to avoid confronting the fact that their laser focus on speed is not shared by a very large fraction of the larger community. In fact, they'll go out of their way to characterize the people who are focused on correctness as being proponents of 'security theater'.
Security is a hard problem, and they don't want to solve that problem. They want to be left alone to work on the speed problem instead. The world is not cooperating with them, and so they're lying about their bugs to try to force it to happen.
Posted Oct 8, 2011 5:15 UTC (Sat)
by jrn (subscriber, #64214)
[Link] (19 responses)
Be careful about who "they" is. The people you are responding to on lwn.net are not necessarily the same people who are writing a lot of commit messages for security-related fixes.
If you actually want to change practice in this area, your best bet is to make a lot of security fixes, and to write the commit messages yourself. Another way to improve things would be to offer a separate publication - for example, a list of commits with whatever information about their impact is publicly known, or a tree with commit notes providing that same information.
Posted Oct 8, 2011 5:23 UTC (Sat)
by malor (guest, #2973)
[Link] (18 responses)
If they didn't lie, there'd be no need for all that extra work to duplicate the already-existing knowledge. The bad guys are going to be doing it anyway, and then either using or selling what they find. The only people that are being hurt by deliberate secrecy are the good ones.
This includes the devs themselves; if the team as a whole realized just how many security holes were slipping through, they might focus with just a little less intensity on making the kernel run fast, and a little more on making it run right.
Posted Oct 8, 2011 12:26 UTC (Sat)
by nix (subscriber, #2304)
[Link] (14 responses)
This is why PaXTeam's excellent technical nous goes largely wasted into out-of-tree stuff that is relatively little used[1]: when he tries to interact with other people, the moment any criticism of any kind is levelled -- which takes about six seconds on a list as hardboiled as the kernel list -- out come the vituperative personal attacks, conspiracy theories, and imputations of malice -- and of course he is never wrong either, no matter what evidence is presented. We've seen it virtually every time PaXTeam tries to participate in conversations on the kernel list, such that now I suspect most kernel devs have him killfiled. Nobody wants to work with someone like that: it's unpleasant even to read it. A Cassandra imprisoned by his own acid tongue. It's a great shame.
I expect further personal attacks in response to this comment, even though it's complimentary in part, but I don't care. He can't help it.
[1] yes, people do use grsec. But consider how much more widely used these fixes would be if more of them got into Fedora (the ones that have an acceptable cost/benefit tradeoff, of course; I know that some have been rejected on those grounds, which is inevitable).
Posted Oct 8, 2011 19:13 UTC (Sat)
by fuhchee (guest, #40059)
[Link]
Sometimes it seems as though the groups - socially - are more alike than different, just resent the resemblance.
Posted Oct 8, 2011 19:55 UTC (Sat)
by malor (guest, #2973)
[Link] (1 responses)
I don't see that any other word really suffices, in that context.
The rest of all the foofoorah, I know nothing about. My only contact with kernel devs is via these comments on LWN. I'm not affiliated with any particular camp. I just want to be told the whole truth, so I can make my own decisions, instead of having my agency taken from me. I'm not asking for any extra work, just that known security issues be revealed.
Indirectly, of course, my hope is that the quality of the kernel will improve, because if just how many security holes are coming out of that team becomes easily visible to the world, I think the pressure will ramp up to stop adding new features, and fix the old ones instead. And I think that's exactly what the kernel devs don't want to do.
Posted Oct 8, 2011 21:48 UTC (Sat)
by mpr22 (subscriber, #60784)
[Link]
Posted Oct 8, 2011 20:01 UTC (Sat)
by malor (guest, #2973)
[Link] (9 responses)
Posted Oct 9, 2011 14:47 UTC (Sun)
by vonbrand (guest, #4458)
[Link] (8 responses)
If Linux development is so completely broken, I do wonder why you even bother...
Posted Oct 10, 2011 0:22 UTC (Mon)
by malor (guest, #2973)
[Link] (7 responses)
Which means, of course, you get hacks like this one, and then a butchering of functionality because shell access can't be safely shared on a Linux machine.
There's going to be more compromises. Lots more.
Posted Oct 10, 2011 0:39 UTC (Mon)
by vonbrand (guest, #4458)
[Link] (6 responses)
Any examples handy? They would make a great point... and they must be aplenty, if we are to believe your allegations.
Posted Oct 10, 2011 1:19 UTC (Mon)
by malor (guest, #2973)
[Link] (5 responses)
Posted Oct 10, 2011 2:28 UTC (Mon)
by vonbrand (guest, #4458)
[Link] (4 responses)
Sorry I wasn't clear. You claimed that currently having shell access is equivalent to root. For that I'd like to see the boatload of handy examples you've got to back it up. They would make a great point for your assertion that Linux development is broken, and give hackers a great incentive to fix vulnerabilities and tighten up their coding.
Posted Oct 10, 2011 22:41 UTC (Mon)
by malor (guest, #2973)
[Link] (2 responses)
From RedHat errata:
* An integer overflow flaw in agp_allocate_memory() could allow a local user to cause a denial of service or escalate their privileges (CVE-2011-1746, Important)
Bunch of other stuff too, but there's two likely local root exploits from October 5. Took me about ten minutes to spot, and that's only because I had to look through some lesser CVEs LWN posted about twenty minutes ago.
It would have proved the point even more thoroughly to have gotten a local root exploit today, but five days ago, I think, is adequate.
Posted Oct 11, 2011 0:09 UTC (Tue)
by vonbrand (guest, #4458)
[Link] (1 responses)
And? How do you know whoever patched the bug knew the CVEs beforehand? This is a RHEL kernel, i.e., a stable kernel (+ patches), so this came probably via the stable patch stream.
Posted Oct 11, 2011 0:24 UTC (Tue)
by malor (guest, #2973)
[Link]
Posted Oct 10, 2011 22:47 UTC (Mon)
by malor (guest, #2973)
[Link]
Posted Oct 10, 2011 13:15 UTC (Mon)
by PaXTeam (guest, #24616)
[Link]
it was 'bad faith' and i think you've got enough proof now (straight from the horse's mouth even) that i was right ;). as a sidenote, calling someone a looney is "*really likely* to make people trust you and want to work with you". oh the irony ;).
> This is why PaXTeam's excellent technical nous goes largely wasted into
being out-of-tree is not a waste, it's called a fork and is actively encouraged by kernel developers (i'm sure you can google some Linus quotes to that effect yourself). second, PaX is more like a research project, not a product, so it's natural that its userbase is restricted. nevertheless, the ideas pioneered there over the years have found their way into every major OS these days (linux, BSDs, iOS (both kinds of them), OS X, Windows, etc). i wish every good thing in the world was this 'wasted' ;).
> when he tries to interact with other people, the moment any criticism of
that was quite a mouthful, but let me try to make some sense of it. first, some kind of proof would be useful in general when you throw out accusations (the irony is that you asked for the same in the past, yet are refusing to do the exact same thing when it's your turn ;). second, you should probably be reading more lkml to know the context better and understand the social interactions there. third, what evidence are you talking about? last i checked, i had a few dozen unanswered questions left, both on lkml and here, and i'm still waiting patiently for the 'evidence' ;).
> I expect further personal attacks in response to this comment [...]
so you go out of your way to attack me personally and at the same time make also snide remarks at kernel developers. and then you run and cry 'but pretty please do not come after me'. IOW, you've just proven yourself to be a troll (not that i had any doubts before ;).
PS: your knowledge of the linux ecosystem is kinda outdated. Fedora et al. are 'irrelevant'. what matters today is android, by virtue of having an install base of orders of magnitude bigger than even Ubuntu. the rest is 'also run' (no disrespect meant, but nix's playing the numbers game here and the numbers are like this. i personally don't care which particular linux flavour ends up using PaX or other ideas, the important thing is that they're available for those who need them).
Posted Oct 11, 2011 1:28 UTC (Tue)
by vonbrand (guest, #4458)
[Link] (2 responses)
You presume that the kernel hackers are prescient, and immediately know the security ramifications of each and every bug they fix. Sorry to disappoint you; if they were that smart they wouldn't make the patched mistakes in the first place. Unless you are into serious conspiracy theories, where they insert the bugs knowing full well they can be exploited...
Posted Oct 13, 2011 8:14 UTC (Thu)
by Klavs (guest, #10563)
[Link] (1 responses)
Currently, they - by their own admission - choose not to reveal such knowledge in changelogs (which could definitely be called a "lie of omission").
I don't think anyone disagrees that, even if such knowledge were in the changelog, many bugfixes would not be known by the dev(s) to be security fixes as well - and as such, one will never be able to simply grep for a "Security fix" or similar in changelogs to know when to upgrade to stay secure - such is the world of computers today :)
Posted Oct 13, 2011 8:20 UTC (Thu)
by jrn (subscriber, #64214)
[Link]
Again, be careful who "they" is. Linus has said he chooses to avoid easily greppable phrases, yes.
Posted Oct 11, 2011 1:24 UTC (Tue)
by vonbrand (guest, #4458)
[Link] (26 responses)
And the simple answer has been given over and over: "There are very, very few of those; way too few to be of any relevance for whatever you are trying to do. We worry there are people out there who will think that only the commits flagged as with security impact are important, so encouraging said selectiveness is a loss. Furthermore, there are miscreants grepping changelogs for stuff like "overflow" to zero in on potential security problems. Yes, security through obscurity, which is useful as long as it isn't for the long term or the only security measure. What could be won with the flagging is minuscule, what would be lost is, in our opinion, much more than the gain. If you want to research the security impact of bugs, knock yourself out. It's all out there for the taking."
Posted Oct 11, 2011 1:49 UTC (Tue)
by jrn (subscriber, #64214)
[Link] (2 responses)
> And the simple answer has been given over and over: "There are very, very few of those
Are you kidding? There are very, very many of those.
A more complex answer would be more accurate: "Sometimes people are sloppy or forgetful, sometimes they do not want to reveal how to exploit a bug, and sometimes by some strange fluke they actually do do a good job of explaining what a patch fixes". And while it is right to be concerned that some patches do a poor job of explaining their impact and why anyone would want the change they make (which is what a change description should do), mischaracterising the problem and helplessly demanding that other people solve it instead of, say, reviewing patches as they appear on the linux-kernel@ list and providing feedback to help their authors, does not seem like a particularly good way to improve, well, anything.
Posted Oct 11, 2011 1:59 UTC (Tue)
by vonbrand (guest, #4458)
[Link] (1 responses)
Examples, please? You go around accusing people of dishonesty and lying, and have yet to show an example of said behaviour, let alone that it is widespread, and even less that it has a measurable impact on the kernel's security overall (which it really can't have, the commit messages are pure comments).
Posted Oct 11, 2011 2:23 UTC (Tue)
by jrn (subscriber, #64214)
[Link]
I don't recall accusing anyone of dishonesty and lying, but if I'm mistaken, I'd be glad to have a pointer. Cheers.
Posted Oct 11, 2011 8:46 UTC (Tue)
by PaXTeam (guest, #24616)
[Link] (22 responses)
> There are very, very few of those; way too few to be of any relevance
first, no one has ever shown actual numbers, i.e., all this 'very few of those' is purely made up, with *nothing* whatsoever to back it up (i bet you won't answer back with actual numbers, much like you ignored the rest of my questions so far; you know how well that advances your arguments ;).
second, what does it matter whether there's a lot or only a few of such fixes where the security impact is known? are they doing the same judgement call when they're fixing other kinds of bugs? like, are we to assume now that file system corruption bugs in ext* are also suppressed because "there are very very few of those"? they can't have it both ways.
third, it's none of their business what any given user is trying to do with that information. the rest of the world publishes security errata all the time with varying level of details, but the very fact of having a security bug is not usually suppressed (save for a few companies and apparently linux left in the dark ages).
> We worry there are people out there who will think that only the commits
first, why is it a loss when someone backports a security fix he learns about? does it reduce his security or something?
second, and i asked this already in this very thread, who are these 'people out there' who think this way? show me a single and relevant one (i.e., not your grandma but someone who's responsible for the kernel of some sizable chunk of the linux user world). i bet you can't show anyone, and you just made this excuse up to appear 'caring' yet you're achieving the exact opposite.
third, and i asked this already, but let's see if you'll avoid it: what do you think about LWN's having an entire page dedicated to security resources (errata, etc)? are they too "encouraging said selectiveness" causing a net loss? i'm sure the LWN folks are all ears to hear you out on this one.
> Furthermore, there are miscreants grepping changelogs for stuff
first, show me evidence of this, though i'm pretty sure you made this up too. little hint for the future: evidence-based arguments fare much better in a discussion than your imagination.
second, and i explained this too some time ago, you're assuming the existence of a person with impossible qualities. namely, you assume that a person can write an exploit based on a patch when he can read its commit message but he's unable to write said exploit based on the same patch when he cannot read the commit message. you should probably talk to some exploit writers one of these days and ask them whether they write exploits based on a few words in a commit *message* or the actual *code* being fixed. you'll be surprised, i guarantee you that ;).
third, you're assuming that exploit writers write exploits based on the commit that fixes the bug vs. the one that introduces it. what evidence is this assumption based on?
> Yes, security through obscurity, which is useful as long as it isn't for
why is security by obscurity useful when it's not for the long term? and what other security measures do the kernel devs have in place?
> What could be won with the flagging is minuscule, what would be lost is,
neither of these has been established yet, but keep trying ;).
> If you want to research the security impact of bugs, knock yourself out.
so on one hand the loss (caused by disclosing security impact of fixes) is much more than the gain but on the other hand they're encouraging others to do the very same and hence cause a net loss (harm). now that's some logic there. unfortunately, they can't have it both ways.
Posted Oct 11, 2011 9:07 UTC (Tue)
by mpr22 (subscriber, #60784)
[Link] (3 responses)
I have actually encountered people who should know better engaging in behaviours sufficiently similar to "security fixes only!", though not on Linux. In this case it was approximately "fixes for our known problems only, cherry-picked from the more recent patches so that we can play semantic games with the qualification authority to avoid requal", and they subsequently ran into a problem that had been fixed in the latest patch, which they had been sent. They were somewhat upset when they were told that they wouldn't get support unless they applied the patches properly. So yes, these people exist, and what matters is not the detail metric "how large a portion of the general-public user base do they feed kernels to?", but the overall metric "how important is it that they not screw up?".
Posted Oct 11, 2011 10:34 UTC (Tue)
by PaXTeam (guest, #24616)
[Link] (2 responses)
Posted Oct 11, 2011 18:58 UTC (Tue)
by dlang (guest, #313)
[Link] (1 responses)
Far too many people have the opinion that change, _any_ change, should be avoided, and so they avoid making any changes that aren't either tagged as security fixes or needed to fix an outage.
Posted Oct 13, 2011 8:23 UTC (Thu)
by Klavs (guest, #10563)
[Link]
There's a reason people pay RHEL to backport ONLY fixes (bugs, security, etc.) - so that the changes are as small as possible, increasing the likelihood that the number of bugs with security impact goes down over time as bugfixes are applied.
Posted Oct 11, 2011 15:48 UTC (Tue)
by vonbrand (guest, #4458)
[Link] (17 responses)
You contest my statement that developers know the security implications of only a few of the bugs they fix. Fine. Then prove me wrong by showing lots and lots of examples of patches where the developer did know the security impact. The "apply important fixes" angle is presumably well covered by the stable kernel series. If somebody wants to do their own work here, they'd better know what they are doing. Just grepping around for some keywords in the changelogs won't get them very far.

Your second point is pure nonsense, ext* (and a lot of other classes of patches) are important. Nobody is advocating suppressing any class of patches, just declining to flag commits with potential miscreant attractors. Third, no one I have heard of is trying to suppress security information. Nobody is in any position to do so, in fact. What I do see is efforts to fix security bugs and get the fixes out to anybody affected as soon as humanly possible, hopefully without alerting would-be miscreants beforehand. Sure, it's not perfect; you are welcome to propose ways of making it more fluid. And yes, LWN's security errata page is a part of this effort.

I never said exploits are written as you say, so this point is moot. Security through obscurity works as long as the attackers are in the dark, which will usually be for a limited time only. So it can have a short-term beneficial effect, but won't normally work long term. That is all I said.

Whether the net effect of noting in upstream changelogs that a patch fixes, say, an overflow is positive or negative is very much up for debate. AFAICS, there are clear negative effects (miscreants grepping, an "only apply flagged security fixes" mindset) and few (if any) positive ones, so the net result would be a loss. You clearly see it otherwise, but haven't shown any positive result of your proposal. And the ones in charge of writing the kernel's commit messages are the ones in charge of the decision, not you or me.
Posted Oct 13, 2011 8:32 UTC (Thu)
by Klavs (guest, #10563)
[Link] (14 responses)
If you're smart enough to exploit a bug of a certain type, you'll be looking at the code(!) for any lines of C that look like the kind of bug you want to exploit. I would like to hear of a person smart enough to actually write an exploit, but dumb enough to be helped by information about security impact in changelogs. Pls. just find me one who can actually write an exploit and who thinks he/she is helped by such a msg in a changelog :)
As you may have gathered, I see no reason for actively excluding available information of bugs having a security impact.
Anyone dumb enough to think that because the changelog sometimes says something along the lines of "security impact" (which it actually already does at times, AFAIK) they should only upgrade when that's the case is already doing their job horribly - and won't be worse off if any relevant information were actually in the changelogs.
Posted Oct 13, 2011 8:47 UTC (Thu)
by malor (guest, #2973)
[Link] (13 responses)
Only good guys read changelogs, basically, so hiding security information only hurts good guys. And it makes the Linux kernel look more secure than it actually is, which is another form of lying by omission.
There is a reason why many many admins try to limit patches to known security-related issues... it's because they're constantly getting new features shoveled at them, with brand-new, unanalyzed potential security impacts. And programmers are very good at introducing weird and subtle regressions with their fixes. Architects and administrators only get shit when stuff breaks, so they try to change as little as possible with a setup, once they know it works. Even tiny tuning adjustments in the kernel code can throw a large-scale application out of kilter, so the people in charge of actually putting all that abstract code to real work in the world try to avoid running new code in a given application unless they either have to, or need new features.
The harder the programmers make it on the architects and administrators, the more appealing the BSDs, Solaris, and even Windows look. And hiding security impacts makes it much harder for them to do their jobs. Programmers just wave their hands and say "You should just run all the code we give you, no matter what", but they don't lose their jobs when the cluster dies.
Posted Oct 13, 2011 17:33 UTC (Thu)
by dlang (guest, #313)
[Link] (8 responses)
This can lead to worse security than not making such comments in the commit message.
In my opinion, this is a far bigger reason to not put such comments in the commit message than worries about bad guys reading them
Posted Oct 13, 2011 20:20 UTC (Thu)
by raven667 (subscriber, #5198)
[Link]
Posted Oct 15, 2011 16:40 UTC (Sat)
by PaXTeam (guest, #24616)
[Link] (6 responses)
if security fixes are marked as such then how can they be missing 'security related fixes'?
> This can lead to worse security than not making such comments in the commit message.
how does fixing a security bug *decrease* security?
Posted Oct 15, 2011 21:19 UTC (Sat)
by dlang (guest, #313)
[Link] (5 responses)
so if you only install fixes that were tagged as security fixes, you will miss other fixes that have security implications because those implications were not known at the time they were written, and so they were not tagged.
it's not that fixing a security bug decreases security. what decreases security is the attitude that if it's not tagged as being a security fix, then it doesn't have security implications.
tagged as a security fix guarantees security implications
not tagged as a security fix does not guarantee that there are no security implications.
And if even you are making the mistake that tagging known security fixes means that other fixes don't need to be applied (on the basis that they don't have security implications), then you have just proven the case that many of the kernel developers are trying to make: that tagging some patches as security related will cause people to ignore the others and end up with less security than if they had updated to a newer version with all of the fixes.
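The asymmetry dlang describes is easy to see with a toy changelog. The commit subjects below are made up for illustration; any keyword search over real commit messages behaves the same way:

```shell
# Three hypothetical commit subjects: one explicitly tagged as a security
# fix, one silent security fix, one genuinely non-security cleanup.
printf '%s\n' \
  'net: fix remote buffer overflow (security fix, CVE-2011-9999)' \
  'mm: correct off-by-one in page accounting' \
  'doc: fix typo in README' > changelog.txt

# Grepping for the tag finds only the commit whose security impact was
# known and disclosed at commit time:
grep -c 'security fix' changelog.txt    # prints 1

# The off-by-one may well be exploitable too, but no keyword search can
# reveal that; the absence of a tag proves nothing.
```

The tag gives a guarantee in one direction only: a hit is a known security fix, while a miss says nothing either way.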
Posted Oct 16, 2011 6:35 UTC (Sun)
by malor (guest, #2973)
[Link]
That's up to them to decide. The guys running these huge, complex systems are pretty goddamn good at what they're doing, and you guys are forcing new, untested code down their throats.
Let people have their own agency, and make their own decisions. Don't try to force them to do things the way YOU think they should, sitting there coding on your laptop. Let the guys (and gals) standing in those roaring data centers full of thousands of machines make those calls for themselves.
Just be honest, and things will come out better for the people who choose to use your code. If you are not expert in large-scale systems management, you shouldn't try to substitute your judgement for that of those who are.
Posted Oct 16, 2011 21:37 UTC (Sun)
by PaXTeam (guest, #24616)
[Link] (3 responses)
so far so good.
> so if you only install fixes that were tagged as security fixes, you
now, following this logic, no one will ever be able to apply all security fixes, since the security impact of a given commit may reveal itself at any time in the distant future. therefore everyone who applies anything (tagged or not) is in a constant state of 'not tagged as being a security fix, then it doesn't have security implications'. IOW, i don't see the usefulness of your statement; it looks like a tautology.
> what decreases security is the attitude that if it's not tagged as being
why does it decrease security?
and since you've just established that everyone can only do selective backporting, regardless of commits being tagged with whatever or not, this attitude is seemingly prevalent, even you suffer it yourself, so why does it matter again?
> And if even you are making the mistake that tagging known security fixes
actually, i don't make that mistake, in fact, i don't see it as a mistake and you have yet to explain *why* it is a mistake at all. for a start, your acknowledging that fixing a security bug doesn't decrease security means that you're already in contradiction.
> then you have just proven the case that many of the kernel developers
this one bleeds from several wounds, i'm afraid:
1. you haven't shown evidence that people are actually ignoring anything else but explicitly marked security fixes (i think i asked this one before ;).
2. you haven't shown evidence that ignoring anything but explicitly marked security fixes is a bad thing (you actually acknowledged that it's not, now what ;).
3. you haven't explained what 'all of the fixes' means. you and others already said that *everything* not proven otherwise is a security fix, therefore the same everything must be backported by everyone who cares, which in practice is possible only by following linus's git HEAD. i bet even you don't dare to do that to your company's servers (i actually wonder what you do, given that you don't use -stable either).
4. you haven't shown evidence that *not* ignoring (i.e., backporting) random unmarked patches increases one's security/etc. you see, all those security and other fixes are the result of some *earlier* change that *introduced* the problem, so you'd have to somehow prove that the net result of backporting everything under the sun (i.e., following git HEAD) is positive, not negative.
Posted Oct 17, 2011 1:09 UTC (Mon)
by raven667 (subscriber, #5198)
[Link] (2 responses)
I don't know if you pay attention to kernel development, but from my understanding running the latest Linus kernel release is what is recommended in order to have all the fixes. I'm sure there are some people who _do_ run raw Linus kernels and who want the latest fixes as soon as they are out of the oven. The current Linus kernel certainly has more security-relevant fixes than any vendor kernel that only has backports, as the very nature of cherry-picking backports is going to miss security fixes which aren't known at the time the fix is made. That is what the kernel release announcements recommend.
Many people think running the latest kernel.org release is potentially too disruptive due to other changes unrelated to bug and security fix work. Unfortunately, trying to separate feature work from fix work didn't work as a process from the kernel developers' perspective, which is why the development process was changed in the transition from 2.4 to 2.6 so that feature and architectural changes are fed right into the main line of development.
I think that the major vendors (RedHat, Debian, SuSE, various embedded, etc.) should continuously re-evaluate how close they can run to the main line of kernel.org kernels rather than trying to cherry-pick backports and maintain their own "stable" forks. Ideally the regular kernel releases would be equivalent in stability and superior in security to the current situation.
Posted Oct 17, 2011 6:53 UTC (Mon)
by malor (guest, #2973)
[Link] (1 responses)
And that, right there, is the single core problem with Linux security.
Security is hard. It means more pain during development. Separating fixes and features is a pain in the ass. But if it doesn't get done, you end up in the snarl they're in now.
Even the developers themselves can't provide secure shared access to a single Linux kernel image. How can anyone else expect to?
Posted Oct 17, 2011 7:28 UTC (Mon)
by dlang (guest, #313)
[Link]
especially when the bugfix can end up refactoring the code in the process.
yes, this is a big problem with Linux, but the rate of fixes (of all kinds) is the great strength of Linux. At this point nobody knows how to fix the weakness without giving up the strength. There are other OS groups (OpenBSD comes to mind) that seem to follow the philosophy you are advocating, but despite the fact that they had several years of a head start on Linux, their development models have caused them to be far less useful on current hardware (and have therefore made any security benefits they may provide far less useful).
I don't understand your comment about the kernel developers being unable to provide shared access to a single kernel image.
are you referring to the fact that there was a privilege escalation vulnerability on kernel.org? if so, any conclusions about what the problem was need to wait until we learn what happened. And in any case, the vast majority of the kernel developers were not involved in administering the systems (and note that it was several systems, not a single system)
Posted Oct 13, 2011 18:32 UTC (Thu)
by raven667 (subscriber, #5198)
[Link] (3 responses)
I just have to respond to this one thing. The kernel announcements and general discussions have been pretty open about the belief that it is _less_ secure than many people would like. Just the other day, in a discussion about containers and namespaces, a major kernel developer's comment was that it wasn't worth the effort to increase security separation between containers because there will always be local root exploits that will break separation.
The kernel developers do not appear to be trying to claim more security by omission, they are explicitly claiming less.
Posted Oct 14, 2011 1:29 UTC (Fri)
by malor (guest, #2973)
[Link] (2 responses)
Well, that's good in the sense that they're admitting there's a big problem. But I would argue that if they can't prevent user accounts from gaining root access, then there's really not much point in even HAVING user accounts. If your summary is accurate, there's no way you can safely use Linux to share access between potentially hostile accounts on one kernel. You can sorta do it through virtualization, but running an entire kernel per user is a hell of a lot of overhead to carry around.
Security is probably the hardest problem in computing, and if they are indeed saying "there will always be root exploits", it sounds like they're giving up on the idea entirely. They want to make it go fast, and security be damned.
This is something that people need to be very aware of; that wording makes it sound like they're throwing in the towel. If so, Linux is no longer appropriate for many use cases, particularly when lives are at risk.
Posted Oct 14, 2011 3:10 UTC (Fri)
by dlang (guest, #313)
[Link]
the kernel developers are not giving up.
there was one person who made the claim in the discussion on containers that containers were not good enough, but on the other hand, I'm one of the people who says that virtualisation isn't good enough isolation for some applications due to possible bugs in the hypervisor. It all depends on how much security you are going for.
This is part of the reason that SELinux is optional.
Posted Oct 14, 2011 7:14 UTC (Fri)
by anselm (subscriber, #2796)
[Link]
Not necessarily. Maybe they're just being realistic while they're trying to fix problems as they are discovered (and prevent them where they can).
With a program of the size and complexity of the Linux kernel, I would be very sceptical of anybody claiming the logical opposite, namely that »there will never be even a single root exploit«. Not even the OpenBSD folks subscribe to that kind of hubris ;^)
Posted Oct 15, 2011 16:31 UTC (Sat)
by PaXTeam (guest, #24616)
[Link] (1 responses)
actually no, what i contest is your assertion that there are 'very, very few' commits fixing a bug with a known security impact - too few to be of relevance. have you got evidence for this assertion?
> Then prove me wrong by showing lots and lots of examples of patches
this is tangential, but i actually did provide examples in the past, here on LWN, in threads you also participated in so let google be your friend if you really want to see examples.
> The "apply important fixes" angle is presumably well covered by the
i guessed you'd bring this up but it means you also shoot yourself in the foot ;). you see, there's a contradiction in your statements. according to you:
> Count me in the camp with "any kernel bug that can't be shown to be
that implies that most of the bugfixes must be backported to -stable but we know that's not the case. therefore either -stable doesn't apply all 'important fixes' (including security fixes) or most bugfixes aren't security related as you claimed before. which is it?
> Your second point is pure nonsense, ext* (and a lot of other classes of
we aren't talking about suppressing patches per se (that'd be crazy), but important information in commit messages (security impact in general, file system corruption for the ext* case i brought up as comparison).
so you're admitting that there's a useful category of impact information that you would not advocate suppressing. that's a good step! now you'll have to explain why 'security impact' is different from 'filesystem corruption' in this regard. for that you'll have to explain how exploiting security bugs can never ever corrupt filesystems (else you'll have to conclude that at least some security fixes must be marked for filesystem corruption, which is enough to grep for, contradicting your other desire to make security fixes non-greppable), and also why helping miscreants corrupt filesystems is a good thing (i.e., you can't use the same argument for contradicting purposes).
> Third, noone I heard of is trying to supress security information.
did you read the Linus mail (and the whole thread actually) i linked to? he admitted it.
> What I do see is efforts to fix security bugs, and get the fixes out to
how can they be fixing security bugs when they don't even know what bugs have a security impact? or are you now praising selective fixing of bugs?
> And yes, LWN's security errata page is a part of this effort.
so when kernel devs put security impact info into a commit it's a bad thing but when LWN points at the same commit it's a good thing. i think you want to try this one again.
> I never said exploits are written as you say, so this point is moot.
but you did, even in this latest response in yours:
> [...]hopefully without alerting would-be miscreants beforehand.[...]
this statement means that you assume that people can write exploits *because* they read about exploitable bugs in the commit message. that is, you're claiming that to exploit a kernel bug one has to read about the fact that a given commit fixes it, and magically the exploit appears out of thin air.
in the reality out there, people writing exploits couldn't care less about what the commit message says about the security impact, instead they'll look at the actual code and decide based on that. in other words, your justification to cover up security impact information in commit messages doesn't stand on any legs so far.
> Security through obscurity works as long as the attackers are in the
have you got evidence that attackers are in the dark when all they can rely on is the code in a patch (vs. the commit message)? as a sidenote, i'd like to hear your theory on how 0-day exploits are written 'cos they certainly can't be based on any security related information in the commits.
> AFAICS, there are clear negative effects (miscreants grepping,
you haven't shown any evidence for this.
> "only apply flagged security fixes" mindset)
you haven't shown any evidence for this. (see a theme here? repeating the same statement without evidence doesn't make it any more true)
> and few (if any) positive ones,
you must be out of your mind if you think that making security fixes public has no positive outcome. what else on earth would allow people to fix their systems?
> so the net result would be a loss.
since all the premises for this conclusion have yet to be shown to be true, the jury is still out on this one.
> You clearly see it otherwise, but haven't shown any positive result of your proposal.
it's not my proposal, it's what most of the rest of the world does (heck, even the linux world, just ask any distro maintainer how much they appreciate that they have to reverse engineer security impact information from kernel commits).
Posted Oct 15, 2011 21:11 UTC (Sat)
by dlang (guest, #313)
[Link]
I see the -stable branch as useful for fixing any functional bugs that slipped through, but I don't rely on it for fixing security bugs.
Posted Oct 9, 2011 14:39 UTC (Sun)
by vonbrand (guest, #4458)
[Link] (4 responses)
If you want the work done, do it. Just whining that nobody else wants to do it for you won't get it done.
Posted Oct 10, 2011 0:15 UTC (Mon)
by malor (guest, #2973)
[Link] (3 responses)
Dude, if I could read minds, I'd be using my superpower in much more interesting ways.
I can't be honest for other people. It is not possible.
Posted Oct 10, 2011 2:32 UTC (Mon)
by vonbrand (guest, #4458)
[Link] (2 responses)
I thought you were interested in possible security implications of bugs. What the author of the patch was thinking at the time is irrelevant, she might have been daydreaming of the vacation with her boyfriend when she noticed a potential integer wraparound.
Posted Oct 10, 2011 22:23 UTC (Mon)
by malor (guest, #2973)
[Link]
If a developer knows a bug has a definite security impact, I want to know that. That's all. Nobody else can know what's in his or her head, or read what's on the bug reports that have been submitted.
Be honest about known security implications, instead of hiding them. It's not a difficult request. It probably takes more work to come up with euphemisms to hide the security issue than it does to just write what they're actually doing.
Posted Oct 10, 2011 22:25 UTC (Mon)
by malor (guest, #2973)
[Link]
Posted Oct 9, 2011 14:51 UTC (Sun)
by vonbrand (guest, #4458)
[Link] (9 responses)
I never said that CVE is a waste of effort, but it isn't part of each program's change log either.
Posted Oct 10, 2011 8:19 UTC (Mon)
by PaXTeam (guest, #24616)
[Link] (8 responses)
you declared that research into the security impact of fixes is 'Pure noise, a complete waste of effort' due to false negatives and positives. you can't have this both ways, i'm afraid ;).
> but it isn't part of each program's change log either.
care to list a few projects (preferably something as 'important' as linux) that actively suppress CVE info as linux developers do?
Posted Oct 10, 2011 8:38 UTC (Mon)
by jrn (subscriber, #64214)
[Link] (5 responses)
"git log --grep=CVE" does not show me signs of active suppression of CVE info. Are you sure you didn't misunderstand Linus?
HTH,
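For what it's worth, the same kind of search can be tried on a throwaway repository. The commit subjects below are invented; the only point is that a CVE identifier in a commit message, when present, is trivially findable (assumes git is installed):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q

# Two hypothetical commits: one mentioning a CVE, one not.
git -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m 'fs: fix symlink traversal (CVE-2011-0000)'
git -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m 'sched: tidy up comments'

# The CVE reference shows up immediately; nothing is hidden from grep.
git log --oneline --grep='CVE-'
```

Whether such references are added at all is the policy question being argued here; the mechanics of finding them are not in dispute.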
Posted Oct 10, 2011 9:03 UTC (Mon)
by PaXTeam (guest, #24616)
[Link] (4 responses)
Posted Oct 10, 2011 9:31 UTC (Mon)
by jrn (subscriber, #64214)
[Link] (3 responses)
Some fixes are even made before a CVE is allocated. Since commit messages are not changed after the fact, the git log is not a good place to keep a canonical mapping for CVEs to commit names. Good thing the CVE database exists, huh?
Posted Oct 10, 2011 13:02 UTC (Mon)
by PaXTeam (guest, #24616)
[Link] (2 responses)
Posted Oct 10, 2011 13:48 UTC (Mon)
by vonbrand (guest, #4458)
[Link] (1 responses)
Oh, come on. In the development branch of the kernel somebody notices a glitch and fixes it. Some weeks or months later, somebody running a production kernel finds a security problem, which is dutifully assigned a CVE and the whole circus; the patch is backported from the development branch (or redeveloped independently). Or somebody fixes a bug, somebody else looking over the commits gets intrigued, develops a PoC exploit, and a CVE gets assigned. Or a bug is discovered and fixed, its security impact is assessed and reported, and a CVE is issued. In all these scenarios the CVE assignment comes after the patch is integrated. Small wonder the CVE isn't mentioned in the changelog.

Yet again, if you want to decorate each commit with CVE numbers, PoC exploits, and detailed security assessments, knock yourself out in your own git tree. For me it is enough that the bug got fixed, and move on. Sure, security fixes should be backported. You know what, that is what the -stable trees are for...
Posted Oct 10, 2011 14:43 UTC (Mon)
by PaXTeam (guest, #24616)
[Link]
> For me it is enough that the bug got fixed, and move on.
how do you know when a security bug gets fixed when such information is covered up? have you got some psychic abilities or other channels that mere mortals are not privy to?
> Sure, security fixes should be backported.
yes, if you know which commits fix security issues. so go ahead, point out every single commit that has a CVE but isn't mentioned in the git commit log. you see, if you can't find them, then how could others?
> You know what, that is what the -stable trees are for...
wait, are you saying that the -stable trees contain all the CVEs that are missing in the Linus tree (since the importance of the backported commits must be known by then)? can you back it up with actual numbers? ;)
Posted Oct 10, 2011 13:34 UTC (Mon)
by vonbrand (guest, #4458)
[Link] (1 responses)
I did not say that research into security impact of fixes is useless, I contend that the first impression by the one doing the patch is probably useless. Quite a different statement.
Posted Oct 10, 2011 14:31 UTC (Mon)
by PaXTeam (guest, #24616)
[Link]
> Any such assesment they do will miss an order of magnitude more
i don't see 'first impression' in there, but i do see 'assessment' which in my book is much closer to research than what you now claim you meant. but let it be ;), the main thing is that you now admitted that there is such a thing as security bugs (you're one step ahead of the kernel devs) and their research is not useless, contrary to what Linus/Ingo/etc claimed over the years. the next step you'll have to make is that doing the research is not enough, it has to be published to be of value and then we're on the same page and can ask the kernel devs together to not suppress such research. i'm so rooting for you!
Posted Oct 7, 2011 1:25 UTC (Fri)
by malor (guest, #2973)
[Link] (1 responses)
People are asking you to stop lying. How could anyone argue that this is a bad position?
If people incorrectly think that Linux is safer than it is, then it will get used in more places; people will depend on it to keep them safe when, if the devs were being truthful, they wouldn't. This is an advantage to the Linux devs, increased job security, with a direct disadvantage to the people being lied to.
Lying to take advantage of people is wrong, full stop. In this context, in the modern world, they could die because of this deception. Short of actively inserting vulnerabilities themselves, there is probably nothing more ethically wrong that any coder could do.
That's all that's being asked here: stop lying. Nothing more. Stop actively hiding the impact of your bugs. You don't have to go out of your way to figure out what those impacts are, but if you KNOW a bug is security related, tell the truth.
People are asking you to tell the truth, and you guys are shouting "NO FUCKING WAY!"
Posted Oct 9, 2011 16:20 UTC (Sun)
by vonbrand (guest, #4458)
[Link]
How is not tagging a patch that might perhaps fix a security problem with a lot of explanation, which will take work to research and write up, "lying"? I'd prefer to have kernel hackers working on what they do best, not setting themselves up for all kinds of accusations along the lines of "didn't see the obvious [with 20/20 hindsight] security problem here!" and "totally incompetent, this can't possibly be a security risk!" leading up to "liar!" A kernel bug is extremely serious, period. Anything else, like a relative security layman's assessment of whether it could be exploited, moreover with little research and no real evidence, is just noise. If somebody wants to publish a kernel tree with CVE numbers and other decorations as notes attached to the commits, it is a free world.
Posted Oct 7, 2011 7:47 UTC (Fri)
by PaXTeam (guest, #24616)
[Link]
that's a false dichotomy, not to mention a confusion of cause and effect. in particular, what we've been asking for is honesty. being honest implies that 1. if i submit a commit with a description containing, say, 'this is a fix for a buffer overflow' then i do *not* want them to remove 'buffer overflow' from there (i.e., they're expected to 'change what they accept', censorship at this level is simply ridiculous), 2. if i submit a bugreport with a PoC exploit clearly demonstrating code execution then i want them to mention that simple fact (i.e, 'change what they generate' as the current practice is outright coverup and/or using creative wording in the hope that somehow it'll evade people's mental detectors).
> I have never heard of a case where the kernel team has refused to accept
strawman, it's never been about whether they accept a security fix (i think it'd even be criminal negligence if they refused to fix a security bug), but what they actually disclose with it, i.e., not mentioning the security relevance of a commit.
> what the kernel team has refused is to start tagging fixes as being
you're wrong, check the commit log for CVE numbers and other known (and lesser known) keywords associated with security issues. the problem is that they used to do a better job at actually disclosing security bugs but have been playing dumb for a few years now.
i also note you didn't answer what you, the security professional, would do if the world at large stopped disclosing security fixes, kernel dev style. that tells me much more about your (not) being a professional and/or (not) actually believing in your own arguments than any posturing here.
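The kind of keyword search being argued about here can be sketched roughly as follows. This is a minimal illustration, not an official keyword list; the patterns and commit messages are hypothetical:

```python
import re

# Illustrative security-related patterns; not an official or complete list.
SECURITY_PATTERNS = [
    re.compile(r"CVE-\d{4}-\d{4,}"),
    re.compile(r"buffer overflow", re.IGNORECASE),
    re.compile(r"integer (?:overflow|wraparound)", re.IGNORECASE),
    re.compile(r"privilege escalation", re.IGNORECASE),
]

def flag_security_commits(commits):
    """Given (sha, message) pairs, return those whose message matches
    any of the patterns above."""
    return [(sha, msg) for sha, msg in commits
            if any(p.search(msg) for p in SECURITY_PATTERNS)]

# Hypothetical commit messages, for illustration only.
sample = [
    ("abc123", "agp: fix ioctl bounds check (CVE-2011-1745)"),
    ("def456", "Documentation: update maintainer address"),
]
assert [sha for sha, _ in flag_security_commits(sample)] == ["abc123"]
```

Against a real tree one would presumably run something like `git log --grep='CVE-'` instead; the complaint in this thread is precisely that such greppable keywords are often absent from the commit messages.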
Posted Oct 6, 2011 22:21 UTC (Thu)
by nix (subscriber, #2304)
[Link] (1 responses)
But, yes, this is quite contrived, and any software which checks for workarounds for security holes in -rc kernels by version number checking would rapidly become unmaintainable. I hope.
Posted Oct 7, 2011 7:50 UTC (Fri)
by PaXTeam (guest, #24616)
[Link]
Posted Oct 9, 2011 15:22 UTC (Sun)
by vonbrand (guest, #4458)
[Link] (1 responses)
If you look at any guidelines on secure programming, they are almost identical to "program carefully," only that they emphasize some points. Kernel programming is work that requires utmost care by its nature. Program with care, and you should be in the clear. Finding out if some random mistake you notice and fix has security implications is extra, non-productive work. If you track down some reported bug, your fix will presumably refer to the report (with PoC and security assessment). In no case are commit comments altered. And I'm convinced that bugs whose security implications are known only to the committer are a vanishingly small minority of those being fixed. Adding comments detailing how a signedness mistake or a possible wraparound could lead to a buffer overflow or other problems later on is pure noise. The fix has to be applied regardless.
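The signedness/wraparound class of bug mentioned here can be made concrete with a small sketch. This is a hypothetical illustration (the buffer size and field names are invented), simulating in Python the unsigned 32-bit arithmetic a C length check would perform:

```python
BUF_SIZE = 4096
U32_MASK = 0xFFFFFFFF  # simulate C's unsigned 32-bit arithmetic

def length_check_passes(header_len, payload_len):
    """Buggy bounds check: the sum is computed modulo 2**32, as an
    unsigned 32-bit addition in C would be, so a huge payload length
    can wrap around to a tiny total and slip past the check."""
    total = (header_len + payload_len) & U32_MASK
    return total <= BUF_SIZE

# A sane request passes, as expected:
assert length_check_passes(16, 100)
# But so does one whose 32-bit sum wraps: 16 + 0xFFFFFFF8 == 0x100000008,
# which truncates to 8 -- the subsequent copy would overflow the buffer.
assert length_check_passes(16, 0xFFFFFFF8)
```

The point of contention in the thread is not whether such a fix should be applied (everyone agrees it should), but whether the words "integer wraparound" belong in its changelog.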
Posted Oct 10, 2011 8:54 UTC (Mon)
by PaXTeam (guest, #24616)
[Link]
they're not. 'program carefully' is as useful as 'live healthily'. useless information by itself, the devil's in the details.
> Finding out if some random mistake you notice and fix has security
and you keep bringing up this strawman because...? no one asked kernel devs to do the assessment themselves, what they're being asked is to pass down that info if someone else did it for them (and their users).
> If you track down some reported bug, your fix will presumably refer to
what i'd send out would most definitely have this information *but* it would then be *censored* in the actual commit message. i'll refer you to Linus's statement i linked somewhere above.
> In no case are commit comments altered.
they are, and not only for covering up security fixes. Linus routinely adds his own blurb to commits where he thinks something's missing (look for "- Linus" in the commit message).
> And I'm convinced that the bugs with known by the commiter only security
and how would you know that a given commit's security impact was known by the committer if that information is covered up? could that suppression of information be the cause of the bias in your beliefs perhaps?
> Adding comments detailing how a signedness mistake or a possible
depends on what you understand as 'details' in the above. personally i'd already be happy if only the words 'integer wraparound' or 'buffer overflow' appeared in the commit (and i don't care about the 'how to exploit this'), but even *they* are covered up (i'll refer you again to Linus's assertion that 'no greppable keywords'). looks like you've just proved how much you personally value such information ;).
Posted Oct 9, 2011 17:01 UTC (Sun)
by jrn (subscriber, #64214)
[Link]
That commit does not have much risk of causing a regression, so the threshold for justifying it on security grounds does not have to be very high. So let's see:
I would say that the security impact in the context of a 3.1-rc9 kernel is positive, since it documents (through the output of commands such as "uname -a") that the kernel follows a certain well documented set of behaviors and sysadmins can act accordingly. On the other hand, backporting that patch to a 3.0.y stable kernel would have severe negative security impact, because it would create a false impression that bugs affecting v3.0 and not affecting v3.1-rc9 have been fixed. Even looking at this from the point of view of security alone, I am glad that commit was not tagged with "Cc: stable".
Hope that helps.
Posted Oct 6, 2011 22:14 UTC (Thu)
by nix (subscriber, #2304)
[Link] (3 responses)
(Perhaps you meant 'ten years ago'?)
Posted Oct 6, 2011 22:49 UTC (Thu)
by malor (guest, #2973)
[Link] (2 responses)
The basic point remains: back then, a security breach was a hassle, but generally cost you only the time to fix it. These days, having your network penetrated can have extremely unpleasant consequences, up to and including death.
Posted Oct 7, 2011 9:00 UTC (Fri)
by cate (subscriber, #1359)
[Link] (1 responses)
13 years ago many security people were thinking about perimeters, DMZs, etc., believing that the internal net was safe because it was "in control" of the security people. Only to discover that they were very wrong: people attached modems (then laptops, then USB disks) against corporate rules.
I think now we have the same problem: some people think that the kernel is unbreakable (if they update quickly after an announced CVE), and thus tend to trust the "computer perimeter" too much.
IMHO if a system can kill a man because of a kernel bug, it means that the person responsible for security was very incompetent.
Posted Oct 7, 2011 10:42 UTC (Fri)
by ortalo (guest, #4654)
[Link]
Posted Oct 7, 2011 10:35 UTC (Fri)
by ortalo (guest, #4654)
[Link]
It makes me wonder how it is possible to prevent such deviations from interfering with the core of the subject.
Reminds me of the ITSEC assumption: physical > procedural > logical security. (If true, pretty annoying in a community context, btw, at first glance.)
Posted Oct 7, 2011 10:39 UTC (Fri)
by ortalo (guest, #4654)
[Link] (1 responses)
PS: Personally, I'd favor complementing that with a reward system based on beer/wine bottles.
Posted Oct 9, 2011 7:36 UTC (Sun)
by tfheen (subscriber, #17598)
[Link]
Posted Oct 7, 2011 10:46 UTC (Fri)
by ortalo (guest, #4654)
[Link] (1 responses)
You can keep it private if it's not shareable or if legal action is in process; but I am especially interested in knowing the profile of the attackers going after something like kernel.org.
Posted Oct 7, 2011 13:14 UTC (Fri)
by corbet (editor, #1)
[Link]
Posted Oct 7, 2011 11:52 UTC (Fri)
by mb (subscriber, #50428)
[Link]
And stop bitching about 'security theater' and listen to what that community is telling you. Security is inconvenient, and not a lot of fun for programmers, because you're chasing down all these bizarre corner cases instead of writing something new and cool, but it's terribly important.
> asking that lots of extra information be attached to bug fixes that
> takes up developers time and gets in the way of tracking down the
> bizarre corner cases.
Nothing of the sort was asked, rather, we asked kernel devs to document with a few greppable words what they already know about the security impact of a given commit.
I'm afraid that as soon as it becomes easy to find out via grep which patches potentially fix security issues that people would start publishing stats about how many security issues have been fixed in the Linux kernel and that these stats would be used in negative publicity about the Linux kernel.
> easier for them to follow this invalid logic.
> is the outcry that will come when patches that are _not_ tagged as being
> security patches end up being found to be security related at some later
> time (including possibly before the kernel is even released)
> (and probably misleading) comments.
Yes, typo.
> annotated with the "greppable words".
> changes would only add more noise to the commit messages.
> only some of the real security fixes as being security fixes to have a
> negative value,
Which is bullshit, in letters ten feet tall.
No, Linus actively removes security notes from changelogs.
And, again: all that is being asked is to stop lying.
> in there to start with, and it's not.
parse that again, *nobody* asked the kernel devs to evaluate the security impact of bugs themselves.
That's a false alternative. You're claiming that security research by third parties is somehow equivalent to honesty by the people making the current patch sets.
Well, for what it's worth, the 'lying' thing comes from me, not from PaXTeam, and I stand behind that assertion 100%. Whether or not anyone happens to like that description, it is accurate. Information is being deliberately suppressed, to the benefit of the people doing the suppressing, and the detriment of the people the information is being hidden from.
There are a number of words or phrases, at varying places along the sliding scale of euphemism vs. dysphemism, for describing deliberate concealment of true information, but the unqualified term "lying" is not one I regard as well-used for such, being better reserved for the knowing and deliberate emanation of false information. "Lying by omission" is adequate, if you are absolutely insistent that the word "lying" must be used.
Try the security alert from five days ago:
* Flaws in the AGPGART driver implementation when handling certain IOCTL commands could allow a local user to cause a denial of service or escalate their privileges. (CVE-2011-1745, CVE-2011-2022, Important)
Oh, and I didn't mention the remote root exploit from today's post, because that looks hard to exploit, involving an attempt to mount a CIFS share from a hostile server. But it is remote root, and using CIFS to share files across security boundaries is hardly unheard of.
> to make people trust you and want to work with you.
> out-of-tree stuff that is relatively little used[1]:
> any kind is levelled -- which takes about six seconds on a list as
> hardboiled as the kernel list -- out come the vituperative personal
> attacks, conspiracy theories, and imputations of malice -- and of course
> he is never wrong either, no matter what evidence is presented
What we actually ask: reveal security implications you already know of. That's it. The entire request, in two words, is "be honest". You wouldn't think that would be a big deal.
> for whatever you are trying to do.
> flagged as with security impact are important, so encouraging said
> selectiveness is a loss.
> like "overflow" to zero in on potential security problems.
> the long term or the only security measure.
> in our opinion, much more than the gain.
> It's all out there for the taking.
> commit, they will skip installing a lot of security related fixes.
But less stability, because patches have an annoying habit of introducing both regressions and new, unwanted features, which can themselves, of course, have all kinds of nasty security implications.
You have just proven the case that many of the kernel developers are trying to make: that tagging some patches as security-related will cause people to ignore the others, and have less security than updating to a newer version with all of the fixes.
> will miss other fixes that have security implications because those
> implications were not known at the time they were written, and so they
> were not tagged.
> a security fix, then it doesn't have security implications.
> means that other fixes don't need to be applied (on the basis that they
> don't have security implications),
> are trying to make, that tagging some patches as security related will
> cause people to ignore the others and have less security than updating
> to a newer version with all of the fixes
3. you haven't explained what 'all of the fixes' means. you and others already said that *everything* not proven otherwise is a security fix therefore the same everything must be backported by everyone who cares which in practice is possible only by following linus's git HEAD. i bet even you don't dare to do that to your company's servers (i actually wonder what you do given that you don't use -stable either).
Unfortunately, trying to separate feature work from fix work didn't work as a process, from the kernel developers' perspective.
And it makes the Linux kernel look more secure than it actually is, which is another form of lying by omission.
A major kernel developer's comment was that it wasn't worth the effort to increase security separation between containers, because there will always be local root exploits that will break the separation.
Security is probably the hardest problem in computing, and if they are indeed saying "there will always be root exploits", it sounds like they're giving up on the idea entirely.
> their security implications.
> where the developer did know the security impact.
> stable kernel series.
> absolutely neutral with respect to results is a security bug."
> patches) are important. Nobody is advocating suppressing any class of
> patches, just flagging commits with potential miscreant atractors.
> Nobody is in any position to do so, in fact.
> anybody affected as soon as humanly possible, hopefully without alerting
> would-be miscreants beforehand.
> dark, which will usually be for a limited time only.
So I'm supposed to figure out what's in a developer's head and put it in changelogs?
And why on earth are you fighting me so hard about simple honesty?
Jonathan
> exploitable flaws than the ones flagged, and flag many that are
> completely irrelevant. Pure noise, a complete waste of effort.
I mean, to put this another way, the kernel devs are arguing that they should knowingly lie about the impact of bugs.
> asking them to change what they generate.
> a patch because it claimed to be a security fix,
> security or not security fixes
for extra bones, explain the security risk of commit 976d167615b64e14bc1491ca51d424e2ba9a5e84.
You need to generate a rather contrived scenario for that one, but it is possible. E.g., shortly before that commit, 805e969f6151eda7bc1a57e9c737054230acc3cc was committed, which, as it can cause a network interface to go dead, could constitute a form of DoS attack. Userspace software could consult the kernel version and arrange to reduce its network traffic output if a buggy kernel was in use. Thus, skipping this commit would reduce the traffic on that network interface. More importantly, if some *future* commit fixed a security hole -- say, if you were talking about commit a102a9ece5489e1718cd7543aa079082450ac3a2, since we can't foretell the future -- then if that commit was skipped, software which checked the kernel version and refused to do something that would trigger a known hole would be misled into triggering the hole by the absence of that commit.
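The fragility of this sort of version check is easy to see in a small sketch. The version strings below are hypothetical; note in particular that an -rc kernel parses the same as the final release, one of the reasons such schemes become unmaintainable:

```python
import re

def parse_kver(release):
    """Extract the numeric (major, minor, patch) triple from a kernel
    release string such as '3.0.4'; trailing suffixes like '-rc9' are
    ignored, which is itself a source of ambiguity."""
    m = re.match(r"(\d+)\.(\d+)(?:\.(\d+))?", release)
    if not m:
        raise ValueError("unrecognized release string: %r" % release)
    major, minor, patch = m.groups()
    return (int(major), int(minor), int(patch or 0))

def fix_present(running, fixed_in):
    """Naive check: assume a fix is present iff the running version is
    at least the version the fix first shipped in. Backports and -rc
    kernels both break this assumption."""
    return parse_kver(running) >= parse_kver(fixed_in)

# In a backport-free world this works:
assert fix_present("3.0.4", "3.0.2")
assert not fix_present("2.6.38", "3.0.0")
# The trap: an -rc kernel is indistinguishable from the final release.
assert parse_kver("3.1.0-rc9") == parse_kver("3.1.0")
```

A distributor backporting the fix to an older series, or a user running an -rc kernel, would be misclassified either way, which is part of nix's point above.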
> implications is extra, non-productive work.
> the report (with PoC and security assessment).
> implications being fixed is a vanishingly small minority.
> wraparound could lead to a buffer overflow or other problems later on is
> pure noise.
Twenty years ago, if Linux got something wrong, about the worst that would happen was maybe some corporate espionage.
In October 1991, if Linux got something wrong, I don't think corporate espionage would have resulted. About fifteen people's machines would have crashed. :)
Plus, the real culprit is... the attacker.
Like flamewars on mailing lists, this appetite for sensitive subjects ultimately does more harm than good to the topic. (No criticism intended, by the way; I do think it's pretty right to assume that everyone who feels concerned by this topic is well intentioned.)
If the analogy with flamewars is right, this is a human problem then, not a technical one. Maybe it's time to figure out a solution at the organizational level in order to "close the theater".
Information on what happened is still pretty scarce. If anybody has any idea of who did it or how, they have not communicated it to me.