What to do about CVE numbers
CVE numbers, Kroah-Hartman began, were meant to be a single identifier for vulnerabilities. They are a string that one can "throw into a security bulletin and feel happy". CVE numbers were an improvement over what came before; it used to be impossible to effectively track bugs. This was especially true for the "embedded library in our product has an issue" situation. In other words, he said, CVE numbers are good for zlib, which is embedded in almost every product and has been a source of security bugs for the last fifteen years.
Since CVE numbers are unique, somebody has to hand them out; there are now about 110 organizations that can do so. These include both companies and countries, he said, but not the kernel community, which has nobody handling that task. There also needs to be a unifying database behind these numbers; that is the National Vulnerability Database (NVD). The NVD provides a searchable database of vulnerabilities and assigns a score to each; it is updated slowly, when it is updated at all. The word "national" is interesting, he said; it really means "United States". Naturally, there is now a CNNVD maintained in China as well; it has more stuff and responds more quickly, but once an entry lands there it is never updated.
CVE problems
There are a number of problems with CVE numbers, Kroah-Hartman said; he didn't have time to go through the full set listed in his slides [SlideShare]. To begin with, the database is incomplete, with many vulnerabilities missing altogether or rejected for a variety of reasons. Even when CVE numbers are assigned for a vulnerability, the process tends to take a long time and updating the NVD takes even longer.
A big problem, he said, is that the system is run by the US government. People tend not to trust governments in general, and other governments are increasingly distrustful of the US government in particular. The system is erratically funded by the Department of Homeland Security, and is significantly underfunded overall. People need to trust that this sort of vulnerability database will not leak information, but government-run systems are subject to a number of pressures. During a Senate hearing on Meltdown and Spectre, Senators pressed the NVD representatives on why the Senate had not been notified about the vulnerabilities ahead of time, for example. Kroah-Hartman said that he trusts MITRE to run the NVD, but that the number of governmental representatives wanting early access to data is worrisome.
Another problem is complexity. There is a single CVE entry (CVE-2017-5753) for Spectre version 1, but there are over 100 patches addressing it, and more are still coming. A CVE number doesn't point to patches, reducing its usefulness for helping people be sure they have closed a given vulnerability. It is really not possible to handle such complex things with a single ID number, he said.
CVE numbers are abused by security developers looking to pad their resumes. As a result, a lot of "stupid things" are submitted for CVE numbers, and getting the invalid ones revoked is difficult. As an example, he gave CVE-2019-12379, which was published on May 27. It refers to an alleged memory leak in the console driver, one that, Kroah-Hartman said, poses no security threat at all. In fact, it wasn't even a leak, in the end. Even so, the NVD gave the report a security score of "medium" the day after it was submitted. Shortly thereafter the report was disputed, and it turned out that the "fix" introduced a real memory leak of its own. On June 4, Ben Hutchings reverted the patch.
One might think that the story was over at that point, but the CVE entry was only marked "disputed" in July. Distributions like Fedora have policies that require them to ship fixes for all CVE numbers, so they shipped the buggy patch in the meantime. Cleaning everything up took rather longer. This issue was eventually dealt with, but similar things happen every month — or even every week.
Then, he said, CVE numbers are also abused by engineers to bypass internal procedures — in particular, to get their company to ship a particular patch in a product update. Getting a CVE number is a good way to force a patch into an enterprise kernel, for example. Between 2006 and 2018, he said, there were 1005 kernel CVE numbers assigned. Of those, 414 (40%) had a negative "fix date", with the average fix happening 100 days prior to the CVE-number request. Many of these are just worthwhile fixes that couldn't be merged into a shipping kernel without a CVE number behind them. He summarized by saying that this shows that CVE numbers don't really matter; they no longer carry any useful information.
Bug fixes
The kernel community is currently pulling about 22 bug fixes per day into the stable trees; that is about 5% of the volume going into the mainline kernel, he said, and it should be higher. There are one or two stable-kernel releases each week. Each stable kernel is tested as a unified release and given away for free. The kernel developers are fixing about one known security problem per week, along with a vast number of other bugs that are not known to be security issues when they are fixed. All of these fixes are handled in the same way; "a bug is a bug", he said.
He mentioned a TTY fix that was understood, after three years, to close a serious vulnerability. He was the author of both the original code and the fix, and he hadn't realized that there was a security problem in the code. Users of enterprise kernels were vulnerable to this issue for three years, he said; those who were running the stable kernels were not. Only a small portion of kernel security fixes are assigned CVE numbers; anybody who is only cherry-picking CVE-labeled fixes is thus running an insecure system. Even fixes with CVE numbers often have followup fixes that are not documented.
He has audited a number of kernels for phones, he said. One popular handset was running 4.14.85, with three million added lines of out-of-tree code ("what could possibly go wrong?"). If you compare that with the 4.14.108 stable release that was current in May when this analysis was done, the phone was 1,759 patches behind. The handset vendor had cherry-picked 36 patches from later kernels, but had missed twelve fixes with CVE numbers, as well as crucial bug fixes across the kernel tree. As a result, this phone can be crashed (or worse) by a remote attacker.
The Google security team, he said, has a "huge tool" that scours the net for security reports. In 2018, every reported problem was already fixed in the long-term stable kernels before they found it; the only exceptions were for problems in out-of-tree code. There was no need for cherry-picking at all; anybody using those kernels was already secure against known issues. As a result, Google is now requiring Android vendors to use the long-term stable kernels. He called out Sony and Essential as being especially good at picking up new kernel releases; the Pixel devices are lagging a bit, he said, but are "basically there".
There are, he said, 2.5 billion instances of Linux running on Android phones; that is where Linux runs now. All other users are a drop in the bucket in comparison. So this is where security matters the most; if these devices keep up with the stable-kernel releases, they will be secure, he said.
How to fix CVE numbers
Kroah-Hartman put up a slide showing possible "fixes" for CVE numbers. The first, "ignore them", is more or less what is happening today. The next option, "burn them down", could be brought about by requesting a CVE number for every patch applied to the kernel. It would be "a horrible intern job for six months", he said, and somebody has even offered to fund such a position. But we know that the system is broken; abusing it will not make things better. Thus, the third option: "make something new".
The requirements for a replacement are fairly well understood. It would need to provide a unique identifier for vulnerabilities, just like CVE numbers are meant to. The system should be distributed, though; asking for identifiers from others doesn't work. It needs to be updatable over time, searchable, and public.
Consider, he said, commit 7caac62ed, which was applied in August. The changelog for this commit cites no less than three CVE numbers. The kernel community insists that developers break down their changes into simple patches, but this fix for three CVE numbers was still acceptable as a single patch. It really is a single issue, he said, that is better identified by the ID of the patch that fixed it than any of the three CVE numbers attached to it. He ran through a number of other patches, many of which included commit IDs as a way of identifying what was being fixed, usually in a "Fixes" tag. The use of those IDs in this way, he said, has become nearly universal in the kernel community.
Thus, he said, fixes already contain a unique ID: the "Fixes" tag showing where the problem was introduced. That ID could be used as the unique ID for a vulnerability; there is no need to introduce another one. We have, in fact, been using commit IDs this way for 14 years, and nobody has noticed. All that remains to be done is to get some marketing for this scheme. After all, CVE numbers are essentially marketing, telling a story about a particular vulnerability; this new scheme needs something similar.
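The "Fixes" tags he describes are trivially machine-readable. As a rough illustration (the commit IDs below are made up for the example, not real kernel commits), a few lines of Python can pull the introducing-commit IDs out of a commit message:

```python
import re

# Pattern for the kernel's conventional "Fixes:" tag, which names the
# commit that introduced the bug, e.g.:
#   Fixes: 0123456789ab ("subsystem: change that introduced the bug")
FIXES_TAG = re.compile(r'^Fixes:\s+([0-9a-f]{8,40})\b',
                       re.IGNORECASE | re.MULTILINE)

def fixes_ids(commit_message: str) -> list[str]:
    """Return the abbreviated commit IDs named in Fixes: tags."""
    return FIXES_TAG.findall(commit_message)

msg = """x86/example: sample fix

Fixes: 0123456789ab ("subsystem: change that introduced the bug")
Signed-off-by: A Developer <dev@example.com>
"""
print(fixes_ids(msg))  # ['0123456789ab']
```

Tooling along these lines is what lets a distributor answer "which of my shipped commits have known fixes?" without any external database.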
The first thing that is needed to start the marketing effort, he said, is a catchy name. He ran through some possibilities, including Linux Git Kernel ID (LGKI), Kernel Git ID (KGI), or Git Kernel Hash (GKH). He paused for laughter at that last acronym (which is also his initials) before moving on. In the end, he said, the best name to use is "change ID" — the name we've been using for the last 14 years. A change ID is a world-wide, unique ID that works today, so let's use it. The format would look something like CID-0123456789ab.
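For illustration only (this helper is hypothetical, not something the kernel community ships), deriving a CID in the format he showed from a full commit hash is a one-liner plus validation:

```python
def to_cid(commit_sha: str, width: int = 12) -> str:
    """Render a commit hash in the CID-<12 hex chars> form from the talk."""
    sha = commit_sha.lower()
    if len(sha) < width or any(c not in '0123456789abcdef' for c in sha):
        raise ValueError(f"not a hex commit hash: {commit_sha!r}")
    return f"CID-{sha[:width]}"

# A made-up 40-character hash, abbreviated to the 12 characters that
# kernel developers conventionally use in changelogs:
print(to_cid("0123456789abcdef0123456789abcdef01234567"))  # CID-0123456789ab
```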
Kroah-Hartman concluded by returning to his list of things to do about CVE numbers. We should indeed "ignore CVEs", but he supplemented the list with a fourth entry: rebrand what we have been doing all along.
Questions
Dmitry Vyukov led off the questions by asking about the claim that stable kernel releases are fully tested. Subsequent stable releases fix a lot more stuff, he said, so how, exactly, is that testing happening? Kroah-Hartman answered that the kernel certainly has problems with too many bugs. The stable releases in particular, though, benefit from a lot of effort to avoid regressions; he claimed that only 0.01% of the patches going into stable kernels cause regressions now.
Vyukov answered that he is not seeing any tests being added for bugs found by his syzkaller testing. So how can the community actually prevent regressions? The answer was that we certainly need more tests.
Your editor had to question the 0.01% figure, since some analysis done a few years ago showed a rate closer to 2%. Kroah-Hartman said that the number came from the Chrome OS team, which was counting "noticeable regressions".
The final question was about users who are stuck with vendor kernels that will not be upgraded; what are they to do? Kroah-Hartman responded that this is a real problem. Those vendors typically add about three million lines of code to their kernels, so they are shipping a "Linux-like system". The answer is to force vendors to get their code upstream; to do that, customers have to push back. Sony, in particular, has been insisting that its vendors have their code in the mainline kernel. That is how we solved the problem for servers years ago; it is still the approach to use today.
[Your editor thanks the Linux Foundation, LWN's travel sponsor, for supporting his travel to this event.]
Index entries for this article:
Kernel: Security/CVE numbers
Security: Bug reporting/CVE
Conference: Kernel Recipes/2019
Posted Oct 4, 2019 16:13 UTC (Fri)
by clugstj (subscriber, #4020)
[Link] (7 responses)
Volume does not automatically translate to importance. A nefarious actor can crash one million phones, or he can melt down one nuclear power plant. Which is more important?
Posted Oct 4, 2019 22:09 UTC (Fri)
by khim (subscriber, #9252)
[Link] (2 responses)
And phones carry A LOT OF sensitive information of various sorts.
Posted Oct 5, 2019 1:46 UTC (Sat)
by mathstuf (subscriber, #69389)
[Link]
Posted Oct 5, 2019 3:25 UTC (Sat)
by flussence (guest, #85566)
[Link]
Posted Oct 4, 2019 22:43 UTC (Fri)
by jamesmorris (subscriber, #82698)
[Link] (2 responses)
Posted Oct 7, 2019 9:20 UTC (Mon)
by pomac (subscriber, #94901)
[Link] (1 responses)
Posted Oct 7, 2019 11:25 UTC (Mon)
by jem (subscriber, #24231)
[Link]
Posted Oct 7, 2019 9:19 UTC (Mon)
by pomac (subscriber, #94901)
[Link]
If you connect the most important control machine for a nuclear power plant to the internet without firewalls... well.. you'll end up as vapor.
Posted Oct 4, 2019 21:34 UTC (Fri)
by kleptog (subscriber, #1183)
[Link] (11 responses)
Posted Oct 4, 2019 22:46 UTC (Fri)
by Karellen (subscriber, #67644)
[Link] (8 responses)
Posted Oct 5, 2019 1:46 UTC (Sat)
by mathstuf (subscriber, #69389)
[Link] (5 responses)
Posted Oct 5, 2019 6:52 UTC (Sat)
by epa (subscriber, #39769)
[Link] (4 responses)
Better to have Partly-Fixes: x indicating that one of several bugs is being fixed. In that case you need to check by hand that your tree has all of the fixes for bugs introduced in commit x. Fixes: x on the other hand means that as far as anyone knows this commit is a complete fix and you don’t need to pull any others to deal with bugs from x.
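The proposal above amounts to a completeness check: given a map from an introducing commit to all of its known fixes, verify that a tree carrying the bug also carries every fix. A minimal sketch of that check, with hypothetical commit IDs:

```python
def missing_fixes(known_fixes: dict[str, set[str]],
                  tree: set[str]) -> dict[str, set[str]]:
    """For each buggy commit present in the tree, list the known fixes
    that the tree does not carry."""
    return {
        bug: absent
        for bug, fixes in known_fixes.items()
        if bug in tree and (absent := fixes - tree)
    }

# Hypothetical data: commit aaaa1111 introduced a bug with two known fixes,
# but our tree has only picked up one of them.
known = {"aaaa1111": {"bbbb2222", "cccc3333"}}
tree = {"aaaa1111", "bbbb2222"}
print(missing_fixes(known, tree))  # {'aaaa1111': {'cccc3333'}}
```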
Posted Oct 5, 2019 12:03 UTC (Sat)
by mathstuf (subscriber, #69389)
[Link]
Posted Oct 7, 2019 15:04 UTC (Mon)
by smurf (subscriber, #17840)
[Link] (1 responses)
Posted Oct 10, 2019 11:23 UTC (Thu)
by epa (subscriber, #39769)
[Link]
But in fact, your point illustrates that commit messages are not a great place for this information. In git they are immutable. But knowledge (about which commits fix what bugs) changes over time. So it would perhaps be better as a separate database rather than parsing commit messages.
Posted Oct 7, 2019 15:12 UTC (Mon)
by geert (subscriber, #98403)
[Link]
Posted Oct 10, 2019 6:42 UTC (Thu)
by NYKevin (subscriber, #129325)
[Link] (1 responses)
I'm more concerned about the reverse: What if you have two or three commits that, individually, are all fine, but in combination, work to create a security hole (e.g. because someone lost track of where the airtight hatchway is supposed to go)? Then which commit hash do you use?
I suppose the simple answer is that you figure out where the airtight hatchway should have been, figure out which commit was on the wrong side of it, and then blame that one. But that's a lot of "figuring out" for what used to be a simple "assign the next number in our preallocated block of numbers" process. You basically have to decide how you are going to fix the bug in order to give it an identifier, which seems very... wrong to me.
The other option is you use git bisect to find the chronologically earliest commit hash where the bug actually repros, regardless of whether that particular commit is "guilty" or not. But then you might be blaming a commit that didn't actually introduce the bug, which would be fine if everyone was using mainline kernels. Lots of people have non-upstreamed patches, however, and they might be misled by such a scheme (if, for example, one of their patches exposes the bug in a different way, and they never pulled the commit that you blamed, they might falsely believe that they don't need the fix).
Posted Oct 10, 2019 8:52 UTC (Thu)
by geert (subscriber, #98403)
[Link]
The commit description of the fix can/should still have two or three Fixes tags.
Posted Oct 6, 2019 18:08 UTC (Sun)
by marcH (subscriber, #57642)
[Link]
No you can't, because it has been Not Invented Here, so "it's tied to Gerrit" and other lies have already been spread: https://lwn.net/Articles/797613/
Posted Oct 22, 2019 8:43 UTC (Tue)
by Aissen (guest, #59976)
[Link]
It seems this script does exactly that:
https://github.com/gregkh/gregkh-linux/blob/master/script...
Posted Oct 5, 2019 6:34 UTC (Sat)
by tlamp (subscriber, #108540)
[Link] (3 responses)
But I agree very much with the statement that CVEs are overused and seldom have any value now.
Posted Oct 7, 2019 7:22 UTC (Mon)
by nim-nim (subscriber, #34454)
[Link] (2 responses)
S* happens, and software has bugs. So you *will* get CVEs. If your software suppliers are unable to report the CVEs fixed in each delivery, they are *lying* (or incompetent suppliers that should be replaced). If they *are* reporting the fixed CVEs, you get a direct measure of the whole software supply's bugfixing velocity (so if unit A is still fixing years-old CVEs while unit B is fixing last week's CVEs, you know which one has a problem).
I'm not sure people realize how much that helps cut the crap and avoid kilometers of PowerPoint obfuscation.
Posted Oct 7, 2019 13:50 UTC (Mon)
by imMute (guest, #96323)
[Link] (1 responses)
I'm not sure I do... Age [of a CVE] is not the only indicator of priority. Maybe Unit A has fixed all the "critical" CVEs and is now working its way through the "probably not even exploitable" CVEs from years ago.
Posted Oct 8, 2019 8:00 UTC (Tue)
by nim-nim (subscriber, #34454)
[Link]
Posted Oct 5, 2019 7:25 UTC (Sat)
by adamg (subscriber, #42260)
[Link] (3 responses)
Comparing this to a CVE (which may contain links to multiple git hashes), I think the latter is better.
Perhaps it might be better to split from the NVD and create the LKNVD (Linux Kernel National Vulnerability Database)?
Posted Oct 5, 2019 8:51 UTC (Sat)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
Posted Oct 5, 2019 9:21 UTC (Sat)
by amacater (subscriber, #790)
[Link] (1 responses)
Posted Oct 5, 2019 9:51 UTC (Sat)
by dottedmag (subscriber, #18590)
[Link]
Posted Oct 5, 2019 11:33 UTC (Sat)
by yhchen (guest, #111904)
[Link]
Posted Oct 5, 2019 16:48 UTC (Sat)
by IanKelling (subscriber, #89418)
[Link] (5 responses)
In an alternative reality, users would be running a GPLv3 kernel, and Google would give users the option to switch to a secure kernel by clicking a button or two and then rebooting, similar to how it got most desktop users to switch browsers. But, with Google's proprietary leverage, it could do something similar: require vendors to ship phones with all free software at the kernel level and below. Note: shipped free software means no cryptographic lockdown.
Posted Oct 5, 2019 23:00 UTC (Sat)
by khim (subscriber, #9252)
[Link] (3 responses)
If Google tried to push for that, vendors would just switch to some other AOSP-based fork of Android.
I think you vastly overestimate Google's influence on the Android ecosystem: sure, if any *one* vendor does not like something, then Google can ignore it. But if *all* vendors want something, they will get it.
And the ability to lock down devices is something *all* vendors very much want. Look at how Huawei first promised to unlock the bootloader on the Mate 30 Pro and then, in the end, decided not to.
Posted Oct 6, 2019 0:24 UTC (Sun)
by excors (subscriber, #95769)
[Link] (2 responses)
As an example of why it's useful, see the recent iOS boot ROM vulnerability which allows an attacker with physical access to run arbitrary code on the device at the CPU's highest privilege level.
Because the device is locked down (i.e. there is a secure boot chain that starts with Apple's public key stored in ROM, and the boot ROM verifies the bootloader which verifies the kernel which verifies the system which verifies the apps) it can't be turned into a persistent exploit - the device will always be in a good state (or bricked) after restarting. That gives reasonable protection against evil maid attacks (e.g. when someone tries to install malware in the factory or during delivery or in border security checks (remember to restart your device when you get it back), or before selling a device as second-hand, etc) and allows regular users to trust their device. It's still a bad vulnerability but it would have been much worse without a locked bootloader.
> Look on how Huawei first promised to unlock bootloader on Mate 30 Pro and then decided not to do that, in the end.
I assume most people wanted an unlocked bootloader so they could sideload Google apps onto it (which aren't installed by default because of US trade restrictions)? Huawei found a different (worse) method, by allowing certain third-party apps approved and signed by Huawei to install other apps (like Google's) with system-level privileges: https://arstechnica.com/gadgets/2019/10/the-internets-hor...
Posted Oct 7, 2019 21:53 UTC (Mon)
by IanKelling (subscriber, #89418)
[Link]
No, if the only one who can modify it is Apple, it means users are trusting Apple, and it's not "their device"; and it is not necessary to trust only Apple to get all those security benefits. Apple could securely let users modify the software on their devices. It would mean using something like a TPM with unmodifiable firmware to tell the user which keys the device trusts to sign the software.
Posted Oct 7, 2019 22:43 UTC (Mon)
by IanKelling (subscriber, #89418)
[Link]
Regular users have no idea about evil maids or ROM signature verification, but your argument is that Apple should be completely in charge, and I don't think regular users trust Apple completely. And it is not necessary to trust only Apple to get all those security benefits. Apple could securely let users modify the software on their devices using the same kind of cryptography it currently uses to allow only itself to modify the software.
Posted Oct 7, 2019 9:18 UTC (Mon)
by pomac (subscriber, #94901)
[Link]
Posted Oct 5, 2019 17:24 UTC (Sat)
by IanKelling (subscriber, #89418)
[Link] (4 responses)
Posted Oct 5, 2019 23:06 UTC (Sat)
by khim (subscriber, #9252)
[Link] (3 responses)
A maker's interest in any device goes to precisely zero once the device is sold. Even security updates and other such things are only ever done to make sure new batches of the same hardware can be sold.
What happens to *future* devices, on the other hand, could be meaningfully influenced if we are smart: people haven't paid for them yet, so hardware makers can be convinced to do something to make that happen.
The article just comes from a "what could we *actually* do" point of view, not a "what could we do in an imaginary world filled with fairies and unicorns" point of view.
Posted Oct 7, 2019 10:54 UTC (Mon)
by IanKelling (subscriber, #89418)
[Link] (2 responses)
Posted Oct 7, 2019 13:55 UTC (Mon)
by corbet (editor, #1)
[Link] (1 responses)
Posted Oct 8, 2019 12:04 UTC (Tue)
by IanKelling (subscriber, #89418)
[Link]
Corbet, good to know that wasn't intended, but it's clearly there. You wrote:
> The final question was about users who are stuck with vendor kernels that will not be upgraded; what are they to do? Kroah-Hartman responded that this is a real problem. Those vendors typically add about three-million lines of code to their kernels, so they are shipping a "Linux-like system". The answer is to force vendors to get their code upstream; to do that, customers have to push back.
So, "the answer" is very clearly a reference to "users who are stuck", present-tense stuck, but you're saying that of course that's not what you really meant, only preventing it for future users; but you need to *say* that if it's what you mean. It's like saying: "What about the problem that there are a million or so species that will go extinct due to current carbon levels? The answer is to decrease our carbon emissions." But of course that is not an answer to the stated problem, since it won't change existing carbon levels or their effects. It's an answer to prevent the next million, but you have to say that, or else people will read what you wrote literally.
Posted Oct 5, 2019 22:39 UTC (Sat)
by scientes (guest, #83068)
[Link]
Maybe because congress is completely incompetent and unaccountable? Just maybe.
Posted Oct 6, 2019 17:58 UTC (Sun)
by marcH (subscriber, #57642)
[Link] (1 responses)
Distributing exploits is one indirect but effective way to help customers push back [ project managers ].
Most people buying "smart" phones still believe they are buying a piece of hardware. They haven't realized yet that they are buying millions of lines of crappy, rushed-out code plus *maybe* some firefighting service that goes with it. The sooner and... harder they understand that, the better.
Posted Oct 7, 2019 8:05 UTC (Mon)
by NAR (subscriber, #1313)
[Link]
I think that's still not enough. A worm is required that globally bricks devices (or at least changes background images to something like "You're owned!") - the mere possibility that a phone can be remotely cracked will not scare enough people.
Posted Oct 6, 2019 18:22 UTC (Sun)
by marcH (subscriber, #57642)
[Link]
Probably the biggest difference between a security bug and a normal one: testing the former requires complex tools and unexpected sequences.
Now if tests for normal bugs are lacking in the first place then it's of course difficult to see that difference.
Posted Oct 7, 2019 4:04 UTC (Mon)
by roc (subscriber, #30627)
[Link]
It's entirely rational for some kernel consumers to have a risk budget and to want to focus that risk budget on security bugs. Intentionally frustrating those consumers by refusing to identify bugs with known security implications is sheer bloodymindedness at this point. It contributes to backporting failures like the recent Android (not) zero-day: https://arstechnica.com/information-technology/2019/10/at...
Posted Oct 8, 2019 11:50 UTC (Tue)
by hupstream (guest, #112546)
[Link]
Posted Oct 10, 2019 14:19 UTC (Thu)
by msmeissn (subscriber, #13641)
[Link]
- Spectre variant 1 / single CVE
The Spectre 1 CVE actually covers a problem in the CPU, not in the kernel. The kernel has mitigations for it.
I would consider Spectre 1 even more of a "bug class" (like "format string exploit"), so every mitigating fix would need to get its own CVE (which would now be 50-100 or more for Spectre v1 alone).
Same goes for the other Spectre flavor CVEs, like Bounds Check Bypass Store ...
- Giving government bodies like MITRE advance knowledge of CVEs.
For allocation of a CVE it is not necessary to hand out any information, depending on the CNA. MITRE, as the root CNA, or any CNA able to allocate kernel issues, could hand out a blank CVE without getting details of the issue.
- Misallocation by MITRE
If there were a specific Linux kernel CNA, operated by more knowledgeable people, they could take decisions on which issues get which CVEs.
This would need at least a fulltime position, or even more.
Posted Oct 21, 2019 18:55 UTC (Mon)
by jberkus (guest, #55561)
[Link]
https://github.com/distributedweaknessfiling/
https://twitter.com/kurtseifried/status/1103858442479910913
Not sure how K-H is going to make it work better. Turns out that the main problem with the CVE system is the submitters.
Fixes can be identified uniquely by the commit ID in mainline. Backported commits in stable trees carry "Upstream commit foo" or "cherry picked from commit foo" lines, so the fixes can be tracked.
This also fixes the issue where a commit introduces multiple bugs, and you have multiple fixes.
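Both conventions are easy to match mechanically. A sketch of doing so (the commit IDs here are placeholders), assuming the usual wording of those backport reference lines:

```python
import re

# Stable backports conventionally carry one of two reference forms:
#   "commit <sha> upstream."
#   "(cherry picked from commit <sha>)"
UPSTREAM_REF = re.compile(
    r'(?:commit\s+([0-9a-f]{12,40})\s+upstream'
    r'|cherry picked from commit\s+([0-9a-f]{12,40}))',
    re.IGNORECASE,
)

def upstream_id(message: str) -> 'str | None':
    """Return the mainline commit ID a stable backport refers to, if any."""
    m = UPSTREAM_REF.search(message)
    if not m:
        return None
    return (m.group(1) or m.group(2)).lower()

print(upstream_id("commit 0123456789ab upstream."))
print(upstream_id("(cherry picked from commit 0123456789ab)"))
```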
And of course the description should clearly explain what the underlying issue is.
E.g., Spectre, Meltdown, ..., were not introduced by a single change; I mean, maybe the one adding support for the respective vulnerable hardware, but that can hardly count?
Commit ID / Git Kernel Hash as a security id
This will mean we will have multiple commit IDs for a single security vulnerability.
I think you've read something into the article that wasn't there. Nobody thinks that upstreaming is going to rescue all of the unsupported devices out there. Nothing is going to fix those. The objective is to stop creating such devices in the future.
Getting code upstream
Talk video
So far MITRE does it on a best-effort basis and, as Greg KH states, occasionally does "too much".
MITRE, for instance, already blocks any CVE requests for drivers/staging/.