Preventing stack guard-page hopping
The kernel has long placed a guard page — a page that is inaccessible to the owning process — below each stack area. (Actually, it hasn't been all that long; the guard page was added in 2010). A process that wanders off the bottom of a stack into the guard page will be rewarded with a segmentation-fault signal, which is likely to bring about the process's untimely end. The world has generally assumed that the guard page is sufficient to protect against stack overflows but, it seems, the world was mistaken.
On June 19, Qualys disclosed a set of vulnerabilities that make it clear that a single guard page is not sufficient to protect against stack overflow attacks. These vulnerabilities have been dubbed "Stack Clash"; the associated domain name, logo, and line of designer underwear would appear to not have been put in place yet. This problem has clearly been discussed in private channels for a while, since a number of distributors were immediately ready with kernel updates to mitigate the issue.
The fundamental problem with the guard page is that it is too small. There are a number of ways in which the stack can be expanded by more than one page at a time. These include places in the GNU C Library that make large alloca() calls and programs with large variable-length arrays or other large on-stack data structures. It turns out to be relatively easy for an attacker to cause a program to generate stack addresses that hop over the guard page, stomping on whatever memory is placed below the stack. The proof-of-concept attacks posted by Qualys are all local code-execution exploits, but it seems foolhardy to assume that there is no vector by which the problem could be exploited remotely.
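A contrived sketch (not one of the Qualys proofs of concept) shows the shape of the problem: a variable-length array sized from untrusted input can move the stack pointer past a single 4KB guard page in one step.

    #include <string.h>

    /* If len exceeds the guard-page size, buf can start below the
     * guard page, and the first write lands in whatever mapping sits
     * under the stack rather than faulting. */
    void process(const char *input, size_t len)
    {
            char buf[len];              /* stack grows by len at once */
            memcpy(buf, input, len);
    }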
The fix merged for 4.12 came from Hugh Dickins, with credit to Oleg Nesterov and Michal Hocko. It takes a simple, arguably brute-force approach to the problem: the 4KB guard page is turned into a 1MB guard region on any automatically growing virtual memory area. As the patch changelog notes: "It is obviously not a full fix because the problem is somehow inherent, but it should reduce attack space a lot." The size of the guard area is not configurable at run time (that can wait until somebody demonstrates a need for it), but it can be changed at boot time with the stack_guard_gap command-line parameter.
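For example, an administrator wanting a 2MB gap on a system with 4KB pages could boot with the line below; the parameter takes a page count, 256 pages being the 1MB default:

    stack_guard_gap=512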
The 1MB guard region should indeed be difficult to jump over. It is (or should be) a rare program that attempts to allocate that much memory on the stack, and other limits (such as the limit on command-line length) should make it difficult to trick a program into making such an allocation. On most 64-bit systems, it should be possible to make the guard region quite a bit larger if the administrator worries that 1MB is not enough. Doubtless there are attackers who are feverishly working on ways to hop over those regions but, for a while at least, they may well conclude that there are easier ways to attack any given system.
The real problem, of course, is that a stack pointer can be abused to access memory that is not the stack. Someday, perhaps, we'll all have memory-type bits in pointers that will enable the hardware to detect and block such attacks. For now, though, we all need to be updating our systems to raise the bar for a successful compromise. Distributors have updates now, and the fix is in the queue for the next round of stable kernel updates due on June 21.
Index entries for this article
Kernel: Security/Vulnerabilities
Security: Linux kernel
Posted Jun 19, 2017 19:55 UTC (Mon) by tux3 (subscriber, #101245) [Link] (34 responses)

If the only full fix is currently recompiling the world with something expensive like -fstack-check or the various sanitizers, that is awfully worrying.

I wouldn't be surprised to learn that there is a whole lot of software out there, at various levels of openness, that will happily allocate a handful of MBs on demand, and that will probably never be recompiled with those options.
Posted Jun 19, 2017 20:13 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link] (17 responses)

A better change is to modify alloca() in libc to touch at least one byte on each allocated page. This would protect against remote attacks but wouldn't prevent an attacker from writing his own stack allocation for local privilege escalation.
Posted Jun 19, 2017 20:26 UTC (Mon) by cpitrat (subscriber, #116459) [Link] (15 responses)

I'm surprised that a 900-line patch is only about increasing the size of the guard page. Isn't there more to it?
Posted Jun 19, 2017 21:09 UTC (Mon) by roc (subscriber, #30627) [Link] (2 responses)
The local privilege escalation threat assumes that the high-privilege C code is trusted, and then exploits it.
If the attacker can write high-privilege C code, you've already lost.
Posted Jun 20, 2017 9:43 UTC (Tue) by moltonel (guest, #45207) [Link] (1 responses)

Posted Jun 20, 2017 10:13 UTC (Tue) by matthias (subscriber, #94967) [Link]
If the attacker has the ability to run his own code with privileges, everything is already lost. No need for an exploit.
Posted Jun 20, 2017 6:54 UTC (Tue) by vbabka (subscriber, #91706) [Link]

Well, it's 900 lines of .patch file text, but the diffstat is around 300 lines added+deleted, so not that much.

It's large because, as the commit log explains, simply extending the old single stack guard page to N pages made many accounting issues visible, since the guard page(s) were counted as part of the VMA's [start, end] addresses. The patch deletes that approach and replaces it with one where the gap always lies between VMA boundaries. That means adjusting the code that checks allowed VMA placement/enlargement so that it maintains the gap whenever the next/previous VMA is a stack.
Posted Jun 20, 2017 9:55 UTC (Tue) by moltonel (guest, #45207) [Link] (9 responses)

That's going to mess up the performance profile (allocating pages earlier than expected) and decrease total performance in case the app wasn't going to touch those pages at all.

> This would protect against remote attacks but wouldn't prevent an attacker from writing his own stack allocation for local privilege escalation.

Assuming we accept the performance hit, can we use the same technique in the kernel? Disable overcommit? Or is the kernel not aware of what the app considers its stack space?
Posted Jun 20, 2017 10:39 UTC (Tue) by nix (subscriber, #2304) [Link] (7 responses)

It's... not common for applications to allocate page-sized structures on the stack that are not optimized out and then never use them for anything. I suppose functions that have big local variables and then exit early based only on their parameters might qualify, but in that case the compiler could adjust the stack only after the early exits, if this were really significant (which I very much doubt).
Posted Jun 20, 2017 15:15 UTC (Tue) by zblaxell (subscriber, #26385) [Link] (6 responses)
In one project I found an innocuous-looking state structure that turned out to have ~5MB of unused bytes in the middle, buried under a pyramid of macro expansion, arrays, nested members, and unreadable coding style. The code did use all the other members in the struct, on both sides of the hole.
Also it's fairly common in userland to do IO to a buffer on the stack, where the buffer is huge and the IO is tiny.
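A hypothetical sketch of both patterns (invented names, not the actual project code):

    #include <unistd.h>     /* read(), ssize_t */

    /* A struct with a multi-megabyte unused hole in the middle; used
     * members sit on both sides, so the hole is easy to miss. */
    struct state {
            int  used_before;
            char hole[5 * 1024 * 1024];
            int  used_after;
    };

    /* A huge on-stack buffer for a tiny read. */
    ssize_t read_a_little(int fd)
    {
            char buf[1 << 20];
            return read(fd, buf, 100);
    }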
Posted Jun 20, 2017 16:30 UTC (Tue) by gutschke (subscriber, #27910) [Link] (5 responses)

Do you really commonly see programs allocate many hundreds of kilobytes, if not many megabytes, on the stack? That's not a pattern that I have encountered frequently. Buffers this large are more commonly allocated on the heap.

I am not saying it doesn't happen. Anything stupid that you can think of, somebody else has probably thought of before. But common? Hopefully not.
Posted Jun 21, 2017 11:14 UTC (Wed) by PaXTeam (guest, #24616) [Link]

Posted Jun 21, 2017 11:24 UTC (Wed) by nix (subscriber, #2304) [Link] (2 responses)

Posted Jun 21, 2017 14:57 UTC (Wed) by zblaxell (subscriber, #26385) [Link] (1 responses)
On the other hand, if a function is being called in a loop then the probes keep happening over and over even though the page faults don't, so the probing gets expensive.
For programs that handle toxic data there might not be a quick and easy solution--they might just have to suck up the cost of doing probes all the time, or use other techniques (e.g. constant-stack algorithm proofs, coding standards forbidding alloca() and sparse structures, etc.) to make sure stack overflows don't happen.
Since changes to alloca require recompiling the program, it's up to individual applications to make the performance/security tradeoff anyway. Isn't there already a compiler option to do this?
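(For what it's worth, GCC's -fstack-check, mentioned at the top of the thread, was the closest existing option at the time: it makes the compiler emit probes when a frame or an alloca() moves the stack pointer by more than a page or so, e.g.:

    gcc -fstack-check -o prog prog.c

though other comments here express doubts about its cost and coverage.)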
Posted Jun 22, 2017 22:37 UTC (Thu) by mikemol (guest, #83507) [Link]
LTO will need to be careful to let these considerations bubble up to the final binary, however.
Posted Oct 3, 2019 13:18 UTC (Thu) by ychevali (guest, #134753) [Link]

Posted Jun 26, 2017 9:25 UTC (Mon) by anton (subscriber, #25547) [Link]

> A better change is to modify alloca() in libc to touch at least one byte on each allocated page.

> That's going to mess up the performance profile (allocating pages earlier than expected) and decrease total performance in case the app wasn't going to touch those pages at all.

I don't think that that's a significant issue, but anyway: you just need to read the byte (the guard page is not readable, is it?). So all the not-yet-used stack pages can be the same page containing zeroes (which also means that the same cache line will be used for all these reads in a physically-tagged (i.e., normal these days) cache). Only when a page is used for real is a physical page allocated.
Posted Jun 26, 2017 9:09 UTC (Mon) by anton (subscriber, #25547) [Link]

> This would protect against remote attacks but wouldn't prevent an attacker from writing his own stack allocation for local privilege escalation.

I don't think that preventing this attack scenario prevents any halfway-competent attack. If the attacker can write his own stack allocation, he can write it to jump over guard regions of any size; actually, he can put the memory writes to the area below the stack in his otherwise-regular stack-allocation code directly. In other words: if you allow the attacker to execute his code in a setting that can escalate privileges, you are already owned, guard page or not.
Posted Jun 20, 2017 14:50 UTC (Tue) by BenHutchings (subscriber, #37955) [Link]
alloca() can't be implemented as a real function, so it's only "in" glibc in the sense that the definition is in a glibc header. Further, that definition just defers to the compiler's pseudo-function __builtin_alloca(). So even rebuilding against an updated glibc isn't enough to fix this. glibc has been updated to make its own use of alloca() safer, though.
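Concretely, when compiling with GCC, glibc's <alloca.h> boils down to something like this (paraphrased; the library's extern declaration is elided):

    /* Paraphrased from glibc's <alloca.h>: with GCC there is no real
     * function behind alloca() at all, only the compiler builtin, so
     * updating the library cannot change what compiled programs do. */
    #undef alloca
    #ifdef __GNUC__
    # define alloca(size) __builtin_alloca (size)
    #endif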
Posted Jun 19, 2017 21:01 UTC (Mon) by roc (subscriber, #30627) [Link] (5 responses)

Posted Jun 19, 2017 21:02 UTC (Mon) by roc (subscriber, #30627) [Link]

Posted Jun 19, 2017 21:45 UTC (Mon) by roc (subscriber, #30627) [Link] (3 responses)

Posted Jun 19, 2017 23:31 UTC (Mon) by nix (subscriber, #2304) [Link]

Posted Jun 20, 2017 18:54 UTC (Tue) by dd9jn (✭ supporter ✭, #4459) [Link] (1 responses)

Posted Jun 22, 2017 22:02 UTC (Thu) by cesarb (subscriber, #6266) [Link]
Posted Jun 19, 2017 22:31 UTC (Mon) by zblaxell (subscriber, #26385) [Link] (9 responses)
In userland, if alloca() wants more than a page, it can run a heavier stack-smashing check, like probing each page of the allocated area in stack-growth order, or checking some data in the heap about the current thread's stack limits. Not doing that in the kernel is perhaps understandable due to the cost, but the capability should be there for those who need it.
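A sketch of that probing idea (hypothetical names; it assumes GNU C statement expressions and a 4KB page size, and since alloca() memory vanishes when a wrapping function returns, the allocation itself must stay in a macro in the caller's frame):

    #include <alloca.h>
    #include <stddef.h>

    /* Touch one byte per page, walking from the end nearest the
     * existing stack downward, so that each touch either extends the
     * stack by a single page or faults in the guard region instead of
     * hopping over it. */
    static inline void probe_pages(volatile char *lo, size_t size)
    {
            size_t off = size;
            while (off > 4096) {
                    off -= 4096;
                    lo[off] = 0;
            }
            lo[0] = 0;
    }

    #define alloca_probed(size) __extension__ ({    \
            size_t n__ = (size);                    \
            char *p__ = alloca(n__);                \
            probe_pages(p__, n__);                  \
            (void *)p__;                            \
    })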
I've occasionally wondered what would happen if stacks were not accessible to other threads in the same process (assuming the VM context thrashing involved was magically zero cost, which probably pushes this paragraph into the realm of wishful thinking). Obviously it would break some existing programs, but it smells like bad practice in general (I see student programmers pass pointers to ephemeral variables from the caller's stack to threads all the time, with immediately disastrous results). There might be some simple heuristic (e.g. if thread A creates or joins thread B, let thread B access thread A's stack in case thread B has been given a pointer to a result A needs to store there) that's good enough for current defensible program behavior.
Posted Jun 19, 2017 23:26 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

A fairly common practice is to allocate some data, launch several worker threads to compute its parts, and then join all the threads to get the final result. It's not uncommon for that data, or parts of it, to be allocated on the stack.
Posted Jun 20, 2017 1:40 UTC (Tue) by zblaxell (subscriber, #26385) [Link]
That's pretty much how C++11 async functions work, and should be covered by the heuristic exception for "thread A creates thread B".
It wouldn't work if there was a persistent worker thread pool (i.e. the functions are executed by previously existing threads that continue to exist after the result is computed, so there is no creator/created or join relationship). It might be possible to infer data dependencies from mutex locks or higher-level objects (promise/future pairs) but maybe there's too many false positives. Or one could mark worker pool threads differently (e.g. some new pthread_attr) wrt access to other threads' stacks.
Posted Jun 19, 2017 23:32 UTC (Mon) by excors (subscriber, #95769) [Link]

I think that would break reasonable code like:

    std::atomic_int n;
    run_in_worker_threads_and_wait_for_them_all(iters, [&n] { n++; });

which passes a pointer to n (on the current thread's stack) to a bunch of worker threads (that probably weren't created by this thread).
Posted Jun 19, 2017 23:36 UTC (Mon) by nix (subscriber, #2304) [Link] (3 responses)

One obvious implication of having threads in a common address space and (naive) alloca() at the same time is that you can guide one thread's stack into another thread's address space no matter how far apart they are in memory. I learned this the hard way in 1998 as I was debugging a Linux program that was doing this accidentally across almost 2MB-wide stack gaps.

Indeed. The "Cheney on the MTA" paper describes a remarkable way of using this sort of alloca() abuse to implement a copying garbage collector using only the C stack: you write your C program in continuation-passing style, with GCed data in functions that never return but only call on to others that do the same, and then when you want to do a GC your collector copies the relevant data into a new "stack" on the heap and alloca()s to it (finding the right alloca() value via trivial pointer arithmetic from a variable on the local stack frame), then free()s the old stack.

I alternate between thinking this scheme is wonderful and should be widely emulated, and thinking it is insane and its authors should be punished by being forced to debug programs written this way (but then, they already have been).
Posted Jun 20, 2017 15:08 UTC (Tue) by nybble41 (subscriber, #55106) [Link] (2 responses)
That is... diabolical. Genius, but diabolical. A similar concept employed by Chicken Scheme is to start out the same way, using CPS and allocating on the C stack, but then after copying the live data to the heap just perform a longjmp() to unwind back to a trampoline function at the top of the original stack. That seems slightly saner than abusing alloca() to set the stack pointer.
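A minimal sketch of the trampoline shape described above (hypothetical names; real Chicken also manages the heap, closures, and a stack-depth check before each call):

    #include <setjmp.h>

    /* CPS functions never return: each either calls the next one or,
     * when the C stack is nearly full, evacuates live data to the heap
     * (elided), records the next continuation, and longjmp()s back to
     * the trampoline, discarding the entire C stack in one step. */
    static jmp_buf trampoline;
    static void (*resume)(void);

    static void minor_gc(void (*k)(void))
    {
            /* ... copy live objects from the stack to the heap ... */
            resume = k;
            longjmp(trampoline, 1);
    }

    static void run(void (*entry)(void))
    {
            resume = entry;
            setjmp(trampoline);     /* re-entered after every minor GC */
            resume();               /* restarts on a fresh, empty stack */
    }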
Posted Jun 21, 2017 11:26 UTC (Wed) by nix (subscriber, #2304) [Link] (1 responses)

Posted Jun 21, 2017 14:41 UTC (Wed) by zblaxell (subscriber, #26385) [Link]
...like some eager tools maintainer implementing alloca() parameter sanity checks, perhaps? ;)
Posted Jun 20, 2017 13:10 UTC (Tue) by niner (subscriber, #26151) [Link] (1 responses)

Posted Jun 20, 2017 16:20 UTC (Tue) by zblaxell (subscriber, #26385) [Link]

It seems to me there are more fundamental problems to be solved before this one. How does a garbage-collecting thread handle ordinary race conditions when accessing data on other threads' stacks? Invasive locking? Indirect references through forwarding objects?

I'm not sure I like the idea of solving that case, largely because the difference between "frees approximately the right memory" and "frees exactly the right memory" can be pretty huge when there are adversaries throwing pointy things into your stack and heap.
Posted Jun 19, 2017 20:17 UTC (Mon) by PaXTeam (guest, #24616) [Link]

Preventing stack guard-page hopping in GCC
Posted Jun 19, 2017 20:47 UTC (Mon) by mjw (subscriber, #16740) [Link] (1 responses)

Preventing stack guard-page hopping in GCC
Posted Jun 20, 2017 6:04 UTC (Tue) by cpitrat (subscriber, #116459) [Link]

Posted Jun 19, 2017 23:40 UTC (Mon) by jengelh (subscriber, #33263) [Link] (3 responses)
Maybe the i286's segmented memory model wasn't all that useless! Set %cs, %ds and %ss to non-overlapping regions of memory, and if %sp overflows, it will just wrap back onto the same stack you already had, not touching other regions or threads. The proposed memory-type bits are implicit and sort of given by way of the selectors.
So… let's extend that to 64 bits? The segment registers appear to already be 64 bit in LM (they were not renamed like ax->eax->rax was). One extra thing is needed, an MSR, or TSS field/CR reg, to configure a modulus for %rsp, so that it wraps at a set boundary (e.g. 21 bit) on ADD/SUB/PUSH/POP instructions.
Posted Jun 20, 2017 5:18 UTC (Tue) by eru (subscriber, #2753) [Link]

Actually the stack overflow probably traps, because it goes outside the size allocated for the stack segment, at least if you use this on a 386/486/Pentium, where wrapping completely around is less likely. This has been used in a proprietary OS I have worked with. Almost everything there is in separately allocated, 386-supported segments (probably one of the very few OSes to use the segmentation features as Intel's designers intended!), but all the other pain caused by segmented memory probably makes this not worthwhile.
Posted Jun 20, 2017 8:47 UTC (Tue) by jikos (subscriber, #43140) [Link]

That would not really make the situation any better, as it would effectively allow the attacker to overflow the stack using the same attack vector and manipulate the contents of the stack, turning this into a rather boring and easy-to-exploit stack overflow.

Fortunately that's not how x86 behaves with respect to segment limits; as soon as the address goes over the limit, it faults.
Posted Jul 6, 2017 3:37 UTC (Thu) by mathstuf (subscriber, #69389) [Link]

Maximum number of threads
Posted Jun 20, 2017 3:50 UTC (Tue) by ikm (guest, #493) [Link] (5 responses)

Maximum number of threads
Posted Jun 20, 2017 5:34 UTC (Tue) by flussence (guest, #85566) [Link] (1 responses)
Maximum number of threads
Posted Jun 20, 2017 6:39 UTC (Tue) by thestinger (guest, #91827) [Link]

No, this only affects the main thread stack. glibc does not use the kernel's MAP_GROWSDOWN feature for new thread stacks. The stack guard size for new threads is controlled using pthread_attr_setguardsize().
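A minimal sketch of doing exactly that (the 1MB value is just an illustration, mirroring the kernel's new default gap):

    #include <pthread.h>

    static void *worker(void *arg)
    {
            return arg;
    }

    /* Give a new thread a 1MB guard area below its stack instead of
     * glibc's default of a single page. */
    int start_guarded_thread(pthread_t *t)
    {
            pthread_attr_t attr;
            pthread_attr_init(&attr);
            pthread_attr_setguardsize(&attr, 1024 * 1024);
            int rc = pthread_create(t, &attr, worker, NULL);
            pthread_attr_destroy(&attr);
            return rc;
    }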
Maximum number of threads
Posted Jun 20, 2017 15:01 UTC (Tue) by BenHutchings (subscriber, #37955) [Link] (2 responses)

Maximum number of threads
Posted Jun 20, 2017 15:37 UTC (Tue) by ikm (guest, #493) [Link] (1 responses)
> In a single-threaded process, the address space reserved for the stack can be large and difficult to overflow. Multi-threaded processes contain multiple stacks, though; those stacks are smaller and are likely to be placed between other virtual-memory areas of interest. An accidental overflow could corrupt the area located below a stack; a deliberate overflow, if it can be arranged, could be used to compromise the system.
So, if I understood things right, the change was about growing the guard size of all of the program's threads.
Maximum number of threads
Posted Jun 20, 2017 15:46 UTC (Tue) by BenHutchings (subscriber, #37955) [Link]
Posted Jun 20, 2017 14:05 UTC (Tue) by NightMonkey (subscriber, #23051) [Link] (2 responses)

I'd love to protect my systems from this problem while we wait for a stable kernel release in 4.12 (though I usually wait for 4.*.2). Is there a nice three-step combo I can perform to mitigate this in the interim? Yes, I'm crazy enough to add a gcc flag and rebuild all my binaries. Yes, I'm crazy enough to disable or enable experimental kernel features. Of course, I read that the -fstack-* gcc flags apparently don't work. Thanks in advance.
P.S. Hire me if you need a nice Gentoo guy on your side. ;)
Posted Jun 21, 2017 11:20 UTC (Wed) by nix (subscriber, #2304) [Link]
You can apply it directly from there if you want, or wait a few hours.
Posted Jun 22, 2017 23:49 UTC (Thu) by flussence (guest, #85566) [Link]

For added fun...
Posted Jun 20, 2017 15:27 UTC (Tue) by corbet (editor, #1) [Link] (4 responses)

It would appear that the fix merged in 4.12-rc (and queued for stable) has a couple of problems. Dave Jones found an oopsable bug; the problem seems to be understood and a fix is in the works. The change in accounting for the guard region also broke checkpoint/restore in user space (CRIU). In this case, it's not yet clear how things can be fixed.
For added fun...
Posted Jun 20, 2017 15:46 UTC (Tue) by Sesse (subscriber, #53779) [Link] (1 responses)
/* Steinar */
For added fun...
Posted Jun 20, 2017 16:10 UTC (Tue) by BenHutchings (subscriber, #37955) [Link]

For added fun...
Posted Jun 20, 2017 16:11 UTC (Tue) by BenHutchings (subscriber, #37955) [Link] (1 responses)

For added fun...
Posted Jun 20, 2017 20:32 UTC (Tue) by roc (subscriber, #30627) [Link]

Posted Jun 21, 2017 13:03 UTC (Wed) by arekm (subscriber, #4846) [Link]