
Local locks in the kernel

Posted Aug 12, 2020 9:49 UTC (Wed) by johill (subscriber, #25196)
Parent article: Local locks in the kernel

Nice overview, thanks!

Something's missing here though? The article says

On realtime systems, instead, local locks are actually sleeping spinlocks; they do not disable either preemption or interrupts. They are sufficient to serialize access to the resource being protected without increasing latencies in the system as a whole.

But the examples from the linked patchset do things like

+	struct squashfs_stream *stream;
+	int res;
+
+	local_lock(&msblk->stream->lock);
+	stream = this_cpu_ptr(msblk->stream);
[...]
So something must have been done here to avoid CPU migration of the task while it's in this new context? Or am I completely confused, and this is generally implied by any kind of locking? But that can't be true for mutexes for example?
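
For context, the per-CPU layout implied by that hunk presumably looks something like the sketch below; only the lock and the this_cpu_ptr() usage come from the quoted snippet, the other names are invented for illustration.

/*
 * Sketch of the per-CPU layout implied by the quoted hunk; only "lock"
 * and the this_cpu_ptr() access come from the snippet, the rest is
 * invented for illustration.
 */
#include <linux/local_lock.h>
#include <linux/percpu.h>

struct squashfs_stream {
	void		*strm;		/* decompressor state (placeholder) */
	local_lock_t	lock;		/* protects this CPU's stream */
};

struct squashfs_sb_info {
	struct squashfs_stream __percpu *stream;	/* one instance per CPU */
	/* ... */
};

static int squashfs_decompress_sketch(struct squashfs_sb_info *msblk)
{
	struct squashfs_stream *stream;
	int res = 0;

	/* Take the per-CPU local lock, then fetch this CPU's stream. */
	local_lock(&msblk->stream->lock);
	stream = this_cpu_ptr(msblk->stream);

	/* ... run the decompressor on "stream" ... */

	local_unlock(&msblk->stream->lock);
	return res;
}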



Local locks in the kernel

Posted Aug 12, 2020 12:59 UTC (Wed) by corbet (editor, #1)

You don't need to prevent preemption or migration to safely use per-CPU data, you just have to ensure exclusive access to that data. In throughput-oriented kernels, that exclusive access is indeed ensured by nailing the thread down on the CPU; it's a cheap way of doing things.

In the realtime world, though, the metric used to determine "cheap" changes, and monopolizing a CPU becomes expensive. So local locks are used to protect per-CPU data with a sleeping spinlock instead. A thread holding such a lock might indeed be migrated, and could find itself accessing data for the "wrong" CPU. But that is rarely a problem; the purpose of per-CPU data is to spread out data to reduce contention. The association with a specific CPU is not usually important. The lock will prevent unwanted concurrency regardless of where the thread is running, so the access is safe.

Local locks in the kernel

Posted Aug 12, 2020 14:05 UTC (Wed) by tglx (subscriber, #31301)

> A thread holding such a lock might indeed be migrated, and could find itself accessing data for the "wrong" CPU.

No. Holding a local lock prevents a thread from being migrated on RT. That's a property of the underlying 'sleeping' spinlock.
Otherwise the following code sequence would fail:

local_lock();
a = this_cpu_read(A);
this_cpu_write(B, a + x);
local_unlock();
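
A slightly fleshed-out version of that sequence, with invented per-CPU variables and lock, might look like this; without the local lock, a migration between the this_cpu_read() and the this_cpu_write() could read A on one CPU and write B on another.

/*
 * Fleshed-out version of the sequence above; the per-CPU variables and
 * the lock are invented for illustration.
 */
#include <linux/local_lock.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(local_lock_t, example_lock) = INIT_LOCAL_LOCK(example_lock);
static DEFINE_PER_CPU(int, A);
static DEFINE_PER_CPU(int, B);

static void add_to_b(int x)
{
	int a;

	/* !RT: disables preemption; RT: sleeping lock that disables migration. */
	local_lock(&example_lock);
	a = this_cpu_read(A);
	this_cpu_write(B, a + x);
	local_unlock(&example_lock);
}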

Oops

Posted Aug 12, 2020 14:09 UTC (Wed) by corbet (editor, #1)

So who are you going to believe, that tglx guy or me?

Seriously, though, I stand corrected, apologies for the misinformation.

Local locks in the kernel

Posted Aug 12, 2020 14:15 UTC (Wed) by tglx (subscriber, #31301)

> So something must have been done here to avoid CPU migration of the task while it's in this new context? Or am I completely confused, and this is generally implied by any kind of locking? But that can't be true for mutexes for example?

https://www.kernel.org/doc/html/latest/locking/locktypes....

has all the rules documented and explains which locks prevent what on !RT and RT kernels.

tl;dr version:

- Genuine sleeping locks (*mutex*, *semaphore*) never disable preemption, interrupts, or migration, independent of RT.

- Regular spinning locks and local locks implicitly disable preemption (and therefore migration) and possibly interrupts on !RT. On RT they are replaced with "sleeping" spinlocks which only disable migration.

- Raw spinlocks implicitly disable preemption (and therefore migration) and possibly interrupts, independent of RT.
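
As a rough illustration of those rules (the lock instances below are invented, not taken from the documentation), the different lock types might be declared and used like this:

/*
 * Illustration of the rules above with invented lock instances; the
 * comments summarize the behavior described in locktypes.rst.
 */
#include <linux/local_lock.h>
#include <linux/mutex.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>

static DEFINE_MUTEX(m);			/* sleeping lock: never disables preemption,
					   interrupts, or migration, on RT or !RT */
static DEFINE_SPINLOCK(s);		/* !RT: disables preemption (and thus migration);
					   RT: sleeping spinlock, disables only migration */
static DEFINE_RAW_SPINLOCK(rs);		/* disables preemption on both RT and !RT; the
					   _irq/_irqsave variants also disable interrupts */
static DEFINE_PER_CPU(local_lock_t, llock) = INIT_LOCAL_LOCK(llock);
					/* !RT: disables preemption; RT: sleeping
					   spinlock, disables only migration */

static void lock_examples(void)
{
	mutex_lock(&m);			/* may sleep; process context only */
	mutex_unlock(&m);

	spin_lock(&s);
	spin_unlock(&s);

	raw_spin_lock(&rs);
	raw_spin_unlock(&rs);

	local_lock(&llock);		/* protects this CPU's data */
	local_unlock(&llock);
}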

Local locks in the kernel

Posted Aug 12, 2020 14:23 UTC (Wed) by johill (subscriber, #25196)

Right, makes sense. I did think that must be the case, but it wasn't stated explicitly in the article and I hadn't come across the documentation yet. Thanks!

