
Rethinking race-free process signaling

Rethinking race-free process signaling

Posted Apr 8, 2019 1:59 UTC (Mon) by ebiederm (subscriber, #35028)
In reply to: Rethinking race-free process signaling by dvdeug
Parent article: Rethinking race-free process signaling

My rule of thumb is that limits like that should be low enough to catch buggy programs,
but not so low that they catch properly running programs that consume a few more resources than normal.

There is the other issue with larger pids: if they get too big, they become ungainly and difficult
to use, which argues against making 4 million the default. But otherwise something like 4 million
would probably be a fine default for a limit like that.
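
For reference, the knob in question is kernel.pid_max: it defaults to 32768, and on 64-bit Linux its hard ceiling is PID_MAX_LIMIT = 2^22 = 4194304, which is where the "4 million" figure comes from. A minimal sketch that reads the current value, assuming the usual /proc interface:

    #include <stdio.h>

    int main(void)
    {
        /* kernel.pid_max caps pid values before they wrap around.
         * The default is 32768; on 64-bit Linux the hard ceiling
         * is PID_MAX_LIMIT = 2^22 = 4194304. */
        FILE *f = fopen("/proc/sys/kernel/pid_max", "r");
        long pid_max;

        if (f == NULL || fscanf(f, "%ld", &pid_max) != 1) {
            perror("reading pid_max");
            return 1;
        }
        fclose(f);
        printf("current pid_max: %ld\n", pid_max);
        return 0;
    }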



Rethinking race-free process signaling

Posted Apr 8, 2019 5:54 UTC (Mon) by eru (subscriber, #2753) [Link] (4 responses)

If the pid limit is 4 million, problems due to wraparound are rare, but they may occasionally happen, causing hard-to-trace bugs. The same goes for MAXINT. But if pid were a 64-bit number, with the limit set to its maximum, wraparound would never happen, so software could safely assume that pids are always unique.
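
The pidfd work in the parent article attacks the underlying race more directly than bigger pids would: a pid can be recycled between lookup and kill(), but a pidfd keeps referring to the original process. A minimal sketch, assuming a kernel and libc new enough to define SYS_pidfd_open (Linux 5.3) and SYS_pidfd_send_signal (Linux 5.1); raw syscalls are used because glibc wrappers arrived later:

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t child = fork();
        if (child == 0) {
            pause();       /* child: just wait to be signaled */
            _exit(0);
        }

        /* The pidfd names this exact process; if the pid number is
         * later recycled, the descriptor does not follow it. */
        int pidfd = syscall(SYS_pidfd_open, child, 0);
        if (pidfd < 0) {
            perror("pidfd_open");
            return 1;
        }

        if (syscall(SYS_pidfd_send_signal, pidfd, SIGTERM, NULL, 0) < 0) {
            perror("pidfd_send_signal");
            return 1;
        }
        return 0;
    }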

Rethinking race-free process signaling

Posted Apr 8, 2019 7:27 UTC (Mon) by rbanffy (guest, #103898) [Link] (3 responses)

> But if pid were a 64-bit number, and the limit the maximum of that, wraparound would never happen

Cue a meeting room with a dozen people dressed like characters from Things to Come trying to figure out why The Google stopped answering their questions.

Fine. It'll be a looooong time.

Rethinking race-free process signaling

Posted Apr 10, 2019 5:15 UTC (Wed) by eru (subscriber, #2753) [Link] (2 responses)

Before posting, I calculated that if the kernel creates one process every microsecond, it takes about 290,000 years for the 64-bit signed maxint to be reached. I don't think any system will have that kind of uptime.
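
The figure is easy to verify: 2^63 microseconds is about 9.2 × 10^18 µs, or roughly 292,000 years. A quick sketch of the same arithmetic:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* One pid allocated every microsecond, counting up to the
         * signed 64-bit maximum: how many years is that? */
        double microseconds = (double)INT64_MAX;
        double years = microseconds / 1e6 / (365.25 * 24 * 3600);
        printf("about %.0f years\n", years);   /* ~292,000 */
        return 0;
    }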

Rethinking race-free process signaling

Posted Apr 10, 2019 18:07 UTC (Wed) by rbanffy (guest, #103898) [Link] (1 responses)

You have to let your imagination fly higher, eru. 290,000 years is a blink of an eye in cosmic terms, and it's entirely possible that a vast, distributed, multiply redundant computer doing a Very Important Job for its users would both live that long and be reliable enough to reach that limit, well after everyone who designed it (and who thought fashion had gone too far this time) is dead or has, at least, moved on to more interesting pursuits. Also, if it's a thousand times faster, it'll get there much sooner.

Rethinking race-free process signaling

Posted Apr 12, 2019 6:19 UTC (Fri) by massimiliano (subscriber, #3048) [Link]

> You have to let your imagination fly higher, eru. 290,000 years is a blink of an eye in cosmic terms...

If I let my imagination fly just a bit higher, in such a system this issue will be solved just like the current 2038 problem.

At some point the system will do a live migration to a 128-bit architecture, with conversion of the persistent state to appropriately sized values, and the "actor IDs" in the distributed system will get a breath of fresh air with a wraparound time of 2^64 × 290,000 years (widening from 64 to 128 bits multiplies the range by 2^64), whatever that means...

