Mixed-criticality support in seL4
The world contains an increasing number of "cyberphysical systems" implementing various types of safety-critical functionality. Fly-by-wire systems for aircraft and factory automation systems are a couple of examples. These systems are subject to an expensive safety-assurance process with costs that scale linearly (at least) with each line of code. When one considers that an Airbus A380 jetliner contains around 120 million lines of code, one can understand that developing that code can be a costly business.
The traditional approach to safety-critical code has been physical separation, using a different microcontroller for each function. That is how cars and aircraft have been built for some time. But those controllers are not cheap; a microcontroller fit for deployment in an automobile must be waterproof, resistant to dust, grease, acids, and vibration, and be able to operate in a wide range of temperatures. Such controllers are expensive, and there were typically about 100 of them in a car even five years ago. They are heavy, require a lot of cabling, and use a lot of energy.
The problem is compounded by the increasing need for sensors. Many sensors in a car support a number of different functions, meaning that they must be shared across processors.
The answer to this problem is consolidation: get a bigger processor and put a number of functions on it. That reduces costs while allowing better integration between functionalities; that integration is absolutely needed for autonomous vehicles. But this approach loses the physical isolation between functions, so that isolation must be restored in some other way that can still achieve certification.
Getting there, Heiser said, "involves killing a lot of trees". Specifications and development documents must be written. Effort must be put in to attach every requirement in the specifications to the lines of code that implement it. This work is expensive, to the point that, for the highest safety levels, the cost is prohibitive for anything more than a few thousand lines of code.
The answer is to use software isolation to separate the safety-critical code (which must be certified) from everything else. This is an old concept, "in use since the stone age", but there is a problem: operating systems are hopeless at providing this kind of isolation. They do things like put device drivers into the kernel. Anything in the system can affect anything else. This kind of operating system cannot be used for systems of any complexity, he said.
Isolation starts by defining the "criticality" of every task that the system must perform. There are several levels, based on the consequences that result if the system fails to do its job properly; they go from "harmless" to "catastrophic". On any system running tasks with varying criticality, no task may depend on anything at a less critical level.
Solving the problem with seL4
The system that can support mixed-criticality workloads, Heiser said, is seL4. It completely isolates all running components by default, but makes it possible to create communication channels between components where they are needed. Its core mechanism is called "capabilities", but they differ from the capabilities found on a Linux system. A seL4 capability is an access token: it contains a reference to some object and a set of associated access rights. Processes must present the necessary capabilities with any operation, and the system will make a decision as to whether to allow that operation. It adds up to a fine-grained access-control mechanism.
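The idea can be sketched in a few lines: a capability pairs an object reference with a rights mask, and the kernel checks the mask on every invocation. The names and layout below are invented for illustration; seL4's actual capability encoding differs.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative model of a capability: a reference to a kernel
 * object plus the access rights the holder has over it. These
 * names are hypothetical, not seL4's real encoding. */
enum cap_rights {
    CAP_READ  = 1u << 0,
    CAP_WRITE = 1u << 1,
    CAP_GRANT = 1u << 2,   /* right to pass the capability on */
};

struct capability {
    uint64_t object;       /* reference to the protected object */
    uint32_t rights;       /* bitmask of cap_rights */
};

/* The kernel's check: an operation proceeds only if the presented
 * capability carries every right the operation needs. */
bool cap_allows(const struct capability *cap, uint32_t needed)
{
    return (cap->rights & needed) == needed;
}
```

In this model, for example, an endpoint capability lacking CAP_GRANT would let its holder use the communication channel but not hand it to anyone else.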
There are eight different types of capability-protected objects in seL4:
- Thread-control blocks (processes)
- Address spaces
- Endpoints (ports for communications between processes)
- Notifications (a sort of completion mechanism)
- Capability spaces (to store capabilities themselves)
- Frames (areas of physical memory that can be mapped into address spaces)
- Interrupt objects
- Untyped (free) memory
All of these object types are "spatial", he said; the lack of any concept of time is a longstanding seL4 issue (which he was to return to shortly).
The seL4 microkernel has been subjected to a set of formal proofs of its security and safety properties. It is based on an abstract model written in a mathematical notation that is used to generate the C implementation. There is a machine-checked proof that the C implementation actually holds to the specification, and another proof that things are not broken in the compilation process. Yet another set of proofs show that the model implements the needed confidentiality, integrity, and availability properties. SeL4 is, he said, the only operating system in the world that has been verified in this way.
There has also been a worst-case execution time (WCET) analysis of all kernel operations that, he said, shows that seL4 is the world's fastest microkernel. Aspects that have not yet been verified are the initialization process, some memory-management unit operations, and multicore operation — but that work is in progress.
Toward a certifiable mixed-criticality scheduler
Once again, he noted that all of this verification only applies to spatial isolation, but the WCET work is a good start on the temporal problem. Beyond that, there is a new scheduling model in the works that will ensure that low-criticality code cannot interfere with higher-criticality code. The existing seL4 scheduler is relatively primitive, based on round-robin realtime scheduling that rotates through the runnable processes with the highest scheduler priority; it is good, but not sufficient. Among other problems, this scheduler will let the highest-priority process monopolize the CPU for as long as it chooses to. (Here, as usual, "priority" is a value used by the scheduler to pick the process to run first; it is not the same as the criticality assigned to any given functionality, though high-criticality tasks tend to be given high priorities).
As an example, consider a network driver. It will not be the most critical task on the system, but it may still need to occasionally preempt a high-criticality control loop to avoid losing packets. With the current scheduler, that driver must be given a high priority and trusted to not hog the CPU.
Another problem is data structures that are shared between criticalities. A common solution is to encapsulate that data in a separate server, running in its own address space. Other processes then communicate with the server to access the structure. Priority-inversion problems can result, though, when processes with different priorities access the server. Use of the priority-ceiling protocol can help, but requires getting the priorities set right. There are still problems where one client can starve another by repeatedly invoking the data-structure server, though.
Improving the situation naturally involves another set of requirements. A new scheduler must provide certifiable spatial and temporal isolation. It must be able to guarantee deadlines for high-criticality tasks while being able to share resources across criticalities. It should run low-criticality processes during otherwise slack periods. Capabilities should be used for time control.
Getting there, Heiser said, is not that hard with the right model. The classic time slice has been replaced by a "scheduler context". That, when combined with a capability, can be used to assign a given CPU budget and period to each process. Any process that exceeds its CPU budget during a given period can be throttled. Processes that have shorter periods are given a higher priority by the scheduler, but the budget limits the amount of CPU time that even the highest-priority process can consume.
The scheduler works by always running the highest-priority thread that has a non-zero budget. Any thread that has consumed its budget waits until the next period begins. If there are multiple processes with the same priority, round-robin scheduling is used.
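The rule just described (run the highest-priority thread that still has budget; exhausted threads wait for their next period) can be modeled in a few lines of C. The structures and pick_next() below are an invented sketch of the policy, not the kernel's implementation:

```c
#include <stdint.h>

/* Hypothetical model of a scheduler context: a CPU budget that may
 * be consumed each period. Budgets refill at period boundaries,
 * which is not modeled here. */
struct sched_context {
    uint64_t budget;    /* CPU time allowed per period */
    uint64_t period;    /* replenishment interval */
    uint64_t consumed;  /* time used so far this period */
};

struct thread {
    int priority;              /* higher value is scheduled first */
    struct sched_context *sc;  /* 0 for passive servers */
};

/* Pick the thread to run next: the highest-priority one whose
 * budget is not exhausted, or -1 if none qualifies. Ties keep the
 * earliest index; rotating the array between calls would give
 * round-robin behaviour among equal priorities. */
int pick_next(const struct thread *ts, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (ts[i].sc == 0 || ts[i].sc->consumed >= ts[i].sc->budget)
            continue;  /* throttled until the next period */
        if (best < 0 || ts[i].priority > ts[best].priority)
            best = i;
    }
    return best;
}
```

Note that even the highest-priority thread is skipped once its budget is gone, which is exactly the guarantee the round-robin scheduler could not provide.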
In this model, the shared data-structure server mentioned above has no scheduler context of its own. Instead, it will use the context of the clients that invoke it. That causes this server to run at a priority appropriate for the client it is serving while charging that client for the CPU time used. It's a neat solution, but there is one little problem: what happens if the client's budget runs out while the server is running? These servers are usually single-threaded to keep them simple and certifiable, so simply throttling the server would deny service to other processes that may need it.
There are a few options for dealing with this situation. One would be to require that such servers be multithreaded; adding concurrency adds complexity, though, so this is not a great solution. Another would be to throttle the server until another client comes along, then borrow that client's time to finish the first client's work. In this case, the wrong process is paying for the mistake, which is not a good thing. The solution chosen for seL4, instead, is to provide a mechanism called "timeout exceptions". They can respond in a number of ways, including resetting the server, providing an emergency CPU-time budget, or implementing priority inheritance "if they really need to".
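The donation scheme and the budget-expiry case can be modeled in the same spirit. Everything here is an invented sketch; in particular, real seL4 raises the timeout exception to a user-level handler, which then chooses one of the recovery strategies above, rather than returning an error code:

```c
#include <stdint.h>

/* Hypothetical sketch of scheduler-context donation: the passive
 * server has no context of its own and runs on whatever context
 * the invoking client donates, so the client is billed for the
 * server's CPU time at the client's own priority. */
struct sched_context {
    uint64_t budget;    /* CPU time allowed per period */
    uint64_t consumed;  /* time used so far this period */
};

enum outcome { COMPLETED, TIMEOUT_EXCEPTION };

/* Run `cost` units of server work on the client's donated context.
 * If the borrowed budget would be exceeded mid-request, signal a
 * timeout exception instead of silently stalling the
 * single-threaded server. */
enum outcome server_invoke(struct sched_context *client_sc, uint64_t cost)
{
    if (client_sc->consumed + cost > client_sc->budget)
        return TIMEOUT_EXCEPTION; /* handler: reset, top up, or inherit */
    client_sc->consumed += cost;  /* the client pays for server time */
    return COMPLETED;
}
```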
This new scheduler solves the problem, but it does not come for free: interrupt latency increases by 11% and scheduling itself is 19% more expensive. In both cases, the need to reprogram hardware timers is the source of most of the slowdown. But, he noted, scheduling in seL4 is "still an order of magnitude faster than Linux".
He wound down by noting that this work is being used in some real systems now. There are a couple of autonomous trucks under development for the US Army that are using it, and Boeing is working on a full-size autonomous helicopter. Certainly, as that helicopter flies overhead, one would hope that seL4 is indeed living up to its guarantees.
The video of this talk is available on YouTube.
[Your editor thanks the Linux Foundation and linux.conf.au for assisting
with his travel to the event.]
Index entries for this article
Conference: linux.conf.au/2018
Posted Feb 2, 2018 0:35 UTC (Fri)
by ttelford (guest, #44176)
[Link] (5 responses)

I remember thinking it was interesting to learn that the "provably correct" part of seL4 includes a proof of the ARM ISA.

Apparently ARM has been modeled mathematically and formally verified, just as seL4 was.

It makes sense: formal verification of software only gets you so far if the hardware invalidates its assurances. (Spectre is a great example of that...)

I've been meaning to get seL4 running on a Raspberry Pi 3, but haven't had time to give it a whirl yet. I hope it can do more than run the idle loop.
Posted Feb 2, 2018 1:08 UTC (Fri)
by roc (subscriber, #30627)
[Link] (3 responses)
They obviously must have a formal *model* for the ARM ISA describing the behaviour of each instruction, so that they can prove the compiled seL4 binary does what it's supposed to do. Then in principle someone could try to prove things about an ARM implementation, e.g. that a VHDL description implements the model ... but there's nothing to suggest anyone has done anything like that.
Nevertheless, the seL4 verification work is amazing.
Posted Feb 2, 2018 1:55 UTC (Fri)
by ttelford (guest, #44176)
[Link]

No links in that page, no.

Google’s first few hits for “ARM Formal verification”, on the other hand, have a number of good resources, including a paper from Carnegie Mellon University.
Posted Feb 2, 2018 2:52 UTC (Fri)
by jeff_marshall (subscriber, #49255)
[Link]
There have been efforts to prove things about implementations of ISAs before. The Rockwell-Collins AAMP7 comes to mind. I don't know whether they verified the implementation of the AAMP7 down to the HDL (VHDL, Verilog, etc.), but they did go beyond the typical ISA specification from my perspective.
In my opinion, the biggest issue for the software security world is formalizing the specs for things that are currently unspecified for ISA implementers - especially the specification of which hardware properties have fixed or variable timing. Side channels that have typically been characterized as state side channels seem to be old news.
Posted Feb 8, 2018 15:50 UTC (Thu)
by azz (subscriber, #371)
[Link]

ARM are indeed working on formal verification of their processor specifications and implementations; I saw an excellent talk from Alastair Reid of ARM at the RS4 workshop in Glasgow in September on this topic. His web site has lots of interesting stuff about what he's working on.
Posted Feb 16, 2018 21:55 UTC (Fri)
by HelloWorld (guest, #56129)
[Link]

I don't think a formal specification of the ISA and a verification of the implementation would have helped at all. In a sense, Spectre isn't really a bug: all the instructions do what they're supposed to and yield intended results. So even if your specification specifies the exact semantics of all CPU instructions and the processor implements exactly those semantics, you'd still be vulnerable. The problem is that some instructions take a bit more or a bit less time to execute depending on memory contents that you can't read directly, so if you measure these timings accurately, you can draw conclusions about those memory contents. So to prevent this you'd have to specify exactly how long every instruction is supposed to take, and that would probably preclude many of the optimisations that make modern processors fast.
Posted Feb 2, 2018 0:54 UTC (Fri)
by ras (subscriber, #33059)
[Link] (12 responses)
There was one point in the talk, not mentioned in this article, that made it plain this is not just some academic toy: it has been deployed in the field, and in those deployments they had to do real-world, impossible-to-formally-prove things like image analysis. They did that by running a Linux VM as an seL4 process.
Posted Feb 2, 2018 2:20 UTC (Fri)
by ttelford (guest, #44176)
[Link] (6 responses)
By itself, it’s cool, but it’s also a break in the chain — a formally verified processor and microkernel, and then slap a VM running an unverified kernel and a pile of unverified drivers on top.
I’m sure I’m not alone in wanting to see formal verification all the way to a POSIX-like userspace, rather than punting to a different kernel.
I’m also sure I’m not alone in saying “I don’t have time to do it.”
Posted Feb 2, 2018 3:51 UTC (Fri)
by k8to (guest, #15413)
[Link] (3 responses)
I would be interested in finding out if I'm overlooking something.
Posted Feb 2, 2018 5:51 UTC (Fri)
by ttelford (guest, #44176)
[Link] (2 responses)
It’s more of a “devil I know” sort of thing.
Honestly, I would not be at all surprised if POSIX has so many poorly defined corners that formal verification is impossible. How does the quote go? “There are dark corners in the Bourne shell, and people use all of them.”
The more I think of it, the more I realize POSIX is probably a bad goal.
Posted Feb 2, 2018 6:28 UTC (Fri)
by ttelford (guest, #44176)
[Link] (1 responses)
While seL4 handles real time very well, its designers claim it was designed to work well as part of a general-purpose OS.

It’s a plausible claim: QNX is a performant RTOS that is also a surprisingly capable POSIX-compliant desktop OS.
I’d love to see seL4 outdo QNX.
Posted Feb 3, 2018 1:27 UTC (Sat)
by k8to (guest, #15413)
[Link]
There are many other examples of where POSIX APIs are not ideal.
Posted Feb 2, 2018 7:27 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Feb 2, 2018 10:53 UTC (Fri)
by epa (subscriber, #39769)
[Link]
Posted Feb 2, 2018 2:45 UTC (Fri)
by JdGordy (subscriber, #70103)
[Link] (4 responses)
Posted Feb 2, 2018 6:12 UTC (Fri)
by ttelford (guest, #44176)
[Link] (2 responses)
OKL4 is a descendant of NICTA’s L4.

NICTA originated seL4, and OK Labs obtained rights to seL4. I don’t know how seL4 and OKL4 differ.

OKL4 itself is used in Qualcomm wireless modems, so there’s a decent chance you use it every day. It’s been said that Apple’s Secure Enclave uses a sibling of OKL4, which also originated at NICTA.

It’s interesting to think that there are iPhone and iPad models that run two different versions of the L4 microkernel in addition to the hybrid XNU kernel.
Posted Feb 10, 2018 7:37 UTC (Sat)
by zengtm (guest, #74989)
[Link] (1 responses)
Posted Feb 15, 2018 18:18 UTC (Thu)
by zengtm (guest, #74989)
[Link]
https://www.nsa.gov/resources/everyone/csfc/components-list/
Posted Feb 2, 2018 7:33 UTC (Fri)
by daniels (subscriber, #16193)
[Link]
Posted Feb 2, 2018 19:48 UTC (Fri)
by atelszewski (guest, #111673)
[Link] (7 responses)
I hate this never-ending pursuit of cost cutting in every possible corner. This comes not only from the manufacturers, but very much from consumers. Hearing my friends complaining that the cup of coffee was too expensive, when they could easily buy the whole coffee shop, makes me sick.

The moment the braking system in my car stops functioning, it will not matter much that I paid $1000 less for the car. Actually, it will not matter at all, because it will malfunction in the worst possible situation, meaning that somebody dies.

I prefer to pay more and have as many microcontrollers as required, to make the system as safe as possible. And I prefer to have as little code sitting between the brake pedal and the wheels as possible.

I really don't like the idea of mixing safety-critical code with video-player code sitting in a single chip. Cut the cost on mobile phones, where the worst that might happen is someone missing a pokemon.

There is much more to having multiple microcontrollers than just isolating the safety-critical code. Electronic components fail or might fail. This boils down to a single point of failure, and knowing this, I think you won't argue that the fewer of those, the better. All this is actually more complex than it seems to be.

You might find some interesting videos on the area here: https://www.youtube.com/channel/UC7QA0wx1m_aLyctDusU4X7g/...

Unfortunately, I don't think that cutting every possible penny is the way to go.

P.S. This negative comment is not directed at the presentation author, nor at the article author. I actually love hearing about software ideas, especially in the embedded world. I just don't want this software to sit in my car ;-)

--
Best regards,
Andrzej Telszewski
Posted Feb 2, 2018 21:13 UTC (Fri)
by runekock (subscriber, #50229)
[Link] (2 responses)
Purchasers have a tendency to go by the numerical, hard data, rather than the hard-to-evaluate quality indication.
We all know that a high price by itself doesn't guarantee a high quality. There is probably some correlation between price and quality, but it's too nebulous to rely on in a specific purchase decision. So it's far easier to focus on something that can be easily evaluated, such as a performance number or a price.
Adding to that, our culture puts a lot of stock in getting the best possible bargain. Your coffee-shop example tells more about the culture than about your friends. It is considered a goal to save money, whether or not the savings will actually make a difference to you. And we all need goals, sometimes more than we need money.
Oh, I'm afraid I lost track of the original article. Is there any relation?
Posted Feb 2, 2018 21:39 UTC (Fri)
by atelszewski (guest, #111673)
[Link] (1 responses)
The relation is right here, because the article mentions costs at the very beginning. I won't say anything more cost-related here, because then we'll surely get off-topic.

Anyway, I agree with your comment.

--
Best regards,
Andrzej Telszewski
Posted Feb 4, 2018 20:46 UTC (Sun)
by alison (subscriber, #63752)
[Link]
Please note that I worked on automotive projects for 6 or 7 years, and I've never heard of anyone running the brakes from Linux, or from any general-purpose OS. Many of the incumbent MCUs in vehicles are unchanged from a decade ago, for perfectly good reasons. The point with consolidation is, for example, that having an additional MCU each for 15 video cameras would be crazy. Instead, the streams from the cameras are processed on a single processor, often running Linux.
The best-practices design pattern for new vehicle systems is a processor (or three) running a general-purpose OS like Linux or QNX, talking to a safety MCU (traditionally Renesas V850) that is running an RTOS. Heiser's talk makes a compelling case that seL4 is a great choice for the RTOS.
Posted Feb 4, 2018 22:08 UTC (Sun)
by JGR (subscriber, #93631)
[Link] (1 responses)
> happen is someone missing a pokemon.

I can think of plenty of worse scenarios for mobile-phone failures. Calls to emergency services are the most obvious thing that must work on demand.

> And I prefer to have as little code sitting between
> the brake pedal and the wheels as possible.

As an example, features like anti-lock brakes imply adding significant complexity and code to the path between the brake pedal and the wheels. There is a trade-off between the complexity added by such features and the benefits (safety and otherwise) to the user (and to the manufacturer).
Posted Feb 5, 2018 18:59 UTC (Mon)
by atelszewski (guest, #111673)
[Link]
> Calls to emergency services are the most obvious thing that must work on demand.

That's true. But I don't think it is like that in practice. I don't believe that mobile phones undergo safety-related design and processes.

> As an example, features like anti-lock brakes imply adding significant complexity and
> code to the path between the brake pedal and the wheels.

That is true. But I didn't mean such a feature, which is actually safety-related. What I meant was running safety-critical and entertainment code on the same physical chip.

--
Best regards,
Andrzej Telszewski
Posted Feb 8, 2018 14:39 UTC (Thu)
by edeloget (subscriber, #88392)
[Link] (1 responses)
Sorry for this OT continuation.
Going from less efficient to more efficient is a business requirement, not a human requirement. I am, in many situations, perfectly happy to waste time or money on things I moderately enjoy. But companies are not people; they tend to not enjoy anything, and thus any cost is seen as a burden that must, when possible, be lowered. There is no logical reason not to do it.
When I do work (it happens), I also try to be as efficient as I can be. If I find any way to be more efficient, I'll at least give it a try. This is not because I'm governed by 'efficiency first' and because I'm Vulcan (I am not). It's because despite the fact that I'm in love with my work, I prefer spending my time on doing something else (like seeing my children grow) :)
Posted Feb 11, 2018 18:41 UTC (Sun)
by jospoortvliet (guest, #33164)
[Link]
Posted Feb 4, 2018 21:52 UTC (Sun)
by rweikusat2 (subscriber, #117920)
[Link] (10 responses)
Translating this from mathebabble into English: It was implemented in a programming language almost nobody understands which a preprocessor turned into C. It is conjectured that the original source code is free of errors because .... eh, well, just because.
Sell this to someone else, Mr.
Posted Feb 4, 2018 22:34 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link] (8 responses)
Now that's just complete bunk.

But yeah, feel free to continue to misrepresent the groundbreaking work of the seL4 team.
Posted Feb 4, 2018 23:06 UTC (Sun)
by rweikusat2 (subscriber, #117920)
[Link] (7 responses)
"It's math, therefore, it must be magic" doesn't work in this way.
And this applies on more levels: that someone believes they have proven something 'mathematically' doesn't mean the proof is correct. That's just the opinion of that someone, and of anybody else who tried to verify the proof and also considered it correct. There's unfortunately no way out here. In the end, it all comes down to solving the halting problem. And someone reportedly proved that this is impossible (which may obviously be wrong as well).
Posted Feb 4, 2018 23:29 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link]

This particular proof does exactly that. It proves that the specification (and the code) upholds certain invariants (no buffer overflows, dangling pointer access, etc.) and guarantees the integrity and confidentiality ( https://sel4.systems/Info/FAQ/proof.pml ) of the resulting system. It also lists the assumptions required for this.
Posted Feb 5, 2018 4:43 UTC (Mon)
by roc (subscriber, #30627)
[Link] (3 responses)
The seL4 proof is formalized in Isabelle/HOL. The Isabelle proof checking algorithm is very simple, mathematically well-understood, and since Isabelle has been used by lots of projects over a long time, confidence in the correctness of the Isabelle core implementation is very high. Therefore "it's just someone's opinion" is a massive misrepresentation of the degree of confidence here, and "it all comes down to solving the halting problem" is just wrong.
Posted Feb 5, 2018 10:27 UTC (Mon)
by farnz (subscriber, #17727)
[Link] (2 responses)
Also worth remembering that the halting problem only applies if you claim to have a perfect solution for all possible programs. The moment you weaken the problem to permit a "don't know whether it halts" state, you have a fully solvable problem (in the trivial case, the answer is "don't know" to all possible programs, although to be useful, you want it to give a definite answer for practical programs).
Thus, any argument from the halting problem requires you to show that the goal is a perfect classifier; if the algorithm is permitted an unknown state (whether explicitly called out as a third answer - such as "insoluble programs" - or by saying that the answers are either "definitely halts" and "may or may not halt" or "may halt" and "definitely does not halt"), you have escaped the halting problem.
Posted Feb 9, 2018 14:38 UTC (Fri)
by mstone_ (subscriber, #66309)
[Link] (1 responses)
Posted Feb 9, 2018 14:46 UTC (Fri)
by farnz (subscriber, #17727)
[Link]
The trick, as compilers use when evading the halting problem, is to confine the "don't knows" to uninteresting cases. As long as you can keep that set uninteresting, your approximate solution to the halting problem is good enough. It's when that set includes interesting programs from the user's point of view that you have a problem.
We use conservative approximations like this all over the place; while mathematicians prefer perfect results, engineers are happy to accept usable imperfect results.
Posted Feb 16, 2018 23:56 UTC (Fri)
by HelloWorld (guest, #56129)
[Link] (1 responses)
> There's unfortunately no way out here. In the end, it all comes down to solving the halting problem.

Once a proof has been formalized to a sufficient degree, checking said proof for correctness is an entirely mechanical task and is therefore done by a computer program. It is *not* a fallible human who does the proof checking. Can we trust the program that checks the proof to be correct? Well, there is a non-zero chance that there's a bug in the proof checker that allows broken proofs to pass, and that there is such a bug in the seL4 proof, and that nobody noticed that bug despite the fact that this proof checking stuff isn't that complicated and the program has been used on millions of proofs before. But that is so phenomenally unlikely that it doesn't belong in the same category as “some dude wrote a proof and some other dude thinks it's correct”.

By the way, there is a dissertation on the topic of how to write reliable theorem provers: https://repositories.lib.utexas.edu/handle/2152/ETD-UT-20...
Posted Feb 19, 2018 15:24 UTC (Mon)
by bfields (subscriber, #19510)
[Link]

"Once a proof has been formalized to a sufficient degree, checking said proof for correctness is an entirely mechanical task and is therefore done by a computer program."

For what it's worth, mathematicians almost never do this. Proofs are written for and checked by humans.

Computer scientists seem to deal with computer-readable proofs more often?
Posted Feb 9, 2018 1:02 UTC (Fri)
by flussence (guest, #85566)
[Link]
PHP and Javascript cowboyism doesn't fly in systems that, you know, actually fly.
Posted Feb 5, 2018 9:30 UTC (Mon)
by bristot-memorial (guest, #61569)
[Link]
"Another problem is data structures that are shared between criticalities... Use of the priority-ceiling protocol can help." Nice, then they will see that the priority ceiling will add release jitter to the highest-priority task, and then will notice that they have to use another protocol, like mutexes with priority inheritance. Where did I hear about that already... ah! On Linux! It is widely known that if you use priority ceiling, you will have scheduling "latency." But it really helps to reduce the response time... so, what do you prefer: response time or scheduling latency (release jitter, or any scheduling delay, scheduler overhead, or any other name people have for it)?
"Any process that exceeds its CPU budget during a given period can be throttled. Processes that have shorter periods are given a higher priority by the scheduler." OK! Here they have something like CBS, but they are using the fixed-priority rate-monotonic scheduling approach! Nice! That is a technology from... 1969? But wait, if they already implemented this, why is the "network driver" a problem? Answer: because they cannot estimate the period of the arriving packets; RM is not flexible enough for all real-world cases! But that does not mean that this is the only solution. So my question is: is there a sufficient & necessary schedulability test for their scheduler?
"what happens if the client's budget runs out while the server is running?" Welcome to our world, check the "dl_boosted" variable on sched_deadline.
"This new scheduler solves the problem, but it does not come for free: interrupt latency increases by 11% and scheduling itself is 19% more expensive. In both cases, the need to reprogram hardware timers is the source of most of the slowdown. But, he noted, scheduling in seL4 is "still an order of magnitude faster than Linux""
Welcome to our world again! And let me guess when you will start to have our numbers... when you finish implementing "some memory-management unit operations, and multicore operation"? Because on PREEMPT_RT we do this in microseconds; will their implementation continue working in nanoseconds (i.e., an order of magnitude faster than Linux) with such features?
And I wonder, where is the mixed-criticality part of the schedulers...? Running Linux as a guest?
I read the papers, and the work is very nice! Seriously! Very nice!
But please, stop comparing oranges and apples. It is not possible to compare this OS with Linux. Moreover, there are other OSs with such features running on real systems. Verified or not, they are certified for it.