The Extensible Firmware Interface - an introduction
Actually, that's not true. Depending on where you start from, there were either toggle switches used to enter enough code to start booting from something useful, a ROM that dumped you straight into a language interpreter, or a ROM that was just barely capable of reading a file from tape or disk and going on from there. CP/M machines were usually the last of these, jumping to media that contained some hardware-specific code and a relatively hardware-agnostic OS. The hardware-specific code handled receiving and sending data, resulting in it being called the "Basic Input/Output System". BIOS was born.
When IBM designed the PC they made a decision that probably seemed inconsequential at the time but would end up shaping the entire PC industry. Rather than leaving the BIOS on the boot media, they tied it to the initial bootstrapping code and put it in ROM. Within a couple of years vendors were shipping machines with reverse engineered BIOS reimplementations and the PC clone market had come into existence.
There's very little beauty associated with the BIOS, but what it had in its favor was functional hardware abstraction. It was possible to write a fairly functional operating system using only the interfaces provided by the system and video BIOSes, which meant that vendors could modify system components and still ship unmodified install media. Prices nosedived and the PC became almost ubiquitous.
The BIOS grew along with all of this. Various arbitrary limits were gradually removed or at least papered over. We gained interfaces for telling us how much RAM the system had above 64MB. We gained support for increasingly large drives. Network booting became possible. But limits remained.
The one that eventually cemented the argument for moving away from the traditional BIOS turned out to be a very old problem. Hard drives still typically have 512 byte sectors, and the MBR partition table used by BIOSes stores sector numbers in 32-bit variables. Partitions above 2TB? Not really happening. And while in the past this would have been an excuse to standardize on another BIOS extension, the world had changed. The legacy BIOS had lasted for around 30 years without ever having a full specification. The modern world wanted standards, compliance tests and management capabilities. Something clearly had to be done.
And so for the want of a new partition table standard, EFI arrived in the PC world.
Expedient Firmware Innovation
[2] To be fair to Intel, choosing to have drivers be written in C rather than Forth probably did make EFI more attractive to third party developers than Open Firmware
EFI is intended to fulfill the same role as the old PC BIOS. It's a pile of code that initializes the hardware and then provides a consistent and fairly abstracted view of the hardware to the operating system. It's enough to get your bootloader running and then for that bootloader to find the rest of your OS. It's a specification that's 2,210 pages long and still depends on the additional 727 pages of the ACPI spec and numerous ancillary EFI specs. It's a standard for the future that doesn't understand surrogate pairs and so can never implement full Unicode support. It has a scripting environment that looks more like DOS than you'd have believed possible. It's built on top of a platform-independent open source core that's already something like three times the size of a typical BIOS source tree. It's the future of getting anything to run on your PC. This is its story.
Eminently Forgettable Irritant
[4] Those of you paying attention have probably noticed that the PEI sounds awfully like a BIOS, EFI sounds awfully like an OS and bootloaders sound awfully like applications. There's nothing standing between EFI and EMACS except a C library and a port of readline. This probably just goes to show something, but I'm sure I don't know what.
The DXE layer is what's mostly thought of as EFI. It's a hardware-agnostic core capable of loading drivers from the Firmware Volume (effectively a filesystem in flash), providing a standardized set of interfaces to everything that runs on top of it. From here it's a short step to a bootloader and UI, and then you're off out of EFI and you don't need to care any more[4].
The PEI is mostly uninteresting. It's the chipset-level secret sauce that knows how to turn a system without working RAM into a system with working RAM, which is a fine and worthy achievement but not typically something an OS needs to care about. It'll bring your memory out of self refresh and jump to the resume vector when you're coming out of S3. Beyond that? It's an implementation detail. Let's ignore it.
The DXE is where things get interesting. This is the layer that presents the interface embodied in the EFI specification. Devices with bound drivers are represented by handles, and each handle may implement any number of protocols. Protocols are uniquely identified with a GUID. There's a LocateHandle() call that gives you a reference to all handles that implement a given protocol, but how do you make the LocateHandle() call in the first place?
This turns out to be far easier than it could be. Each EFI protocol is represented by a table (ie, a structure) of data and function pointers. There are a couple of special tables which represent boot services (ie, calls that can be made while you're still in DXE) and runtime services (ie, calls that can be made once you've transitioned to the OS), and in turn these are contained within a global system table. The system table is passed to the main function of any EFI application, and walking it to find the boot services table then gives a pointer to the LocateHandle() function. Voilà.
So you're an EFI bootloader and you want to print something on the screen. This is made even easier by the presence of basic console I/O functions in the global EFI system table, avoiding the need to search for an appropriate protocol. A "Hello World" function would look something like this:
#include <efi.h>
#include <efilib.h>

EFI_STATUS
efi_main (EFI_HANDLE image, EFI_SYSTEM_TABLE *systab)
{
    SIMPLE_TEXT_OUTPUT_INTERFACE *conout;

    conout = systab->ConOut;
    uefi_call_wrapper(conout->OutputString, 2, conout, L"Hello World!\n\r");

    return EFI_SUCCESS;
}
In comparison, graphics require slightly more effort:
#include <efi.h>
#include <efilib.h>

extern EFI_GUID GraphicsOutputProtocol;

EFI_STATUS
efi_main (EFI_HANDLE image, EFI_SYSTEM_TABLE *systab)
{
    EFI_GRAPHICS_OUTPUT_PROTOCOL *gop;
    EFI_GRAPHICS_OUTPUT_MODE_INFORMATION *info;
    UINTN SizeOfInfo;

    /* set up gnu-efi's library state, including the global BS pointer */
    InitializeLib(image, systab);

    uefi_call_wrapper(BS->LocateProtocol, 3, &GraphicsOutputProtocol,
                      NULL, &gop);
    uefi_call_wrapper(gop->QueryMode, 4, gop, 0, &SizeOfInfo, &info);
    Print(L"Mode 0 is running at %dx%d\n", info->HorizontalResolution,
          info->VerticalResolution);

    return EFI_SUCCESS;
}
Extremely Frustrating Issues
So far it all sounds straightforward from the bootloader perspective. But EFI is full of surprising complexity and frustrating corner cases, and so (unsurprisingly) attempting to work on any of this rapidly leads to confusion, anger and a hangover. We'll explore more of the problems in the next part of this article.
Index entries for this article
Kernel: Extensible firmware interface
GuestArticles: Garrett, Matthew
Posted Aug 11, 2011 5:40 UTC (Thu)
by DeletedUser34808 ((unknown), #34808)
[Link] (2 responses)
Posted Aug 11, 2011 9:39 UTC (Thu)
by dmk (guest, #50141)
[Link]
Posted Aug 11, 2011 12:09 UTC (Thu)
by corbet (editor, #1)
[Link]
Posted Aug 11, 2011 12:09 UTC (Thu)
by etienne (guest, #25256)
[Link] (10 responses)
Basically I see these problems:
I am not sure all of those are improvements, compared to having a boot-loader on the first sector of one of the disks -- a boot-loader that can be upgraded by standard package management.
Disclaimer: my pet project is the Gujin boot-loader.
Posted Aug 12, 2011 8:09 UTC (Fri)
by marcH (subscriber, #57642)
[Link]
EFI is designed to initialize the minimum amount of hardware required to run the bootloader. Now I am not familiar enough with various hardware configurations to know if this will be faster than old BIOSes in *every* case but I am pretty sure it will be in most.
Posted Aug 22, 2011 12:46 UTC (Mon)
by marcH (subscriber, #57642)
[Link] (8 responses)
Could you elaborate on this? EFI relies on a dedicated partition; that should not put any limit on the other partitions.
Posted Aug 22, 2011 14:05 UTC (Mon)
by etienne (guest, #25256)
[Link] (7 responses)
It seems obvious to me: if you want your Linux on the same disk as the EFI partition, they have to use the same partitioning scheme (standard MBR partitions, GPT, or even others).
Also, because EFI needs its own partition, what happens in the context of multiple disks?
The problem is not simple even with "the first sector of the first disk is loaded and executed at boot", and adding EFI seems to me to make it more complex.
I have seen people using quite a few cold-swappable hard disks to boot multiple operating systems, to support real applications on different setups but the same PC. There are large Post-it notes on each disk...
Posted Aug 22, 2011 14:11 UTC (Mon)
by marcH (subscriber, #57642)
[Link] (6 responses)
And? Please give a sample use case which actually demonstrates a limitation.
Concerning all the other points I got bored reading before I could find any problem new to EFI.
Posted Aug 22, 2011 14:56 UTC (Mon)
by etienne (guest, #25256)
[Link] (5 responses)
No partition table that EFI will understand?
Posted Aug 22, 2011 15:40 UTC (Mon)
by mjg59 (subscriber, #23239)
[Link] (4 responses)
Posted Aug 22, 2011 22:37 UTC (Mon)
by marcH (subscriber, #57642)
[Link] (3 responses)
However I still do not really see how this is a limitation *in practice*.
Posted Aug 23, 2011 9:56 UTC (Tue)
by etienne (guest, #25256)
[Link] (2 responses)
Let's take a real-life example: the situation right now is that the first sector of the first disk is loaded, after which the ROM BIOS hopes for the best and jumps to the first instruction of the MBR.
Posted Aug 23, 2011 10:50 UTC (Tue)
by marcH (subscriber, #57642)
[Link] (1 responses)
> ..., but you will need to upgrade your BIOS.
And? I have actually solved real boot problems (USB-ZIP/FDD/HDD, PXE,...) by upgrading BIOSes; this is not pie in the sky. Note that, from the vendor's perspective, it will be much cheaper to develop an upgrade in C (EFI) as opposed to assembly (BIOS).
If your PC is too old for maintenance then you will do what most people do in such cases: you will buy a new PC with an updated EFI/BIOS/whatever. Consider this very similar case: how many people actually try to shoehorn a brand new & big SATA drive into an old & slow pre-SATA PC? Practically none.
Working on a Windows 7 PC right now I often envy Apple, am tired of backward-compatibility and wish it were much more often thrown out of the window...
Posted Aug 23, 2011 13:20 UTC (Tue)
by etienne (guest, #25256)
[Link]
I would agree with you that the world would be a simpler place if we did not have all those "strange" configurations to support -- if users used their computers and OSes for what they were designed for.
Posted Aug 11, 2011 14:31 UTC (Thu)
by dd9jn (✭ supporter ✭, #4459)
[Link]
Posted Aug 11, 2011 15:31 UTC (Thu)
by jhhaller (guest, #56103)
[Link] (11 responses)
Does EFI make this any better, such as integrating all of the delays for individual device/capability setups into the main setup, or enumerating hardware in parallel? If you can work that back into a future article, that would be great.
Posted Aug 11, 2011 15:39 UTC (Thu)
by mjg59 (subscriber, #23239)
[Link] (1 responses)
Posted Aug 12, 2011 8:06 UTC (Fri)
by marcH (subscriber, #57642)
[Link]
Posted Aug 11, 2011 15:50 UTC (Thu)
by nye (guest, #51576)
[Link] (4 responses)
Not just server configurations. For about 3 years now all my Debian systems have taken around as long, if not longer, to get from power on->grub than from grub->login screen.
Sometimes there are options you can disable to improve matters, eg my main desktop has two SATA controllers built in to the motherboard, one of which also does PATA. Disabling it means losing PATA and a couple of SATA ports, but saving about 10s from each boot. When grub->desktop takes ~40-50s, that's a lot. With an SSD grub->desktop is likely to be more like 20-30s, so it's even more significant.
Posted Aug 14, 2011 20:32 UTC (Sun)
by cortana (subscriber, #24596)
[Link] (3 responses)
Posted Aug 15, 2011 17:58 UTC (Mon)
by knobunc (subscriber, #4678)
[Link]
Posted Aug 23, 2011 10:22 UTC (Tue)
by etienne (guest, #25256)
[Link] (1 responses)
Let's imagine yesterday's kernel managing yesterday's device.
Back to today: I have booted today's kernel, which knows how to drive the super-duper function, and because it has recognised the more powerful device, it has enabled the super-duper function by setting the previously "do not touch" bit (now the enable-super-duper-function bit) in the device register.
Now, if I kexec yesterday's kernel, it will not change the "do not touch" bit, but it does not have the super-duper function driver either.
Posted Aug 23, 2011 10:28 UTC (Tue)
by cortana (subscriber, #24596)
[Link]
Debian's kexec-tools package, for instance, always boots /vmlinuz, which is a symlink maintained by the various linux-image-* packages and which usually points to the latest installed kernel. IMO this is a mistake, and kexec should attempt to boot into the currently running kernel, if it still exists.
This doesn't sound too hard -- just look at the result of uname(2) and then look for a matching file in /boot. I think this would make your scenario less likely; however (at least in Debian's case), a different filename is only used for different kernel releases and ABI-changing updates to the same release, so the /boot/vmlinuz-3.0.0-1-amd64 that currently exists on my system may have been updated since the system booted with a file of the same name.
Posted Aug 11, 2011 17:17 UTC (Thu)
by jcm (subscriber, #18262)
[Link] (1 responses)
FWIW, I'm a huge fan of things like EFI. And layers aren't a problem if you don't have to care about the layer beneath. For example, on ARM systems, we are increasingly booting using EFI (a good thing) and sometimes even doing so in convoluted ways (x-loader->u-boot->tianocore) but it doesn't matter because the time taken to do these steps is minimal overall. I'm far more worried about layers of complexity being added in the desktop than in EFI, which is at least a cross-vendor, cross-platform standard we can all use.
Jon.
Posted Aug 11, 2011 23:14 UTC (Thu)
by nix (subscriber, #2304)
[Link]
I'm sure that in future articles Matthew will assure us that EFI implementations are bug-free flawless jewels of perfect software engineering which work brilliantly on all operating systems, just as we would expect from the geniuses and wizards who work on BIOSes. (It's true: they do, as long as the set of operating systems consists of one single version of Windows on one single hardware configuration and it was only booted once during testing while holding a horseshoe and a rabbit's foot above the monitor.)
Posted Aug 13, 2011 0:54 UTC (Sat)
by Lennie (subscriber, #49641)
[Link] (1 responses)
I keep looking at the CoreBoot supported mainboards list, but it isn't on it.
Posted Aug 18, 2011 21:20 UTC (Thu)
by jd (guest, #26381)
[Link]
Posted Aug 12, 2011 8:19 UTC (Fri)
by marcH (subscriber, #57642)
[Link] (3 responses)
I think that is far from having been the only reason (maybe I just missed the smiley?)
BIOS implementations are all proprietary assembly code. EFI is at least 95% open-source C.
> It's the chipset-level secret sauce that knows how to turn a system without working RAM into a system with working RAM, which is a fine and worthy achievement but not typically something an OS needs to care about.
Note that initializing memory has become incredibly complex.
> [2] To be fair to Intel, choosing to have drivers be written in C rather than Forth probably did make EFI more attractive to third party developers than Open Firmware
EFI also supports byte code applications and drivers for obvious portability reasons.
> There's nothing standing between EFI and EMACS except a C library and a port of readline.
I think EFI cannot send email yet, but that will come eventually.
http://catb.org/jargon/html/Z/Zawinskis-Law.html
Posted Aug 17, 2011 16:29 UTC (Wed)
by xilun (guest, #50638)
[Link] (2 responses)
The open source part (EDK/EDK2) only contains very abstracted software infrastructure, arguably quite badly designed (e.g. you can see the hand of the psychopaths at MS putting GUIDs everywhere, even and especially where they do not make any sense). I don't see why anybody would want to write programs for that environment.
What would be interesting to take for something with the actual intent of booting a computer -- like coreboot plus its various payloads, instead of merely providing an MS-DOS-like OS where linking is done by GUID and the whole thing is burned into your motherboard -- is not open source at all. Depending on the CPU and chipset vendor, there are not even public datasheets for booting your chips.
Posted Aug 17, 2011 16:43 UTC (Wed)
by marcH (subscriber, #57642)
[Link]
> ... is not open source at all
Still whining but plain wrong as anyone can see.
> Depending on the cpu and chipset vendor,...
Usual and well-known consequence of a BSD license.
PS: we miss your fair comparison with older BIOSes.
Posted Aug 18, 2011 21:24 UTC (Thu)
by jd (guest, #26381)
[Link]
Posted Aug 12, 2011 13:13 UTC (Fri)
by njwhite (subscriber, #51848)
[Link] (1 responses)
Coreboot always struck me (as a layman) as a nicely engineered system. Does it suffer from intractable BIOS issues enough that EFI is an improvement?
Posted Aug 12, 2011 15:33 UTC (Fri)
by mjg59 (subscriber, #23239)
[Link]
If all you want to do is boot Linux then Coreboot definitely lets you achieve that with less overhead, and if vendors used it as the basis for the PEI layer we'd benefit from having the source to their setup code and could use that to identify bugs. But we'd still run into the common BIOS issue that vendors *will* introduce bugs, and we still need to work around them. Even if we have the source code and can rebuild the full BIOS image with a bugfix, having someone flash a third-party BIOS image would probably void the manufacturer warranty. So we're still at the mercy of the vendors in terms of bugfixes, and the open source nature of Coreboot doesn't benefit us hugely in that respect.
Posted Aug 12, 2011 20:11 UTC (Fri)
by mturquette (subscriber, #54268)
[Link] (14 responses)
It goes to show that we should jump to the Linux kernel much much earlier in the boot process. Modern SoCs like TI's OMAP have a configuration header that you can append to your binary (Linux kernel) to skip the whole boot loader issue entirely.
There have been real implementations of this on OMAP where the usual U-boot has been removed entirely; at power on ROM code parses the CH, then *boom* kernel decompression and party time.
I know other platforms can do similar stuff, and I also know that x86 is a world away from ARM SoCs... but still I can't help but think that BIOS, EFI, OpenFirmware, etc are all starting to sound a bit archaic.
Posted Aug 14, 2011 21:33 UTC (Sun)
by giraffedata (guest, #1954)
[Link] (13 responses)
It doesn't have to be the same Linux that will ultimately run on the machine; just something to load the real OS.
That's probably naive, but what's the problem? Is Linux too big? Too slow?
Reflecting on the fact that one often has to get a new motherboard or at least risk an in-situ BIOS upgrade in order to boot from a new type of device, I've often thought that motherboards should be able to do one thing at boot time: read bytes from USB storage device plugged into a particular socket inside the box and branch to them.
I'd put a Linux boot image on there, configured to load and bring up the real system. If I got a new kind of boot device, I'd just build a new Linux system with a driver for it and store it on that USB stick. If I accidentally bricked the system that way, I'd just plug in a different USB stick and recover.
Posted Aug 15, 2011 9:25 UTC (Mon)
by marcH (subscriber, #57642)
[Link] (6 responses)
> I think it goes to show it would be simpler just to use Linux in place of DXE.
Isn't there any actual "LinuxBIOS" project out there?
> That's probably naive, but what's the problem? Is Linux too big? Too slow?
Not in the right place at the right time? GPLed?
Maybe a bit of all these.
Posted Aug 15, 2011 14:23 UTC (Mon)
by giraffedata (guest, #1954)
[Link] (5 responses)
It doesn't matter to me if it's USB, and I'm not aware of anyone proposing anything else.
The essential element of the concept is a separate removable storage device just for the bootloader. I can't think of anything other than a USB memory stick that would be practical for that, but if there is something, I'm all for it.
Posted Aug 15, 2011 22:08 UTC (Mon)
by marcH (subscriber, #57642)
[Link] (4 responses)
Posted Aug 18, 2011 19:36 UTC (Thu)
by giraffedata (guest, #1954)
[Link] (3 responses)
Posted Aug 22, 2011 12:41 UTC (Mon)
by marcH (subscriber, #57642)
[Link] (2 responses)
Granted, you can imagine an ideal, modular firmware codebase that can be selectively trimmed down on a per-platform basis. EFI might get there, but why would you do this for (expensive) PC motherboards? A development effort to lose some of your customers?!
By the way USB + managed flash is among the most expensive solutions; this matters for low cost embedded systems.
Posted Aug 22, 2011 19:53 UTC (Mon)
by giraffedata (guest, #1954)
[Link] (1 responses)
I think you're pointing out the dilemma of standardization. You pick one of many paths in order to reap the benefits of uniformity, but at the cost of going down a path that isn't ideal for some, or even all, particular cases. Of course, we standardize all the time and companies that were relying on the protocol that didn't get chosen suck it up and switch.
Unless you're talking about losing customers because it costs more, you've misunderstood the proposal, because customers can still use all of those boot protocols -- the ROM loads from the USB device the bootloader that knows how to load Windows from a SATA drive.
I agree my scheme is not appropriate for embedded systems. It would add very little and cost a lot.
Posted Aug 22, 2011 22:35 UTC (Mon)
by marcH (subscriber, #57642)
[Link]
Yes something like that.
> you've misunderstood the proposal, because customers can still use all of those boot protocols -- the ROM loads from the USB device the bootloader that knows how to load Windows from a SATA drive.
I think I understood the proposal; it looks like we have different customers in mind. I am considering the "ROM loads from X" part while you are considering the "bootloader that knows..." part.
Posted Aug 19, 2011 9:53 UTC (Fri)
by etienne (guest, #25256)
[Link] (5 responses)
Well, they first have to recognise that the USB disk's first sector contains executable code, or else they will often jump into a lot of zeroed instructions.
But what I wanted to say is that at the bootloader level you do not have "USB", you have OHCI, UHCI, EHCI or xHCI -- and more in the embedded world.
Posted Aug 19, 2011 16:10 UTC (Fri)
by giraffedata (guest, #1954)
[Link] (4 responses)
I think I'd prefer that it not do that check. It's an opportunity for something to become incompatible.
But that is just between the bootstrap ROM and the USB host controller, both of which are permanently attached to the same piece of hardware in my scheme. So there's no chance of incompatibility.
The standardness of USB that I'm interested in is between the host controller/hub and the USB stick, such that you can 1) easily get media; and 2) easily write the media using pretty much any computer.
Posted Aug 22, 2011 14:39 UTC (Mon)
by etienne (guest, #25256)
[Link] (3 responses)
Well, they could have trusted that if the device contains a partition marked bootable then the device itself is bootable, but most BIOS manufacturers did not do so.
Note that in your scheme you still want to simply write your Linux kernel to the USB disk (and not do "cat kernel > /dev/sdb"), so you need a bootloader which will analyse the partitions and the filesystem inside them. Once that bootloader is running, it still needs access to the underlying device to read the kernel file (using BIOS/EFI disk services), so the BIOS/EFI still needs to respond to USB interrupts.
Posted Aug 22, 2011 20:13 UTC (Mon)
by giraffedata (guest, #1954)
[Link] (2 responses)
I think you've misunderstood what the ROM bootloader does in this scheme. It is intentionally too dumb to choose a boot device. It always loads the next stage from the same physical place (a particular USB socket inside the box). The code loaded can use fancy intelligence that evolves with technology to choose a boot device from which to load the OS.
I don't know what you're saying because there are at least 3 things that can be called "bootloader" here. One is in ROM, one is in the first block of the USB first-level boot device, and one is the complete Linux system that resides on the first-level boot device.
The ROM boot loader doesn't know anything about partition tables or filesystems. It knows how to read the first block of a USB storage volume plugged into a socket/hub/controller permanently married to that ROM. The contents of the USB volume are much like what we once put on floppy disks: not "cat kernel >/dev/sdb", but "cat bootstrap loader2 kernel >/dev/sdb". Maybe the volume actually has a partition table and the kernel thus loaded knows how to find filesystems on the volume.
Posted Aug 23, 2011 9:36 UTC (Tue)
by etienne (guest, #25256)
[Link] (1 responses)
I wish it worked that way in the real world.
Posted Aug 23, 2011 18:05 UTC (Tue)
by giraffedata (guest, #1954)
[Link]
And in case it wasn't clear, this is exactly the point I opened the thread with, if BIOS means "the code fixed to the motherboard that boots the OS."
I would like to see that intelligence moved into a physically changeable, easily creatable storage device and suggest that a Linux system on a USB disk is the best choice for that.
Posted Aug 14, 2011 14:49 UTC (Sun)
by Julie (guest, #66693)
[Link]
And the usual high-quality comments have plugged a few knowledge gaps, too. Thanks Matthew and everyone.
Posted Aug 14, 2011 21:48 UTC (Sun)
by giraffedata (guest, #1954)
[Link] (1 responses)
So is EFI just bootstrap software? If so, it deserves a better comparison to BIOS, because starting up the computer is just a small part of the BIOS concept. What BIOS is about is providing a hardware abstraction layer, so CP/M could write to a disk drive of a model the authors of CP/M had never heard of. That aspect of BIOS was obsoleted by microprocessors getting cheap enough that we could put one in every peripheral device. Now the hardware abstraction layer is in the peripherals.
One thing is clear: EFI needs a better name - a descriptive one. "Extensible Firmware Interface" sounds like a patent title. "Firmware" isn't a set of functions - it's just an implementation technology.
Posted Aug 15, 2011 9:21 UTC (Mon)
by marcH (subscriber, #57642)
[Link]
In theory "EFI" is just the interface. Its implementations are abusively called EFI as well. That's not unusual.
Posted Aug 23, 2011 16:38 UTC (Tue)
by landley (guest, #6789)
[Link]
The PC was _designed_ as a 16-bit sequel to the CP/M boxes. The 8086 was a 16-bit upgrade to the 8080; the OS was a 16-bit port of CP/M (either CP/M itself from Digital Research or Tim Patterson's QDOS clone rebranded by Micro-dash-soft); the ISA bus was the S100 bus with unused wires removed (there was even an adapter to fit the big cards in the small slot; the voltage and timing of the signals were all the same, you just had to connect up the appropriate pins). The PC had a BIOS so it could run CP/M (DOS 1.0 was a straight CP/M clone, then Paul Allen extended it by adding a bunch of Unix features in 2.0 as a transition path to Xenix. Until he came down with Hodgkin's lymphoma in 1983, Microsoft's planned successor to DOS was Xenix. Messing around with OS/2 and Windows happened after Paul Allen left, and Gates started blindly following IBM and Apple because Allen had been the ideas guy; Gates was always about cashing in on _other_ people's ideas.)
So when Compaq cloned IBM's PC bios, it was not exactly a new idea. Despite IBM's best efforts: that's what it was FOR.
I mirrored an old geocities site on CP/M history way back when that covers some of this:
http://landley.net/history/mirror/cpm/history.html
Sigh. I actually caught that while preparing the article and got the answer from Matthew. I put the footnote in the right place, but forgot to add the reference...one of those weeks. It's there now, sorry for the confusion.
- If the BIOS is really setting up a minimum OS, it needs to initialise all the managed devices (wait for SATA and USB enumeration to complete), work that needs to be completely forgotten and restarted by Linux a bit later (unless Linux trusts the EFI descriptions -- think of a USB3 PCI card on a USB2-aware EFI).
- If booting from CDROM/DVD still needs to work, everything this minimum OS has initialised has to be undone, and the memory below 1 MByte has to be untouched.
- If booting from removable (USB) devices shall still be supported, you still have the problem of telling Linux which device it has booted from (multiple possible root file-systems present).
- If EFI analyses partitions, it limits the partition schemes that can be used (removable devices).
- If EFI analyses file-systems, it limits the file-systems that can be used (removable devices).
- If EFI displays characters, it limits the fonts, font sizes and number of characters (UTF-8) that can be used.
- If EFI does more than put an ethernet frame on the wire and get an ethernet frame from the wire, and can do TCP, it means a lot of information has to be exchanged between EFI and Linux (IP address, DHCP data...).
- EFI does not solve the "the Linux kernel has been upgraded since the last boot" problem: shall we boot the older kernel because this is a resume operation, shall we boot the latest kernel because this is a full boot, or are we in recovery mode and should boot an older but known-to-work kernel?
OK, the "old BIOS" is not perfect, each version has some bugs, it does not provide a standard way to put an ethernet frame on the wire (ethernet cards no longer have a flash memory for their own BIOS driver), and you can't know which keyboard layout is installed, but being lower level it has proven to be quite flexible.
> - If EFI analyses file-systems, it limits the file-systems that can be used (removable devices)
- What if you have two disks, each with an EFI partition, neither of them being the primary disk (BIOS disk 0x80)?
- What if those disks are software RAID-0?
- What if those disks are RAID-1 and each contain only half of the files?
- What if each disk contains a different Linux distribution and needs to boot even when the other disk is not present (hard-disk swapping bay)?
- What if you have to use a special partition index to boot your USB thumb drive (BIOS restriction)?
- What if one disk is extremely unreliable (hardware failure or overlapping partitions) and you want to try to recover its content by booting another, clean distribution?
- What if there are bugs in the BIOS/EFI during the transition to 4-Kbyte hardware sectors? Is the partition table expressed in 512-byte or 4096-byte sectors?
- What if you have a 2048-bytes/sector disk (DVD-RAM) with Linux installed, but there cannot be partitions (Linux supports no partition table on CD/DVD, just "superfloppy")?
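The 512-versus-4096 question in the list above is concrete: GPT puts its header at LBA 1, so the byte offset where firmware probes for it depends entirely on which sector size it assumes. A minimal Python sketch (the "EFI PART" magic comes from the GPT specification; the probing function itself is hypothetical):

```python
GPT_SIGNATURE = b"EFI PART"  # magic at the start of the GPT header, located at LBA 1

def find_gpt_header(disk_bytes):
    """Probe both common logical sector sizes; return (sector_size, byte_offset)
    of the GPT header, or None if neither assumption finds the signature."""
    for size in (512, 4096):
        off = 1 * size  # LBA 1, whatever "LBA" means for this firmware
        if disk_bytes[off:off + 8] == GPT_SIGNATURE:
            return size, off
    return None
```

A disk partitioned under one sector-size assumption and read by firmware making the other assumption simply finds no "EFI PART" signature at all, which is exactly the kind of transition bug the question above worries about.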
The "first disk" is one place where the BIOS can be configured a bit: most of the time there is a boot menu where you can select which disk is the "first disk", and if that first disk is not readable (broken disk or any other reason), the next disk is tried.
Something nobody planned for 30 years ago is happening: disk sector sizes are increasing from 512 to 4096 bytes per sector.
With current systems you will hopefully still get the first sector loaded (for any sector size one can imagine), and one can hope that at least the first 512 bytes are loaded correctly; that should be sufficient for the bootloader to manage the situation (I tried to make that possible in Gujin, but cannot test it because those disks are still under NDA).
Now, I believe EFI will handle both 512 and 4096 bytes/sector at the SATA interface, but in a few years there may be something else - and because EFI is, IMHO, too "intelligent", booting will no longer just require upgrading a package in your distribution: you will need to upgrade your BIOS.
- for me, if the hardware can do it, Linux should try to support it in software (because we do not know what the future will bring).
- for you, if the number of people doing strange things is statistically insignificant, there is no point in trying to support it (because new PCs will include the evolution anyway, if it is good enough).
PC ROM BIOS
This is not to criticise kexec, which probably works perfectly; it is just about one use case: booting a different kernel release.
That yesterday's device has some configuration bits labelled "do not touch", i.e. bits that you should write back with the same value you read.
Having "do not touch" bits in configuration words is a usual way to plan for extensibility: those bits are currently zero, but in a future (backward-compatible) version of the device they may acquire a meaning, like enabling the super-duper new function.
So yesterday's kernel behaves perfectly well, preserving the value of the "do not touch" bits, and obviously does not have a driver for the super-duper function.
I am not saying this pattern happens often, nor that the system will always misbehave; I am just saying there is a risk.
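The read-modify-write discipline being described boils down to masking updates so that only the documented bits ever change. A rough sketch of the pattern (the register width, mask and function name here are invented for illustration):

```python
DEFINED_MASK = 0x0F  # bits today's driver is documented to own (hypothetical layout)

def update_register(old_value, new_bits):
    """Write back the 'do not touch' bits exactly as read;
    change only the bits covered by DEFINED_MASK."""
    reserved = old_value & ~DEFINED_MASK & 0xFF  # preserve reserved bits as read
    return reserved | (new_bits & DEFINED_MASK)  # apply update to defined bits only
```

An old kernel following this pattern on new hardware leaves a future "enable super-duper function" bit exactly as the firmware set it; a kernel that instead writes zeros to reserved bits would silently turn the function off.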
I think it goes to show it would be simpler just to use Linux in place of DXE.
The Extensible Firmware Interface vs early Linux
You favor USB but other people/platforms favor something else. Anything else.
I can't tell what point posting the URL listing lots of serial communication protocols is supposed to make. Is this perhaps a response to my saying I don't know of anyone proposing a protocol other than USB for talking to a bootstrap-only device? Has someone proposed some other serial communications protocol for that? Is there one you would recommend?
A significant number of the interfaces in this list are used for booting. If you "simplify" your firmware design by hardcoding to any of these you will make users of the others unhappy.
A development effort to lose some of your customers?!
By the way, USB + managed flash is among the most expensive solutions; this matters for low-cost embedded systems.
How some do it is a little bit annoying; in my experience with Gujin, a few BIOSes check that the Windows MBR is present by checking that the bytes at offset 0x0C are identical to the Windows MBR ones.
IMHO, checking the first byte (either a jmp or a cli) when the 0xAA55 signature is present would be better.
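That first-byte test is small enough to sketch in Python (a hypothetical validity check, not any BIOS's actual code; the opcode values are the x86 short jmp 0xEB, near jmp 0xE9 and cli 0xFA, and the 0xAA55 signature is stored little-endian in the last two bytes of the 512-byte sector):

```python
def plausible_boot_sector(sector):
    """First-byte + signature test, instead of matching Windows MBR
    bytes at a fixed offset as some BIOSes do."""
    if len(sector) < 512:
        return False
    has_signature = sector[510:512] == b"\x55\xAA"       # 0xAA55, little-endian
    starts_with_code = sector[0] in (0xEB, 0xE9, 0xFA)   # jmp short, jmp near, cli
    return has_signature and starts_with_code
```

Such a check would accept any bootloader that starts with real code, rather than rejecting everything that is not byte-for-byte the Windows MBR.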
The complication in the boot sequence is that you may be running on a BIOS/EFI which only handles a previous generation (USB1 only on USB2 hardware, USB2 only on a USB3 PCIe card), so you know where (at which PCI address) the device you are booting from is, but under Linux that PCI address does not exist because the full USB chipset is supported.
Handling these corner cases (as I tried to do in Gujin) is not really easy.
Well, they first have to recognise that the USB disk's first sector contains executable code, or else they will often jump into a long run of zero instructions.
At the bootloader level you do not have "USB", you have OHCI, UHCI, EHCI or xHCI - and more in the embedded world.
I would say checking that the beginning of the MBR is not completely blank is still acceptable, to allow someone to boot without removing every USB stick plugged in - but checking for Windows MBR assembly crosses the line.
Note that in your scheme, you still want to simply write your Linux kernel to the USB disk (and not do "cat kernel > /dev/sdb"), so you need a bootloader which will analyse the partitions and the filesystem inside them - and once the bootloader is running, it still needs access to the underlying device to read that file.
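Reading the partition table is the first step of that analysis. A minimal sketch of parsing the four 16-byte MBR entries at offset 0x1BE (the layout is the classic MBR format: boot flag at byte 0, type at byte 4, little-endian start LBA and sector count at bytes 8-15; the function name is mine):

```python
import struct

def mbr_partitions(sector):
    """Yield (bootable, type, start_lba, sector_count) for each
    non-empty entry of a 512-byte MBR sector."""
    for i in range(4):
        entry = sector[0x1BE + 16 * i : 0x1BE + 16 * (i + 1)]
        boot_flag, ptype = entry[0], entry[4]
        start_lba, count = struct.unpack_from("<II", entry, 8)
        if ptype != 0:  # type 0 marks an unused slot
            yield boot_flag == 0x80, ptype, start_lba, count
```

From the start LBA of the chosen partition, the bootloader can then locate the filesystem superblock and walk it to find the kernel image.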
> It is intentionally too dumb to choose a boot device.
In a real PC you buy today, the BIOS is doing a lot more than that; that is why people have so many problems booting off USB.
After a lot of Gujin debugging, I can tell you that (for instance) the Eee PC 900 BIOS will not report that it has an extended BIOS disk service interface if it detects the 0x29 signature at offset 38. Also, the "HP Compaq 8000 Elite" PC will crash if it is booted from a USB device which does not contain the 0x1F 0xFC 0xBE signature at offset 0xB (the Windows MBR opcodes).
There is a reason why Grub now has the Windows MBR opcodes at the beginning of its MBR...
Also, I do not know of a single BIOS which will not test the 0xAA55 signature at the end of the MBR, and I have found BIOSes which will only boot if there is a single partition on the device and it is the fourth one - otherwise the ROM code decides (without displaying any meaningful message) that the device is not bootable, and the next device is tried.
(I think that was the test for ATAPI 100 Mbyte floppies.)
The BIOS should be dumb, but in practice it does look at the MBR content, and because "there are no more floppies" the only test case is: does Windows boot?
The BIOS should be dumb, but in practice it [isn't].