I mentioned in passing during my Linux.Conf.Au talk that we were looking at moving portions of the video drivers into the kernel. Others, including Alan Coopersmith, have started to chime in on this, and I thought I should write a bit more about what I was thinking.
At least a few of the goals are as follows:
- BIOS-free Suspend/Resume support
- Text-mode panic
- Flicker-free graphical boot
- Fancy animated user switching
A significant non-goal here is an in-kernel unified API for graphics acceleration. Our current high-level architecture for rendering is in good shape, with user-mode filling a ring buffer with hardware-specific commands and the kernel just dispatching buffers.
OK, so as you can see from the list above, I'm just talking about device detection, configuration, and video mode selection.
Why this sudden desire to move video mode selection into the kernel? Well, for years, we were told by several video card vendors that there was "no way" we could ever figure out how to do mode setting without using the BIOS. If you believe you must use the BIOS, it's difficult to see how that can be done from within the kernel; executing the BIOS requires either an x86 emulator or vm86 support, neither of which belong in the kernel. For a long time, I believed the video chip vendors. Silly me.
Several people, including Luc Verhaegen, had been moving BIOS-based drivers towards native mode setting. While the BIOS was a nice crutch, real support requires us to follow other operating systems and program the hardware directly. It's the only way to get at the full range of hardware capabilities, although it does require a lot more code. And, some machines will not quite work right until magic tweaks are added. On balance, the number of machines fixed is greater than the number of machines broken (by a huge margin, given the lack of BIOS support for the native panel mode support in many laptops).
Now, if you have to have BIOS support for video mode selection, you have to wait for user mode to wake up before you can program the video card. There's not a lot you can do early in the kernel, short of some joyous adventure involving initrd images that include a complete X server. Ick.
Once you embrace native mode setting, suddenly the range of options opens up and you can think about what might be possible if this code were moved down into the kernel. Of course, the first thought is how little you can move.
So, with that, we can start enumerating what capabilities must be in kernel mode to solve the list of problems above.
First, to get BIOS-free suspend/resume support working, we already require the ability to save and restore the entire graphics state, including synchronizing with applications performing graphical operations. For video systems involving external chips (most, these days), this also means saving and restoring the state of those chips.
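To make the save/restore idea concrete, here is a minimal sketch in C. The register offsets and chip layout are entirely invented for illustration; a real driver saves whatever state its particular hardware (and any external chips) requires, in whatever order the hardware demands.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative only: a stand-in for a small bank of memory-mapped
 * graphics registers. Real hardware has far more state than this. */
#define NREGS 4
static uint32_t mmio[NREGS];

struct gfx_state {
    uint32_t regs[NREGS];
};

/* Snapshot the graphics state before suspend. */
static void gfx_save(struct gfx_state *s)
{
    memcpy(s->regs, mmio, sizeof s->regs);
}

/* Reprogram the hardware on resume. A real driver would also
 * synchronize with rendering clients, wait for the pipeline to
 * drain, and restore PLLs and external chips in the required order. */
static void gfx_restore(const struct gfx_state *s)
{
    memcpy(mmio, s->regs, sizeof s->regs);
}
```

The point is that once this code exists at all, putting it in the kernel lets resume happen before any user-mode process runs.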
While we may mock Windows and the BSOD, many people would be far happier with that than the current state of an Xorg system where kernel panics are not announced at all; instead, the screen simply freezes and the user has no idea what has happened. Even the ability to escape from this state is limited to those with the secret handshake knowledge. Getting back to a simple text mode and displaying the panic message requires the ability to program a fixed mode from within the kernel, and the ability to lock out other users of the graphics device (perhaps waiting for the hardware pipeline to drain, if necessary).
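The panic path might look something like the following sketch. Every name here is hypothetical; the shape is simply the sequence described above: lock out other users of the device, drain the pipeline, program a known-good fixed mode, and write the message.

```c
#include <stdio.h>

/* Illustrative only: invented names and a fake 80-column text screen. */
enum gfx_owner { OWNER_USER, OWNER_KERNEL };
static enum gfx_owner owner = OWNER_USER;
static char text_screen[81];

static void gfx_set_text_mode(void)
{
    /* Program a fixed, known-good text mode here. This is exactly
     * the operation that requires native (non-BIOS) mode setting
     * code to live in the kernel. */
}

static void panic_display(const char *msg)
{
    owner = OWNER_KERNEL;   /* lock out user-mode graphics clients */
    /* wait for the hardware pipeline to drain here, if necessary */
    gfx_set_text_mode();
    snprintf(text_screen, sizeof text_screen, "panic: %s", msg);
}
```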
Eliminating the screen flashing during boot-up will require the ability to set the desired final graphical mode as soon as the kernel is running (or, even from the boot loader?). I suggest that the precise target mode can be computed during system installation time (and changed for next boot by the user session), so this only requires the ability to read the desired mode configuration from somewhere at boot time. Kristian Høgsberg has been working on an early switch to the final graphical mode, but hasn't managed to eliminate the flashing.
I envision fancy user switching being implemented by a transition from one X server to another rather than trying to get two user sessions running inside the same X server. Right now, we switch users with a VT switch where the first session switches back to text mode and the second switches back to graphical mode. Flash, flash. Having the kernel recognise that the two X servers are using the same mode will eliminate the flashing nicely; the only remaining issue is how to animate from one session to the next. I think that's largely a matter of being able to allocate multiple front buffers and creating an animation client that uses two X server front buffer images to construct the intermediate representations.
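The intermediate representations could be as simple as a crossfade between the two front buffers. A sketch of that computation, assuming a simplified 8-bit-per-pixel buffer layout (real buffers would be blended per channel, ideally by the GPU):

```c
#include <stdint.h>

/* Blend two front buffers to produce one intermediate frame.
 * step ranges from 0 (entirely 'from') to nsteps (entirely 'to'). */
static void crossfade(const uint8_t *from, const uint8_t *to,
                      uint8_t *out, int npix, int step, int nsteps)
{
    for (int i = 0; i < npix; i++)
        out[i] = (uint8_t)((from[i] * (nsteps - step) +
                            to[i] * step) / nsteps);
}
```

An animation client would call this (or issue the equivalent GPU commands) once per frame while walking step from 0 to nsteps.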
Finally, building a kernel API for all of this has become possible because we've all come to recognise that there are commonalities in mode selection across video hardware. The Intel driver had some separation between CRTC and output when we started working on it. Luc's Via driver has been moving this direction for some time. While the Radeon driver uses a different approach (it sets everything on the card every time you touch anything), it too recognises the distinction between CRTC and Output. Matthew Tippett suggested that even the closed source ATI driver worked this way as he, along with Kevin Martin, redesigned RandR 1.2 to work this way.
Because we now have a reasonably common abstraction across a wide range of drivers, it now seems tractable to produce a common API. We've already started this inside the X server itself; the hope is that working on the common layer there will inform choices about how the kernel API should look. This will include:
- Mode setting primitives
- GPU ring buffer management
- Memory management of some kind
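As a rough guess at the shape such an API might take, built around the CRTC/output split described above, something like the following. These names and structures are illustrative only, not an actual kernel interface: the CRTC generates timings from a mode, and outputs route those timings to monitors.

```c
#include <stdint.h>

/* Hypothetical mode description, in the usual timing terms. */
struct mode {
    uint32_t clock;              /* pixel clock, in kHz */
    uint16_t hdisplay, htotal;   /* active and total horizontal pixels */
    uint16_t vdisplay, vtotal;   /* active and total scanlines */
};

/* Per-CRTC operations a driver would supply. */
struct crtc_ops {
    int  (*mode_set)(struct mode *m);
    void (*dpms)(int state);
};

/* Vertical refresh in Hz, derived from the timings:
 * clock (Hz) divided by the total pixels per frame. */
static int mode_vrefresh(const struct mode *m)
{
    return (int)(m->clock * 1000UL /
                 ((uint32_t)m->htotal * m->vtotal));
}
```

The interesting design question is exactly where this layer sits relative to the existing DRM drivers, which the next paragraph gets at.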
Yes, this makes modesetting just an addition to the existing DRM drivers; they're responsible for the GPU and memory management stuff, and the modesetting driver needs both of those bits to perform the operations listed above, so it cannot be 'underneath' the DRM driver. We welcome our new DRM overlords.