I had some ideas, so here goes. Please flame my gross inaccuracies.
The Kernel Of The Future, John Tate.
Currently we have hybrid kernels, monokernels, and microkernels, with at least one very good implementation of each, and they all hold good ideas. The purpose of this short article is to look at another kernel architecture based around an Open Source model, at least at the API level. It also draws on a number of ideas brought in by the L4 microkernel and by Microsoft's .NET strategy.
Microkernels work by putting as little as possible in the kernel, and they have evolved quite a lot over the years. Currently the best implementation around is the L4 microkernel. It works by having servers that perform certain tasks: the kernel itself handles only basic process, memory, and hardware-address permission work, while servers running in user space handle hardware, networking, and the other things that were decided don't really belong in kernel space. This gives a far smaller kernel footprint as well as stability benefits.
The stability benefits are fairly obvious: if something handling a device has a serious crash, it brings down only itself, not the entire system. If there is a security flaw in the networking stack, a single server can simply be restarted instead of updating the entire kernel and rebooting. On top of that, new features can be added to the system easily.
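To make the server idea concrete, here is a toy sketch in C of the receive/dispatch loop such a user-space driver server runs. The message layout, opcodes, and function names are all hypothetical, not the real L4 ABI; the point is that a bad request fails only that one call, and a crash takes down only this server.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical IPC message as an L4-style server might receive it.
 * Field names and opcodes are illustrative, not any real kernel's ABI. */
typedef struct {
    int opcode;           /* which operation the client wants */
    char payload[64];     /* request or reply data */
} ipc_msg;

enum { OP_READ = 1, OP_WRITE = 2 };

/* One iteration of a user-space driver server's dispatch loop.
 * In a real system this sits inside an endless receive/reply loop;
 * if the server crashes, only it needs restarting, not the kernel. */
static int handle_msg(const ipc_msg *msg, ipc_msg *reply)
{
    switch (msg->opcode) {
    case OP_READ:
        reply->opcode = OP_READ;
        strcpy(reply->payload, "data");   /* pretend: read from device */
        return 0;
    case OP_WRITE:
        reply->opcode = OP_WRITE;         /* pretend: write to device */
        return 0;
    default:
        return -1;   /* unknown request: fail this call, not the system */
    }
}
```

A real server would wrap `handle_msg` in a loop that blocks on an IPC receive and sends the reply back to the caller.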
Monokernels, on the other hand, put a lot of features in the kernel, and this has performance benefits because they do not rely on communication between the kernel and servers the way microkernel systems do. The kernel itself is a lot more complex but the system surrounding it is simplified; a microkernel has a simple kernel and complex systems surrounding it.
Complexity and simplicity trade off in both directions. Microsoft Windows 5.x (2000, XP, 2003 Server) has very complex internals based on a hybrid kernel design, a very powerful set of API features (COM, COM+), and recently an even more complex runtime/API (.NET), all of which give the user a simplified way of handling the system. Most typical UNIX systems have a simplified design that gives the user a more complex way of handling the system. Which is better is almost impossible to say, because it depends so much on the user themselves.
Because microkernels do not depend on as many system calls for tasks, they can potentially provide great desktop performance: the servers can be preempted easily by the kernel, unlike most other kernels where system calls interrupt everything to complete their task. Certain tasks, like file system operations that take longer than expected, can thus slow down the entire system. The Linux kernel, unlike any other kernel I am aware of, can preempt system calls, and in doing so has proven that this can greatly increase a system's performance.
The problem with microkernel servers, as opposed to system calls in a monokernel, is that they carry a performance cost. In older microkernels such as Mach, on which GNU Hurd is built, the performance loss can sit at around 40-60% because far too much overhead is spent checking IPC. The L3 and L4 microkernels, built in reaction to Mach's slow IPC, steadily improved this; current L4 implementations only lose around 5% of the performance.
I propose a dynamic micro/hybrid kernel that can use both system calls and IPC/servers to handle certain tasks. The idea is that modular code is written so it can run either independently (or semi-independently, with some kind of server module loading system), or as part of the kernel (like a Linux kernel module) accessed by system calls, without needing to reload the entire kernel to switch between the two.
This would provide the best of both worlds. A system used as a high-performance network file server could have its file system and networking stack loaded as kernel modules accessed by system calls, while servers handle less important features such as graphics, accessed by IPC. Likewise, someone who wants high-performance games could have the kernel load a module for graphics and memory mapping while other features are loaded as servers.
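A rough sketch of what such dual-mode modules might look like in C: the module exposes one table of operations, and the system installs it under either mode without the module code changing. Every name here (`fs_module`, `bind_module`, the ramfs stub) is hypothetical, invented purely to illustrate the proposal.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical interface for a module that can run either linked into
 * the kernel or as a stand-alone server. Names are illustrative only. */
typedef struct {
    const char *name;
    int (*open)(const char *path);
    int (*read)(int fd, void *buf, size_t len);
} fs_module;

enum mod_mode { MODE_IN_KERNEL, MODE_SERVER };

/* How the system records where a module currently lives. */
typedef struct {
    fs_module *mod;
    enum mod_mode mode;
} mod_binding;

/* A toy in-memory file system implementing the interface. */
static int ram_open(const char *path) { (void)path; return 3; }
static int ram_read(int fd, void *buf, size_t len)
{
    (void)fd; (void)buf;
    return (int)len;   /* pretend we filled the buffer */
}
static fs_module ramfs = { "ramfs", ram_open, ram_read };

/* Installing the same module code in either mode is just a matter of
 * which binding the system creates; the module itself is unchanged. */
static mod_binding bind_module(fs_module *m, enum mod_mode mode)
{
    mod_binding b = { m, mode };
    return b;
}
```

The file-server configuration would call `bind_module(&ramfs, MODE_IN_KERNEL)`, the desktop configuration `bind_module(&ramfs, MODE_SERVER)`, with the same compiled module either way.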
Of course this would make the systems that support userland and the rest of the operating system more complex: a program using the network stack would need to know whether it should be using IPC to a server or system calls to the kernel. One way around this would be an advanced programming library that splits the operating system's API tasks into modes, with the API itself deciding what to use based on preset variables. Alternatively, the kernel could be given the extra job of handling API calls for low-level features through a set of virtual system calls: if the feature is running as a server the kernel handles the IPC, and otherwise the virtual system call is overridden with a real one, removing a further "checking" bottleneck from system calls.
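The "override" idea can be sketched as a single indirection in C. The API calls through a function pointer; while the feature runs as a server the pointer targets an IPC stub, and when the module moves into the kernel the pointer is swapped for the direct path, so no per-call "which mode?" check remains. All names and both stub bodies are hypothetical.

```c
#include <assert.h>

/* Stub representing the IPC path: marshal the buffer into a message
 * and send it to the networking server. (Body is a placeholder.) */
static int net_send_ipc(const void *buf, int len)
{
    (void)buf;
    return len;
}

/* Stub representing the direct path: trap straight into the kernel's
 * networking module. (Body is a placeholder.) */
static int net_send_syscall(const void *buf, int len)
{
    (void)buf;
    return len;
}

/* The single indirection the API library exposes. It starts on the
 * IPC path because the network stack boots as a server. */
static int (*net_send)(const void *buf, int len) = net_send_ipc;

/* Called once when the networking module is loaded into the kernel:
 * the "virtual system call" is overridden with the real one. */
static void net_use_kernel_path(void)
{
    net_send = net_send_syscall;
}
```

Applications only ever call `net_send`; whether that resolves to IPC or a system call is decided once, at module-load time, rather than on every call.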