The microkernel overhead
FOSDEM 2012

Ever since the famous [Tanenbaum-Torvalds debate](http://en.wikipedia.org/wiki/Tanenbaum–Torvalds_debate), the general public has stuck to a golden rule of thumb: microkernel systems, nice and elegant as they are, are just academic toys. Due to the infamous communication overhead and other self-imposed limitations, they will never match the good old monolithic systems in performance for general use. Since the 1990s many researchers (especially the people around L4) have struggled to lower the overhead using the most extraordinary tricks. Others have applied the microkernel design to mission-critical, safety-critical and other niche targets, where its benefits clearly outweigh the drawbacks. Still others have spent years creating hybrid systems to get the best (and hopefully not the worst) of both worlds. But are the drawbacks of microkernels really fundamental?

The way computers are designed, the way programmers think and the way the IT economy works have changed profoundly over the last 20 years. We no longer try hard to save every single CPU cycle and every single byte of RAM in every single routine. We acknowledge that spending 20 % more on a faster CPU and more RAM to run intelligently designed software is a better idea than spending 20 % more each year on maintaining software full of ugly performance hacks and quirks. Our machines are massively concurrent and we tend to (or are forced to) think more in terms of effective parallel algorithms than plain sequential throughput. So perhaps it is time to reconsider the true impact of the microkernel overhead given the present conditions and requirements.

Key topics:

- Reasons for the microkernel overhead
- Qualitative and quantitative analysis of the overhead
- Ways to minimize it
- Ways to live with it
- Ways to embrace it
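For an intuitive feel for the first topic, the sketch below is a rough, user-space illustration (not taken from the talk, and not how L4, HelenOS or any real microkernel implements IPC): it contrasts a direct in-process function call with a message-passing round trip between two POSIX processes over pipes. The round trip forces kernel entries and scheduler activity on every request, which is the essence of the communication overhead under discussion; all names and the trivial "service" are purely illustrative.

```c
/* Illustrative only: compares a direct function call with a request/reply
 * round trip over pipes between a client and a forked "server" process.
 * The absolute numbers are meaningless; the gap in orders of magnitude
 * is the point. Build with: cc -O2 ipc_overhead.c -o ipc_overhead */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <time.h>

#define ROUNDS 100000

static volatile int sink;

/* The "monolithic" path: the service lives in the caller's address space. */
static int direct_service(int x) { return x + 1; }

static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    int to_child[2], to_parent[2];
    if (pipe(to_child) < 0 || pipe(to_parent) < 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                         /* the "server" process */
        int x;
        while (read(to_child[0], &x, sizeof(x)) == (ssize_t) sizeof(x)) {
            x += 1;                         /* the same trivial service */
            if (write(to_parent[1], &x, sizeof(x)) != (ssize_t) sizeof(x))
                break;
        }
        _exit(0);
    }

    /* Direct calls: no kernel involvement at all. */
    double t0 = now();
    for (int i = 0; i < ROUNDS; i++)
        sink = direct_service(i);
    double t1 = now();

    /* Message passing: every request crosses the kernel at least twice. */
    for (int i = 0; i < ROUNDS; i++) {
        int x = i, r;
        write(to_child[1], &x, sizeof(x));
        read(to_parent[0], &r, sizeof(r));
        sink = r;
    }
    double t2 = now();

    close(to_child[1]);                     /* let the server terminate */

    printf("direct call:     %8.0f ns/op\n", (t1 - t0) / ROUNDS * 1e9);
    printf("pipe round trip: %8.0f ns/op\n", (t2 - t1) / ROUNDS * 1e9);
    return 0;
}
```

On typical hardware the pipe round trip costs a few microseconds per request versus a few nanoseconds for the direct call; real microkernel IPC paths are heavily optimized and sit well below the pipe figure, which is precisely the trade-off the talk examines.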

Speakers: Martin Děcký