A common misconception about the Linux kernel is that it's secure, or that one can go a long time without worrying about kernel security updates. Neither of these is even remotely true. New versions of Linux are released almost every week, often containing security fixes buried among the many other changes. These releases typically don't make explicit mention of the changes having security implications. As a result, many "stable" or "LTS" distributions don't know which commits should be backported to their old kernels, or even that something needs backporting at all. If the problem has a public CVE assigned to it, maybe your distro will pick it up. Maybe not. Even when a CVE exists, in the case of Ubuntu and Debian especially, users are often left with kernels full of known holes for months at a time. Arch doesn't play the backporting game, instead opting to provide the newest stable releases shortly after they come out.


(this post brought to you by copying-and-pasting from the source webpage html into an emacs buffer and running pandoc.el to convert it to markdown and then pasting the result into mastodon)

@emacsomancer Things like this are a big part of why I get excited about microkernels and the idea of exokernels (iirc, there's only been the one made for research). The kernel only matters so much for security because we made it responsible for doing literally everything remotely useful.

@architect The idea being that some of these things would be part of userland or some other non-kernel level and so the security implications would not be as great?

Or that dividing things up differently would make certain types of security-implicating bugs less likely (by reducing certain kinds of complexity)?

Or something else?

@emacsomancer yes.
It prevents bugs in, say, the network stack from enabling a direct jump to a root shell, because those parts of the system aren't even in the same binary. So while security issues would still be serious problems that need to be fixed, you'd get some degree of isolation purely from the realities of the implementation. It also means that while any given service could go down, you won't get a kernel panic if your filesystem driver chokes or a kernel module crashes.
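A toy sketch of that isolation, with Python processes standing in for microkernel servers (all names here are invented for illustration): the "driver" runs as a separate OS process, so when it crashes the supervising side simply observes the failure and restarts it, rather than the whole system going down.

```python
# Hypothetical sketch: a flaky "filesystem driver" runs as its own
# process; its crash is contained and the supervisor keeps running.
import subprocess
import sys

# Stand-in for a driver binary: comes up, then dies with an error code.
DRIVER = "import sys; print('mounted'); sys.exit(3)"

def supervise():
    """Start the driver, notice it died, restart it once.
    Returns how many times the driver was (re)started."""
    starts = 0
    for _ in range(2):  # initial start, then one restart after the crash
        proc = subprocess.run([sys.executable, "-c", DRIVER],
                              capture_output=True, text=True)
        starts += 1
        assert proc.stdout.strip() == "mounted"  # driver came up
        assert proc.returncode != 0              # driver crashed...
    return starts                                # ...but we're still here

if __name__ == "__main__":
    print("driver starts:", supervise())
```

In a monolithic kernel the equivalent failure happens in the kernel's own address space, which is why it can take the whole machine with it.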

@emacsomancer If memory serves, this is a good talk by Andrew Tanenbaum on how these sorts of things are mitigated in MINIX3:

@emacsomancer Exokernels take this concept even further by basically only being responsible for booting the system, then they let other programs handle everything else like bringing up the filesystems, starting processes, etc.
In this model, Linux could be implemented as a library to allow execution of Linux binaries, and the same could be done for other operating systems.
It's pretty basic, but the Wikipedia page is a good place to start for more info:

@architect Do you think there's enough interest in the Linux kernel development community to make this anything that's likely to be feasible any time soon? Or would it take some sort of major security fail to provide impetus for a shift of this kind?


@emacsomancer @vertigo I don't recall if there's been much work on Linux unikernels, but they'd fit into an exokernel system quite easily, as their whole design is essentially to put the kernel and necessary programs into a single process started after boot.
Even without that work though, the existing Linux compat layers in use could be used to create a "liblinux" that provides the expected syscall interfaces.

@emacsomancer @architect @vertigo In paravirtualization mode I guess it might qualify, since IIRC the guest runs in user mode in that case. But the way exokernels provide access to the hardware is the interesting part, and I don’t think Xen qualifies in that regard.

@architect @vertigo @emacsomancer For example, an exokernel can multiplex a single block device and a single network card without virtualizing it, by using BPF-like filters. Xen doesn’t do that AFAIK.
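The multiplexing idea can be sketched in a few lines (plain Python predicates stand in for real BPF programs, and the dict "packets" are invented for illustration): each application installs a filter, and the kernel routes every incoming frame to the matching owner's queue instead of presenting each application with a virtual NIC.

```python
# Rough sketch of exokernel-style demultiplexing: applications install
# packet filters (predicates standing in for BPF programs), and incoming
# frames are routed to the first matching owner rather than virtualized.
class PacketDemux:
    def __init__(self):
        self._filters = []   # list of (predicate, queue), one per app

    def install_filter(self, predicate):
        """An application claims the traffic its predicate matches;
        returns the queue its packets will land in."""
        queue = []
        self._filters.append((predicate, queue))
        return queue

    def deliver(self, packet):
        """Route one frame to its owner; True if someone claimed it."""
        for predicate, queue in self._filters:
            if predicate(packet):
                queue.append(packet)
                return True
        return False  # no owner installed a matching filter: drop

demux = PacketDemux()
web = demux.install_filter(lambda p: p["dport"] == 80)
dns = demux.install_filter(lambda p: p["dport"] == 53)
demux.deliver({"dport": 80, "data": b"GET /"})   # lands in web's queue
demux.deliver({"dport": 53, "data": b"query"})   # lands in dns's queue
```

The point is that one physical device serves many untrusted applications directly, with the kernel only enforcing who owns which traffic.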

@freakazoid @vertigo @emacsomancer The boundary for these things is a bit fuzzy, but the Xen hypervisor is a microkernel, iirc. Since its focus is virtualization, it could probably be switched to an exokernel, though from what I recall reading, an exokernel wouldn't need virtualization for anything other than running non-native CPU architectures. Without that need, each "guest OS" is just another process, no different from ed or nginx, just more complex.


I'm possibly slightly biased (being a long time user) but I like the #GrapheneOS roadmap for progressing in a sensible direction, while leveraging the usability/security gained from the massive work going into #Android and the ecosystem of apps & devices around it

Graphene currently only works on Pixels - Google (and others) put a load of work into fixing security issues in the kernels of devices that still get updates
@freakazoid @vertigo @emacsomancer

@dazinism I know *why* Graphene chooses Pixels. Unfortunately, that still doesn't make Pixels super great devices for my needs. (No card slot, 128GB max internal storage - doesn't work for me.)

I'm also compelled to use apps that require Google Play Services. Running LineageOS+microG ends up being the optimal solution for me (and I can use it on e.g. devices with sdcard storage or 512GB internal).

@architect @freakazoid @vertigo


Graphene has just implemented a shim so you can install all the Google Play stuff as normal unprivileged sandboxed apps and still get most of the functionality.

You can even install them in a secondary user profile on the phone and just use apps there. Graphene has the unique feature of being able to kill secondary users (and flush encryption keys from memory) without having to restart the phone!

It's pretty revolutionary
@architect @freakazoid @vertigo

@dazinism @emacsomancer @architect @freakazoid Maybe I'm a bit daft, but looking through the conversation thread, I fail to see what motivated a topic-change from a conversation of unikernels and exokernels to Android.

Is it possible to remove me from future posts on this thread? Thanks!

Soz - although the link to the GrapheneOS roadmap explains the plans to gradually move from the Linux kernel to a microkernel architecture with a Linux compatibility layer. @emacsomancer @architect @freakazoid

@dazinism That's pretty cool! When the upcoming Pixel 6 Pro is supported by Graphene, it might be usable for me. (The rumoured specs of the Pixel 6 Pro are almost as good as my current 2019 phone.)


It will provide more complete functionality than microG, and it avoids the large amount of work needed to port microG to a new version (which takes months), which holds up the OS from moving to the latest full updates.

P.S. Not ideal, but you can use a USB thumb drive or even some external hard drives if you need more than what's available on the device and can't just keep stuff on a server somewhere.
@architect @freakazoid @vertigo

@dazinism Unfortunately, I need the data to be device-local and the device to be "pocket-friendly", so neither a server-based nor thumb-drive-based solution is feasible.

Hopefully the Pixel 6 (Plus, probably) will end up being a reasonable device.

The shim-approach sounds pragmatic, though the disadvantage would be that Google Play Services would still be running, and I find it's a power-hog compared to microG.

@emacsomancer in short, the concept of a "stable" distribution, and the idea that "old releases are more secure", is the most dangerous lie that Debian made into a widespread belief.

Yes, as with Exim [21Nails]( you get remote exploits because some commits were not recognized as security fixes to be backported. Apparently the same applies to the Linux kernel.

@hxd @steko

It seems like the optimal model may be frequent updates to everything, with some sort of snapshotting at the machine level so the user can quickly roll back breaking changes if necessary.

@emacsomancer @hxd if I understand correctly, this is what Nix and Guix do.

@steko @hxd

It is. One could also use ZFS or BTRFS snapshotting.
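The snapshot/rollback workflow can be sketched in a few lines (the actual copy-on-write mechanics of ZFS/BTRFS are far more involved; the class and state here are invented purely to show the idea): take a named snapshot before an update, and restore it if the update breaks something.

```python
# Minimal sketch of the snapshot-before-update workflow.
# Real ZFS/BTRFS snapshots are copy-on-write; a deepcopy stands in here.
import copy

class SnapshotStore:
    def __init__(self):
        self.state = {}    # the live "system state"
        self._snaps = {}   # snapshot name -> frozen copy of state

    def snapshot(self, name):
        self._snaps[name] = copy.deepcopy(self.state)

    def rollback(self, name):
        self.state = copy.deepcopy(self._snaps[name])

store = SnapshotStore()
store.state["kernel"] = "5.13.1"
store.snapshot("pre-update")
store.state["kernel"] = "5.13.2"   # the update breaks something...
store.rollback("pre-update")       # ...so roll back to the snapshot
```

With real filesystems this is roughly `zfs snapshot` / `zfs rollback` or `btrfs subvolume snapshot`, which is what makes the frequent-updates model safe in practice.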
