A common misconception about the Linux kernel is that it's secure, or that one can go a long time without worrying about kernel security updates. Neither of these is even remotely true. New versions of Linux are released almost every week, often containing security fixes buried among the many other changes. These releases typically don't make explicit mention of the changes having security implications. As a result, many "stable" or "LTS" distributions don't know which commits should be backported to their old kernels, or even that something needs backporting at all. If the problem has a public CVE assigned to it, maybe your distro will pick it up. Maybe not. Even when a CVE exists, users of Ubuntu and Debian especially are often left with kernels full of known holes for months at a time. Arch doesn't play the backporting game, instead opting to provide the newest stable releases shortly after they come out.
@emacsomancer Things like this are a big part of why I get excited about microkernels and the idea of exokernels (iirc, there's only been the one made for research). The kernel only matters so much for security because we made it responsible for doing literally everything remotely useful.
@architect The idea being that some of these things would be part of userland or some other non-kernel level and so the security implications would not be as great?
Or that dividing things up differently would make certain types of security-implicating bugs less likely (by reducing certain types of complexity)?
Or something else?
It prevents bugs in, say, the network stack from enabling a direct jump to a root shell, because those parts of the system aren't even in the same binary. So while security issues would still be serious issues that need to be fixed, you'd have some degree of isolation purely from the realities of the implementation. It also means that while any given service could go down, you won't get a kernel panic if your filesystem driver chokes or a kernel module crashes.
@emacsomancer Exokernels take this concept even further by basically only being responsible for booting the system, then they let other programs handle everything else like bringing up the filesystems, starting processes, etc.
In this model, Linux could be implemented as a library to allow execution of Linux binaries, and the same could be done for other operating systems.
It's pretty basic, but the Wikipedia page is a good place to start for more info: https://en.wikipedia.org/wiki/Exokernel
@emacsomancer @vertigo I don't recall if there's been much work on Linux unikernels, but they'd fit into an exokernel system quite easily, as their whole design is to essentially link the kernel and the necessary programs into a single process started after boot.
Even without that work though, the existing Linux compat layers in use could be used to create a "liblinux" that provides the expected syscall interfaces.
@freakazoid @vertigo @emacsomancer The boundary for these things is a bit fuzzy, but the Xen hypervisor is a microkernel, iirc. Since its focus is virtualization, it could probably be reworked into an exokernel, though from what I recall reading, an exokernel wouldn't need virtualization for anything other than running non-native CPU architectures. Without that need, each "guest OS" is just another process, no different from ed or nginx, just more complex.
I'm possibly slightly biased (being a long time user) but I like the #GrapheneOS roadmap for progressing in a sensible direction, while leveraging the usability/security gained from the massive work going into #Android and the ecosystem of apps & devices around it
@dazinism I know *why* Graphene chooses Pixels. Unfortunately, that still doesn't make Pixels super great devices for my needs. (No card slot, 128GB max internal storage - doesn't work for me.)
I'm also compelled to use apps that require Google Play Services. Running LineageOS+microG ends up being the optimal solution for me (and I can use it on e.g. devices with sdcard storage or 512GB internal).
Graphene has just implemented a shim so you can install all the Google Play stuff as normal unprivileged sandboxed apps and still get most of the functionality.
You can even install them in a secondary user profile on the phone and just use apps there. Graphene has the unique feature of being able to kill secondary users (and flush encryption keys from memory) without having to restart the phone!
@dazinism That's pretty cool! When the upcoming Pixel 6 Pro is supported by Graphene, it might be usable for me. (The rumoured specs of the Pixel 6 Pro are almost as good as my current 2019 phone.)
It will provide more complete functionality than microG, and will avoid the large amount of work needed to port microG to a new Android version (it takes months), which holds up the OS being able to move to the latest full updates.
@dazinism Unfortunately, I need the data to be device-local and the device to be "pocket-friendly", so neither a server-based nor thumb-drive-based solution is feasible.
Hopefully the Pixel 6 (Plus, probably) will end up being a reasonable device.
The shim approach sounds pragmatic, though the disadvantage would be that Google Play Services would still be running, and I find it's a power hog compared to microG.
@emacsomancer In short, the concept of a "stable" distribution, and the idea that "old releases are more secure", is the most dangerous lie that Debian turned into a widespread belief.
Yes, like in Exim's [21Nails](https://www.qualys.com/2021/05/04/21nails/21nails.txt) you have remote exploits because some commits were not recognized as security fixes to be backported. Apparently the same applies to the Linux kernel.