For the last few months, I've been working on-and-off on a new microkernel / RTOS that I'm calling K5. Naturally, your first question will be: why make a new RTOS when there are so many good ones already? In Rust alone there are Hubris, TockOS, MnemOS, Embassy, RTIC, and a bunch of others I haven't listed. K5 has a few unique goals:

  1. It aims to target microcontrollers, low-power SOCs, and crossover MCUs. Many RTOSes aim to target just one of these groups.
  2. K5 has strict isolation between tasks and the kernel. Many RTOSes targeting microcontrollers share a single address space between every task and the kernel. K5 requires strict separation between applications, so bugs in one component will not affect others.
  3. K5 is a microkernel with a capability system based on seL4. Drivers run entirely in userspace, and all interactions between tasks are mediated through capabilities. This is a key difference from an RTOS like Zephyr, where drivers run in kernel space.
  4. K5 aims to have the best possible developer experience on a wide variety of processors. Rust developers largely expect their projects to “just build”, and K5 aims to continue that.
  5. Last but not least, K5 aims to be verified with various formal verification methods. Using mostly safe Rust helps a good deal here, but there are all sorts of other things we can verify. It would be fun to verify an upper bound on task-switching time, for example.

Design

K5 is somewhere between a library OS and a more traditional operating system. K5 is BYOB (bring your own binary): you are responsible for bringing your own main function that starts a series of "tasks". A task is a collection of code that shares a single memory space. You can think of it like an app on your phone, or a little like a process in a POSIX OS. Right now tasks are laid out linearly in memory on MCUs, both in executable (flash) space and in RAM. On SOCs with MMUs, K5 will eventually (though does not currently) map tasks into their own virtual address spaces.
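To make the linear layout concrete, here is a rough sketch of the kind of per-task region table it implies. The type, the field names, and the addresses are all mine, made up for illustration; they are not K5's actual data structures.

// Illustrative only: a hypothetical descriptor for tasks packed
// back-to-back in flash and RAM on an MCU without an MMU. These are
// not K5's real types, and the addresses are made up.
#[derive(Clone, Copy)]
struct TaskRegion {
    flash_start: u32, // start of the task's code in flash
    flash_len: u32,   // size of its flash region, in bytes
    ram_start: u32,   // start of the task's RAM
    ram_len: u32,     // size of its RAM region, in bytes
}

// Two tasks laid out linearly: each begins where the previous one ends.
const TASKS: [TaskRegion; 2] = [
    TaskRegion { flash_start: 0x0800_4000, flash_len: 0x4000, ram_start: 0x2000_0000, ram_len: 0x1000 },
    TaskRegion { flash_start: 0x0800_8000, flash_len: 0x4000, ram_start: 0x2000_1000, ram_len: 0x1000 },
];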

Message Passing

K5's design is loosely based on the L4 family of kernels. Everything in K5 is done through message passing between tasks. Tasks can only send to each other if they hold a capability that references the target task's ID. This is the key to K5's permissions and security model. Capabilities are granted to tasks by the kernel. When a task wants to send a message to another, it gives K5 the capability along with a reference to some memory. K5 then either copies that memory or "loans" it out to the other task. A loan means the memory stays in the same physical place and the receiving task is granted access to it; on devices with an MPU, the region is mapped into the receiving task's memory space. Loans are a fantastic way to pass large buffers because no copying takes place at all! They do require a little bit of extra work on the driver's end.
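Conceptually, that gives the kernel two ways to hand message memory to a receiver. The little enum below is just my sketch of that distinction, not a real K5 type:

// Conceptual sketch only, not K5's actual kernel types: the two ways a
// message's memory can reach the receiving task.
enum Transfer<'a> {
    // The kernel copies the bytes into memory the receiver already owns.
    Copy(&'a [u8]),
    // The kernel leaves the bytes in place and grants the receiver access,
    // on MPU-equipped devices by mapping the region into its memory space.
    Loan(&'a mut [u8]),
}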

Interface Definitions

Microkernels necessitate lots of message passing, and you can easily get into a place where each driver has bespoke calling conventions. But bespoke calling conventions a friendly OS does not make. That's why most microkernels have an IDL, an interface definition language. Mach (which Darwin, the macOS kernel, is based on) has MIG, the Mach Interface Generator. Hubris has Idol. And K5 has Piton. Piton is in its early stages right now, but the general idea is to allow the generation of types and client/server impls from a unified source file.

struct "Test" {
 field "foo" "u16"
 field "bar" "u32"
 field "boolean" "bool"
 field "array" "[u8; 20]"
}

struct "Foo" {
 field "foo" "Test"
 field "bar" "Bar"
}

enum "Bar" {
  variant "test"
  variant "b" "u8"
}

service "Driver" {
   method "send" "Bar" "Foo"
   method "close"
}

Broadly, Piton will support a style of message passing where the client loans the server memory that contains both the request and space for the response. When the server is done, it hands the memory back to the client with the response written into it. Types will be verified using the
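To give a feel for where this is headed, here is a hypothetical sketch of the Rust that Piton might generate from the definitions above. None of these names are final, and the real generated code will certainly look different; it is only meant to show the shape of the types and the service interface, assuming the first type in a method is the request and the second is the reply.

// Hypothetical output only: the shape of code Piton could generate from
// the IDL above. The real generator may produce something quite different.
struct Test {
    foo: u16,
    bar: u32,
    boolean: bool,
    array: [u8; 20],
}

struct Foo {
    foo: Test,
    bar: Bar,
}

enum Bar {
    Test,
    B(u8),
}

// A service trait that generated client and server impls could both target.
trait Driver {
    // method "send" "Bar" "Foo": send a Bar, get a Foo back.
    fn send(&mut self, req: Bar) -> Foo;
    // method "close": no request payload, no reply payload.
    fn close(&mut self);
}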

Scheduler

K5's scheduler is influenced heavily by the MCS system from seL4. In K5 each task is given a priority, a budget, and a cooldown. Threads are scheduled in a round-robin fashion in descending order of priority (7 is the highest, 0 is the lowest). When a thread is first scheduled on the CPU, it is given an amount of time it is allowed to execute: its budget. With each tick this budget is reduced; when it reaches zero the thread is "exhausted", its execution is paused, and it is added to the queue of exhausted threads. On each tick, the exhausted list is scanned for threads whose cooldown has elapsed; once a thread's cooldown is up, it is rescheduled. This technique is almost identical to MCS with some terminology differences: MCS has a "period", which is equivalent to K5's budget + cooldown.
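As a toy model of that bookkeeping (with made-up names and data structures that look nothing like the real scheduler's), each tick charges the running thread and refills any exhausted threads whose cooldown has elapsed:

// Toy model of the budget/cooldown bookkeeping described above; the names
// and data structures are illustrative, not K5's scheduler.
struct Thread {
    priority: u8,   // 0 (lowest) through 7 (highest)
    budget: u32,    // ticks the thread may run before it is exhausted
    cooldown: u32,  // ticks it must wait before being rescheduled
    remaining: u32, // budget left in the current round
    cooling: u32,   // cooldown left once exhausted
}

// Charge the running thread one tick; returns true if it just exhausted
// its budget and should move to the exhausted list.
fn charge_tick(running: &mut Thread) -> bool {
    running.remaining = running.remaining.saturating_sub(1);
    if running.remaining == 0 {
        running.cooling = running.cooldown;
        true
    } else {
        false
    }
}

// Walk the exhausted list; any thread whose cooldown has elapsed gets a
// fresh budget and goes back onto the ready queue.
fn refill(exhausted: &mut Vec<Thread>, ready: &mut Vec<Thread>) {
    let mut i = 0;
    while i < exhausted.len() {
        exhausted[i].cooling = exhausted[i].cooling.saturating_sub(1);
        if exhausted[i].cooling == 0 {
            let mut t = exhausted.remove(i);
            t.remaining = t.budget;
            ready.push(t);
        } else {
            i += 1;
        }
    }
}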

Both K5 and MCS allow a task to "loan" out its CPU time to another task. In K5 this is done exclusively through the call syscall, which sends a message to another thread and waits for a response. When call is executed, the current thread's budget is loaned out to the receiving task, and execution is immediately transferred. This allows you to implement "passive servers": threads that lack their own budgets and are only ever scheduled when invoked by a client.
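In the same toy-model spirit (again, these names are mine, not K5's), the budget donation that happens on call boils down to something like this:

// Illustrative only: what loaning CPU time on `call` amounts to. A passive
// server starts with no budget of its own and runs purely on donated time.
struct Budget {
    remaining: u32,
}

fn donate(client: &mut Budget, server: &mut Budget) {
    // The receiving task runs on whatever time the client had left...
    server.remaining += client.remaining;
    // ...and the client has nothing left until the reply comes back.
    client.remaining = 0;
}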

Usage

CLI

As I've discussed above, one of the primary goals of K5 is to be as developer-friendly as possible. Generally, the Rust embedded ecosystem has great dev-ex, but historically that hasn't been true of RTOSes or embedded development. The heart of K5's developer experience is the k5 CLI tool. Running k5 flash will flash your entire project to whatever device you have, using your requested method. For most Cortex-M devices this uses probe.rs. For RISC-V SOCs like the Allwinner D1, a platform-native tool is used. k5 logs lets you run the program on the target device and then get log output. Right now two sources are supported, wc

The goal is to be as plug-and-play as possible for each device. I want the user to be able to download a project and run k5 logs to get log output ASAP. No fussing about with custom toolchains, or long dependency installs.

Take a look at the log output of a running K5 program below to see how easy it is to build, flash, and get logs out of K5.

Userspace

Right now K5 has a pure Rust userspace library; the goal is for it to be minimal and easy to use. For instance, listing a task's capabilities looks like this:

let caps = userspace::caps().unwrap();

K5 also has a task lookup facility, likewise built on capabilities. You can look up a driver and then send it a message like so:

// List the capabilities this task has been granted.
let caps = userspace::caps().unwrap();
// Connect to the task referenced by the first capability.
let endpoint = caps[0].cap_ref.connect().unwrap();
// Hand the kernel a buffer and wait for the reply.
let mut buf = Page([0xFFu8; 32]);
let resp = endpoint.call_io(&mut buf);

What's Next?!

I have many plans for K5. In particular, I am currently working on fleshing out Piton. After that, I am going to work on how K5 handles interrupts; right now a task can't register an interrupt handler. Once that work is done, I plan to work on MPU support on RISC-V.

If any of that sounds interesting to you, I'd love your help. You can find K5 on GitHub: https://github.com/sphw/k5