Friday, November 27, 2020

Linux: Comparison of netlink vs ioctl mechanisms for configuration control in kernel space

Why bother?

If you are writing a new kernel module or adding configurability to an existing one, you typically need some mechanism for communicating with the module from user space.

I came across this discussion on an internet message board and had saved it for offline reading. Unfortunately, I am no longer able to find the website to link to it. If you find it, please send me a note. Here are the key points that were made in the comparison:

  • Polling vs direct: Kernel services can send information directly to user applications over Netlink, while you’d have to explicitly poll the kernel with ioctl functions, a relatively expensive operation.

  • Asynchronous vs synchronous: Netlink communication is fairly asynchronous, with each side receiving messages at some point after the other side sends them. ioctls are purely synchronous: “Hey kernel, WAKE UP and do this now.”

  • Multicast support: Netlink supports multicast communications between the kernel and multiple user-space processes, while ioctls are strictly one-to-one.

  • Reliability: Netlink messages can be lost for various reasons (e.g. out of memory), while ioctls are generally more reliable due to their immediate-processing nature.

  • OS support: Netlink is effectively Linux-only; there’s an RFC that extends its utility to the software-defined networking (SDN) world, but I don’t know of anyone who’s actually implemented it for widespread adoption. In contrast, code written to use common ioctls (e.g. the terminal I/O series) is largely portable across platforms.

You will find multiple discussions on the internet which might have more comparisons, but the above ones concisely capture the most important aspects.
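
To make the first two points concrete, here is a minimal user-space sketch of the ioctl side. The device node /dev/mydev and the MYDEV_SET_MODE command are hypothetical placeholders, not a real driver's API; a real module would publish its own command numbers in a header shared with user space.

/* ioctl_control.c - minimal sketch of synchronous control via ioctl.
 * /dev/mydev and MYDEV_SET_MODE are hypothetical placeholders. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>

#define MYDEV_SET_MODE _IOW('M', 1, int)   /* hypothetical command number */

int main(void)
{
    int fd = open("/dev/mydev", O_RDWR);
    if (fd < 0) {
        perror("open /dev/mydev");
        return 1;
    }

    int mode = 2;
    /* The call returns only after the driver's ioctl handler has run:
     * immediate, synchronous and one-to-one - "do this now". */
    if (ioctl(fd, MYDEV_SET_MODE, &mode) < 0)
        perror("ioctl MYDEV_SET_MODE");

    close(fd);
    return 0;
}

On the kernel side this arrives in the driver's unlocked_ioctl handler; outside such a call there is no way for the module to push anything to the process, which is exactly the polling limitation described in the first bullet.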

Pro tip:
At a high level, use these simple guidelines.
For sending control info: ioctl should be your first choice, unless there’s an overriding reason not to, due to its immediacy and reliable delivery.
For sending data: for occasional data passing, ioctl should work fine. For bulk data, and especially if you’re looking at asynchronous operation, Netlink is preferred.
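
For the Netlink side, here is a correspondingly rough sketch of a user-space listener. It assumes, purely for illustration, that the kernel module multicasts on the NETLINK_USERSOCK protocol using multicast group 1 and sends a NUL-terminated string; a real design would typically register its own netlink protocol number or use generic netlink.

/* netlink_listen.c - minimal sketch of asynchronous notifications via Netlink.
 * NETLINK_USERSOCK and multicast group 1 are placeholders. */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <linux/netlink.h>

int main(void)
{
    int sock = socket(AF_NETLINK, SOCK_RAW, NETLINK_USERSOCK);
    if (sock < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_nl addr = {
        .nl_family = AF_NETLINK,
        .nl_pid    = getpid(),  /* this process's unicast address */
        .nl_groups = 1,         /* subscribe to multicast group 1 (placeholder) */
    };
    if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    /* Asynchronous: simply block here until the kernel side decides to
     * send something; no polling of the kernel is involved. */
    char buf[4096];
    ssize_t n = recv(sock, buf, sizeof(buf), 0);
    if (n > 0) {
        struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
        /* Assumes the module sends a NUL-terminated string as the payload. */
        printf("received %zd bytes, payload: %s\n", n, (char *)NLMSG_DATA(nlh));
    }

    close(sock);
    return 0;
}

Because several processes can bind to the same multicast group, the kernel can notify all of them at once, and it can do so whenever it likes; the trade-off is best-effort delivery, e.g. a message is simply dropped if a receiver's socket buffer is full.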

Linux: Applications of kdump or kexec

Why use kdump

You are designing an embedded Linux system and want to ensure that, in general, there are few or no crashes. However, it is especially important that when crashes do occur, you are able to collect all the possible dumps.

In the case of kernel crashes, it is especially important to get a complete, gdb-analyzable dump so that we can inspect the state of memory and variables at the time the issue happens.

How does kdump work - A high level view

The basic flow is that kexec pre-loads a crash/capture kernel into a reserved region of memory; when the crash happens, the capture kernel boots in the context of the main kernel, without going back through the boot loader, and the crashed kernel's memory is exposed to it as /proc/vmcore. Since the capture kernel mounts the same file system, it runs the same init.d scripts. In init.d we therefore check which kernel context we are running in, e.g. by testing whether /proc/vmcore exists and is non-empty ([ -s /proc/vmcore ]), and take the dump accordingly.
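
As a rough sketch of that init-time check (the paths and destination directory are illustrative, not from any particular distribution):

# Illustrative fragment of an init.d script on the shared root file system.
if [ -s /proc/vmcore ]; then
    # We booted the crash/capture kernel: save the dump, then reboot.
    mkdir -p /var/crash
    cp /proc/vmcore /var/crash/vmcore
    reboot -f
fi
# Otherwise we are in the normal kernel: continue regular startup.
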
To set up the crash kernel, the memory reservation is typically passed via kernel command-line parameters from U-Boot:

crashkernel=256M@1892M ckernel=1
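
With that reservation in place, after the main kernel boots, the capture kernel image is loaded into the reserved region using kexec-tools; the image paths and appended arguments below are placeholders:

kexec -p /boot/zImage-capture --initrd=/boot/initrd-capture.img \
      --append="root=/dev/mmcblk0p2 1 irqpoll maxcpus=1 reset_devices"

When the running kernel panics, it jumps straight into this pre-loaded image instead of going through the boot loader.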

ProTip1: Setting up the crash kernel requires a good knowledge of how the platform is laid out and how its memory is organized. Passing incorrect parameters can, and likely will, cause a lot of unexpected behavior.
ProTip2: Use other advanced options like -pic to get position-independent code, and also ask the software not to reset the IRQ lines/controllers.

References

  1. Kernel documentation on kdump
  2. Red Hat kdump crash recovery

Monday, November 23, 2020

Linux: Comma-separated arguments in an If Statement

 What happens when you write code like this:

    if ((x,y) == true) {

Is it even a legal condition to write? If yes, why would you use it?

We have an example you could try out:

bash-4.1$ cat test.cc 
#include <iostream>
using namespace std;
int main() {
    int x = 1, y = 0;
    // The comma operator evaluates x, discards it, and yields y.
    // So this is really (y == true), i.e. (0 == 1), which is false.
    if ((x,y) == true) {
        cout << "X TRUE PATH" << endl;
    } else {
        // This branch is taken: the program prints "Y FALSE PATH".
        cout << "Y FALSE PATH" << endl;
    }
}