The Embedded Open Source Summit in Prague was a significant event for our company, as it marked our first major conference since the pandemic. As dedicated professionals in the field of embedded Linux systems, attending this summit allowed us to gain valuable insights, learn from industry experts, and connect with like-minded individuals.
This year’s summit featured six distinct tracks, namely the Embedded Linux Conference, Zephyr Project Developer Summit, Automotive Linux Summit Europe, Embedded IoT Summit, Safety-Critical Software Summit, and the LF Energy Embedded Summit 2023. Our primary focus was on Linux and security-related topics. In this blog post, we will provide a brief overview of the noteworthy talks and presentations that left a lasting impact on us.
In his talk, Michael Opdenacker from Bootlin compared various filesystems that can be used on an embedded board’s eMMC or SD card. The comparison and benchmarks included not only read-only filesystems like SquashFS and EROFS, but also writable ones like ext4, XFS, Btrfs, NILFS2, and F2FS.
The comparisons covered read and write benchmarks, image size, kernel module size, and how small a minimal filesystem image could be packed.
Regarding filesystem size, SquashFS came out as a clear winner, while EROFS wins in terms of read speed. We’ve also discussed this topic in more detail in a previous blog post.
If a writable filesystem is desired, with a focus on simplicity and image size, ext4 wins across the board.
If you’ve ever been to an Embedded Linux Conference, you’ll know Steven Rostedt. Though not in a Batman costume this time, he gave a great talk on ureadahead: more specifically, on how he resurrected this almost-dead tool and how it now helps speed up every ChromeOS boot.
ureadahead records page faults during bootup and uses this information on every subsequent boot to pre-load the required pages into the kernel’s page cache. One of the great changes Steven made as part of his ureadahead rewrite was removing the dependency on non-mainline tracepoints, so that it now works with vanilla kernels.
Steven concluded with his list of future ideas for ureadahead and a call for the embedded community to get involved. The following discussion showed that the effective performance boost for embedded devices might be lower, though, as their read patterns can be different. Also, compression, e.g. with SquashFS, might put more load on the CPU, which could reduce the I/O benefits.
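The pre-loading step at the heart of this approach is easy to sketch. The following is not ureadahead’s actual implementation (which is a C tool driven by recorded trace data), just a minimal Python illustration of handing the kernel a readahead hint via posix_fadvise:

```python
import os
import tempfile

def preload(path):
    """Hint the kernel to pull a file's pages into the page cache now,
    similar to what ureadahead does for each file recorded during boot."""
    fd = os.open(path, os.O_RDONLY)
    try:
        size = os.fstat(fd).st_size
        # POSIX_FADV_WILLNEED: this range will be accessed soon, so the
        # kernel may start asynchronous readahead immediately.
        os.posix_fadvise(fd, 0, size, os.POSIX_FADV_WILLNEED)
        return size
    finally:
        os.close(fd)

# Demo on a throwaway file; a real preloader would walk the recorded
# list of files and page ranges instead.
with tempfile.NamedTemporaryFile() as f:
    f.write(b"\0" * 65536)
    f.flush()
    preload(f.name)
```

The real tool is more precise: it pre-loads only the page ranges that actually faulted during the recorded boot, rather than whole files.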
As always, Tim Bird’s status report on embedded Linux was a conference highlight. In addition to presenting various development statistics, Tim focused on the fact that Linux is now the operating system of choice for most space vehicles. When comparing the current state of embedded Linux to that of 10 or 15 years ago, the overall outlook is very positive.
Tim also shared some unfortunate news: Wolfgang Denk, the founder of U-Boot and DENX, passed away in 2022. Wolfgang was a major contributor to embedded Linux, and some of us had personally met and worked with him; we agree with Tim that he will definitely be missed! Moreover, elinux.org, a valuable resource for the community, has lost its funding and now faces an uncertain future.
Bernhard Rosenkränzer from BayLibre provided a comprehensive overview of the toolchain landscape in 2023. In addition to the traditional GNU toolchain (GCC, binutils, and glibc), we now have excellent tools from the LLVM project as well. Bernhard conducted an unbiased comparison of all the available options and emphasized that having more choices leads to increased test coverage for a project. He also introduced lesser-known libc implementations that could prove valuable for deep embedded systems.
One of the key takeaways from his presentation was that building the same project using different toolchains aids in bug discovery. This is because each compiler has its unique approach to detecting code issues. From a security perspective, this is something we can attest to!
Our own Richard Weinberger gave a nice talk on his work-in-progress debugging and evaluation tool for MTD. With MUSE, he utilizes the existing FUSE interface and (slightly) extends it to support writing MTD drivers in userspace.
Richard’s motivation for creating MUSE stems from his maintainer work in various kernel raw flash subsystems: NAND, MTD, UBI, and UBIFS. While the kernel already has tools for this (mtdram, block2mtd, nandsim), all of them have their drawbacks, and using them can be quite cumbersome. Everybody who has ever had to figure out the proper NAND IDs for nandsim will be able to relate ;-)
Richard started with an overview of his various attempts at implementing MUSE and why FUSE is a great fit for his use case. He also gave more detailed insights into the FUSE interface, how he uses it for MUSE, and how it might be useful for other use cases as well.
As we know from our own experience with customers, proper real-time behavior is often hard to achieve. Jan Altenberg from OSADL presented his findings from latency measurements in virtualized scenarios.
He evaluated multiple scenarios, from containerized environments with Docker to fully virtualized settings with KVM and Jailhouse. His focus was primarily on thread latency (tested with cyclictest) and on the influence of blocking kernel code (e.g. a misbehaving device driver) and shared hardware resources like the CPU cache on real-time behavior in host and guest OS.
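For readers unfamiliar with cyclictest: it repeatedly sleeps for a fixed interval and records how much later than requested each wakeup occurs. A rough (and far less precise) sketch of that measurement loop in plain Python, for illustration only:

```python
import time

def max_wakeup_latency_ns(iterations=200, interval_s=0.001):
    """Sleep for a fixed interval and record the worst observed
    difference between the requested and the actual wakeup time."""
    worst = 0
    for _ in range(iterations):
        start = time.monotonic_ns()
        time.sleep(interval_s)
        # How much later than requested did we actually wake up?
        late = time.monotonic_ns() - start - int(interval_s * 1e9)
        worst = max(worst, late)
    return worst

print(f"worst wakeup latency: {max_wakeup_latency_ns() / 1000:.0f} us")
```

cyclictest itself runs under a real-time scheduling policy (SCHED_FIFO) with locked memory, which is what makes its numbers meaningful for real-time evaluation; the sketch above only shows the basic idea of the measurement.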
The core takeaway from this talk is that correct assignment of hardware resources like CPU cores and configuration of hardware is essential to achieving good results. As one would expect, containerized environments with a shared OS Kernel have much more influence on real-time behavior on the whole system than properly configured virtualized environments. In Jan’s test, the most effective isolation was achieved when he used Jailhouse. Though even there, effects from the CPU cache will still cause disturbance between individual guest OSes depending on the CPU and guest OS segmentation.
Having access to the large OSADL real-time testing infrastructure definitely helped with this. The results of the OSADL QA Farm for real-time are publicly available in case you want to check them out yourself.
Jan Kiszka of Siemens AG presented kas (Bavarian for “cheese” or “nonsense”), a simple yet efficient tool for setting up and performing Yocto builds.
Traditionally, when building a Yocto-based project, it is the responsibility of the project owner to manage and check out all the associated meta layers, create various configuration files, and initiate the build process. kas allows describing the entire project in a single YAML file and provides tools for checking out, managing, and building the project.
Additionally, there is an accompanying wrapper called kas-container, which runs the entire Yocto build within a Docker or Podman container. This approach ensures that the build environment is completely separate from the host system, minimizing any potential side effects during the build.
The project has been in existence since 2017, and at sigma star gmbh, we have been using it with great success for the majority of our Yocto projects since 2021.
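For illustration, a minimal kas project file might look roughly like this; the machine, branch, and layer selection are example values, and the exact schema version depends on your kas release:

```yaml
# kas-project.yml: the whole build setup described in one file
header:
  version: 14        # kas file-format version; adjust to your kas release
machine: qemux86-64
distro: poky
target: core-image-minimal
repos:
  poky:
    url: "https://git.yoctoproject.org/poky"
    branch: kirkstone
    layers:
      meta:
      meta-poky:
```

Running `kas build kas-project.yml` then checks out the repositories and performs the build; `kas-container build kas-project.yml` does the same inside a container.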
On a similar subject as the talk on kas, but with different goals and a different implementation, Alexander Kanavin presented the new official Yocto way to manage and reproduce build configurations and layers. Historically, the Yocto Project did not provide such tools, which changed with the latest release(s).
In the current release (mickledore), the bitbake-layers tool is able to save its configuration, which can afterwards be restored with oe-setup-build (the patch set is currently under code review).
In the future, an additional high-level tool for setting up Yocto builds, called oe-setup, may become available via PyPI, making build initialization even simpler. An integration with kas via a plugin is also possible in the future.
Pengutronix’s GPU expert, Lucas Stach, provided a crisp overview of the pipelining and scheduling of GPU-based rendering pipelines. The takeaway message was that GPUs are primarily optimized for throughput, not latency. A rendering pipeline involving shared resources, multiple compositing steps, and frame output synchronized with the vertical refresh can easily rack up a latency in the double-digit milliseconds.
This becomes relevant in the context of real-time applications, where we are interested in optimizing for stable, low latency. Generally, when discussing real-time properties, one should keep in mind that latency, jitter and throughput are separate concepts. “Real-time” does not necessarily mean “real-fast”. GPUs are primarily optimized for the latter.
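To put rough numbers on the double-digit-milliseconds claim: if every stage of a vsync-locked pipeline may stall until the next vertical blank, the per-frame intervals add up quickly. The stage count below is an illustrative assumption, not a figure from the talk:

```python
REFRESH_HZ = 60
FRAME_MS = 1000 / REFRESH_HZ   # one vblank interval, ~16.7 ms at 60 Hz

# Assumed stages that may each wait for a full frame interval:
# application render -> compositor pass -> scanout flip
STAGES = 3

worst_case_ms = STAGES * FRAME_MS
print(f"worst-case pipeline latency: {worst_case_ms:.1f} ms")  # 50.0 ms
```

Even this simple back-of-the-envelope calculation lands well above what many hard real-time control loops can tolerate, which is why GPU output paths need careful design in such systems.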
Arnout gave an interesting talk on how to track vulnerabilities across all your dependencies with Buildroot and Yocto. Both projects include tooling to use CVE identifiers published via the NVD for tracking vulnerabilities. It was interesting to see that the current tooling sometimes makes tracking known vulnerabilities within your custom Linux distribution quite cumbersome.
Arnout went on to explain how tracking vulnerabilities across open source software using CVE and CPE databases is complicated even further by the fact that the information in these databases is often not fully accurate. For example, affected version numbers for vulnerabilities are incorrect, or project names do not match up. Likewise, OpenSSL as maintained upstream should be treated differently from OpenSSL as maintained by Debian, since the Debian maintainers backport certain security fixes to older versions.
The talk concluded with a discussion of an alternative way of tracking vulnerabilities: OSV (Open Source Vulnerabilities), which looks like an interesting alternative to CVEs if it gets widely adopted.
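The core matching problem such tooling has to solve can be sketched in a few lines. All product names, versions, and CVE IDs below are made up for illustration:

```python
def parse(version):
    """Naive dotted-version parser; real tooling needs far more care
    (suffixes, epochs, vendor-specific version schemes, ...)."""
    return tuple(int(part) for part in version.split("."))

# Hypothetical feed entries: (cve_id, product, first_affected, fixed_in)
FEED = [
    ("CVE-0000-0001", "examplelib", "1.0.0", "1.2.5"),
    ("CVE-0000-0002", "otherlib", "2.0.0", "2.0.9"),
]

def open_cves(product, version):
    """Return the CVEs whose affected version range contains this version."""
    return [
        cve for cve, name, first, fixed in FEED
        if name == product and parse(first) <= parse(version) < parse(fixed)
    ]

print(open_cves("examplelib", "1.2.3"))  # ['CVE-0000-0001']
print(open_cves("otherlib", "2.1.0"))    # []
```

This exact-version matching is also where false positives come from: a distribution that backports the fix into its own 1.2.3 still falls inside the affected range, even though the package is no longer vulnerable.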
In this presentation, Marek Vasut presented his various approaches to accessing embedded display outputs remotely for testing. It turned out that this could actually be pretty useful for some of our customers’ projects too.
His presentation gave an in-depth overview of the various implementation options and the overall development odyssey. He started off with an introduction to high-bandwidth interfaces usable for capturing display output (USB 3.0 is the only real fitting candidate) and gave a rough overview of embedded display buses.
Then, Marek presented the first, failed approach of using an FPGA with a bridge chip as a USB Video-Class (UVC) device, which had seemed like the obvious solution, but failed because of inflexibility and implementation problems. The second implementation, using the bridge chip as a FIFO directly, surprisingly worked out in the end.
After an initial, more or less “hacky” (albeit working) solution using some signal analyzer software, he designed his own PCBs and software for this use case. Marek also made his KiCad schematics and code openly available.
As always, the conference was a great exchange of ideas, featuring numerous captivating talks. Moreover, the hallway track provided an excellent opportunity for networking and intense discussions. We had the pleasure of meeting many fascinating individuals and engaging in stimulating conversations.
During our time at the summit, we had more to experience than just the conference sessions and talks. As a group, we took the chance to visit Prague Castle and enjoy some sightseeing. We admired the impressive architecture, explored the historic halls, and appreciated the cultural heritage of this captivating city. To wrap up the conference, we participated in a closing game hosted by Tim Bird, which added an entertaining element. The game included a few quiz questions related to Prague and its famous landmarks, which made our sightseeing excursion even more meaningful.