
OpenAMP Application Services Sub group Meeting Notes 2021


Table of Contents

2021-10-28
2021-05-11

Agenda

  • Dan: Hypervisor-less virtio update
    • Activity review?
    • Status update?
    • Reference implementation progress
    • Demo of reference implementation?
    • Code overview / walkthrough
  • Additional discussion topics, if any (TBD)
  • Next steps (all)
  • For those interested in seeing what has been contributed to date for our reference implementation, you can review the RSL (Real-time Services for Linux) daemon code as a diff from upstream kvmtool here, and the current hypervisor-less virtio Zephyr code here.

Attended

  • Wind River: Dan Milea, Maarten Koning, Josh Pincus
  • Linaro: Bill Mills, Alex Bennee
  • Nordic: Carles Cufi, Hubert Mis
  • Bay Libre: Carlo Caione
  • Arm: Hannes Tschofenig
  • ST: Loic Pallardy
  • Xilinx: Stefano Stabellini, Tomas Evensen, Nathalie Chan King Choy
  • OpenAMP Maintainers: Ed Mooring

Action Items

  • Maarten: Post the slides to OpenAMP Google Drive and share the slides on the mailing list
  • Bill & Maarten: Post answer to Carles' question on the mailing list
  • Dan: Draw a ladder diagram of how a standard process asks the daemon to do something without knowing that it's asking the daemon
  • Dan: Describe in an email what you did for VxWorks, or how you plan to close this gap for Zephyr
  • Dan: Send more detailed updates to the app-services mailing list

Notes

  • Recording: Download from Zoom
    • Passcode: is87be?C
  • Alex: Upstream or Xilinx QEMU?
    • Dan: Xilinx b/c there is an available recipe to emulate ZCU102
  • Bill: Synchronization mechanism just a SW construct?
    • Dan: Yes
  • Bill: Relies on atomic ops between the 2 processors, or assumes 32-bit write is atomic?
    • Dan: Yes & yes. The standard virtio implementation uses memory barriers (loads/stores & everything synchronized). What I'm talking about applies just to the config phase; the synchronization doesn't happen after each write. (See the sketch below.)
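    • A minimal sketch of the acquire/release pattern under discussion (illustrative only; the struct and field names are hypothetical, not from the RSL code):

        /* Illustrative only: a 32-bit "config ready" flag in shared memory,
         * synchronized with release/acquire semantics between two processors.
         * struct shared_cfg and CFG_READY are hypothetical names. */
        #include <stdint.h>

        #define CFG_READY 1u

        struct shared_cfg {
            uint32_t status;       /* naturally aligned 32-bit word */
            uint32_t queue_base;   /* example config fields */
            uint32_t queue_size;
        };

        /* Writer: publish the config fields, then set the flag with a release
         * store so the reader cannot observe the flag before the data. */
        void publish_config(struct shared_cfg *cfg, uint32_t base, uint32_t size)
        {
            cfg->queue_base = base;
            cfg->queue_size = size;
            __atomic_store_n(&cfg->status, CFG_READY, __ATOMIC_RELEASE);
        }

        /* Reader: poll with an acquire load, which also makes the earlier
         * writes visible once the flag is seen. */
        void wait_for_config(struct shared_cfg *cfg)
        {
            while (__atomic_load_n(&cfg->status, __ATOMIC_ACQUIRE) != CFG_READY)
                ;   /* in practice: relax / yield / WFE here */
        }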
  • Bill: So you're working on 2 solutions
      • Static feature & queue init
      • Other synchronization model that other ppl may be able to make work for them?
    • Dan: Want to avoid changing virtio implementation in either runtime too much. Sync has minimal change that only impacts config & leaves rest as-is. Static means removing more of config code, but everything becomes less configurable at runtime. Another goal is to reuse as much Linux infrastructure as possible. (vhost, vsock)
  • Carles: Is the objective of this work to allow communication between Zephyr and Linux in the same manner that the current OpenAMP codebase already does? Given that OpenAMP already comes with an implementation of virtio/virtqueue.
    • Bill:
      • 1st vector: If you look at RPMsg, it's a communication multiplexer on top of a single virtqueue. The virtqueue is based on the legacy virtio format. The implementations in the kernel & in the library today assume the legacy format. (See the RPMsg sketch below.)
      • 2nd vector, which WR has taken the lead on: it would be great if we could use traditional virtio protocols on top of some other mechanism, and WR is doing this work. This is the 1st time we're seeing the pieces come together for this work.
      • Xilinx does RPMsg from user space & might use the same low-level comm mechanisms that Dan needs for his work. Can Dan's work & an RPMsg user space implementation co-exist in the same system?
      • Would hope kernel-based RPMsg & Dan's daemon could co-exist
      • Would like to homogenize these 2 approaches in the future; timeline TBD
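      • For reference, a sketch of RPMsg endpoint usage with the open-amp library (illustrative; check the open-amp headers for the exact signatures, and the channel name and addresses are placeholders) - multiple endpoints share the same underlying virtqueues and are demultiplexed by the src/dst addresses in the RPMsg header:

          /* Sketch of RPMsg endpoint usage with the open-amp library.
           * "demo-channel" and the addresses are placeholders. */
          #include <stddef.h>
          #include <openamp/rpmsg.h>

          static int rx_cb(struct rpmsg_endpoint *ept, void *data, size_t len,
                           uint32_t src, void *priv)
          {
              /* All endpoints' traffic arrives over the same virtqueues;
               * the RPMsg header's src/dst addresses demultiplex it. */
              (void)ept; (void)data; (void)len; (void)src; (void)priv;
              return RPMSG_SUCCESS;
          }

          static struct rpmsg_endpoint ept;

          int open_channel(struct rpmsg_device *rdev)
          {
              int ret = rpmsg_create_ept(&ept, rdev, "demo-channel",
                                         RPMSG_ADDR_ANY, RPMSG_ADDR_ANY,
                                         rx_cb, NULL);
              if (ret)
                  return ret;
              return rpmsg_send(&ept, "hello", 5);
          }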
    • Maarten:
      • Doing POC without constraints of historic use case allows us to have POSIX APIs, sockets, files integrated into rich OS infrastructure like Zephyr. Then apps can have APIs they expect.
      • Then can look at how to leverage, scale down/up without compromising the core values of the existing use cases.
      • There is some difference in requirements between bare metal and low level OpenAMP APIs. Still need to do the analysis where it's in our interest to do consolidation vs where it will compromise core values.
    • Bill:
      • Think these will grow together.
      • In traditional OpenAMP, virtio I2C has already been defined. Can we have the RPMsg virtqueue and a virtio I2C device defined and working in the kernel, so it just looks like an I2C device to user space & multiple processes can interact w/ it?
      • Think there will be big, super tiny & stuff in the middle
    • Hannes: It would be useful to describe this in an email to the list
    • Maarten: For small code bases, OK to do special things. In bigger systems, need to do more alignment & consolidation. Because sometimes super optimized. Definitely should look at alignment opportunities.
    • Stefano: Similar in Xen. This approach is interesting for another reason. 2 problems solved differently & separately.
      • VM-VM comm. Not trying to virtualize any devices.
        • In the demo looking at vhost, vsock for comm in a crowded solution space. Many solutions besides RPMsg for VM-VM.
      • Providing virtual devices.
        • Interesting story. This could become THE approach b/c not many alternatives.
        • So, WR's project has potential to be focus of collaboration for the different use cases.
        • Makes sense to develop together in a single project.
    • Maarten: A unifying approach that spans all the different configs. Also useful for compute islands; can re-use the back ends (RSL daemon) and runtimes at the system architecture level, regardless of bare metal or virtualized environment.
    • Stefano: Key aspect also interesting: Doesn't require privileges on KVM tool side.
    • Maarten: We haven't really called out the de-privileging of back-end
    • Bill: You will have some memory mapping, so there will probably be a more fine-grained permission model. Will need some permission model for the low-level interfaces.
    • Maarten: Best practice of minimum privilege & in this case it will be less than with kvm.
    • Stefano: If there is API to pre-share some memory, then that's all you need. Much more secure than mapping all of guest memory of all guests when the connection is created.
    • Bill: Daemon great for file system access, vsock. From network, there are some use cases where you want the virt net to be in the kernel & not in a daemon, so the OS can pretend it's talking to a real device.
    • Dan: Standard setup uses tap interface
    • Stefano: End result is same - DomU have access to real network
    • Bill: The packets are pinging up to daemon & coming back down.
    • Dan: Enabling virtio net was not the main focus; it just happens to work over the hypervisorless virtio transport. We have af-vsock, which can replace any traffic going over virtio net, and vsock uses fewer queues.
    • Bill: Not that we shouldn't demo network, just that there may be multiple implementations that make sense for network.
  • Zephyr virtio
    • Zephyr is a special cookie :)
    • Configuration is static: everything needs to be defined at compile time. Not much flexibility. (See the sketch below.)
    • Tweaked Zephyr for the virtio console to init things really early. The virtio stack needs to be tweaked & is no longer a straightforward port from FreeBSD, like we could do on VxWorks.
    • Features supported match kvmtool
    • Hypervisorless virtio implies MMIO. Focused on Arm targets & MMIO is the standard.
    • Single buffer queueing: WIP; functional MVP running on QEMU for Arm A53
    • Trying to decide what to focus on next. Maybe virtio-net for Zephyr?
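    • To illustrate what "everything defined at compile time" implies (purely illustrative; these structures are hypothetical, not the Zephyr virtio framework's API):

        /* Illustrative only: a compile-time virtio device/queue table.
         * Nothing is negotiated or sized at runtime. */
        #include <stdint.h>

        #define VQ_CONSOLE_RX_SIZE 16   /* queue sizes fixed at build time */
        #define VQ_CONSOLE_TX_SIZE 16

        struct static_virtio_dev {
            uint32_t device_id;         /* e.g. 3 = console per the virtio spec */
            uint32_t num_queues;
            uint16_t queue_sizes[2];
        };

        const struct static_virtio_dev virtio_devices[] = {
            { .device_id = 3, .num_queues = 2,
              .queue_sizes = { VQ_CONSOLE_RX_SIZE, VQ_CONSOLE_TX_SIZE } },
        };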
  • Hypervisorless virtio demo with VxWorks on QEMU aarch64 with R5 for ZCU102
    • Tap interface on Linux
    • Started RSL daemon: lkvm w/ bunch of additional parameters to specify shared memory, map to tap interface, enable vsock & 9p. Pretty standard - comes from kvmtool.
    • Started VxWorks on R5 w/ remoteproc. Has virtio net interface. Can parse DTB fragment to extract info for virtio devices (console, vsock, network, 9p). Showed stuff on the file system.
    • Can ping VxWorks & Linux from 1 side to the other
    • Interesting part: Vsock. Netcat vsock can be used on Linux to listen for connections from VxWorks. e.g. Type text on VxWorks side & it shows up on Linux side. Vsock is meant to be used as transport for higher level services like debugger. For compute islands, can have vsock support & proxy everything else over vsock.
    • Bill: Is Netcat vsock a standard tool?
      • Dan: Yes.
    • Bill: Would be helpful if you could draw a ladder diagram of how a standard process asks the daemon to do something without knowing that it's asking the daemon.
    • Dan: Using vhost-vsock (i.e. the Linux implementation of vsock) means af-vsock is automagically available in that Linux runtime. So any user space application using af-vsock sockets can connect to the guest, which is identified by its CID, transparently. So, netcat vsock is a regular user space application that hooks into the support in the Linux kernel; everything goes through the vhost-vsock kernel module to the daemon, which is listening. The daemon does hypervisorless virtio to the auxiliary runtime & back. (See the AF_VSOCK sketch below.)
    • Bill: But, it's involved in not only setting it up, but in each data packet transfer?
    • Dan: Yes, b/c don't have hypervisors any more. So, each data transaction needs to be notified. Regular virtio backends are easier, using encapsulated back-end implementation from kvmtool. Vsock is special. There are other implementations of vsock - future work to look at Amazon Firecracker VMM w/ vsock in user space & guest somewhere. So, removing complexity from proxying scheme. Benefits & drawbacks. Benefit is being able to run netcat from user space & go through af-vsock transparently to communicate w/ aux runtime.
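    • A minimal AF_VSOCK client sketch on the Linux side (standard Linux socket API; the CID and port below are placeholders, not values from the demo), showing why an ordinary user space tool can reach the auxiliary runtime transparently once vhost-vsock is in place:

        /* Minimal AF_VSOCK client on Linux. CID and port are placeholders. */
        #include <stdio.h>
        #include <unistd.h>
        #include <sys/socket.h>
        #include <linux/vm_sockets.h>

        int main(void)
        {
            int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
            if (fd < 0) { perror("socket"); return 1; }

            struct sockaddr_vm addr = {
                .svm_family = AF_VSOCK,
                .svm_cid    = 3,      /* guest CID (placeholder) */
                .svm_port   = 1234,   /* service port (placeholder) */
            };

            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                perror("connect");
                return 1;
            }

            /* From here the application just uses the socket; the kernel's
             * vhost-vsock path (and, in this setup, the RSL daemon) handles
             * delivery to the auxiliary runtime. */
            write(fd, "hello\n", 6);
            close(fd);
            return 0;
        }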
    • Stefano: As a reference, there is VM-VM communication for Xen (Argo). The sender provides a buffer to Xen & makes a hypercall, the hypercall transfers to the receiver & the receiver fetches from the buffer, larger than a single request. They have a similar issue. They provide nice send/receive APIs, even to Linux user space. Not TCP/IPv4 compatible. They solve this by providing a library that provides the compatibility. The library gets LD_PRELOADed before the apps. The LD_PRELOAD trick is what typically gets used b/c it removes an intermediary.
    • Bill: They grab socket syscalls?
    • Stefano: Yes
  • Zephyr demo 1: Interrupt driven console on generic QEMU aarch64 on A53
    • Hello world with shell enabled & virtio console. Printk timer every 10 seconds.
    • On the right is the virtio console, launched via a command line argument to QEMU (A53). Using a socket at /tmp/foo; can socat to /tmp/foo.
    • Good: Well integrated in Zephyr infrastructure.
    • Bad: So well integrated that it's really Zephyr-specific. Would be hard to make a module like OpenAMP is.
  • Zephyr demo 2: Polling
    • Bill: Console is only integration in Zephyr right now?
    • Dan: Yes. This is why we are asking for help. Have been bugging the author of Zephyr virtio framework for features but there is only 1 person (he's a WR teammate not on the call).
  • What's next:
  • Josh: Are you sending interrupts / handling interrupts between the endpoints? or is everything polled?
  • Carles: So, will the Zephyr virtio infrastructure replace https://github.com/OpenAMP/open-amp/tree/master/lib/virtio in the future?
    • Dan:
      • Virtio implementation in OpenAMP is somewhat limited in how it's configured & uses just 2 queues. API you use w/ OpenAMP is RPMsg & virtqueue implementation is hidden from end users.
      • Regular virtio implies device-specific virtqueues. Not meant to replace virtio in OpenAMP, rather to complement it.
    • Bill: Short answer is "no".
  • Bill: Is there any visibility into the kernel module? Can we publish?
    • Dan: Think we need more documentation first. UIO is handled by a device tree entry specifying a reserved memory range, from which Linux on the ZCU102 creates a UIO device; that device can be mapped in the RSL daemon. The kernel module only provides notifications (similar to regular mailbox drivers). It exports to user space the means to interact w/ it like a regular device on any RTOS. On Linux there's a char device that supports select; that's what the RSL daemon uses to get notifications. There's a sysfs entry that allows the RSL daemon to push notifications to the aux runtime. Hackish & meant to be replaced. (See the UIO sketch below.)
    • Bill: Mailbox indicates "something happened, go look"?
    • Dan: Yes. Josh is working on virtio MMIO MSIs in kvmtool (in the same kvmtool repo) to avoid muxing/demuxing. Fully using the mailbox is a way to enhance performance.
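    • Sketch of the standard Linux UIO pattern described above (generic UIO API; the device path and mapping size are placeholders): the daemon maps the reserved memory through the UIO device and blocks on read() for "something happened, go look" notifications.

        /* Generic Linux UIO usage sketch (not the actual RSL daemon code). */
        #include <fcntl.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("/dev/uio0", O_RDWR);   /* placeholder device */
            if (fd < 0) { perror("open"); return 1; }

            /* Map the reserved shared-memory region exposed by the UIO device. */
            size_t len = 0x100000;                /* placeholder size */
            void *shm = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (shm == MAP_FAILED) { perror("mmap"); return 1; }

            for (;;) {
                /* A blocking read returns a 32-bit interrupt count: the
                 * "something happened, go look" notification. */
                uint32_t count;
                if (read(fd, &count, sizeof(count)) != sizeof(count))
                    break;
                /* ...scan the virtqueues in shm for new buffers here... */
            }
            return 0;
        }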
  • Bill: Zephyr implementation is plain & not hooked into hypervisorless virtio yet?
    • Dan: Not yet
    • Bill: So don't have visibility into what magic needs to happen on remote side. Can you describe in email what you did for VxWorks or how you plan to close this gap for Zephyr?
    • Dan: Reuse as much of the virtio framework as possible on the aux runtime. Getting stuff in & out of shared memory w/ bounce buffers. Could change device drivers to use the shared memory region; instead, I changed the virtio framework to copy data to/from the shared mem region in the aux runtime.
    • Bill: What are you doing for VxWorks?
    • Dan: Bounce buffers. The virtio framework on VxWorks is based loosely on FreeBSD. Tweaked the framework implementation so there's 1 place to have all the changes, instead of tweaking several device drivers. (See the sketch below.)
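    • Sketch of the bounce-buffer approach (illustrative; these helpers are hypothetical, not VxWorks or Zephyr APIs): the transport layer copies payloads into the pre-shared memory window before kicking, so the front-end drivers stay unchanged.

        /* Illustrative bounce-buffer copies in a virtio transport layer. */
        #include <stdint.h>
        #include <string.h>

        /* Copy a driver-provided buffer into the pre-shared memory window and
         * return the address the device side should see in the descriptor. */
        void *bounce_out(void *shm_window, size_t *shm_used,
                         const void *drv_buf, size_t len)
        {
            void *dst = (uint8_t *)shm_window + *shm_used;
            memcpy(dst, drv_buf, len);    /* driver memory -> shared region */
            *shm_used += len;             /* naive bump allocation for the sketch */
            return dst;
        }

        /* On completion, copy the result back so the unmodified driver sees it
         * in its original buffer. */
        void bounce_in(void *drv_buf, const void *shm_buf, size_t len)
        {
            memcpy(drv_buf, shm_buf, len); /* shared region -> driver memory */
        }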
  • Josh: So interrupts do not use the framework; interrupts use the standard hardware interface, which has been configured to send IPIs to the other endpoint?
    • Dan: Exactly
  • Bill: This is fantastic. Appreciate the hard work. This is a turning point for the project, for ppl to understand what you're doing.
    • Dan: Hoping for fully open end-to-end implementation. That's why looking to Zephyr as an aux runtime in hypervisorless virtio setup
  • Dan: Will send more detailed updates to app-services mailing list.

2021-05-11

Agenda

  • Presentation from Dan on MMIO-based inter-runtime application services

Attended

  • Maarten Koning
  • Dan Milea
  • Stefano Stabellini
  • Mark Asselstine
  • Apankoke
  • Loic Pallardy (ST)
  • Mark Dapoz
  • Mihai Dragusu
  • Tomas Evensen
  • Nathalie Chan King Choy
  • Hannes Tschofenig

Action items

  • Maarten: Will discuss next steps at TSC in terms of resourcing & activities around open source implementation for guest, using Zephyr

Notes

  • Link to the Zoom recording
  • Link to Dan's slides
  • Dan: Hypervisor-less Virtio update
    • 4 scenarios we're considering for RSL
      • Core isolation
      • Core offload
      • Virtual partitioning
      • Physical partitioning
    • Common across these use cases: Have Linux running alongside
      • Linux becomes an offload engine for the RTOS, for functionality that is not intended for, or is difficult to support on, the RTOS
    • VirtIO is omnipresent
    • Tweaked LKVM to drop the KVM part & become a VMM
    • Pulled in support for shared memory regions (will be in next version of VirtIO)
    • Hypervisorless virtio relies on moving data buffers in virtqueues from any location in guest to well defined shared memory region
    • Setup of shared memory region
      • Starts w/ DTB fragment to describe the devices
      • Standard virtio device headers
      • Shared memory
      • Table of empirical memory usage
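      • The standard virtio device headers follow the virtio-MMIO register layout from the virtio spec; a minimal probe sketch (register offsets per the spec; the base pointer into the shared region is a placeholder):

          /* Probe a virtio-MMIO device header placed in the shared region. */
          #include <stdint.h>

          #define VIRTIO_MMIO_MAGIC_VALUE  0x000   /* reads as 0x74726976 ("virt") */
          #define VIRTIO_MMIO_VERSION      0x004
          #define VIRTIO_MMIO_DEVICE_ID    0x008   /* e.g. 1 = net, 3 = console */

          static inline uint32_t mmio_read32(const volatile void *base, uint32_t off)
          {
              return *(const volatile uint32_t *)((const volatile uint8_t *)base + off);
          }

          /* Returns the device ID, or -1 if no valid header is found. */
          int probe_virtio_mmio(const volatile void *base)
          {
              if (mmio_read32(base, VIRTIO_MMIO_MAGIC_VALUE) != 0x74726976)
                  return -1;
              uint32_t version = mmio_read32(base, VIRTIO_MMIO_VERSION);
              uint32_t dev_id  = mmio_read32(base, VIRTIO_MMIO_DEVICE_ID);
              return (version != 0 && dev_id != 0) ? (int)dev_id : -1;
          }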
    • RSLD sources published on OpenAMP GitHub
    • Disentangling the vsock notification infrastructure to be friendly to the OpenAMP setup required adding a notification layer. The PMM acts as a proxy for vhost: it captures vhost notifications & pushes them to the virtio front-end (guest), and the same for guest notifications to vhost. (See the sketch below.)
      • Now vhost becomes an offload engine for RTOS
      • Can also use vsock channels with socat/ncat tools to create a channel between the PMM & guest. On either side you can have standard network services. File system support is WIP.
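      • A sketch of the proxying idea (vhost signals completions through an eventfd; notify_guest() is a hypothetical stand-in for the hypervisor-less notification mechanism):

          /* Forward vhost "call" notifications to the guest side. */
          #include <poll.h>
          #include <stdint.h>
          #include <unistd.h>

          /* Hypothetical stand-in for the hypervisor-less notification path,
           * e.g. an IPI or mailbox doorbell toward the auxiliary runtime. */
          static void notify_guest(void)
          {
              /* platform-specific doorbell would go here */
          }

          void proxy_vhost_call(int call_efd)
          {
              struct pollfd pfd = { .fd = call_efd, .events = POLLIN };

              for (;;) {
                  if (poll(&pfd, 1, -1) <= 0)
                      break;
                  uint64_t n;
                  if (read(call_efd, &n, sizeof(n)) != sizeof(n))   /* drain eventfd */
                      break;
                  /* Push the notification across shared memory; the guest then
                   * scans its virtqueues for used buffers. */
                  notify_guest();
              }
          }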
    • Next: Prototype runs on i8. Goal is to run on Arm and create a demo with Zephyr & RSL Daemon.
  • Demo showing all the bits & pieces
    • Show Vsock can proxy GDB
    • Task on RTOS that GDB attached to. Can do GDB session from other side
    • Channels created using ncat. Opens local port. Ncat creates vsock connection.
  • Stefano: What can be used as notification mechanism? Interrupt? MSI?
    • Dan: Pretty much anything
  • Stefano: I thought vhost is more like specialized linux virtio accelerator. How can it work when running on separate cluster?
    • Dan: It's all shared memory. Take virtio from the hypervisor-based setup & use it as a conduit between 2 entities (PMM & guest). A single pair of notifications in between (affects performance, but keeps things generic & portable). Works as in standard virtio - guest & PMM interact using virtqueues, which get signalled. The signals can be interrupts, which go from 1 side to the other, & there's a single pair of them. All the devices are checked for buffer availability; 1 virtqueue notification triggers validation. It's a memory-mapped protocol + a bunch of notifications.
    • Stefano: Don't understand the vhost part
    • Dan: Vhost is a hack. On Linux, vsock & vhost depend on each other; hard to disentangle them. The way to get vsock to work with this slim virtio MMIO setup was to introduce a proxy. Instead of skipping the VMM (the ultimate goal), we forced the presence of a VMM to act as an indirection layer.
    • Maarten: Linux thinks the PMM is the guest
    • Stefano: You are proxying the Vhost requests?
    • Dan: Yes
    • Dan: Outcome of investigation: Vsock usable with vhost means also virtio-net
    • Maarten: Once you get Vsock working, which has large address space for channels, you don't need an IP stack. Reduces shared memory requirement for using virtio, which was a concern when we first started app-services.
  • Next steps:
    • Moving things to another RTOS would make the solution even more portable => Arm
    • What are preferred platforms?
      • ZCU102 can be fully virtualized with some QEMU instances running both clusters
      • Off the shelf Arm target?
      • Tomas: Putting together an end-to-end demo demonstrating this, system DT, etc.
        • Targets: Xilinx board, ST board, QEMU, whoever else can chip in some work
        • Ed is allocated to start working on this
        • In next TSC, we can create a small group to get going on this
  • Dan: Hypervisor-less setup => Can do virtio backends for different guests b/c RSL daemon process running on Linux
    • Overlaps w/ other discussions incl Stratos
  • Stefano: Have you tried to run Linux on front-end side? Challenge to keep on radar: Normally virtio doesn't restrict pages used in queues in communication channel. So, front-end can choose any memory it likes.
    • Dan: Not yet. Going w/ RTOS means containerized layers. All our changes were in virtio framework (transport implementation). All the front-end drivers are unchanged. RTOS implementation uses bounce buffers (suboptimal, but limits changes to a well-defined layer).
  • Stefano: Is setup of shared memory & interrupt notification in PMM or outside?
    • Dan: Right now it is in PMM, but could be separate setup process. PMM would need to know the info about the shared memory & push it to the guest or create the data structures for guest to access in the shared memory. Notifications can be pretty much anything, as long as PMM can get them.
    • Stefano: Would be cool to run on Xen. Should not be difficult: Someone would need to do memory & interrupt mapping.
    • Maarten: Hypervisor-less virtio can be run on hypervisor b/c good hypervisor should get out of the way
    • Stefano: Another group is trying to do something similar with kvmtool, but you have done majority of the work. Maybe should point them your way.
    • Maarten: As long as patches being submitted are not entangled with Xen
  • Stefano: Did you fork kvmtool or wrap?
    • Dan: Fork, but still compatible with non-hypervisor-less use cases
    • Maarten: Additive, Incremental capabilities
  • Maarten: Will discuss next steps at TSC in terms of resourcing & activities around open source implementation for guest, using Zephyr. Starting point has been open sourced. At a point where we can scale this up.
  • Hannes: Thought there is already virtio port for Zephyr