examples: dual_qemu_ivshmem: add RPMsg over IVSHMEM sample code.
The additions include a backend to glue the Zephyr IVSHMEM
device driver into the OpenAMP code, making it usable to send
data between two QEMU instances using the RPMsg protocol.

Also, a custom shell command in the host side application
is provided to send string messages a number of times.

Signed-off-by: Felipe Neves <felipe.neves@linaro.org>
uLipe committed Jul 20, 2023
1 parent 7f1fb3b commit 322fa13
Showing 17 changed files with 895 additions and 0 deletions.
3 changes: 3 additions & 0 deletions examples/zephyr/README.md
@@ -4,9 +4,12 @@ This repository contains Zephyr example applications. The main purpose of this
repository is to provide references and demos on the use of OpenAMP in Zephyr-based applications. Features demonstrated in these examples are:

- [rpmsg multi service][rms_app] application
- [dual qemu ivshmem][dqi_app] application



[rms_app]: #rpmsg_multi_services/README.md
[dqi_app]: #dual_qemu_ivshmem/README.md

## Getting Started

2 changes: 2 additions & 0 deletions examples/zephyr/dual_qemu_ivshmem/.gitignore
@@ -0,0 +1,2 @@
/host/build/*
/remote/build/*
247 changes: 247 additions & 0 deletions examples/zephyr/dual_qemu_ivshmem/README.rst
@@ -0,0 +1,247 @@
OpenAMP RPMsg over IVSHMEM system reference sample
##################################################
This application sample implements RPMsg echo communication between
two QEMU instances based on the ARM Cortex-A53 CPU. To use the RPMsg protocol
from OpenAMP, a backend built on top of the Zephyr Inter-VM Shared
Memory (IVSHMEM) driver is implemented. A simple shell command provides
user interaction with the application, allowing data to be sent a number
of times and received back, echoed from the other QEMU instance.
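
For orientation, the snippet below sketches how an application might use such a
backend through the standard OpenAMP endpoint API. This is a hedged sketch only:
the accessor function name is hypothetical, and the real glue code in
``rpmsg_ivshmem_backend.c`` may expose a different interface.

.. code-block:: c

   /* Hedged sketch; the accessor below is an assumed name, not necessarily
    * the interface exported by rpmsg_ivshmem_backend.h. */
   #include <zephyr/kernel.h>
   #include <openamp/open_amp.h>

   extern struct rpmsg_device *get_rpmsg_ivshmem_device(void); /* assumed helper */

   static int rx_cb(struct rpmsg_endpoint *ept, void *data, size_t len,
                    uint32_t src, void *priv)
   {
           printk("Got %u bytes over RPMsg-IVSHMEM\n", (unsigned int)len);
           return RPMSG_SUCCESS;
   }

   static struct rpmsg_endpoint ept;

   void demo_send(void)
   {
           struct rpmsg_device *rdev = get_rpmsg_ivshmem_device();

           /* Create an endpoint on the RPMsg device exposed by the backend
            * and push a small payload to the other QEMU instance. */
           rpmsg_create_ept(&ept, rdev, "rpmsg_ivshmem_demo",
                            RPMSG_ADDR_ANY, RPMSG_ADDR_ANY, rx_cb, NULL);
           rpmsg_send(&ept, "hello", sizeof("hello"));
   }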

Prerequisites
*************

* Tested with Zephyr version 3.4.0

* Tested with Zephyr SDK 0.16.1

* QEMU needs to be available.

Note:
*****

* The Vendor-ID 0x1AF4 is the same one used by the other Zephyr in-tree samples; currently there
is no vendor allocated with the selected number.

ivshmem-server needs to be available and running. The server is available in
Zephyr SDK or pre-built in some distributions. Otherwise, it is available in
QEMU source tree.

Optionally the ivshmem-client, if available, can help to troubleshoot the
IVSHMEM communication between QEMU instances.

Preparing IVSHMEM server before doing anything:
***********************************************

#. The ivshmem-server utility for QEMU can be found in the Zephyr SDK
directory, at:
``/path/to/your/zephyr-sdk/zephyr-<version>/sysroots/x86_64-pokysdk-linux/usr/xilinx/bin/``

#. You may also find the ivshmem-client utility there; it can be useful for debugging whether everything works
as expected.

#. Run ivshmem-server. For the ivshmem-server, both the number of vectors and
the shared memory size are decided at run-time (when the server is executed).
For Zephyr, the number of vectors and the shared memory size of ivshmem are
decided at compile-time and run-time, respectively. For Arm64 we use
vectors == 2 for the project configuration in this sample. Here is an example:

.. code-block:: console
# n = number of vectors
$ sudo ivshmem-server -n 2
$ *** Example code, do not use in production ***
#. Appropriately set ownership of ``/dev/shm/ivshmem`` and
``/tmp/ivshmem_socket`` for your deployment scenario. For instance:

.. code-block:: console
$ sudo chgrp $USER /dev/shm/ivshmem
$ sudo chmod 060 /dev/shm/ivshmem
$ sudo chgrp $USER /tmp/ivshmem_socket
$ sudo chmod 060 /tmp/ivshmem_socket
Building and Running
********************
There are host-side and remote-side projects, and they should be built individually. For the host side,
open a terminal and type:

.. code-block:: console
$ cd path/to/this-repo/examples/zephyr/dual_qemu_ivshmem/host
$ west build -pauto -bqemu_cortex_a53
For the remote side, open another terminal window and then type:

.. code-block:: console
$ cd path/to/this-repo/examples/zephyr/dual_qemu_ivshmem/remote
$ west build -pauto -bqemu_cortex_a53
* Note: Warnings that appear come from the IVSHMEM shell subsystem and can be ignored.

After both applications are built, open two terminals and run each
instance separately. Please note the host instance ``MUST`` be run ``FIRST`` and the remote
instance ``AFTER``; this is needed so that both instances learn the
other peer's IVSHMEM ID.
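
For reference, the Zephyr IVSHMEM driver API that the backend relies on looks
roughly like the hedged sketch below: each side reads its own ID from the device
and rings the peer's doorbell, which is why the startup order matters. The peer
ID value used here is an assumption for illustration; the node label matches the
overlay used by this sample.

.. code-block:: c

   /* Illustrative only; not the sample's actual code. */
   #include <zephyr/kernel.h>
   #include <zephyr/device.h>
   #include <zephyr/drivers/virtualization/ivshmem.h>

   void ivshmem_probe_example(void)
   {
           const struct device *ivshmem = DEVICE_DT_GET(DT_NODELABEL(ivshmem0));
           uintptr_t shmem;
           size_t size = ivshmem_get_mem(ivshmem, &shmem);
           uint32_t my_id = ivshmem_get_id(ivshmem);

           printk("IVSHMEM ID %u, %zu bytes of shared memory\n", my_id, size);

           /* Ring the peer's doorbell on vector 0 (peer ID 1 assumed here). */
           ivshmem_int_peer(ivshmem, 1, 0);
   }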

For example, to run the host instance:

.. code-block:: console
$ cd path/to/this-repo/examples/zephyr/dual_qemu_ivshmem/host
$ west build -t run
For the remote instance, just go to the remote side directory in another terminal:

.. code-block:: console
$ cd path/to/this-repo/examples/zephyr/dual_qemu_ivshmem/remote
$ west build -t run
Expected output:
****************
After running both host and remote QEMU instances in their own terminal tabs, and
in the ``RIGHT ORDER`` (that is, first the host instance, followed by the remote instance),
go to the host instance terminal; you should see something like this:

.. code-block:: console
uart:~$ *** Booting Zephyr OS build v3.4.0-rc2-91-gbf0f58d69816 ***
Hello qemu_cortex_a53 - Host Side, the communication over RPMsg is ready to use!
If nothing appears, make sure you are running the remote instance after this one: the
host side, after it starts running, waits for the remote one to come up, and only after
that does it become ready to use.

Once the initial boot message appears, go to the remote instance and check its initial message
on the console; you should see something like this:

.. code-block:: console
*** Booting Zephyr OS build v3.4.0-rc2-91-gbf0f58d69816 ***
Hello qemu_cortex_a53 - Remote Side, the communication over RPMsg is ready to use!
Then go back to the host side terminal window and issue the custom shell command
``rpmsg_ivshmem send``; you should see how to use it:

.. code-block:: console
uart:~$ rpmsg_ivshmem send
send: wrong parameter count
send - Usage: rpmsg_ivshmem send <string> <number of messages>
Send a string to the remote side, also specifying how many times it should be sent.
This command sends the data over the RPMsg-IVSHMEM backend and the remote side
replies by echoing the sent string back; on the host terminal this should produce
output similar to that shown below:

.. code-block:: console
uart:~$ rpmsg_ivshmem send "RPMsg over IVSHMEM" 10
Remote side echoed the string back:
[ RPMsg over IVSHMEM ]
at message number 1
Remote side echoed the string back:
[ RPMsg over IVSHMEM ]
at message number 2
Remote side echoed the string back:
[ RPMsg over IVSHMEM ]
at message number 3
Remote side echoed the string back:
[ RPMsg over IVSHMEM ]
at message number 4
Remote side echoed the string back:
[ RPMsg over IVSHMEM ]
at message number 5
Remote side echoed the string back:
[ RPMsg over IVSHMEM ]
at message number 6
Remote side echoed the string back:
[ RPMsg over IVSHMEM ]
at message number 7
Remote side echoed the string back:
[ RPMsg over IVSHMEM ]
at message number 8
Remote side echoed the string back:
[ RPMsg over IVSHMEM ]
at message number 9
Remote side echoed the string back:
[ RPMsg over IVSHMEM ]
at message number 10
On the remote side terminal window it is also possible to check the messages
arriving from the host:

.. code-block:: console
*** Booting Zephyr OS build v3.4.0-rc2-91-gbf0f58d69816 ***
Hello qemu_cortex_a53 - Remote Side, the communication over RPMsg is ready to use!
uart:~$ Host side sent a string:
[ RPMsg over IVSHMEM ]
Now echoing it back!
Host side sent a string:
[ RPMsg over IVSHMEM ]
Now echoing it back!
Host side sent a string:
[ RPMsg over IVSHMEM ]
Now echoing it back!
Host side sent a string:
[ RPMsg over IVSHMEM ]
Now echoing it back!
Host side sent a string:
[ RPMsg over IVSHMEM ]
Now echoing it back!
Host side sent a string:
[ RPMsg over IVSHMEM ]
Now echoing it back!
Host side sent a string:
[ RPMsg over IVSHMEM ]
Now echoing it back!
Host side sent a string:
[ RPMsg over IVSHMEM ]
Now echoing it back!
Host side sent a string:
[ RPMsg over IVSHMEM ]
Now echoing it back!
Host side sent a string:
[ RPMsg over IVSHMEM ]
Now echoing it back!
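
The echo behaviour seen above boils down to an RPMsg endpoint receive callback on
the remote side. The hedged sketch below illustrates the idea; it is not the
sample's verbatim source.

.. code-block:: c

   /* Hedged sketch of a remote-side RPMsg endpoint callback that echoes the
    * received buffer straight back to the host (not the sample's actual code). */
   static int echo_cb(struct rpmsg_endpoint *ept, void *data, size_t len,
                      uint32_t src, void *priv)
   {
           printk("Host side sent a string:\n[ %s ]\nNow echoing it back!\n",
                  (char *)data);
           rpmsg_send(ept, data, len);
           return RPMSG_SUCCESS;
   }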
This sample supports a huge message count for stress testing; something like
``rpmsg_ivshmem send "Test String" 10000000000000`` can be used for that. Notice that
this command is blocking and has a 5-second timeout, returning if something goes wrong,
for example if the remote side is shut down unexpectedly (a sketch of such a command
follows the console example below):

.. code-block:: console
uart:~$ rpmsg_ivshmem send "RPMsg over IVSHMEM" 10
Remote side response timed out!
uart:~$
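
For illustration, such a command could be wired up with Zephyr's shell macros
roughly as sketched below. The endpoint and semaphore names are assumptions, and
the endpoint setup is omitted; this is not the sample's actual source.

.. code-block:: c

   /* Hedged sketch of a blocking send command with a 5-second reply timeout;
    * `ept` and `reply_sem` are assumed to be set up by the backend/RX path. */
   #include <errno.h>
   #include <stdlib.h>
   #include <string.h>
   #include <zephyr/kernel.h>
   #include <zephyr/shell/shell.h>
   #include <openamp/open_amp.h>

   extern struct rpmsg_endpoint ept;  /* assumed endpoint */
   extern struct k_sem reply_sem;     /* assumed, given by the RX callback on echo */

   static int cmd_send(const struct shell *sh, size_t argc, char **argv)
   {
           unsigned long count = strtoul(argv[2], NULL, 10);

           for (unsigned long i = 0; i < count; i++) {
                   rpmsg_send(&ept, argv[1], strlen(argv[1]) + 1);

                   /* Block until the remote echo arrives, or give up after 5 s. */
                   if (k_sem_take(&reply_sem, K_SECONDS(5)) != 0) {
                           shell_error(sh, "Remote side response timed out!");
                           return -ETIMEDOUT;
                   }
           }

           return 0;
   }

   SHELL_STATIC_SUBCMD_SET_CREATE(sub_rpmsg_ivshmem,
           SHELL_CMD_ARG(send, NULL,
                   "Usage: rpmsg_ivshmem send <string> <number of messages>",
                   cmd_send, 3, 0),
           SHELL_SUBCMD_SET_END);
   SHELL_CMD_REGISTER(rpmsg_ivshmem, &sub_rpmsg_ivshmem,
                      "RPMsg over IVSHMEM commands", NULL);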
Known limitation:
*****************
The limitation of this sample concerns instance shutdown: if for some
reason the host side or the remote side gets turned off, it ``MUST NOT`` be reinitialized individually.
If that happens, both instances should be stopped and re-initialized following the
order constraints mentioned before (first run the host side, followed by the remote side).
12 changes: 12 additions & 0 deletions examples/zephyr/dual_qemu_ivshmem/host/CMakeLists.txt
@@ -0,0 +1,12 @@
# Copyright (c) 2023 Linaro
# SPDX-License-Identifier: Apache-2.0
cmake_minimum_required(VERSION 3.20.0)

find_package(Zephyr REQUIRED HINTS $ENV{ZEPHYR_BASE})
project(openamp_rpmsg_over_ivshmem_host)

target_include_directories(app PRIVATE ../rpmsg_ivshmem_backend)

target_sources(app PRIVATE
src/main.c
../rpmsg_ivshmem_backend/rpmsg_ivshmem_backend.c)
6 changes: 6 additions & 0 deletions examples/zephyr/dual_qemu_ivshmem/host/app.overlay
@@ -0,0 +1,6 @@
/*
* Copyright 2023 Linaro.
*
* SPDX-License-Identifier: Apache-2.0
*/
#include "boards/pcie_ivshmem.dtsi"
18 changes: 18 additions & 0 deletions examples/zephyr/dual_qemu_ivshmem/host/boards/pcie_ivshmem.dtsi
@@ -0,0 +1,18 @@
/*
* Copyright 2023 Linaro.
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <zephyr/dt-bindings/pcie/pcie.h>

/ {
ivhsmem {
ivshmem0: ivshmem {
compatible = "qemu,ivshmem";

vendor-id = <0x1af4>;
device-id = <0x1110>;
status = "okay";
};
};
};
19 changes: 19 additions & 0 deletions examples/zephyr/dual_qemu_ivshmem/host/boards/qemu_cortex_a53.conf
@@ -0,0 +1,19 @@
# Copyright (c) 2023 Linaro
# SPDX-License-Identifier: Apache-2.0

CONFIG_PCIE_CONTROLLER=y
CONFIG_PCIE_ECAM=y

# Hungry PCI requires at least 256M of virtual space
CONFIG_KERNEL_VM_SIZE=0x80000000

# Hungry PCI requires phys addresses with more than 32 bits
CONFIG_ARM64_VA_BITS_40=y
CONFIG_ARM64_PA_BITS_40=y

# MSI support requires ITS
CONFIG_GIC_V3_ITS=y

# ITS, in turn, requires dynamic memory (9x64 + alignment constraints)
# Additionally, our test also uses malloc
CONFIG_HEAP_MEM_POOL_SIZE=1048576
20 changes: 20 additions & 0 deletions examples/zephyr/dual_qemu_ivshmem/host/prj.conf
@@ -0,0 +1,20 @@
# Copyright (c) 2023 Linaro
# SPDX-License-Identifier: Apache-2.0

CONFIG_PCIE=y
# required by doorbell
CONFIG_PCIE_MSI=y
CONFIG_PCIE_MSI_X=y
CONFIG_PCIE_MSI_MULTI_VECTOR=y
CONFIG_POLL=y

CONFIG_VIRTUALIZATION=y
CONFIG_IVSHMEM=y
CONFIG_IVSHMEM_DOORBELL=y

CONFIG_SHELL=y
CONFIG_IVSHMEM_SHELL=n
CONFIG_OPENAMP=y

# Comment this line when building for the slave instance
CONFIG_OPENAMP_SLAVE=n