
Relationship between physical and virtual interfaces #115

Open
daniel-pebble opened this issue Apr 20, 2024 · 2 comments

Comments

@daniel-pebble

I understand this is a new project, so you might not have answers to some of my questions.

I've deployed the "media-proxy.yaml" DaemonSet onto a Kubernetes cluster, along with the "sample-app-tx". I've set the sample arguments to stream to a Telestream PRISM at 720p and 25fps; the stream is received, but the PRISM cannot decode the video. Should this work, or is the sample app designed only to stream between the tx and rx samples?

I notice that the media-proxy DaemonSet uses "kernel:eth0", which is the virtual container interface. However, I don't fully understand how this maps to the physical network adapter. In my system, I have four interfaces: two Broadcom 1Gbps and two Intel X710 25Gbps interfaces. As far as I can tell, the streaming is going via the Intel NIC, but my assumption is that this isn't bypassing the kernel. Could you explain the relationship between the physical and virtual interfaces, and how the MTL manager on the host fits in?

@Mionsz
Collaborator

Mionsz commented Jul 5, 2024

Hi :-),

If you are still interested in the application stack, we can schedule an introductory meeting and project walkthrough. Feel free to propose a time that fits your needs and share it here or by email: milosz.linkiewicz@intel.com

Thank you for reaching out with your questions regarding the Media Communications Mesh project. I understand you are experiencing issues with video decoding when streaming to a Telestream PRISM device at 720p and 25fps using the "sample-app-tx". Let's address your concerns one by one, keeping in mind that there was a rather big release a couple of days ago; for example, an FFmpeg plugin was published (https://github.com/OpenVisualCloud/Media-Communications-Mesh/tree/main/ffmpeg-plugin).

Streaming Compatibility with Telestream PRISM:

The sample applications provided with the Media Communications Mesh are primarily designed to demonstrate the capabilities of the framework and to facilitate testing between the tx (transmit) and rx (receive) samples within the MCM environment. While the sample apps should theoretically be capable of streaming to any device that supports the relevant protocols, there may be additional configuration or compatibility considerations that need to be addressed for successful integration with third-party devices like Telestream PRISM.

To troubleshoot the issue:

Ensure that the Telestream PRISM device is configured to accept the specific media format being transmitted by the sample app. Check for any required settings or parameters that might be needed for the PRISM device to decode the stream correctly. If the issue persists, it may be beneficial to capture and analyze the network traffic to verify that the packets are being formed and transmitted as expected.
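If you do capture traffic, a minimal sketch could look like the following (the interface name and UDP port are placeholders; adjust them to your session configuration):

# Capture the outgoing stream on the transmit interface for offline analysis
# in Wireshark; replace eth0 and the port with the values from your setup.
sudo tcpdump -i eth0 -w prism_tx.pcap udp port 9001

Note that traffic sent through a DPDK-bound interface bypasses the kernel and will not be visible to tcpdump on the host; in that case, capture on the receiving side or on a mirror port instead.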

First of all, run these checks on your host machine:

sudo dmesg | grep "Intel(R) Ethernet Connection E800 Series Linux Driver" || echo 'Error!'
sudo dmesg | grep "The DDP package was successfully loaded" || echo 'Error!'

You should see confirmation that the ICE driver is loaded and that its version string includes "Kahawai" (this is the most important part):

(screenshot: dmesg output showing the ICE driver loaded with the Kahawai version string and the DDP package successfully loaded)
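As an additional check, you can also query the driver and firmware versions directly. A quick sketch (the interface name below is only an example; X710/XXV710 NICs use the i40e driver, while E810 NICs use the ice driver):

# Show driver name, driver version, and firmware version for the NIC
ethtool -i ens801f0

# Show the version of the ice module available on the host (if reported)
modinfo ice | grep -i ^version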

Also keep in mind that running the MTL Manager on the host is mandatory for communication outside the MCM Media Proxy scope. I encourage you to use the newly added FFmpeg container instead of the SDK applications. You can also use the more generic FFmpeg build that was released a couple of days ago and has much better documentation, the Intel® Tiber™ Broadcast Suite: https://github.com/OpenVisualCloud/Intel-Tiber-Broadcast-Suite

Working example of a 2-node bare-metal run:

I built the Docker images from the latest release (main branch) by running ./build_docker.sh, which produced mcm/media-proxy:latest and mcm/ffmpeg:latest; mtl/mtl-manager:latest had to be built separately. I am using 2 bare-metal nodes with DPDK and Kahawai configured. I start the media-proxy, ffmpeg, and mtl-manager containers with the same commands on Node 1 and Node 2. I mount a host directory with assets from the local /opt/assets to /opt/new_assets inside the container, and I use RAW video in the yuv422p10le pixel format as input. The container commands I use on both nodes are listed below.
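For reference, a rough sketch of the build step described above (assuming the main branch and the default build script; the MTL Manager image comes from the Media Transport Library project and is built separately):

# Clone the Media Communications Mesh repository and build the images
git clone https://github.com/OpenVisualCloud/Media-Communications-Mesh.git
cd Media-Communications-Mesh
./build_docker.sh

# Verify that the expected images are present
docker images | grep -E 'mcm|mtl'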

Non-interactive mtl-manager:

docker run -it --privileged  -u 0:0 --net=host -v /var/run/imtl:/var/run/imtl -v /tmp/mcm/memif:/run/mcm -v /dev/hugepages:/dev/hugepages -v /dev/hugepages1G:/dev/hugepages1G -v /sys/fs/bpf:/sys/fs/bpf -v /dev/shm:/dev/shm -v /dev/vfio:/dev/vfio mtl-manager:latest

Interactive version of media-proxy (blocks the terminal):

docker run -it --entrypoint="/bin/bash" --privileged -u 0:0 --net=host -v /var/run/imtl:/var/run/imtl -v /tmp/mcm/memif:/run/mcm -v /dev/hugepages:/dev/hugepages -v /dev/hugepages1G:/dev/hugepages1G -v /sys/fs/bpf:/sys/fs/bpf -v /dev/shm:/dev/shm -v /dev/vfio:/dev/vfio mcm/media-proxy:latest

On Rx node, I start media-proxy by:

# node01s01c, virt-function
# vfio-user: 0000:a8:01.1:
media_proxy --dev=0000:a8:01.1 --ip=192.168.96.1

On Tx node, I start media-proxy by:

# node01s02c, virt-function
# vfio-user: 0000:98:01.1:
media_proxy --dev=0000:98:01.1 --ip=192.168.96.2
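The --dev values above are SR-IOV virtual functions that have already been bound to vfio-pci. As a hedged sketch of how that binding is typically prepared (the interface name and PCI address are examples taken from my nodes):

# Create one virtual function on the physical function (example PF name)
echo 1 | sudo tee /sys/class/net/ens801f0/device/sriov_numvfs

# Locate the PCI address of the new virtual function
lspci | grep -i "Virtual Function"

# Bind the virtual function to vfio-pci so DPDK/MTL can take it over
sudo modprobe vfio-pci
sudo dpdk-devbind.py -b vfio-pci 0000:a8:01.1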

Interactive version of ffmpeg (blocks the terminal):

docker run -it --entrypoint="/bin/bash" --privileged -u 0:0 --net=host -v /var/run/imtl:/var/run/imtl -v /tmp/mcm/memif:/run/mcm -v /dev/hugepages:/dev/hugepages -v /dev/hugepages1G:/dev/hugepages1G -v /sys/fs/bpf:/sys/fs/bpf -v /dev/shm:/dev/shm -v /dev/vfio:/dev/vfio -v /opt/assets:/opt/new_assets mcm/ffmpeg:latest

Then, to start Rx on node 1 (node01s01c), where I set RTP retransmission to my PC's address so that I can easily attach to the stream using ffplay (e.g. ffplay -protocol_whitelist file,rtp,udp -i sdp_file.sdp), I type:

ffmpeg -stream_loop -1 -framerate 60 -video_size 1920x1080 -pixel_format yuv422p10le -i /opt/new_assets/demo/netflix_toddler_1920x1080_60fps_10bit_le.yuv -f mcm -frame_rate 60 -video_size 1920x1080 -pixel_format yuv422p10le -protocol_type auto -payload_type st20 -ip_addr 192.168.96.1 -port 9001 -
ffmpeg -re -f mcm -frame_rate 60 -video_size 1920x1080 -pixel_format yuv422p10le -protocol_type auto -payload_type st20 -ip_addr 192.168.96.2 -port 9001 -i - -preset ultrafast -vcodec libx264 -b:v 16384k -strict -2 -sdp_file /opt/new_assets/sdp_file.sdp -f rtp rtp://192.168.0.123:12345

For the last part, the looped Tx that runs on node 2 (node01s02c), I type:

ffmpeg -stream_loop -1 -framerate 60 -video_size 1920x1080 -pixel_format yuv422p10le -i /opt/new_assets/demo/netflix_toddler_1920x1080_60fps_10bit_le.yuv -f mcm -frame_rate 60 -video_size 1920x1080 -pixel_format yuv422p10le -protocol_type auto -payload_type st22 -ip_addr 192.168.96.1 -port 9001 -

If all goes correctly, the working applications look more or less like this:

(screenshot: terminals showing media_proxy and ffmpeg running on both nodes)

Relationship Between Physical and Virtual Interfaces:

The "kernel:eth0" notation:

The "kernel:eth0" notation in the media-proxy DaemonSet configuration refers to the virtual network interface within the container. The Media Transport Library (MTL) and DPDK work together to facilitate high-performance packet processing. DPDK can be configured to use different types of poll mode drivers (PMDs) to interact with the network interfaces. When using DPDK, the data plane traffic can bypass the kernel's networking stack, allowing for lower latency and higher throughput.

In your setup, the Intel X710 25Gbps interfaces are likely being used for the media streaming. The MTL manager is responsible for configuring the DPDK environment and managing the resources, including the binding of the physical NICs to the DPDK-compatible driver. This setup allows the media-proxy to communicate directly with the physical NIC, bypassing the kernel's network stack for data plane traffic.

Double-check the Intel NIC:

To confirm that the streaming is indeed going via the Intel NIC and bypassing the kernel, you can check the DPDK configuration and the bindings of the network interfaces. Look for the DPDK logs or use tools like dpdk-devbind.py to verify which interfaces are being used by DPDK. Additionally, ensure that the correct NICs are being targeted in the configuration and that the necessary hugepages and other resources are allocated appropriately for DPDK's operation.
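For example (a quick sketch; dpdk-devbind.py ships with DPDK and may live under /usr/local/bin or the DPDK usertools directory on your system):

# Show which NICs are bound to vfio-pci (DPDK) and which still use kernel drivers
dpdk-devbind.py --status

# Confirm that hugepages are allocated for DPDK
grep Huge /proc/meminfo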

If you require further assistance or have additional questions, please feel free to reach out. We are committed to supporting you and ensuring the successful deployment and operation of the Media Communications Mesh in your environment.

Best regards
Miłosz Linkiewicz
