Zenoh-Flow provides a zenoh-based dataflow programming framework for computations that span from the cloud to the device.
Zenoh-Flow allows users to declare a dataflow graph via a YAML file and to use tags to express location affinity and requirements for the operators that make up the graph. When deploying the dataflow graph, Zenoh-Flow automatically deals with distribution by linking remote operators through zenoh.
A dataflow is composed of a set of sources (producing data), operators (computing over the data), and sinks (consuming the resulting data). These components are dynamically loaded at runtime.
Remote sources, operators, and sinks leverage zenoh to communicate in a transparent manner. In other terms, the dataflow graph retains location transparency and can be deployed in different ways depending on specific needs.
Zenoh-Flow provides several working examples that illustrate how to define operators, sources, and sinks, as well as how to declaratively define the dataflow graph by means of a YAML file.
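For orientation, a graph descriptor declares the sources, operators, and sinks (each pointing at the shared library that implements it), the links that connect their ports, and, for multi-runtime deployments, which runtime each component runs on. The sketch below is only illustrative: the field names, port names, and library file names are assumptions, and the descriptors shipped under ./graphs/ are the authoritative reference for the actual schema.

flow: FizzBuzz
sources:
  - id: ManualSource                                    # produces the data
    uri: file://./target/release/libmanual_source.so    # .so / .dylib / .dll depending on your OS
operators:
  - id: FizzOperator                                    # computes over the data
    uri: file://./target/release/libexample_fizz.so
  - id: BuzzOperator
    uri: file://./target/release/libexample_buzz.so
sinks:
  - id: Sink                                            # consumes the resulting data
    uri: file://./target/release/libgeneric_sink.so
links:
  - from: { node: ManualSource, output: Int }
    to:   { node: FizzOperator, input: Int }
  - from: { node: FizzOperator, output: Str }
    to:   { node: BuzzOperator, input: Str }
  - from: { node: BuzzOperator, output: Str }
    to:   { node: Sink, input: Data }
mapping:                                                # optional: pin components to named runtimes
  - id: ManualSource
    runtime: foo
  - id: Sink
    runtime: bar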
First, let's build an example runtime to run the examples:
cargo build --release -p runtime
This will create the runtime binary in ./target/release/runtime.
First, compile the examples for the FizzBuzz pipeline:
cargo build --release -p manual-source -p example-fizz -p example-buzz -p generic-sink
This will create the shared libraries (whose extension depends on your OS) that the pipeline will load.
To run all components on the same Zenoh-Flow runtime:
./target/release/runtime --graph-file ./graphs/fizz_buzz_pipeline.yaml --runtime foo
Note: in this particular case the --runtime foo argument is ignored.
To distribute the components across multiple Zenoh-Flow runtimes, run the following on the first machine:
./target/release/runtime --graph-file ./graphs/fizz-buzz-multiple-runtimes.yaml --runtime foo
On the second machine, run:
./target/release/runtime --graph-file ./graphs/fizz-buzz-multiple-runtimes.yaml --runtime bar
First, compile the examples for the face detection pipeline:
cargo build --release -p camera-source -p face-detection -p video-sink
This will create the shared libraries (whose extension depends on your OS) that the pipeline will load.
To run all components on the same Zenoh-Flow runtime:
./target/release/runtime --graph-file ./graphs/face_detection.yaml --runtime foo
Note: in this particular case the --runtime foo argument is ignored.
To distribute the components across multiple Zenoh-Flow runtimes, run the following on the first machine:
./target/release/runtime --graph-file ./graphs/face-detection-multi-runtime.yaml --runtime gigot
On the second machine, run:
./target/release/runtime --graph-file ./graphs/face-detection-multi-runtime.yaml --runtime nuc
On the third machine, run:
./target/release/runtime --graph-file ./graphs/face-detection-multi-runtime.yaml --runtime leia
First, compile the examples for the DNN object detection pipeline:
cargo build --release -p camera-source -p object-detection-dnn -p video-sink
This will create the shared libraries (whose extension depends on your OS) that the pipeline will load.
Then update the files ./graphs/dnn-object-detection.yaml and ./graphs/dnn-object-detection-multi-runtime.yaml by changing the neural-network, network-weights, and network-classes entries so that they point to the absolute paths of your neural network configuration, weights, and classes files.
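For illustration, assuming a YOLOv3-style setup, the entries to change might look like the snippet below. The paths are placeholders to replace with your own, and the surrounding configuration key is an assumption about how the descriptor nests these entries; check the actual graph files for the exact layout.

configuration:
  neural-network: /absolute/path/to/yolov3.cfg        # placeholder: your network configuration file
  network-weights: /absolute/path/to/yolov3.weights   # placeholder: your trained weights
  network-classes: /absolute/path/to/coco.names       # placeholder: your class names file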
To run all components on the same Zenoh-Flow runtime:
./target/release/runtime --graph-file ./graphs/dnn-object-detection.yaml --runtime foo
Note: in this particular case the --runtime foo argument is ignored.
To distribute the components across multiple Zenoh-Flow runtimes, run the following on the first machine:
./target/release/runtime --graph-file ./graphs/dnn-object-detection-multi-runtime.yaml --runtime foo
On the second machine, run:
./target/release/runtime --graph-file ./graphs/dnn-object-detection-multi-runtime.yaml --runtime cuda
On the third machine, run:
./target/release/runtime --graph-file ./graphs/dnn-object-detection-multi-runtime.yaml --runtime bar
This example detects cars in a video file. If you have a sequence of PNG images, you can assemble them into a video using ffmpeg and the following command:
ffmpeg -framerate 15 -pattern_type glob -i 'I1*.png' -c:v libx264 I1.mp4
First, compile the examples for the car detection pipeline:
cargo build --release -p video-file-source -p object-detection-dnn -p video-sink
This will create the shared libraries (whose extension depends on your OS) that the pipeline will load.
Then edit the file ./graphs/car-pipeline-multi-runtime.yaml by changing the neural-network, network-weights, and network-classes entries so that they point to the absolute paths of your neural network configuration, weights, and classes files. You also need to update the video file path in the same file to match the absolute path of your video file.
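As a hypothetical sketch only, the video source entry might look like the line below; the file key name and its nesting under configuration are assumptions, and the path is a placeholder, so refer to the graph file itself for the exact key to edit.

configuration:
  file: /absolute/path/to/cars.mp4   # placeholder: your video file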
On the first machine, run:
./target/release/runtime --graph-file ./graphs/car-pipeline-multi-runtime.yaml --runtime gigot
On the second machine, run:
./target/release/runtime --graph-file ./graphs/car-pipeline-multi-runtime.yaml --runtime cuda
On the third machine, run:
./target/release/runtime --graph-file ./graphs/car-pipeline-multi-runtime.yaml --runtime macbook