Remove leading $ from bash commands
sea-bass committed Dec 14, 2023
1 parent d9d0cf8 commit 92f4eb3
Showing 6 changed files with 76 additions and 70 deletions.
moveit2/README.md (16 changes: 8 additions & 8 deletions)
@@ -7,8 +7,8 @@ The MoveIt2 Dockerfile installs all of the prerequisite system dependencies to b

To build the docker image, run:

-```
-$ ./build.sh
+```bash
+./build.sh
```

The build process will take about 30 minutes, depending on the host computer.
@@ -17,8 +17,8 @@ The build process will take about 30 minutes.

After building the image, you can see the newly-built image by running:

-```
-$ docker image list
+```bash
+docker image list
```

The output will look something like this:
@@ -34,8 +34,8 @@ The new image is named **openrobotics/moveit2:latest**.

There is a run.sh script provided for convenience that will run the spaceros image in a container.

-```
-$ ./run.sh
+```bash
+./run.sh
```

Upon startup, the container automatically runs the entrypoint.sh script, which sources the MoveIt2 and Space ROS environment files.
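For reference, such an entrypoint script typically just sources the relevant workspace setup files and then hands control to the requested command. A minimal sketch is shown below; the paths and the exact contents of the real entrypoint.sh in the image may differ.

```bash
#!/bin/bash
# Illustrative sketch of a typical entrypoint -- not the exact file shipped in the image.
set -e
# Source the Space ROS and MoveIt2 workspaces (paths are assumptions).
source "$HOME/spaceros/install/setup.bash"
source "$HOME/moveit2/install/setup.bash"
# Run whatever command was passed to the container (an interactive shell by default).
exec "$@"
```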
@@ -49,7 +49,7 @@ spaceros-user@8e73b41a4e16:~/moveit2#

Run the following command to launch the MoveIt2 tutorials demo launch file:

-```
+```bash
ros2 launch moveit2_tutorials demo.launch.py rviz_tutorial:=true
```

@@ -63,7 +63,7 @@ You can now follow the [MoveIt2 Tutorial documentation](https://moveit.picknik.a

To run the Move Group C++ Interface Demo, execute the following command:

-```
+```bash
ros2 launch moveit2_tutorials move_group.launch.py
```

renode_rcc/README.md (6 changes: 3 additions & 3 deletions)
@@ -3,15 +3,15 @@ Building [RTEMS Cross Compilation System (RCC)](https://www.gaisler.com/index.ph

## Usage
To run the simulator in the Docker container:
-```
-$ renode
+```bash
+renode
```

> If you face a GTK protocol error, exit the container, run `xhost + local:`, and restart the container to allow other users (including root) to run programs in the current session.
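Spelled out, that recovery sequence looks like the following on the host; the run script name is an assumption about how the container was started.

```bash
# On the host, after exiting the container:
xhost + local:   # allow local users (including root inside the container) to use the X server
./run.sh         # restart the container (assumed start script)
```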

In the Renode window, run:
-```
+```bash
start
s @renode-rtems-leon3/leon3_rtems.resc
```
rtems/README.md (12 changes: 6 additions & 6 deletions)
@@ -10,16 +10,16 @@ example is an hello world example build using `sparc-rtems5-gcc` tool.
tinyxml2 is modified to work with RTEMS, built using the `sparc-rtems5-gcc` tool.

## Compile
-```
-$ cd /root/tinxyxml2
-$ ./doit
+```bash
+cd /root/tinxyxml2
+./doit
```

## Save build artefacts
After compiling the binaries for the target, save the binaries and test payload for later use in renode.

-```
-$ docker cp containerId:/root/tinyxml2/tinyxml2.o <path to spaceros_ws>/docker/renode_rcc/build/
+```bash
+docker cp containerId:/root/tinyxml2/tinyxml2.o <path to spaceros_ws>/docker/renode_rcc/build/

-$ docker cp containerId:/root/tinyxml2/xmltest.exe <path to spaceros_ws>/docker/renode_rcc/build/
+docker cp containerId:/root/tinyxml2/xmltest.exe <path to spaceros_ws>/docker/renode_rcc/build/
```
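To find the value to use for `containerId`, list the running containers first. The container ID and destination path below are illustrative placeholders.

```bash
docker ps   # note the CONTAINER ID of the RTEMS/RCC container
docker cp 3f2a9c1d8b7e:/root/tinyxml2/tinyxml2.o ~/spaceros_ws/docker/renode_rcc/build/
docker cp 3f2a9c1d8b7e:/root/tinyxml2/xmltest.exe ~/spaceros_ws/docker/renode_rcc/build/
```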
space_robots/README.md (74 changes: 37 additions & 37 deletions)
@@ -11,25 +11,25 @@ This is for Curiosity Mars rover and Canadarm demos.
The demo image builds on top of the `spaceros` and `moveit2` images.
To build the docker image, first build both required images, then the `space_robots` demo image:

-```
-$ cd docker/spaceros
-$ ./build.sh
-$ cd ../moveit2
-$ ./build.sh
-$ cd ../space_robots
-$ ./build.sh
+```bash
+cd docker/spaceros
+./build.sh
+cd ../moveit2
+./build.sh
+cd ../space_robots
+./build.sh
```

## Running the Demo Docker

Run the following to allow GUI passthrough:
-```
-$ xhost +local:docker
+```bash
+xhost +local:docker
```

Then run:
-```
-$ ./run.sh
+```bash
+./run.sh
```

Depending on the host computer, you might need to remove the `--gpus all` flag in `run.sh`, which passes your GPUs through to the container.
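If you are unsure whether GPU passthrough will work on your machine, a quick check is shown below; it assumes NVIDIA hardware with the NVIDIA Container Toolkit installed.

```bash
# On the host: confirm the NVIDIA driver is available.
nvidia-smi
# Confirm Docker can expose the GPU to a container.
docker run --rm --gpus all ubuntu:22.04 nvidia-smi
```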
@@ -38,8 +38,8 @@ Depending on the host computer, you might need to remove the ```--gpus all``` fl

### Curiosity Mars rover demo
Launch the demo:
-```
-$ ros2 launch mars_rover mars_rover.launch.py
+```bash
+ros2 launch mars_rover mars_rover.launch.py
```

In the top left corner, click the refresh button to show the camera feed.
@@ -48,72 +48,72 @@ On the top left corner, click on the refresh button to show camera feed.

Open a new terminal and attach to the currently running container:

-```
-$ docker exec -it <container-name> bash
+```bash
+docker exec -it <container-name> bash
```

Make sure packages are sourced:

-```
-$ source ~/spaceros/install/setup.bash
+```bash
+source ~/spaceros/install/setup.bash
```

-```
-$ source ~/demos_ws/install/setup.bash
+```bash
+source ~/demos_ws/install/setup.bash
```
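With both workspaces sourced, the demo services used below should be visible; a quick sanity check:

```bash
ros2 service list | grep -E "move|turn|arm|mast"
```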

#### Available Commands

Drive the rover forward

-```
-$ ros2 service call /move_forward std_srvs/srv/Empty
+```bash
+ros2 service call /move_forward std_srvs/srv/Empty
```

Stop the rover

-```
-$ ros2 service call /move_stop std_srvs/srv/Empty
+```bash
+ros2 service call /move_stop std_srvs/srv/Empty
```

Turn left

-```
-$ ros2 service call /turn_left std_srvs/srv/Empty
+```bash
+ros2 service call /turn_left std_srvs/srv/Empty
```

Turn right

-```
-$ ros2 service call /turn_right std_srvs/srv/Empty
+```bash
+ros2 service call /turn_right std_srvs/srv/Empty
```

Open the tool arm:

-```
-$ ros2 service call /open_arm std_srvs/srv/Empty
+```bash
+ros2 service call /open_arm std_srvs/srv/Empty
```

Close the tool arm:

-```
-$ ros2 service call /close_arm std_srvs/srv/Empty
+```bash
+ros2 service call /close_arm std_srvs/srv/Empty
```

Open the mast (camera arm)

-```
-$ ros2 service call /mast_open std_srvs/srv/Empty
+```bash
+ros2 service call /mast_open std_srvs/srv/Empty
```

Close the mast (camera arm)

-```
-$ ros2 service call /mast_close std_srvs/srv/Empty
+```bash
+ros2 service call /mast_close std_srvs/srv/Empty
```
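All of these services use the `std_srvs/srv/Empty` type, so they can also be scripted. A small open-loop sketch that drives forward, turns, and stops is shown below; the sleep durations are arbitrary.

```bash
#!/bin/bash
# Open-loop drive pattern built from the demo services above (illustrative only).
set -e
ros2 service call /move_forward std_srvs/srv/Empty
sleep 5
ros2 service call /turn_left std_srvs/srv/Empty
sleep 3
ros2 service call /move_stop std_srvs/srv/Empty
```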

### Canadarm demo

-```
-$ ros2 launch canadarm canadarm.launch.py
+```bash
+ros2 launch canadarm canadarm.launch.py
```
spaceros/README.md (18 changes: 9 additions & 9 deletions)
@@ -9,8 +9,8 @@ The [Earthly](https://earthly.dev/get-earthly) utility is required to build this

To build the image, run:

-```
-$ ./build.sh
+```bash
+./build.sh
```

The build process will take about 20 or 30 minutes, depending on the host computer.
@@ -19,8 +19,8 @@ The build process will take about 20 or 30 minutes, depending on the host comput

After building the image, you can see the newly-built image by running:

-```
-$ docker image list
+```bash
+docker image list
```

The output will look something like this:
@@ -37,8 +37,8 @@ The new image is named **osrf/space-ros:latest**.
The `rocker` library is required to run the built image; install it with `sudo apt-get install python3-rocker`.
There is a run.sh script provided for convenience that will run the spaceros image in a container.

-```
-$ ./run.sh
+```bash
+./run.sh
```

Upon startup, the container automatically runs the entrypoint.sh script, which sources the Space ROS environment file (setup.bash).
@@ -164,8 +164,8 @@ Sometimes it may be convenient to attach additional terminals to a running Docke

With the Space ROS Docker container running, open a second host terminal and then run the following command to determine the container ID:

-```
-$ docker container list
+```bash
+docker container list
```

The output will look something like this:
@@ -177,7 +177,7 @@ d10d85c68f0e openrobotics/spaceros "/entrypoint.sh …" 28 minutes ago U

The container ID in this case is *d10d85c68f0e*. So, run the following command in the host terminal:

-```
+```bash
docker exec -it d10d85c68f0e /bin/bash --init-file "install/setup.bash"
```

zynq_rtems/README.md (20 changes: 13 additions & 7 deletions)
@@ -18,7 +18,7 @@ To simplify collecting and compiling dependencies on a wide range of systems, we
You will need to install Docker on your system first, for example, using `sudo apt install docker.io`.
Then, run this [script](https://github.com/space-ros/docker/blob/main/zynq_rtems/build_dependencies.sh):

-```
+```bash
cd /path/to/zynq_rtems
./build_dependencies.sh
```
@@ -28,7 +28,7 @@ This will typically take at least 10 minutes, and can take much longer if either

Next, we will use this "container full of dependencies" to compile a sample application.

-```
+```bash
cd /path/to/zynq_rtems
./compile_demos.sh
```
@@ -41,7 +41,8 @@ The build products will land in `zynq_rtems/hello_zenoh/build`.
The emulated system that will run inside QEMU needs to have a way to talk to a virtual network segment on the host machine.
We'll create a TAP device for this.
The following script will set this up, creating a virtual `10.0.42.x` subnet for a device named `tap0`:
-```
+
+```bash
./start_network_tap.sh
```
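For reference, a tap setup like this typically reduces to a few `ip` commands; the sketch below is an assumption about what the script does, using the device name and subnet mentioned above, and the actual start_network_tap.sh may differ.

```bash
# Create a tap device owned by the current user and put the host on the 10.0.42.x subnet.
sudo ip tuntap add dev tap0 mode tap user "$USER"
sudo ip addr add 10.0.42.1/24 dev tap0
sudo ip link set dev tap0 up
```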

@@ -51,30 +52,34 @@ We will need three terminals for this demo:
* Zenoh-Pico publisher (in RTEMS in QEMU)

First, we will start a Zenoh router:
-```
+
+```bash
cd /path/to/zynq_rtems
cd hello_zenoh
./run_zenoh_router
```
This will print a bunch of startup information and then continue running silently, waiting for inbound Zenoh traffic. Leave this terminal running.

In the second terminal, we'll run the Zenoh subscriber example:
-```
+
+```bash
cd /path/to/zynq_rtems
cd hello_zenoh
./run_zenoh_subscriber
```

In the third terminal, we will run the RTEMS-based application, which will communicate with the Zenoh router and thence to the Zenoh subscriber.
The following script will run QEMU inside the container, with a volume-mount of the `hello_zenoh` demo application so that the build products from the previous step are made available to the QEMU that was built inside the container.
-```
+
+```bash
cd /path/to/zynq_rtems
cd hello_zenoh
./run_rtems.sh
```

The terminal should print a bunch of information about the emulated Zynq network interfaces and their routing configuration.
After that, it should contact the `zenohd` instance running in the other terminal. It should print something like this:

```
Opening zenoh session...
Zenoh session opened.
@@ -127,6 +132,7 @@ This is a good thing.
# Clean up

If you would like, you can now remove the network tap device that we created in the previous step:
-```
+
+```bash
zynq_rtems/stop_network_tap.sh
```
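To confirm the tap device is gone, the following should now report that `tap0` does not exist:

```bash
ip link show tap0
```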
