
Commit 6901f2f

Merge pull request #7 from Kotochleb/feature/readme-update
Proofread README and update build instructions
2 parents: 18267ee + 63c9dd4

1 file changed: README.md (+57 −29 lines)

# PYM3T

A Python wrapper around the M3T tracker from [DLR-RM/3DObjectTracking](https://github.com/DLR-RM/3DObjectTracking/tree/master).

## Installation

To install pym3t, you can use pip or poetry.

We strongly suggest installing it in either a
[venv](https://docs.python.org/fr/3/library/venv.html) or a
[conda environment](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html).

### Example with conda

```bash
git clone https://github.com/agimus-project/pym3t
cd pym3t
conda env create -f environment.yml
conda activate pym3t
pip install .
```

### Example with venv

> [!NOTE]
> M3T relies on [GLFW](https://www.glfw.org/). Before building, ensure it is installed.
> On Ubuntu, run `apt-get install libglfw3 libglfw3-dev`.

```bash
git clone https://github.com/agimus-project/pym3t
cd pym3t
python -m venv .venv
source .venv/bin/activate
pip install .
```

# Example scripts

As example usage of the library, several scripts are provided:
* `run_image_dir_example.py`: single object tracking using color and depth images from the filesystem;
* `run_webcam_example.py`: single object tracking with the first camera device detected by the system (usually a webcam or other USB camera);
* `run_realsense_example.py`: single object tracking with a RealSense camera.

> [!IMPORTANT]
> For all examples, you need an object mesh in the Wavefront **.obj** format, named **<object_id>.obj**. Upon first execution, a set of sparse template views is generated, which can take some time.

> [!TIP]
> Check the available options with `python <script name>.py -h`.
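
Since every example expects the mesh at `<path/to/obj/dir>/<object_id>.obj`, a quick sanity check before launching can save a failed run. A minimal sketch — the helper below is our own illustration, not part of pym3t:

```python
from pathlib import Path


def find_object_mesh(mesh_dir, object_id):
    """Return the expected Wavefront mesh path, raising early if it is missing."""
    mesh_path = Path(mesh_dir) / f"{object_id}.obj"
    if not mesh_path.is_file():
        raise FileNotFoundError(
            f"Expected mesh {mesh_path} (Wavefront .obj named <object_id>.obj)"
        )
    return mesh_path
```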

## Running image per image
----
To run this example you need a set of recorded sequential color (and optionally depth) images stored in a directory.
The color images **color\*.png** and **depth\*.png** need to have names in lexicographic order (e.g. *color_000000.png*, *color_000001.png*, *color_000002.png*, ...).
Calibrated camera intrinsics in the format described in config/cam_d435_640.yaml also need to be provided.
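
Lexicographic order means a plain `sorted()` over the filenames already yields the playback order, provided the frame indices are zero-padded. A small illustrative sketch (our own helper, not part of pym3t):

```python
from pathlib import Path


def list_frames(image_dir, prefix="color"):
    """Return image paths in playback order.

    Zero-padded names like color_000010.png sort correctly; an unpadded
    color_10.png would sort before color_2.png and break the sequence.
    """
    return sorted(Path(image_dir).glob(f"{prefix}*.png"))
```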

Color only:
```bash
python examples/run_image_dir_example.py --use_region -b obj_000014 -m <path/to/obj/dir> -i <path/to/image/dir> -c config/cam_d435_640.yaml --stop
```

Color + depth:
```bash
python examples/run_image_dir_example.py --use_region --use_depth -b obj_000014 -m <path/to/obj/dir> -i <path/to/image/dir> -c config/cam_d435_640.yaml --stop
```

Keyboard commands:
- `q`: exit;
- `any other key`: when running with the **--stop**/**-s** argument, continue to the next image.

## Running with webcam
To bypass camera calibration, a reasonable horizontal fov (50-70 degrees) can be assumed to derive the camera intrinsics.
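
The approximation can be sketched with the standard pinhole model — an illustration under the stated assumption, not code from pym3t (the function name is ours):

```python
import math


def intrinsics_from_fov(width, height, fov_h_deg=60.0):
    """Approximate pinhole intrinsics from an assumed horizontal field of view."""
    # fx follows from half the image width subtending half the horizontal fov.
    fx = (width / 2.0) / math.tan(math.radians(fov_h_deg) / 2.0)
    fy = fx  # square pixels assumed
    cx, cy = width / 2.0, height / 2.0  # principal point at the image center
    return fx, fy, cx, cy
```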
```bash
python examples/run_webcam_example.py --use_region -b obj_000014 -m <path/to/obj/dir>
```

Keyboard commands:
- `q`: exit;
- `d`: reset object pose to initial guess;
- `x`: start/restart tracking.

## Running with RealSense camera
----
Color only:
```bash
python examples/run_realsense_example.py --use_region -b obj_000014 -m <path/to/obj/dir>
```

----
Color + depth:
```bash
python examples/run_realsense_example.py --use_region --use_depth -b obj_000014 -m <path/to/obj/dir>
```

Keyboard commands:
- `q`: exit;
- `d`: initialize object pose;
- `x`: start/restart tracking.
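
The webcam and RealSense examples share the same key handling. As a rough sketch of that dispatch pattern — our own illustration, the real loop lives inside the example scripts:

```python
def handle_key(key, state):
    """Dispatch a single key press, mirroring the commands listed above."""
    if key == "q":
        state["running"] = False           # exit the loop
    elif key == "d":
        state["pose_initialized"] = True   # (re)initialize the object pose
    elif key == "x":
        state["tracking"] = True           # start/restart tracking
    return state
```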
