# PYM3T

A Python wrapper around the M3T tracker from [DLR-RM/3DObjectTracking](https://github.com/DLR-RM/3DObjectTracking/tree/master).

## Installation

To install pym3t, you can use pip or poetry.

We strongly suggest installing it in either a
[venv](https://docs.python.org/fr/3/library/venv.html) or a
[conda environment](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html).

### Example with conda

```bash
git clone https://github.com/agimus-project/pym3t
cd pym3t
conda env create -f environment.yml
conda activate pym3t
pip install .
```

### Example with venv

> [!NOTE]
> M3T relies on [GLFW](https://www.glfw.org/). Before building, ensure it is installed.
> For Ubuntu, run `apt-get install libglfw3 libglfw3-dev`.

```bash
git clone https://github.com/agimus-project/pym3t
cd pym3t
python -m venv .venv
source .venv/bin/activate
pip install .
```

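In either case, a quick import check confirms that the extension built correctly (a minimal smoke test, not an official installation step):

```python
# Minimal smoke test: the compiled binding should import cleanly.
import pym3t

print("pym3t imported from:", pym3t.__file__)
```
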
# Example scripts

As examples of how to use the library, we provide several scripts (the sketch after this list outlines the setup pattern they share):
* `run_image_dir_example.py`: single object tracking using color and depth images from the filesystem;
* `run_webcam_example.py`: single object tracking with the first camera device detected by the system (usually a webcam or other USB camera);
* `run_realsense_example.py`: single object tracking with a RealSense camera.

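All three scripts follow the same M3T setup pattern: create a `Tracker`, attach a camera, wrap the mesh in a `Body`, bind a region modality to it through a `Link` and an `Optimizer`, then run the tracker. The sketch below is a rough, unverified outline of that pattern; class and method names mirror the M3T C++ API that pym3t binds, so refer to the example scripts for the exact signatures.

```python
# Rough sketch of the setup pattern shared by the example scripts.
# Names mirror the bound M3T API; exact signatures may differ, see
# examples/*.py for working code.
import numpy as np
import pym3t

tracker = pym3t.Tracker("tracker", synchronize_cameras=False)
color_camera = pym3t.RealSenseColorCamera("color_camera")  # or a webcam/image-dir camera

body = pym3t.Body(
    name="obj_000014",
    geometry_path="<path/to/obj/dir>/obj_000014.obj",
    geometry_unit_in_meter=1.0,
    geometry_counterclockwise=True,
    geometry_enable_culling=True,
    geometry2body_pose=np.eye(4),
)
# Sparse template views are generated here on first run (slow once, then cached).
region_model = pym3t.RegionModel("region_model", body, "tmp/region_model.bin")
region_modality = pym3t.RegionModality("region_modality", body, color_camera, region_model)

link = pym3t.Link("link", body)
link.AddModality(region_modality)
tracker.AddOptimizer(pym3t.Optimizer("optimizer", link))

if tracker.SetUp():
    tracker.RunTrackerProcess(execute_detection=True, start_tracking=True)
```
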
> [!IMPORTANT]
> For all examples, you need an object mesh in the Wavefront **.obj** format, named **<object_id>.obj**. Upon first execution, a set of sparse template views is generated, which can take some time.

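If your mesh exists in another format (PLY, STL, ...), an external tool such as [trimesh](https://trimesh.org) can convert it. A small sketch follows; trimesh is not a pym3t dependency, and the input file name is a placeholder:

```python
# Convert a mesh to Wavefront .obj, named after its object id.
# trimesh is an external package; "obj_000014.ply" is a placeholder.
import trimesh

trimesh.load("obj_000014.ply").export("obj_000014.obj")
```
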
> [!TIP]
> Check available options with `python <script name>.py -h`

## Running image per image
----
To run this example, you need a set of recorded sequential color (and optionally depth) images stored in a directory.
The color images **color\*.png** and **depth\*.png** need to have names in lexicographic order (e.g. *color_000000.png*, *color_000001.png*, *color_000002.png*, ...).
Calibrated camera intrinsics in the format described in `config/cam_d435_640.yaml` also need to be provided.

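For instance, if you record your own sequence with OpenCV (just one possible way to produce such a directory), zero-padded indices give the required ordering:

```python
# Save frames so that lexicographic order matches temporal order
# (color_000000.png, color_000001.png, ...). OpenCV is used here
# only for illustration; any recorder producing this naming works.
import cv2

cap = cv2.VideoCapture(0)  # any cv2-readable source
for i in range(100):
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite(f"color_{i:06d}.png", frame)
cap.release()
```
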
Color only:
```bash
python examples/run_image_dir_example.py --use_region -b obj_000014 -m <path/to/obj/dir> -i <path/to/image/dir> -c config/cam_d435_640.yaml --stop
```

Color + depth:
```bash
python examples/run_image_dir_example.py --use_region --use_depth -b obj_000014 -m <path/to/obj/dir> -i <path/to/image/dir> -c config/cam_d435_640.yaml --stop
```

Keyboard commands:
- `q`: exit;
- `any other key`: when running with the **--stop** or **-s** argument, continue to the next image.

## Running with webcam
To bypass camera calibration, a reasonable horizontal FOV (50-70 degrees) can be assumed to derive approximate camera intrinsics (see the sketch after the command below):
```bash
python examples/run_webcam_example.py --use_region -b obj_000014 -m <path/to/obj/dir>
```

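For reference, the pinhole relation behind this shortcut is f_x = (W / 2) / tan(HFOV / 2), with the principal point at the image center. A small sketch; the key names are illustrative, not the format of the config files:

```python
# Approximate pinhole intrinsics from an assumed horizontal FOV.
# Assumes square pixels (fy = fx) and a centered principal point;
# the key names are illustrative only.
import math

def intrinsics_from_hfov(width: int, height: int, hfov_deg: float) -> dict:
    fx = (width / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    return {"fx": fx, "fy": fx, "cx": width / 2.0, "cy": height / 2.0}

print(intrinsics_from_hfov(640, 480, 60.0))  # fx = fy ~ 554.3
```
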
Keyboard commands:
- `q`: exit;
- `d`: reset object pose to initial guess;
- `x`: start/restart tracking.

## Running with RealSense camera
----
Color only:
```bash
python examples/run_realsense_example.py --use_region -b obj_000014 -m <path/to/obj/dir>
```

----

Color + depth:
```bash
python examples/run_realsense_example.py --use_region --use_depth -b obj_000014 -m <path/to/obj/dir>
```

Keyboard commands:
- `q`: exit;
- `d`: initialize object pose;
- `x`: start/restart tracking.