diff --git a/conf/gettingstarted.ork b/conf/gettingstarted.ork
new file mode 100644
index 0000000..4ecf6a4
--- /dev/null
+++ b/conf/gettingstarted.ork
@@ -0,0 +1,44 @@
+source2:
+  type: RosKinect
+  module: 'object_recognition_ros.io'
+  parameters:
+    rgb_frame_id: camera_rgb_optical_frame
+    depth_image_topic: /camera/depth_registered/image_raw
+    depth_camera_info: /camera/depth_registered/camera_info
+    rgb_image_topic: /camera/rgb/image_rect_color
+    rgb_camera_info: /camera/rgb/camera_info
+
+sink1:
+  type: TablePublisher
+  module: 'object_recognition_tabletop'
+  inputs: [source2]
+
+sink2:
+  type: Publisher
+  module: 'object_recognition_ros.io'
+  inputs: [source2]
+
+
+pipeline1:
+  type: TabletopTableDetector
+  module: 'object_recognition_tabletop'
+  inputs: [source2]
+  outputs: [sink1]
+  parameters:
+    table_detector:
+      min_table_size: 1000
+      plane_threshold: 0.01
+
+pipeline2:
+  type: TabletopObjectDetector
+  module: 'object_recognition_tabletop'
+  inputs: [source2, pipeline1]
+  outputs: [sink2]
+  parameters:
+    object_ids: 'all'
+    tabletop_object_ids: 'REDUCED_MODEL_SET'
+    threshold: 0.65
+    db:
+      type: CouchDB
+      root: http://localhost:5984
+      collection: object_recognition
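The file above wires a single Kinect source (``source2``) into a table detector and an object detector, each feeding its own publisher sink. As a rough sketch of how such a file is used (the tutorials below launch their own ``.ork`` files the same way), you would pass it to the ORK detection script with the ``-c`` option; the relative path ``conf/gettingstarted.ork`` assumes you run the command from this repository's root:

.. code-block:: sh

    # Sketch: launch the detection graph described by gettingstarted.ork
    # (assumes a sourced ROS environment with object_recognition_core installed)
    rosrun object_recognition_core detection -c conf/gettingstarted.ork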
diff --git a/doc/source/conf.py b/doc/source/conf.py
index 717530c..cfab0f7 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -32,7 +32,7 @@
 'sphinx.ext.doctest',
 'sphinx.ext.graphviz',
 'sphinx.ext.intersphinx',
- 'sphinx.ext.pngmath',
+ 'sphinx.ext.imgmath',
 'sphinx.ext.todo',
 'sphinx.ext.viewcode']
 
@@ -113,7 +113,7 @@
 # The theme to use for HTML and HTML Help pages. Major themes that come with
 # Sphinx are currently 'default' and 'sphinxdoc'.
-html_theme = 'ecto_theme'
+html_theme = 'sphinxdoc'
 
 # Theme options are theme-specific and customize the look and feel of a theme
 # further. For a list of options available for each theme, see the
@@ -223,9 +223,9 @@
 """
 
-intersphinx_mapping = {'orkcapture': ('http://wg-perception.github.com/capture', None),
-                       'orkcore': ('http://wg-perception.github.com/object_recognition_core', None),
-                       'orklinemod': ('http://wg-perception.github.com/linemod', None),
-                       'orkrenderer': ('http://wg-perception.github.com/ork_renderer', None),
-                       'orktabletop': ('http://wg-perception.github.com/tabletop', None),
+intersphinx_mapping = {'orkcapture': ('http://wg-perception.github.io/capture', None),
+                       'orkcore': ('http://wg-perception.github.io/object_recognition_core', None),
+                       'orklinemod': ('http://wg-perception.github.io/linemod', None),
+                       'orkrenderer': ('http://wg-perception.github.io/ork_renderer', None),
+                       'orktabletop': ('http://wg-perception.github.io/tabletop', None),
                        }
diff --git a/doc/source/index.rst b/doc/source/index.rst
index 23f13bd..aef6308 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -1,15 +1,19 @@
 .. _object_recognition_tutorials:
 
-Object Recognition Tutorials
-############################
+Object Recognition Kitchen (ORK) Tutorials
+##########################################
 
-Object recognition is a difficult problem in itself. Implementations of it are even more difficult because of the different layers of complexity: data capture, training, detection. Add several layers of complexity for each of those, as well as some robotics integration problems and you see where it's going.
+Welcome to the ORK Tutorials! These tutorials are designed to show how to use
+specific tools and detection pipelines that are part of
+:ref:`ORK `.
 
-The following tutorials do not go over the object recognition steps in order of execution but complexity. You probably need the last tutorials fo your real world applications but the first ones will actually make sure that your setup is fine.
+The following tutorials do not go over the object recognition steps in order of
+execution but complexity. You probably need the last tutorials for your real
+world applications, but the first tutorials are simpler, and will make sure
+that your system is set up correctly.
 
 .. toctree::
-   :maxdepth: 2
+   :maxdepth: 1
 
    ./tutorial01/tutorial.rst
-   ./tutorial02/tutorial.rst
    ./tutorial03/tutorial.rst
diff --git a/doc/source/tutorial01/tutorial.rst b/doc/source/tutorial01/tutorial.rst
index 8065f52..bb78afe 100644
--- a/doc/source/tutorial01/tutorial.rst
+++ b/doc/source/tutorial01/tutorial.rst
@@ -1,95 +1,119 @@
 .. _tutorial01:
 
-Object Recognition DB
-#####################
+Database Management
+###################
 
-In the Object Recognition Kitchen, everything is stored in a database: objects, models, training data. We'll walk you through the basics of the DB in this tutorial, you will:
+In the Object Recognition Kitchen, everything is stored in a database (DB):
+objects, models, training data. We'll walk you through the basics of the DB in
+this tutorial. You will learn how to:
 
- * preparing object's mesh to add to the DB
- * learn how to manually add an object to the DB
- * visualize data in the ``ORK`` DB
+ * Prepare an object's mesh to add to the DB
+ * Manually add an object to the DB
+ * Visualize data in the ORK DB
 
 Introduction
 ************
 
-Make sure you followed the steps in the core :ref:`DB instructions `, especially to get the 3d visualizer in the DB.
+Make sure you followed the steps in the core
+:ref:`DB instructions `, especially to get
+the 3d visualizer in the DB.
 
-The example we will use is a can of Coke as it's somewhat universal :) For real life experiments, just get the iconic red can and there should not be too many appearance changes.
+In this tutorial, the object you will use is a can of Coke (12 fl. oz./355 mL),
+since it's somewhat universal. For real life experiments, just get the iconic
+red can and there should not be too many appearance changes.
 
 Prepare object's mesh
 *********************
 
-Object's mesh is important for object detection in ORK. Object's mesh must be in format .stl/obj.
-
-You can prepare your object's mesh by following the ORK's capture procedure (very well explained in Quick Guide). Otherwise, you can use any software that allows mesh creation to prepare your mesh. Or you can use meshes that are free on the internet.
-
-Once you have your mesh, make sure it have the right size and note it's origin point before you upload it onto the DB. As in the following snapshot of blender's screen, you can see that the coke's mesh has a different position to the origin point than the bottle's mesh.
+An object's mesh is important for object detection in ORK. The object's mesh
+must be in ``.stl`` or ``.obj`` format.
+
+ORK's :ref:`capture `
+capability is designed to automatically create an ``.stl`` or
+``.obj`` file from many images of your object. In this way, you can create a
+mesh from an object without having to model the object in a CAD or modeling
+program. Alternatively, you can use a mesh that you find on the internet. Read
+the :ref:`capture ` page for details.
+
+If you are using a mesh that you made yourself or downloaded from the internet,
+make sure that it is the right size (the default units are meters). Also,
+note its origin point before you upload it onto the DB.
+As can be seen in the
+following screenshot from Blender (a mesh modeling program), the can's mesh has
+a different position relative to the origin point than the bottle's mesh.
 
 .. image:: blender_coke_bottle_pos.png
     :width: 100%
-
-In ORK, object's position returned by ORK is the position of the origin point of the object's mesh.
+When ORK publishes an object's position, that position is the origin point
+of the object's mesh.
 
 Creating an object in the DB
 ****************************
 
-ORK is about recognizing objects so you need to store objects in the DB first. Some pipelines like :ref:`ORK 3d capture ` have an interface to create those for you. But you can also do it with the scripts from the core.
+ORK is about recognizing objects so you need to store objects in the DB first.
+Some pipelines, like :ref:`ORK 3d capture `, have an
+interface to create objects for you. But you can also do it with the scripts
+from the ``object_recognition_core`` package.
 
 .. toggle_table::
-    :arg1: Non-ROS
-    :arg2: ROS
+    :arg1: ROS
+    :arg2: Without ROS
 
-.. toggle:: Non-ROS
+.. toggle:: ROS
 
     .. code-block:: sh
-    
-        ./ork_core/apps/dbscripts/object_add.py -n coke -d "A universal can of coke"
-
-.. toggle:: ROS
+    
+        rosrun object_recognition_core object_add.py -n coke -d "A can of Coca-Cola"
+
+.. toggle:: Without ROS
 
     .. code-block:: sh
-    
-        rosrun object_recognition_core object_add.py -n coke -d "A universal can of coke"
+
+        ./ork_core/apps/dbscripts/object_add.py -n coke -d "A can of Coca-Cola"
 
-You can then check this object in the DB by going to http://localhost:5984/_utils/database.html?object_recognition/_design/objects/_view/by_object_name
+You can then check this object is loaded into the DB by going to
+http://localhost:5984/_utils/database.html?object_recognition/_design/objects/_view/by_object_name
 
 .. image:: db_screenshot01.png
     :width: 100%
 
-If you click on it, you can see the info you entered about the object, especially the object id:
+If you click on it, you can see the info you entered about the object,
+including the object id:
 
 .. image:: db_screenshot02.png
     :width: 100%
 
-
 Manually adding a mesh for the object
 *************************************
 
-First, check out the object id of your object using the DB interface: each element of the DB (objects included) has its own hash as a unique identifier (in case you give the same name to different objects) and that is how you should refer to objects. To upload the mesh (use an .stl/.obj one):
+First, find your object's ID using the DB interface. Each
+element in the DB (objects included) has its own hash as a unique identifier
+(in case you give the same name to different objects), and that is how you
+should refer to objects. To upload the mesh (use an .stl/.obj one):
 
 .. toggle_table::
-    :arg1: Non-ROS
-    :arg2: ROS
+    :arg1: ROS
+    :arg2: Without ROS
 
-.. toggle:: Non-ROS
+.. toggle:: ROS
 
     .. code-block:: sh
-    
-        ./ork_core/apps/dbscripts/mesh_add.py YOUR_OBJECT_ID YOUR_COKE_BLEND_PATH --commit
 
-.. toggle:: ROS
+        rosrun object_recognition_core mesh_add.py YOUR_OBJECT_ID `rospack find object_recognition_tutorials`/data/coke.obj --commit
+
+.. toggle:: Without ROS
 
     .. code-block:: sh
-    
-        rosrun object_recognition_core mesh_add.py YOUR_OBJECT_ID `rospack find object_recognition_tutorials`/data/coke.obj --commit
+
+        ./ork_core/apps/dbscripts/mesh_add.py YOUR_OBJECT_ID YOUR_COKE_BLEND_PATH --commit
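If you prefer a terminal over the web interface, you can also confirm the upload through CouchDB's plain HTTP API. This is only an illustrative check: ``YOUR_OBJECT_ID`` is the hash from the previous step, and ``object_recognition`` is the database name used throughout these tutorials.

.. code-block:: sh

    # List the documents stored in the object_recognition database
    curl -s http://localhost:5984/object_recognition/_all_docs

    # Fetch your object's document; after mesh_add.py the mesh should
    # appear under the document's "_attachments" field
    curl -s http://localhost:5984/object_recognition/YOUR_OBJECT_ID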
 
 Visualizing the object
 **********************
 
-Now, if you want to visualize the object in the db, you can just go to the visualization URL at http://localhost:5984/or_web_ui/_design/viewer/meshes.html and you should see the following:
+Now, if you want to visualize the object in the db, you can visit the
+visualization URL at http://localhost:5984/or_web_ui/_design/viewer/meshes.html.
+You should see something similar to the following:
 
-.. image:: db_screenshot03.png
+.. image:: db_screenshot03.png
     :width: 100%
 
@@ -100,17 +124,16 @@ You also have a method to delete an object (it will delete all other elements in
 
 .. toggle_table::
-    :arg1: Non-ROS
-    :arg2: ROS
-
-.. toggle:: Non-ROS
+    :arg1: ROS
+    :arg2: Without ROS
 
+.. toggle:: ROS
 
     .. code-block:: sh
 
-        ./ork_core/apps/dbscripts/object_delete.py OBJECT_ID
+        rosrun object_recognition_core object_delete.py OBJECT_ID
 
-.. toggle:: ROS
+.. toggle:: Without ROS
 
     .. code-block:: sh
 
-        rosrun object_recognition_core object_delete.py OBJECT_ID
+        ./ork_core/apps/dbscripts/object_delete.py OBJECT_ID
diff --git a/doc/source/tutorial02/orkCoke.png b/doc/source/tutorial02/orkCoke.png
deleted file mode 100644
index 331f75e..0000000
Binary files a/doc/source/tutorial02/orkCoke.png and /dev/null differ
diff --git a/doc/source/tutorial02/orktables.png b/doc/source/tutorial02/orktables.png
deleted file mode 100644
index db7e320..0000000
Binary files a/doc/source/tutorial02/orktables.png and /dev/null differ
diff --git a/doc/source/tutorial02/tutorial.rst b/doc/source/tutorial02/tutorial.rst
index 2e8f19d..cc4523c 100644
--- a/doc/source/tutorial02/tutorial.rst
+++ b/doc/source/tutorial02/tutorial.rst
@@ -1,128 +1,9 @@
-.. _tutorial02:
+.. Old page, retained to avoid breaking links
 
-Object Recognition Using Tabletop
-#################################
+:orphan:
 
-:ref:`Tabletop ` is a simple pipeline for object recognition that only requires the mesh of an object for training/detection.
+.. _tutorial02:
 
-Through this tutorial, you will:
-
- * learn how to use the ``tabletop`` pipeline to find planes
- * learn how to use the ``tabletop`` pipeline to find certain kinds of objects
- * use the ``ORK`` RViz plugins
-
-
-Let's first set up the working environment together!
-
-Setup the working environment
-*****************************
-
-Hardware
-========
-
-To see tabletop in action, we will need to have
- * a 3D camera (such as a Kinect, a Xtion),
- * a computer that can run ROS
- * some plane surfaces (such as a table, a wall, or the ground under your feet ;-) )
- * and optionally, some COKE can if you want to test the object detection feature of ORK_tabletop :-)
-
-Software
-========
-
-We need to have ORK installed on the computer. Installation of ORK is quite easy and clearly explained in :ref:`here `. We need ``rqt_reconfigure`` and ``RViz`` to configure the 3D camera and visualize the detected planes and objects. To install those tools, just run the following command:
-
-.. code-block:: sh
-
-    sudo apt-get install ros--rviz ros--rqt_reconfigure ros--openni*
-
-Configuring the 3D camera and ``RViz`` parameters
-=================================================
-
-In separate terminals, run the following commands:
-
-..
code-block:: sh - - roslaunch openni2_launch openni2.launch - rosrun rviz rviz - -Set the Fixed Frame (top left of the ``RViz`` window) to ``/camera_depth_optical_frame``. Add a PointCloud2 display, and set the topic to ``/camera/depth/points``. Turning the background to light gray can help with viewing. This is the unregistered point cloud in the frame of the depth (IR) camera. It is not matched with the RGB camera images. Now let's look at a registered point cloud, aligned with the RGB data. Open the dynamic reconfigure GUI: - -.. code-block:: sh - - rosrun rqt_reconfigure rqt_reconfigure - -And select ``/camera/driver`` from the drop-down menu. Enable the ``depth_registration`` checkbox. Now go back to ``RViz``, and change your PointCloud2 topic to ``/camera/depth_registered/points``. Set Color Transformer to RGB8. You should see a color, 3D point cloud of your scene. - -(Detailed explanation can be found here: http://wiki.ros.org/openni2_launch) - -Finding planes -************** - -In order to find planes using ORK_Tabletop, run the following command: - -.. code-block:: sh - - rosrun object_recognition_core detection -c `rospack find object_recognition_tabletop`/conf/detection.table.ros.ork - -Then go to ``RViz`` graphical window, and add the OrkTable display. Now you should see some planes detected by ORK_Tabletop if your camera is pointing to some plane surfaces. - -.. image:: orktables.png - :width: 100% - - -Finding objects -*************** - -If you follow the installation guide (http://wg-perception.github.io/object_recognition_core/install.html#install), you know that ORK uses couchDB to manage the objects database. In order to have tabletop detect objects, we need to feed the databases with objects' 3D models. - -When you first installed ORK, my database was empty. Luckily, ork tutorials comes with 3D model of a coke can. So, download the tutorials: - - -.. code-block:: sh - - git clone https://github.com/wg-perception/ork_tutorials - -then uploaded it to the ORK database: - - -.. code-block:: sh - - rosrun object_recognition_core object_add.py -n "coke " -d "A universal can of coke" - rosrun object_recognition_core mesh_add.py - -If you also did these steps to upload objects, then when opening the link http://localhost:5984/or_web_ui/_design/viewer/objects.html you should see the coke object listed in your database. - -As everything is set up; let's see how ork_tabletop detects our coke can. In a terminal, run - - -.. code-block:: sh - - rosrun object_recognition_core detection -c `rospack find object_recognition_tabletop`/conf/detection.object.ros.ork - -Go back to ``RViz`` , and add the ``OrkObject`` display. Now if you have a coke can placed on one of the detected planes, ork_tabletop should see it and your beautiful ``RViz`` interface should be displaying it, like this: - -.. image:: orkCoke.png - :width: 100% - - -**Notice:** In the image, you only see the coke because OrkTable is unchecked in ``RViz`` interface. This should not be the case on your beautiful ``RViz`` unless you actually uncheck that box ;-) - -A video resuming these steps can be found `here `_. - -F.A.Q. -****** - -**Problem:** ORK_tabletop complained about the 3D inputs or seems to wait for ROS topic forever. Why? - -**Answer:** That happened to me a couple of times, too. That may be because ORK_Tabletop is not listening to the topics that the 3D camera is publishing. 
Just open the configuration file called in the detection command and check if the default topics are the same as what are published by the 3D camera. If that's not the case, just uncomment the parameter option and modify these topics accordingly. And hopefully, tabletop would be happy with this modification and show off its power the next time you run it.
-
-
-**Problem:** When running the tabletop detection command, you run into the below exception message. How to fix it?::
-
-    /usr/include/boost/smart_ptr/shared_ptr.hpp:412: boost::shared_ptr::reference boost::shared_ptr::operator*() const [with T = xn::NodeInfo, boost::shared_ptr::reference = xn::NodeInfo&]: Assertion `px != 0' failed
-
-**Answer:** This means that tabletop receives no messages from one (or several) ROS topics that it subscribes as input. When you run into this exception, please verify if those ROS topics is publishing messages as expected (tips: use 'rostopic echo ) and then relaunch your tabletop pipeline.
-
-Now that you see things on the ``RViz``, why don't you just move the 3D camera around to see how fast ORK_tabletop detects thing? ;-)
-
-Have fun exploring!
+This document has been replaced by the ORK Getting Started Guide (which uses
+the ``tabletop`` detector). The tutorial is available from
+:ref:`ORK Home `.
\ No newline at end of file
diff --git a/doc/source/tutorial03/tutorial.rst b/doc/source/tutorial03/tutorial.rst
index 947dc3a..dc523f6 100644
--- a/doc/source/tutorial03/tutorial.rst
+++ b/doc/source/tutorial03/tutorial.rst
@@ -1,62 +1,75 @@
 .. _tutorial03:
 
-Object Recognition Using Linemod
+Object Recognition Using LineMOD
 #################################
 
-:ref:`Linemod ` is a pipeline that implements one of the best methods for generic rigid object recognition and it proceeds using very fast template matching. For more information, check the LINE-MOD approach from http://ar.in.tum.de/Main/StefanHinterstoisser.
+:ref:`Linemod ` is a pipeline that implements one of the
+most robust methods for generic rigid object recognition, based on very
+fast template matching. For more information on LineMOD, please read the papers
+written by
+`Stefan Hinterstoisser `_.
 
-Through this tutorial, you will:
+This tutorial assumes that you are using ROS. In this tutorial, you will:
 
  * learn how to use the ``linemod`` pipeline to learn objects
 * learn how to use the ``linemod`` pipeline to detect objects
- * use the ``ORK`` RViz plugins
+ * use the ORK RViz plugins
 
-Setup the working environment
-*****************************
+Set Up Your Environment
+***********************
 
 Hardware
 ========
 
-To see Linemod in action, we will need to have
- * a 3D camera (such as a Kinect or a Xtion),
- * a PC with ROS installed,
- * and optionally, some CAN to test the object detection
+To see Linemod in action, you will need the following:
+ * A 3D camera (such as a Kinect or Asus Xtion)
+ * A PC with ROS installed
+ * A can of Coke (12 fl. oz./355 mL)
 
 Software
 ========
 
-You need to have ORK installed on the computer. Installation of ORK is quite easy and clearly explained in :ref:`here `. We need ``rqt_reconfigure`` and ``RViz`` to configure the 3D camera and visualize the detected objects. To install those tools, just run the following command:
-
-.. code-block:: sh
-
-    sudo apt-get install ros--rviz ros--rqt_reconfigure ros--openni*
+You need to have ORK installed on the computer. If you have completed the
+:ref:`Getting Started Guide `, then you are good to go.
+You can also find detailed installation instructions at the
+:ref:`Installation page `.
 
 Configuring the 3D camera and ``RViz`` parameters
 =================================================
 
+This step is similar to steps 7 and 8 of the Getting Started Guide. First, launch the OpenNI driver:
 
 .. code-block:: sh
 
     roslaunch openni2_launch openni2.launch
 
+If you are using the Orbbec Astra, replace ``openni2`` with ``astra``.
+
 Run RViz
 
 .. code-block:: sh
 
     rosrun rviz rviz
-
-Set the Fixed Frame (in Global Options, ``Displays`` window) to ``/camera_depth_optical_frame``. Add a PointCloud2 display and set the topic to ``/camera/depth/points``. This is the unregistered point cloud in the frame of the depth (IR) camera and it is not matched with the RGB camera images.
-For visualization of the registered point cloud, the depth data could be aligned with the RGB data. To do it, launch the dynamic reconfigure GUI:
+
+Set the Fixed Frame (in Global Options, ``Displays`` window) to
+``/camera_depth_optical_frame``. Add a PointCloud2 display and set the topic to
+``/camera/depth/points``. This is the unregistered point cloud in the frame of
+the depth (IR) camera and it is not matched with the RGB camera images.
+For visualization of the registered point cloud, the depth data can be
+aligned with the RGB data. To do this, launch the dynamic reconfigure GUI:
 
 .. code-block:: sh
 
     rosrun rqt_reconfigure rqt_reconfigure
-
-Select ``/camera/driver`` from the drop-down menu and enable the ``depth_registration`` checkbox.
-In ``RViz``, change the PointCloud2 topic to ``/camera/depth_registered/points`` and set the Color Transformer to ``RGB8`` to see both color and 3D point cloud of your scene.
-The detailed explanation can be found here: http://wiki.ros.org/openni2_launch.
+
+Select ``/camera/driver`` from the drop-down menu and enable the
+``depth_registration`` checkbox. In ``RViz``, change the PointCloud2
+topic to ``/camera/depth_registered/points`` and set the Color Transformer
+to ``RGB8`` to see both color and 3D point cloud of your scene.
+For details on how the OpenNI camera drivers work, please read the documentation
+at http://wiki.ros.org/openni2_launch.
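If you would rather not open a GUI, the same switch can be flipped from the command line with ``dynparam``. This sketch assumes the driver node is named ``/camera/driver``, as in the instructions above:

.. code-block:: sh

    # Enable depth registration without opening rqt_reconfigure
    rosrun dynamic_reconfigure dynparam set /camera/driver depth_registration true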
 
 
 Object detection
@@ -65,28 +78,19 @@ Object detection
 
 Setup the object database
 =========================
 
-The Object Recognition Kitchen manages objects using :ref:`couchDB ` database. Thus, in order to learn objects, you need to store their 3D models in the database first. You can check the detailed :ref:`DB tutorial ` or the following brief explanation.
-
-When you install ORK, the database is empty. Luckily, ORK tutorials comes with a 3D mesh of a coke that can be downloaded here:
-
-.. code-block:: sh
-
-    git clone https://github.com/wg-perception/ork_tutorials
-
-You can upload the object and its mesh to the database with the scripts from the core:
-
-.. code-block:: sh
-
-    rosrun object_recognition_core object_add.py -n "coke " -d "A universal can of coke"
-    rosrun object_recognition_core mesh_add.py
-
-Once uploaded, you can then check the object in the database by going to http://localhost:5984/_utils/database.html?object_recognition/_design/objects/_view/by_object_name
-
+Your database must have an object loaded into it to perform detection. If you've
+done the Getting Started Guide, you already have the soda can model in your
+database, and you can skip this step. If not, go to the
+:ref:`Getting Started Guide ` and complete steps 4 and
+5.
 
 Training
 ========
 
-Now, you can learn objects models from the database. Execute the Linemod in the training mode with the configuration file through the ``-c`` option. The configuration file should define a pipeline that reads data from the database and computes objects models.
+Now, you can learn object models from the database. The following command
+will start LineMOD in training mode, with the configuration file specified by
+the ``-c`` option. The configuration file defines a pipeline that reads
+data from the database and computes object models.
 
 .. code-block:: sh
 
@@ -96,29 +100,43 @@ Now, you can learn objects models from the database. Execute the Linemod in the
 
 Detection
 =========
 
-Once learned, objects can be detected from the input point cloud. In order to detect object continuously, execute the Linemod in the detection mode with the configuration file that defines a source, a sink, and a pipeline, as explained in http://wg-perception.github.io/object_recognition_core/detection/detection.html.
+Once learned, objects can be detected from the input point cloud. In order to
+detect objects continuously, execute the following command to start LineMOD in
+detection mode. The configuration file defines a source, a sink, and a pipeline,
+as explained in
+http://wg-perception.github.io/object_recognition_core/detection/detection.html.
 
 .. code-block:: sh
 
    rosrun object_recognition_core detection -c `rospack find object_recognition_linemod`/conf/detection.ros.ork
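Before moving on to RViz, it can be useful to check from a terminal that detections are actually being published. The topic name below is the one conventionally used by the ORK ROS publisher sink; if your ``.ork`` configuration remaps it, adjust accordingly:

.. code-block:: sh

    # Print recognized objects as they arrive
    # (message type: object_recognition_msgs/RecognizedObjectArray)
    rostopic echo /recognized_object_array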
-
 Visualization with RViz
 =======================
-
-Now, go to ``RViz`` and add the ``OrkObject`` in the ``Displays`` window. Select the ``OrkObject`` topic and the parameters to display: object id, name, and confidence.
-Here, we show an example of detecting two objects (a coke and a head of NAO) and the outcome visualized in RViz:
+
+Now, go to ``RViz`` and add the ``OrkObject`` in the ``Displays`` window. Select
+the ``OrkObject`` topic and the parameters to display: object id, name, and
+confidence.
+Here, we show an example of detecting two objects (a Coke can and the head of
+a NAO robot) and the outcome visualized in RViz:
 
 .. image:: Screenshot_2014_11_07_13_24_46.png
     :width: 100%
 
-For each recognized object, you can visualize its point cloud and also a point cloud of the matching object from the database. For this, compile the package with the CMake option ``-DLINEMOD_VIZ_PCD=ON``.
-Once an object is recognized, its point cloud from the sensor 3D data is visualized as shown in the following image (check blue color). The cloud is published under the ``/real_icpin_ref`` topic.
+For each recognized object, you can visualize its point cloud and also a point
+cloud of the matching object from the database. For this, compile the package
+with the CMake option ``-DLINEMOD_VIZ_PCD=ON`` (a build sketch is given at the
+end of this page). Once an object is recognized, its point cloud from the
+sensor 3D data is visualized (in blue) as shown in the following image. The
+cloud is published under the ``/real_icpin_ref`` topic.
 
 .. image:: Screenshot_pc_ref.png
     :width: 100%
 
-For the same recognized object, we can visualize the point cloud of the matching object from the database as shown in the following image (check yellow color). The point cloud is created from the mesh stored in the database by visualizing at a pose returned by Linemod and refined by ICP. The cloud is published under the ``/real_icpin_model`` topic.
+For the same recognized object, you can visualize the point cloud of the
+matching object from the database (in yellow) as shown in the following image.
+The point cloud is created by rendering the mesh stored in the database at the
+pose returned by LineMOD and refined by ICP. The cloud is published under the
+``/real_icpin_model`` topic.
 
 .. image:: Screenshot_pc_model.png
     :width: 100%
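For reference, here is one way to pass the ``-DLINEMOD_VIZ_PCD=ON`` option mentioned above. The sketch assumes a catkin workspace at ``~/catkin_ws`` built with ``catkin_make``; adapt the path and build tool to your setup:

.. code-block:: sh

    # Rebuild with the LineMOD point cloud visualization enabled
    # (~/catkin_ws is an assumed workspace location)
    cd ~/catkin_ws
    catkin_make -DLINEMOD_VIZ_PCD=ON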