From Seeing to Doing: Bridging Reasoning and Decision for Robotic Manipulation (Embodied-FSD)

[🌐 Website] [📄 Paper] [🤗 Models] [🎯 Datasets] [💬 Demo]


📖 Introduction

We present FSD (From Seeing to Doing) with:

Embodied-FSD Model: We develop FSD, a novel vision-language model that generates intermediate representations through spatial relationship reasoning, providing fine-grained guidance for robotic manipulation. It integrates Spatial Relationship-Focused Chain-of-Thought (SrCoT) reasoning while maintaining strong general capabilities.

VABench: We propose VABench, a challenging benchmark for evaluating visual aids generation capabilities in robotic manipulation scenarios.


Figure 1: Overview of FSD


Figure 2: Spatial relationship-focused reasoning process (SrCoT).

📰 News

  • [2025-07] ⚡️ We have updated the SimplerEnv branch and LLM-based evaluation!
  • [2025-07] 🔬 We have released detailed training, inference, and evaluation code and README. The VABench evaluation benchmark is officially released!
  • [2025-05] 📝 The code repository is now public; welcome to try FSD for robotic manipulation!

⚙️ Setup (Same as LLaVA)

  1. Clone this repository and navigate to the Embodied-FSD folder
git clone https://github.com/pickxiguapi/Embodied-FSD.git
cd Embodied-FSD
  2. Install the package
conda create -n llava python=3.10 -y
conda activate llava
pip install --upgrade pip  # enable PEP 660 support
pip install -e .
# we recommend transformers==4.31.0
pip install transformers==4.31.0
  3. Install additional packages for training
pip install -e ".[train]"
pip install flash-attn --no-build-isolation

🚀 Inference

Affordance Point Example

Task instruction: Move the yellow block in the middle of the table.

Before prediction (original image):

Original input image

Run the example code:

cd Embodied-FSD/
python affordance_point_inference_example.py

After prediction (visualization result):

Visualization result with predicted points
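
The script prints the model's raw text output; the predicted affordance point is typically returned inside a <point>[[x, y]]</point> tag, expressed in the 0-1000 coordinate system of the square-padded image (see the Weak-to-Strong Dataset section below), following the output format shown in the VABench section later in this README. As a minimal, illustrative sketch (not part of the released script), the coordinates can be pulled out of such a string like this:

import re

def parse_points(output_text):
    # Collect all (x, y) pairs from <point>[[x1, y1], [x2, y2], ...]</point> tags.
    points = []
    for group in re.findall(r"<point>\[\[(.*?)\]\]</point>", output_text, re.DOTALL):
        for pair in group.split("],"):
            x, y = (float(v) for v in pair.strip(" []").split(","))
            points.append((x, y))
    return points

print(parse_points("<point>[[694, 540]]</point>"))  # -> [(694.0, 540.0)]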

Visual Trace Example

Task instruction: put carrot on plate.

Before prediction (original image):

Original input image

Run the example code:

cd Embodied-FSD/
python visual_trace_inference_example.py

After prediction (visualization result):

Visualization result with predicted points


🎯 Training

FSD is built mainly on the LLaVA and ASMv2 codebases, and we appreciate these excellent works. Training is divided into two stages: the first stage focuses on embodied reasoning and general spatial reasoning, while the second stage focuses on visual aids generation.

Data Preparation

Please download the required constituent datasets and place them under ./data. After downloading all datasets, the directory should be organized as follows:

├── coco
│   ├── train2014
│   └── train2017
├── gqa
│   └── images
├── ocr_vqa
│   └── images
├── textvqa
│   └── train_images
├── vg
│   ├── VG_100K
│   └── VG_100K_2
├── CLEVR_v1.0
│   └── images
├── Visual7W
│   └── images
├── flickr30k
│   └── images
├── sam
│   ├── sa_000000
│   ├── sa_000001
│   ├── sa_000002
│   └── sa_000003
├── st_vqa(cauldron,llava_format)
├── raven(cauldron)
├── vsr(cauldron,llava_format)
├── CLEVR-Math(MathV360K)
├── Super-CLEVR(MathV360K)
├── SAT_images
├── kitti
├── 2d3ds
├── bridge_data_v2
│   ├── bridge_data_v1
│   ├── bridge_data_v2
│   ├── flap
│   ├── rss
│   └── icra
├── droid
│   ├── ILIAD+j807b3f8+2023-05-11-17h-34m-39s
│   └── ...
├── rtx
│   ├── fractal20220817_data
│   ├── ucsd_kitchen_dataset_converted_externally_to_rlds
│   ├── jaco_play
│   └── ucsd_pick_and_place_dataset_converted_externally_to_rlds
├── object_ref
└── region_ref

Stage 1: General Embodied/Spatial Reasoning

In this stage, we train the model to enhance its spatial reasoning ability, fine-tuning FSD from ASMv2.

The JSON data used in Stage 1: Dataset Link

# Stage 1: spatial reasoning
bash scripts_fsd/stage1-fsd.sh

Stage 2: Robotics-Focused Fine-tuning

In this stage, we enhance the model with robotics manipulation data and advanced visual aids generation.

The JSON data used in Stage 2: Dataset Link

# Stage 2: visual aids generation
bash scripts_fsd/stage2-fsd.sh

📊 Weak-to-Strong Dataset

As in ASMv2, our dataset uses <ref></ref> and <box></box> tags to annotate target objects and <pred></pred> tags to annotate spatial relations. Each bounding box is normalized to integer values within the range [0, 1000). Note: during training and when outputting coordinates, we first pad the image into a square and then output the normalized coordinates on the padded square image. Special attention should be paid to this conversion.
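
A rough sketch of this conversion, assuming the padding is centered when the image is expanded to a square (as in LLaVA-style preprocessing); the helper names are illustrative and not part of the released code:

def box_to_padded_norm(box, width, height):
    # Pixel box (x1, y1, x2, y2) on the original image -> integer coordinates
    # in [0, 1000) on the square-padded image (padding assumed centered).
    side = max(width, height)
    dx, dy = (side - width) / 2.0, (side - height) / 2.0
    x1, y1, x2, y2 = box
    return [int((x1 + dx) / side * 1000), int((y1 + dy) / side * 1000),
            int((x2 + dx) / side * 1000), int((y2 + dy) / side * 1000)]

def point_from_padded_norm(point, width, height):
    # Inverse mapping: a predicted (x, y) in [0, 1000) back to original-image pixels.
    side = max(width, height)
    return (point[0] / 1000.0 * side - (side - width) / 2.0,
            point[1] / 1000.0 * side - (side - height) / 2.0)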

📝 Evaluation

We use the lmms-eval framework to evaluate all benchmarks, and we are grateful for their outstanding work!

VABench Evaluation

vabench_point_dataset.parquet and vabench_visual_trace_dataset.parquet are used for VABench-Point and VABench-Visual Trace, respectively. In the parquet files, the instruction column contains the task instructions, the images column contains the images, and the answer column contains the answers. When evaluating VABench-Point, we calculate the proportion of predicted points that fall within the answer bounding boxes as the accuracy. For VABench-Visual Trace, we compute the MAE and RMSE between the predicted trajectories and the ground-truth trajectories. Note that, to ensure fair comparison across images of different sizes, both the predicted results and the ground-truth results are converted to the 0-1000 normalized coordinate system of the padded images (since FSD predictions are already in this format, no conversion is needed for them).
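
For reference, a minimal sketch of the two metrics, assuming predictions and ground truth are already in the 0-1000 padded coordinate system and that each predicted/ground-truth trace pair has been aligned to the same number of points; the function names are illustrative, not the released evaluation code:

import numpy as np

def point_accuracy(pred_points, gt_boxes):
    # Fraction of predicted (x, y) points falling inside the corresponding
    # ground-truth box (x1, y1, x2, y2), one point and one box per sample.
    hits = [x1 <= x <= x2 and y1 <= y <= y2
            for (x, y), (x1, y1, x2, y2) in zip(pred_points, gt_boxes)]
    return sum(hits) / len(hits)

def trace_errors(pred_trace, gt_trace):
    # MAE and RMSE between two traces of equal length, each shaped (N, 2).
    diff = np.asarray(pred_trace, dtype=float) - np.asarray(gt_trace, dtype=float)
    return np.abs(diff).mean(), np.sqrt((diff ** 2).mean())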

We also provide an LLM-based method for evaluating visual trace generation.

Step 1️⃣: Get Model Output

Task instruction: Put the orange object inside the basket.
FSD output: 
<Description>The image shows an <ref>orange object</ref><box>[[622, 424, 763, 583]]</box> sitting in a blue sink. To the left of the sink is a <ref>yellow dish rack</ref><box>[[19, 174, 494, 477]]</box>. A white spatula is positioned in front of the orange object.\n
</Description>
<Reasoning>\nTo move the orange object into the yellow dish rack, start by identifying the current position of the orange object at <point>[[694, 540]]</point>. \nLift the object slightly upwards and to the left, moving towards the dish rack. \nThe path should curve gently to avoid any obstacles, passing through intermediate points like <point>[[639, 440]]</point> and <point>[[513, 340]]</point>. \nFinally, lower the object into the dish rack, ending at the target position <box>[[213, 273, 339, 419]]</box> with the final point at <point>[[257, 390]]</point>.
</Reasoning>
<Answer>The visual trace for placing the orange object into the yellow dish rack is \n<point>[[694, 540], [682, 515], [639, 440], [597, 377], [513, 340], [419, 330], [337, 343], [257, 390]]</point>.
</Answer>
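
The output is wrapped in <Description>, <Reasoning>, and <Answer> tags, and only the <Answer> block carries the final trace. A minimal, illustrative way to split the sections (not part of the released code):

import re

def split_sections(output_text):
    # Return the text inside the <Description>, <Reasoning>, and <Answer> tags.
    sections = {}
    for tag in ("Description", "Reasoning", "Answer"):
        match = re.search(rf"<{tag}>(.*?)</{tag}>", output_text, re.DOTALL)
        sections[tag] = match.group(1).strip() if match else ""
    return sections

The trace points can then be read from the Answer section with a <point> parser like the one sketched in the Inference section.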

Step 2️⃣: Visualization

Visualization result with predicted points
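
A minimal sketch of such a visualization, assuming the predicted trace has already been converted to pixel coordinates of the image and following the marker convention used in the prompt below (red circle for the start point, blue diamond for the end point); this is illustrative, not the released plotting code:

import matplotlib.pyplot as plt
from PIL import Image

def draw_trace(image_path, trace, out_path="trace_vis.png"):
    # trace: list of (x, y) pixel coordinates describing the object's predicted path.
    img = Image.open(image_path)
    xs, ys = zip(*trace)
    plt.figure()
    plt.imshow(img)
    plt.plot(xs, ys, color="lime", linewidth=3)               # predicted path
    plt.scatter(xs[0], ys[0], c="red", marker="o", s=120)     # start point: red circle
    plt.scatter(xs[-1], ys[-1], c="blue", marker="D", s=120)  # end point: blue diamond
    plt.axis("off")
    plt.savefig(out_path, bbox_inches="tight", pad_inches=0)
    plt.close()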

Step 3️⃣: LLM Evaluation

We feed the LLM a prompt containing the task instruction together with the visualized image, and have the LLM perform the scoring.

Here is the prompt:

You are an expert evaluator in robotic manipulation and visual reasoning. Your job is to assess the quality of predicted trajectories based on task instructions and visual inputs.

You are given:
- A task instruction describing an object manipulation task.
- An image showing a predicted trajectory.

**Note:**
- In the image, the red circle indicates the start point, and the blue diamond indicates the end point.
- The trajectory represents the predicted movement path of the manipulated object, not the robot or end-effector.
- You should **evaluate the predicted trajectory as a proposed motion for the object that is supposed to be moved**, based on the task instruction, **not based on the static positions of objects in the image**. The objects have not actually moved.

**Evaluation Criteria (listed in order of importance):**

1. **Task Alignment and Success (most important)**  
   - Does the trajectory clearly and accurately fulfill the task instruction?  
   - **The trajectory must start at the correct location and end at a target position that aligns with the task goal.**  
   - Large deviations in the starting or ending point (e.g., wrong object, wrong destination, or stopping short of the goal) should result in a low score, even if the rest of the trajectory is smooth.  
   - If the task is not accomplished (due to incorrect goal interpretation or spatial execution), the score should be low regardless of other qualities.

2. **Feasibility**  
   - Is the movement physically plausible, smooth, and continuous?  
   - Are there any unrealistic discontinuities, sharp turns, or impossible transitions?  
   - Even if the movement is feasible, it should not receive a high score if the task is not completed.

3. **Obstacle Avoidance / Safety**  
   - Does the trajectory reasonably avoid collisions with surrounding objects?  
   - Minor risks may be tolerated if the task is completed successfully, but major or clear collisions should reduce the score.

**Scoring Guideline:**
- If the task is **not accomplished**, or if the start or end point is significantly incorrect, the score should typically be **4 or below**.
- If the task is completed but the trajectory has issues (e.g., roughness, minor risk of collision), a score in the **6–8** range is appropriate.
- A **score of 9–10** should be given only when the trajectory clearly completes the task, with good start/end accuracy, smooth motion, and reasonable safety.

Based on these criteria, provide a single overall score from 1 (very poor) to 10 (excellent), reflecting how well the task is accomplished.

Respond strictly in the following format:
Score: <1-10>  
Explanation: <brief justification>

The task instruction is:  
{task_instruction}

Please give your response.

LLM Output:

Score: 10
Explanation: The trajectory starts at the orange object and ends inside the basket, accurately fulfilling the instruction to put the orange object inside the basket. The path is smooth, continuous, and physically plausible, with no sharp turns or unrealistic movements. There is no significant risk of collision with other objects. Task is fully accomplished with good safety and feasibility.

The complete visualization and evaluation code can be run as follows:

cd Embodied-FSD-Github/visual_trace_llm_score
python gpt_score_example.py
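
For reference, a minimal sketch of such a scoring call, assuming the OpenAI Python client, a base64-encoded visualization image, and the prompt above as prompt_template; the released gpt_score_example.py may differ in model choice and details:

import base64
import re
from openai import OpenAI

def llm_score(prompt_template, task_instruction, image_path, model="gpt-4o"):
    # Fill the evaluation prompt, attach the visualized trajectory image, and
    # parse the "Score: <1-10>" line from the LLM's reply.
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    prompt = prompt_template.format(task_instruction=task_instruction)
    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    reply = response.choices[0].message.content
    match = re.search(r"Score:\s*(\d+)", reply)
    return (int(match.group(1)) if match else None), reply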

SimplerEnv Evaluation

Please refer to the SimplerEnv FSD branch.


🙏 Acknowledgments

We sincerely thank the outstanding open-source projects and research works referenced in this README, including LLaVA, ASMv2, lmms-eval, and SimplerEnv, which have provided an important foundation and support for the development of FSD.


📜 License

This project is licensed under the Apache 2.0 License. For details, please see the LICENSE file.


📚 Citation

If you use FSD in your research, please cite our paper:

@misc{yuan2025seeingdoingbridgingreasoning,
      title={From Seeing to Doing: Bridging Reasoning and Decision for Robotic Manipulation}, 
      author={Yifu Yuan and Haiqin Cui and Yibin Chen and Zibin Dong and Fei Ni and Longxin Kou and Jinyi Liu and Pengyi Li and Yan Zheng and Jianye Hao},
      year={2025},
      eprint={2505.08548},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2505.08548}, 
}
