Lucas Chen, Yitian Gao, Sicheng Wang, Francesco Fuentes, Laura H. Blumenschein, Zachary Kingston
This repository contains a differentiable, real-time forward-dynamics simulator for an extendable soft vine robot, accompanying our RoboSoft paper "Physics-Grounded Differentiable Simulation for Soft Growing Robots". Also included are the system-identification code that fits our model to real vine trials, and computer-vision code for extracting vine positions from video data.
Make sure you have `torch` in your Python environment, then install the dependencies in `requirements.txt`.

Alternatively, there is a conda `environment.yml`, but it is unlikely to work out of the box due to pinned CUDA driver versions.
To see a simulation rollout with some default params and scene, run:

```sh
python -m sim.main
```

This is mainly for debugging changes in the sim itself.

To do fitting, run:

```sh
python -m sim.fitting
```

Fitting uses the test rollouts in `sim_results`; the full dataset of 500 rollouts (1.3 GB) still needs to be uploaded somewhere.

During fitting you can run `tensorboard --logdir=runs` to see TensorBoard, but we recommend the VS Code integration: the button above `import tensorboard` opens it as a tab in VS Code.
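Because the simulator is differentiable, fitting amounts to gradient descent on simulation parameters against recorded rollouts. As a rough illustration of that loop (hypothetical names, a toy 1-D "simulator", and a numerical gradient in plain Python rather than the repo's torch autograd):

```python
# Toy sketch of differentiable-simulation fitting. None of these names come
# from the repo; the real code rolls out the vine sim and backprops with torch.
def rollout(stiffness, steps=10):
    """Stand-in for the simulator: position decays with stiffness."""
    x = 1.0
    for _ in range(steps):
        x -= stiffness * x * 0.1  # explicit Euler step
    return x

def loss(stiffness, target=0.5):
    """Squared error between the rollout endpoint and a 'recorded' target."""
    return (rollout(stiffness) - target) ** 2

# Gradient descent; central differences stand in for autograd here.
stiffness = 0.1
for _ in range(200):
    eps = 1e-6
    grad = (loss(stiffness + eps) - loss(stiffness - eps)) / (2 * eps)
    stiffness -= 0.5 * grad
```

After the loop, `rollout(stiffness)` matches the target closely; the real fitting code does the same thing over full trajectories and many parameters at once.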
- `sim/main.py`: Simulates and displays a simple rollout with hardcoded obstacles. A good starting point if you're new to the code or making tweaks to the physics themselves.
- `paper_vis.py`: Script to generate the timing benchmark figure.
- `simulated_data`: Code and data for generating and using simulated data from the other vine simulator. `gen_rects.py` creates a dataset of random rects, which can be fed into the other sim, and `sim_results` renders the sim rollouts. Some sample rollouts are provided in `sim_output`.
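The exact output format of `gen_rects.py` isn't described here, but a random-rectangle scene generator of this kind typically looks something like the following sketch (all names, bounds, and the `(x, y, w, h)` convention are hypothetical):

```python
# Hypothetical sketch of a random-rect dataset generator, not gen_rects.py.
import random

def gen_rects(n, width=640, height=480, seed=0):
    """Return n random (x, y, w, h) rects that fit inside width x height."""
    rng = random.Random(seed)  # seeded for reproducible datasets
    rects = []
    for _ in range(n):
        x = rng.uniform(0, width * 0.8)
        y = rng.uniform(0, height * 0.8)
        w = rng.uniform(10, width - x)   # keep the rect inside the workspace
        h = rng.uniform(10, height - y)
        rects.append((x, y, w, h))
    return rects

rects = gen_rects(5)
```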
- `videoparser`: Code for converting videos of real trials into frame-by-frame vine positions (as a sequence of points) as well as obstacle positions (as a set of line segments). See `videoparser/README.md` for more details.
  - `data`: Directory of trial videos as well as extracted vine segmentations.
  - `sim_out`: Vine positions from each simulated rollout.
  - `framer.py`: Converts videos into frames; performs the homography transformation to align the testbed surface onto the corners of the frame, plus optical flow, segmentation, and centerline extraction for the vine.
  - `classifier`: An early version of `framer` where we tried other segmentation methods, like k-means color thresholding, which didn't work because color is too variable to be the only discriminating feature.
  - `processor.py`: Manual parts of labelling the obstacles and workspace boundaries.
  - `betterprocessor.py`: Overlays simulated vine rollouts onto the real video, to generate certain figures in the paper. You can see these results as the PNG images in this directory.
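On the homography step: the repo presumably uses OpenCV for estimation and warping, but the core operation of mapping 2-D points through a 3×3 homography can be sketched in plain NumPy (hypothetical helper, not the repo's code):

```python
# Sketch of mapping points through a planar homography (hypothetical helper).
import numpy as np

def apply_homography(H, pts):
    """Map an Nx2 array of points through 3x3 homography H."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # divide out the w coordinate

# The identity homography leaves points unchanged.
pts = np.array([[10.0, 20.0], [30.0, 40.0]])
out = apply_homography(np.eye(3), pts)
```

In the real pipeline, `H` would come from something like `cv2.findHomography` on the testbed's corner correspondences.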
- `sim`: The simulator and fitting code itself. There are several variants, which we used for the trials in the paper; all are based on the structure in `fitting_real.py`.
  - `vine.py`: The core simulator code. Defines vine parameters and state, and the `evolve()` function, which takes a state (position and velocity) and solves the QP to generate the next state.
  - `solver.py`: Called from `vine.py`; performs the actual QP solving. There are several QP solvers here: batched and unbatched, with and without gradients.
  - `render.py`: Vine rendering code. Also has a seaborn variant.
  - `fitting_*`: These files do fitting on real data (outputs from the video parser), data from the other simulator, and a small dataset we found in the repo.
  - `read_yitian`: Small converter from videoparser outputs to a format usable by `vine.py`.
  - `sqrtm`: sqrtm implementation from this.
  - `test_*`: Various tests.
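To make the `evolve()` structure concrete: one step integrates velocity, then solves a constrained problem to keep the state feasible. In the toy 1-D sketch below, a box clamp stands in for the QP solve; all names are hypothetical and this is not the repo's actual solver:

```python
# Toy 1-D version of an evolve()-style step (hypothetical, not vine.py).
def evolve(pos, vel, force, dt=0.01, lo=0.0, hi=1.0):
    """Semi-implicit Euler step, then project onto [lo, hi] (QP stand-in)."""
    vel = vel + dt * force                 # integrate velocity
    pos_free = pos + dt * vel              # unconstrained position update
    pos_new = min(max(pos_free, lo), hi)   # projection replaces the QP solve
    if pos_new != pos_free:
        vel = 0.0                          # constraint active: kill velocity
    return pos_new, vel

# Push the state into the upper bound; it sticks at the constraint.
p, v = 0.95, 0.0
for _ in range(100):
    p, v = evolve(p, v, force=10.0)
```

The real `evolve()` does the analogous thing over the full vine state, with obstacle contacts expressed as QP constraints rather than a box clamp.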
- `models`: Trained MLPs for our bending model. All of them work pretty much the same, but `model_360_good` is a bit better.
- `goodruns`: TensorBoard logs of good fitting sessions, for reference on what the loss curves should look like.