Wentao Hu* · Jia Zheng* · Zixin Zhang* · Xiaojun Yuan · Jian Yin · Zihan Zhou
> **Note**
>
> In our follow-up work, CAD2Program, we discovered that modern vision models (e.g., ViT) can understand engineering drawings. For implementation details, please check the vit branch.
> **Note**
>
> This branch contains the implementation of PlankAssembly, which supports three types of inputs: (1) visible and hidden lines, (2) visible edges only, and (3) sidefaces. For raster images as input, please refer to the raster branch. For the comparison with PolyGen, please refer to the polygen branch.
Our code has been tested with Python 3.8, PyTorch 1.10.0, CUDA 11.3, and PyTorch Lightning 1.7.6.
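Before installing, it can help to confirm that an existing environment matches the tested configuration. The helper below is a minimal sketch (not part of the repository) that compares installed package versions against the versions listed above using only the standard library; the package names in `TESTED` are assumptions based on the README.

```python
# Sanity-check sketch (not part of PlankAssembly): compare installed package
# versions against the configuration the code was tested with.
from importlib.metadata import PackageNotFoundError, version

# Versions taken from the README; package names are assumptions.
TESTED = {
    "torch": "1.10.0",
    "pytorch-lightning": "1.7.6",
}

def check_versions(tested=TESTED):
    """Return a dict mapping package name -> 'ok', 'mismatch (<found>)', or 'missing'."""
    report = {}
    for name, expected in tested.items():
        try:
            found = version(name)
        except PackageNotFoundError:
            report[name] = "missing"
            continue
        report[name] = "ok" if found.startswith(expected) else f"mismatch ({found})"
    return report

if __name__ == "__main__":
    for name, status in check_versions().items():
        print(f"{name}: {status}")
```

A `mismatch` entry does not necessarily mean the code will fail, only that the environment differs from the one the authors tested.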
Clone the repository, then create and activate the `plankassembly` conda environment using the following commands.

```bash
# clone repository
git clone https://github.com/manycore-research/PlankAssembly.git

# create conda env
conda env create --file environment.yml
conda activate plankassembly
```

If you encounter any issues with the provided conda environment, you may install the dependencies manually using the following commands.
```bash
conda create -n plankassembly python=3.8
conda activate plankassembly

conda install pytorch==1.10.0 torchvision==0.11.0 torchaudio==0.10.0 cudatoolkit=11.3 -c pytorch -c conda-forge
pip install pytorch-lightning==1.7.7 torchmetrics==0.11.4 rich==12.5.1 'jsonargparse[signatures]'
pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu113/torch1.10/detectron2-0.6%2Bcu113-cp38-cp38-linux_x86_64.whl
conda install -c conda-forge pythonocc-core=7.6.2
pip install numpy shapely svgwrite svgpathtools trimesh setuptools==59.5.0 html4vision
```

The dataset can be found on Hugging Face Datasets. Please download the data first, then unzip it in the project workspace.
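If you prefer to unzip programmatically, a minimal standard-library sketch is shown below. The archive name `data.zip` is a placeholder (the actual filename depends on what you download from Hugging Face); substitute the real path.

```python
# Illustrative only: extract the downloaded dataset archive into the project
# workspace using the standard library. "data.zip" is a placeholder name.
import zipfile
from pathlib import Path

def extract_dataset(archive="data.zip", dest="."):
    """Extract the archive and return the resulting workspace entries."""
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)
    return sorted(Path(dest).iterdir())
```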
The released dataset only contains 3D shape programs. To prepare the data for training and testing, please run the following commands.
We use PythonOCC to render three-view orthogonal engineering drawings and save them as SVG files.
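As a toy illustration of the output format only (the real pipeline projects the 3D model with PythonOCC; the function and coordinates below are made up for this sketch), a handful of 2D line segments can be serialized as a minimal SVG line drawing like so:

```python
# Toy sketch: serialize 2D line segments as a minimal SVG line drawing.
# This is NOT the PlankAssembly renderer, just an illustration of the format.
def lines_to_svg(lines, width=256, height=256):
    """lines: iterable of ((x1, y1), (x2, y2)) tuples in pixel coordinates."""
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">']
    for (x1, y1), (x2, y2) in lines:
        parts.append(f'<line x1="{x1}" y1="{y1}" x2="{x2}" y2="{y2}" stroke="black"/>')
    parts.append("</svg>")
    return "\n".join(parts)

# e.g. the outline of one rectangular face, 100 px on a side
square = [((0, 0), (100, 0)), ((100, 0), (100, 100)),
          ((100, 100), (0, 100)), ((0, 100), (0, 0))]
svg_text = lines_to_svg(square)
```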
```bash
# render complete inputs
python dataset/render_complete_svg.py

# render noisy inputs, please specify the noise ratio
python dataset/render_noisy_svg.py --data_type noise_05 --noise_ratio 0.05

# render visible inputs
python dataset/render_visible_svg.py
```

Then, pack the input line drawings and output shape programs into JSON files.
```bash
python dataset/prepare_info.py --data_path path/to/data/root
```

To visualize the 3D models, we can generate the ground-truth 3D meshes from the shape programs.
```bash
python misc/build_gt_mesh.py --data_path path/to/data/root
```

Use the following command to train a model from scratch:
```bash
# train a model with complete lines as inputs
python trainer_complete.py fit --config configs/train_complete.yaml
```

Use the following command to test with a pre-trained model:
```bash
# run inference on a model with complete lines as inputs
python trainer_complete.py test \
    --config configs/train_complete.yaml \
    --ckpt_path path/to/checkpoint.ckpt \
    --trainer.devices 1
```

To compute the evaluation metrics, please run the following command:
```bash
python evaluate.py --data_path path/to/data/dir --exp_path path/to/lightning_log/dir
```

To visualize the results, we build 3D mesh models from the predictions:
```bash
python misc/build_pred_mesh.py --exp_path path/to/lightning_log/dir
```

Then, we use HTML4Vision to generate HTML files for mesh visualization (please refer to here for more details):
```bash
python misc/build_html.py --exp_path path/to/lightning_log/dir
```

The 2D images presented in our paper are rendered using bpy-visualization-utils.
The checkpoints can be found on Hugging Face Models, or you can use the links below to directly download the checkpoint for the corresponding model type.
- Model trained on complete inputs: here
- Model trained on visible inputs only: here
- Model trained on sideface inputs: here
PlankAssembly is licensed under the AGPL-3.0 license. The code snippets in the third_party folder are available under Apache-2.0 License.