- Clone our re-blocking GitHub repo
- Create an `.env` file in the main folder (note: files whose names start with a dot are hidden by default)
    - Create a Mapbox access token for the Mapbox Web API and enter it
    - Enter your file path to the parcel and building data (the NYC data is on our Dropbox)

        ```
        MAPBOX_ACCESS_TOKEN="$YOUR-API-KEY"
        LOCAL_PATH="$YOUR-DROPBOX-PATH/Million Neighborhoods/"
        ```
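One way to create the hidden `.env` file straight from the terminal (a sketch; the two values are placeholders you replace with your own):

```shell
# Create the .env file; quoting 'EOF' keeps the $ placeholders literal
cat > .env <<'EOF'
MAPBOX_ACCESS_TOKEN="$YOUR-API-KEY"
LOCAL_PATH="$YOUR-DROPBOX-PATH/Million Neighborhoods/"
EOF
# Dot-files are hidden; -a lists them anyway so you can check it exists
ls -a
```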
- Run the notebook parcels-buildings.ipynb to export the images (parcels and buildings)
- Combine the images with the script combine_images.sh for Pix2Pix HD (keep them as individual images for CycleGAN)
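For reference, a hedged dry-run sketch of the pairing idea (the actual combine_images.sh may differ): Pix2Pix expects each A image (parcel) placed side by side with its matching B image (building), e.g. via ImageMagick's `+append`. The folder names and the dummy file below are made up for illustration, and the loop only prints the commands instead of running them:

```shell
# Dry-run sketch of pairing parcel (A) and building (B) images for Pix2Pix
# (hypothetical folder names; prints the ImageMagick commands instead of running them)
mkdir -p parcels buildings combined
touch parcels/0001.png buildings/0001.png   # dummy files for illustration
for a in parcels/*.png; do
  b="buildings/$(basename "$a")"
  [ -f "$b" ] || continue                   # skip images without a matching pair
  # ImageMagick's +append concatenates two images horizontally
  echo convert "$a" "$b" +append "combined/$(basename "$a")"
done
```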
- Clone the model's GitHub repo
    - Read the README.md
    - Install all dependencies (see the README)
    - Install/update CUDA for Linux
- Move the generated images (from step 1 or from Dropbox → Images) into /datasets/parcels
    - /test, /train, and /val for Pix2Pix
    - /trainA, /trainB, /testA, /testB, etc. for CycleGAN
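The expected layout can be created up front (a sketch, assuming `datasets/parcels` as the dataroot and the subfolder names the repo expects):

```shell
# Pix2Pix: paired A|B images, split into train/val/test
mkdir -p datasets/parcels/train datasets/parcels/val datasets/parcels/test
# CycleGAN: unpaired images, one folder per domain and split
mkdir -p datasets/parcels/trainA datasets/parcels/trainB datasets/parcels/testA datasets/parcels/testB
```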
- Open the repo folder in your terminal

    ```shell
    cd …/pytorch-CycleGAN-and-pix2pix/
    ```
- Train the model
    - (I named the output folder after the parameters I used so I wouldn't forget them)
    - Pix2Pix HD:

        ```shell
        python3 train.py --dataroot ./datasets/parcels --name test --model pix2pix --direction AtoB --n_epochs 100 --batch_size 60
        ```

    - CycleGAN:

        ```shell
        python3 train.py --dataroot ./datasets/parcels --name parcels_cycle-gan_25k_e50_b50_A-B --model cycle_gan --direction AtoB --n_epochs 10 --batch_size 1
        ```
- Output
    - See the output in /checkpoints/FOLDERNAME/web/index.html
    - Monitor the training logs with logs-visualised.ipynb (WIP)
    - Copy your folder (without the training data) to our Dropbox → Model-Training if you want to share the results 😊
    - See /options → base_options.py, test_options.py, and train_options.py for the full parameter list (and to set defaults)
    - So far, the goal has been to increase batch_size to just below the point where the GPU runs out of memory
        - Monitor the Nvidia GPU with `watch -n0.1 nvidia-smi`
- Find the trained models here:
    - CycleGAN → Dropbox
        - Trained on 25k images (trainA/B: 20k, buffer: 2.5k, val: 2.5k)
        - 50 epochs, batch size 50
        - Direction A→B
    - Pix2Pix → needs to be retrained due to a storage issue
- Issue: no NVIDIA GPU → no CUDA
    - The Apple Silicon GPU (MPS) can generally be used with PyTorch but doesn't seem to work with our model
    - To run on the CPU instead, add `--gpu_ids -1` to the training command:

        ```shell
        python3 train.py --dataroot ./datasets/parcels --name facades_pix2pix --model pix2pix --direction BtoA --gpu_ids -1
        ```
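A small sketch for choosing the flag automatically, assuming `nvidia-smi` is only on the PATH when an NVIDIA driver is installed; the training command is only printed here, not executed:

```shell
# Pick --gpu_ids depending on whether an NVIDIA GPU is visible
if command -v nvidia-smi >/dev/null 2>&1; then
  GPU_IDS=0      # first CUDA device
else
  GPU_IDS=-1     # CPU-only fallback
fi
echo "python3 train.py --dataroot ./datasets/parcels --name facades_pix2pix --model pix2pix --direction BtoA --gpu_ids $GPU_IDS"
```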