# Some PyTorch Trainloop Example

My take on a training loop in pure PyTorch, with DDP, configs, nice logging, and more.

## Training

### Distributed

```bash
CUDA_VISIBLE_DEVICES=0,1 ./dist_train.sh --config=./configs/baseline.yml
```

### Single GPU

```bash
python train.py --config=./configs/baseline.yml
```
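The commands above suggest that `train.py` reads a YAML config and supports both a single-GPU launch and a multi-GPU DDP launch via `dist_train.sh`. Below is a minimal, hedged sketch of what the DDP-aware setup inside such a script could look like; the model, optimizer, and dummy batch are placeholders for illustration only, not this repository's actual code.

```python
# Sketch only: assumes a torchrun-style launcher sets LOCAL_RANK / WORLD_SIZE.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # Detect whether we were launched in distributed mode.
    distributed = int(os.environ.get("WORLD_SIZE", "1")) > 1
    local_rank = int(os.environ.get("LOCAL_RANK", "0"))

    if distributed:
        # One process per GPU; NCCL backend for CUDA tensors.
        dist.init_process_group(backend="nccl")
        torch.cuda.set_device(local_rank)

    device = torch.device("cuda", local_rank)

    # Placeholder model; a real train.py would build it from the config.
    model = torch.nn.Linear(10, 2).to(device)
    if distributed:
        model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

    # Dummy batch to illustrate one optimization step.
    x = torch.randn(8, 10, device=device)
    y = torch.randint(0, 2, (8,), device=device)

    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()  # DDP synchronizes gradients across ranks here
    optimizer.step()

    if distributed:
        dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

A wrapper such as `dist_train.sh` would typically just forward its arguments to `torchrun --nproc_per_node=<num_gpus> train.py ...`, so the same script works for both launch modes.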