PyTorch implementation of tiny-yolov2; ImageNet-1K results. I will share the different LR schedules and their performance metrics. For results 1-7 below, the Adam optimizer is used with a batch size of 210 and 40 epochs in total. Results 8-10 are trained with Adam and a batch size of 64.
Data augmentation and LR are varied across trials. A Ryzen 1700X with a single GTX 1080, Python 3.6, and PyTorch v1 are used in all trials.
- Best top-5 is around 71.7%. For data augmentation, only RandomCrop, Resize, and horizontal flip are applied. The logged top-1/top-5 data is somewhat buggy but shows the general trend.
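A minimal sketch of that pipeline, assuming the same 256/224 Resize and crop sizes used in the trial-4 code below:

import torchvision.datasets as datasets
import torchvision.transforms as transforms

trainDataset = datasets.ImageFolder(
    args.dir + 'train',  # args.dir as in the trial-4 code below
    transforms.Compose([
        transforms.Resize(256),
        transforms.RandomCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
    ]))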
- Best top-5 is around 76.6%. For data augmentation, the following code is applied.
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from torch.utils.data import DataLoader, ConcatDataset

# Pipeline 1: rotation + color jitter, then a deterministic center crop.
trainDataset1 = datasets.ImageFolder(
    args.dir + 'train',
    transforms.Compose([
        transforms.Resize(256),
        transforms.RandomRotation((-10, 10)),
        transforms.ColorJitter(brightness=1, contrast=1, saturation=1, hue=0.3),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        # normalize,
    ]))

# Pipeline 2: plain random crop, no photometric changes.
trainDataset2 = datasets.ImageFolder(
    args.dir + 'train',
    transforms.Compose([
        transforms.Resize(256),
        transforms.RandomCrop(224),
        transforms.ToTensor(),
        # normalize,
    ]))

# Concatenating the two datasets means each epoch sees every image twice,
# once per augmentation pipeline.
trainLoader = DataLoader(ConcatDataset([trainDataset1, trainDataset2]),
                         batch_size=args.batchSize, shuffle=True,
                         num_workers=8, pin_memory=True, sampler=None)
- The difference from the 4th trial is that I applied weight decay to Adam. Not working; terminated.
optim.Adam(net.parameters(), lr=args.lr, weight_decay=0.001)
- LR = 0.02, weight_decay = 0. Not working; terminated.
- top-5 = 77.04. LR = 0.012; the LR schedule is changed aggressively: for the first 50% of iterations, LR is fixed at 0.012; over the next 10% it is increased to 0.024; cosine annealing is applied for the rest (see the sketch below).
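A sketch of that schedule as a per-iteration LR function; the names aggressive_lr and total_iters and the loop at the bottom are illustrative assumptions, not the original training code:

import math

def aggressive_lr(it, total_iters, base_lr=0.012):
    # First 50% of iterations: LR held at the base value.
    if it < 0.5 * total_iters:
        return base_lr
    # Next 10%: LR raised to 2x the base value (0.024).
    if it < 0.6 * total_iters:
        return 2 * base_lr
    # Remaining 40%: cosine annealing from 0.024 down to 0.
    progress = (it - 0.6 * total_iters) / (0.4 * total_iters)
    return 2 * base_lr * 0.5 * (1 + math.cos(math.pi * progress))

# In the training loop, the optimizer's LR would be updated like:
# for g in optimizer.param_groups:
#     g['lr'] = aggressive_lr(it, total_iters)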
- top-5 = 76.13. LR = 0.008, batch size = 64; same aggressive LR schedule.
- top-5 = 77.81. LR = 0.006, batch size = 64; same aggressive LR schedule. Best result I had; shows some overfitting on the training data.
- top-5 = 75.5. LR = 0.012, batch size = 64; same aggressive LR schedule.