
details of n_gpu and batch size #13

Answered by fcakyon
Leonmel asked this question in Q&A

This calculation is for a single GPU. If you don't have enough VRAM, lower the BATCH_MULTIPLIER value; the learning rate will be scaled automatically.

If you train with more than one GPU, also multiply by the number of GPUs: lr = 0.01 / 8 * BATCH_MULTIPLIER * LR_MULTIPLIER * NUM_GPUS
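
For reference, here is a minimal sketch of that scaling rule in Python. The variable names mirror the ones in this answer; the example multiplier values are placeholders, not defaults from the repo.

```python
# Linear learning-rate scaling as described above.
BASE_LR = 0.01 / 8     # per-GPU base learning rate from the formula

BATCH_MULTIPLIER = 2   # lower this if you run out of VRAM
LR_MULTIPLIER = 1.0
NUM_GPUS = 4           # set to your GPU count for multi-GPU training

lr = BASE_LR * BATCH_MULTIPLIER * LR_MULTIPLIER * NUM_GPUS
print(f"scaled lr: {lr}")  # 0.01 with these placeholder values
```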

This discussion was converted from issue #8 on September 29, 2022 17:24.