@rekriz11 @ruyimarone What would it take to support inference on the large (8 x 80GB A100) OPT model? Do you know if that's possible with the public metaseq software? And do all GPUs still need to be on the same host, or can they be distributed?
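For the multi-host half of the question: anything built on torch.distributed can in principle place model-parallel ranks on more than one machine, as long as every rank can reach a common rendezvous address and the interconnect is fast enough for the tensor-parallel all-reduces. A minimal sketch of that setup, assuming a generic torchrun-launched entry point rather than metaseq's actual launcher or flags:

```python
# Sketch only: plain-PyTorch process-group init showing that ranks are not
# required to live on one host. This is NOT metaseq's launcher; the script
# and its invocation below are hypothetical.
import os

import torch
import torch.distributed as dist


def init_model_parallel_group():
    # torchrun exports RANK, WORLD_SIZE, LOCAL_RANK, MASTER_ADDR, MASTER_PORT;
    # with --nnodes=2 --nproc_per_node=8 this gives a 16-GPU world spanning hosts.
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
    return dist.get_rank(), dist.get_world_size()


if __name__ == "__main__":
    rank, world_size = init_model_parallel_group()
    print(f"rank {rank}/{world_size} on {os.uname().nodename}")
```

Launched on each host with something like `torchrun --nnodes=2 --nproc_per_node=8 --node_rank=<0|1> --master_addr=<host0> sketch.py`, this gives a 16-GPU world across two machines. Whether metaseq's model-parallel initialization and checkpoint sharding actually tolerate that layout (and whether the cross-host bandwidth makes it usable for 175B inference) is exactly the open question here.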
ccmaymay changed the title from "Look up supporting multiple machines (large OPT, BLOOM)" to "Plan for extra-large model support (OPT 175B, Bloom 176B)" on Aug 4, 2022.