How can I increase the number of trainable adapter parameters in AdapterHub while training an adapter module for GPT-2 text generation? Despite trying various approaches, I have been unable to increase the number of trainable adapter parameters. Can someone provide a code example or suggest a solution? Unfortunately, I could not find anything on this while navigating the documentation.
Hey @skylersterling, how you can increase the number of parameters depends on the adapter architecture you are using. If you are using a bottleneck adapter (e.g. with the PfeifferConfig or HoulsbyConfig), you can change the reduction factor: a smaller reduction factor makes the bottleneck wider, so the number of trainable parameters increases.
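To see why the reduction factor behaves this way, here is a back-of-the-envelope sketch. The helper below is an illustration, not the library's exact parameter accounting (it counts only the down/up projections and their biases, one adapter per layer as in the Pfeiffer architecture); the hidden size 768 and 12 layers assumed here correspond to GPT-2 small, and the name `reduction_factor` mirrors the bottleneck configs mentioned above.

```python
# Rough estimate of trainable parameters for a bottleneck adapter:
# a down-projection, a nonlinearity, and an up-projection per layer.
# Assumptions: hidden size 768 and 12 layers (GPT-2 small),
# one adapter module per transformer layer (Pfeiffer-style).

def bottleneck_adapter_params(hidden_size: int, reduction_factor: int,
                              num_layers: int) -> int:
    """Approximate trainable parameter count of a bottleneck adapter."""
    bottleneck = hidden_size // reduction_factor
    down = hidden_size * bottleneck + bottleneck   # W_down weights + bias
    up = bottleneck * hidden_size + hidden_size    # W_up weights + bias
    return num_layers * (down + up)

# Smaller reduction factor -> wider bottleneck -> more trainable parameters.
for r in (16, 8, 2):
    print(f"reduction_factor={r}: ~{bottleneck_adapter_params(768, r, 12)} params")

# With the adapter-transformers library, the corresponding knob would look
# roughly like this (hypothetical sketch -- check the library docs for the
# exact import path and signature):
#   from transformers.adapters import PfeifferConfig
#   model.add_adapter("gpt2_adapter", config=PfeifferConfig(reduction_factor=2))
```

Halving the reduction factor roughly doubles the bottleneck dimension and therefore roughly doubles the adapter's parameter count.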
Thank you! Took a bit of experimenting, but it seems that decreasing the reduction factor increases the number of parameters from the default of 42000 to 5 billion haha :-: