Difficulties making the glue example work #460
Thanks for your question.
Additionally, to train an adapter make sure you use the …
This issue has been automatically marked as stale because it has been without activity for 90 days. This issue will be closed in 14 days unless you comment or remove the stale label.
This issue was closed because it was stale for 14 days without any activity.
Environment info
transformers version: 4.21.0
Conda environment:
Information
Model I am using (Bert, XLNet ...): bert-base-uncased, distilbert-base-uncased
Language I am using the model on (English, Chinese ...): English (GLUE task SST-2)
Adapter setup I am using (if any):
The problem arises when using:
The tasks I am working on is:
To reproduce
Steps to reproduce the behavior:
from transformers.adapters import (AdapterConfig, AutoAdapterModel, AdapterTrainer, MultiLingAdapterArguments)
--model_name_or_path bert-base-uncased --output_dir ./output --task_name sst2
If I run it with my own script, I get the same result:
If I update the transformers library (every version above 4.22.0), I get a completely different error:
Expected behavior
I would expect to be able to load the appropriate model using AutoAdapterModel and then add the classification head to it. I would also expect to be able to use the latest transformers library (see the last stack trace).
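The expected flow can be sketched as follows. This is a minimal illustration, not the issue author's exact script: it assumes the adapter-transformers package is installed (it provides the `transformers.adapters` module imported above), and the adapter/head name "sst2" and default adapter config are illustrative choices.

```python
# Sketch of the expected workflow: load a base checkpoint with
# AutoAdapterModel, add a task adapter plus a classification head,
# and activate adapter training (base weights frozen).
# Assumes adapter-transformers is installed; guarded so the sketch
# is importable even without it.
try:
    from transformers.adapters import AutoAdapterModel
except ImportError:  # adapter-transformers not installed
    AutoAdapterModel = None


def build_sst2_adapter_model(checkpoint: str = "bert-base-uncased"):
    """Return a model ready for adapter training on SST-2 (binary task)."""
    if AutoAdapterModel is None:
        raise RuntimeError("adapter-transformers is required for this sketch")
    model = AutoAdapterModel.from_pretrained(checkpoint)
    model.add_adapter("sst2")                            # new task adapter
    model.add_classification_head("sst2", num_labels=2)  # SST-2 is binary
    model.train_adapter("sst2")                          # freeze base model, train only the adapter
    return model
```

The resulting model would then be passed to `AdapterTrainer` (also imported in the reproduction step above) instead of the plain `Trainer`, which is what the truncated maintainer comment appears to point at.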