Difficulties making the glue example work #460

Closed
dennis-zyska opened this issue Dec 12, 2022 · 4 comments
Labels: question (Further information is requested), Stale

dennis-zyska commented Dec 12, 2022

Environment info

  • transformers version: 4.21.0
  • Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17
  • Python version: 3.10.6
  • Huggingface_hub version: 0.10.1
  • PyTorch version (GPU?): 1.12.1 (False)
  • Tensorflow version (GPU?): not installed (NA)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using GPU in script?:
  • Using distributed or parallel set-up in script?:

Conda environment:

name: <internal>
channels:
 - pytorch
 - defaults
dependencies:
 - python>=3.8.5
 - pip>=22.1.2
 - cudatoolkit>=11.3.1
 - pytorch>=1.12.1
 - numpy>=1.23.1
 - pip:
   - pytorch-lightning>=1.7.7
   - transformers==4.21.0
   - torchmetrics>=0.9.3
   - datasets>=2.4.0
   - pydevd-pycharm~=213.7172.26
   - wandb>=0.13.3
   - pyJoules>=0.5.1
   - scipy>=1.9.1
   - sklearn>=0.0
   - deepspeed>=0.7.2
   - codecarbon>=2.1.4
   - pynvml>=11.4.1
   - adapter-transformers>=3.1.0
   - lightning-transformers>=0.2.3
   - evaluate>=0.3.0

Information

Model I am using (Bert, XLNet ...): bert-base-uncased, distilbert-base-uncased

Language I am using the model on (English, Chinese ...): GLUE task sst2

Adapter setup I am using (if any):

The problem arises when using:

  • the official example scripts: (give details below)
  • my own modified scripts: (give details below)

The task I am working on is:

  • an official GLUE/SQuAD task: sst2
  • my own task or dataset: (give details below)

To reproduce

Steps to reproduce the behavior:

  1. Download script from https://github.com/adapter-hub/adapter-transformers/blob/master/examples/pytorch/text-classification/run_glue.py
  2. Change the import lines for AdapterConfig, AutoAdapterModel, AdapterTrainer, and MultiLingAdapterArguments to the following (see the sketch after the traceback below):
    from transformers.adapters import (AdapterConfig, AutoAdapterModel, AdapterTrainer, MultiLingAdapterArguments)
  3. Run script with parameters:
    --model_name_or_path bert-base-uncased --output_dir ./output --task_name sst2
Traceback (most recent call last):
  File "<internal_path>/run_glue_adapters.py", line 681, in <module>
	main()
  File "<internal_path>/run_glue_adapters.py", line 376, in main
	model = AutoAdapterModel.from_pretrained(
  File "<internal_path>/miniconda3/envs/<internal_path>/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 445, in from_pretrained
	model_class = _get_model_class(config, cls._model_mapping)
  File "<internal_path>/miniconda3/envs/<internal_path>/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 359, in _get_model_class
	supported_models = model_mapping[type(config)]
  File "<internal_path>/miniconda3/envs/<internal_path>/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 565, in __getitem__
	return self._load_attr_from_module(model_type, model_name)
  File "<internal_path>/miniconda3/envs/<internal_path>/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 579, in _load_attr_from_module
	return getattribute_from_module(self._modules[module_name], attr)
  File "<internal_path>/miniconda3/envs/<internal_path>/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 539, in getattribute_from_module
	return getattribute_from_module(transformers_module, attr)
  File "<internal_path>/miniconda3/envs/<internal_path>/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 539, in getattribute_from_module
	return getattribute_from_module(transformers_module, attr)
  File "<internal_path>/miniconda3/envs/<internal_path>/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 539, in getattribute_from_module
	return getattribute_from_module(transformers_module, attr)
  [Previous line repeated 983 more times]
  File "<internal_path>/miniconda3/envs/<internal_path>/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 538, in getattribute_from_module
	transformers_module = importlib.import_module("transformers")
  File "<internal_path>/miniconda3/envs/<internal_path>/lib/python3.10/importlib/__init__.py", line 126, in import_module
	return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1024, in _find_and_load
  File "<frozen importlib._bootstrap>", line 170, in __enter__
  File "<frozen importlib._bootstrap>", line 196, in _get_module_lock
  File "<frozen importlib._bootstrap>", line 72, in __init__
RecursionError: maximum recursion depth exceeded while calling a Python object
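
For reference, the import change in step 2 amounts to the following (the original import style is assumed from the adapter-transformers example script; the rest of run_glue.py is left unchanged):

    # Original imports in the adapter-transformers run_glue.py (assumed):
    from transformers import AdapterConfig, AdapterTrainer, AutoAdapterModel, MultiLingAdapterArguments

    # Changed to import from the adapters submodule, as in step 2:
    from transformers.adapters import (
        AdapterConfig,
        AdapterTrainer,
        AutoAdapterModel,
        MultiLingAdapterArguments,
    )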

If I run it with my own script, I get the same result:

Traceback (most recent call last):
 File "<internal_path>/main.py", line 211, in <module>
   model = AutoAdapterModel.from_pretrained(args.model, config=config) 
 File "<internal_path>/conda/miniconda3/envs/<internal_path>/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 445, in from_pretrained
   model_class = _get_model_class(config, cls._model_mapping)
 File "<internal_path>/conda/miniconda3/envs/<internal_path>/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 359, in _get_model_class
   supported_models = model_mapping[type(config)]
 File "<internal_path>/conda/miniconda3/envs/<internal_path>/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 565, in __getitem__
   return self._load_attr_from_module(model_type, model_name)
 File "<internal_path>/conda/miniconda3/envs/<internal_path>/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 579, in _load_attr_from_module
   return getattribute_from_module(self._modules[module_name], attr)
 File "<internal_path>/conda/miniconda3/envs/<internal_path>/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 539, in getattribute_from_module
   return getattribute_from_module(transformers_module, attr)
 File "<internal_path>/conda/miniconda3/envs/<internal_path>/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 539, in getattribute_from_module
   return getattribute_from_module(transformers_module, attr)
 File "<internal_path>/conda/miniconda3/envs/<internal_path>/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 539, in getattribute_from_module
   return getattribute_from_module(transformers_module, attr)
 [Previous line repeated 984 more times]
 File "<internal_path>/conda/miniconda3/envs/<internal_path>/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 538, in getattribute_from_module
   transformers_module = importlib.import_module("transformers")
 File "<internal_path>/conda/miniconda3/envs/<internal_path>/lib/python3.10/importlib/__init__.py", line 126, in import_module
   return _bootstrap._gcd_import(name[level:], package, level)
 File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
 File "<frozen importlib._bootstrap>", line 1024, in _find_and_load
 File "<frozen importlib._bootstrap>", line 170, in __enter__
 File "<frozen importlib._bootstrap>", line 196, in _get_module_lock
 File "<frozen importlib._bootstrap>", line 72, in __init__
RecursionError: maximum recursion depth exceeded while calling a Python object

If I update the transformers library (any version above 4.22.0), I get a completely different error:

  File "<internal_path>/conda/miniconda3/envs/<internal_path>/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1093, in _get_module
	return importlib.import_module("." + module_name, self.__name__)
  File "<internal_path>/conda/miniconda3/envs/<internal_path>/lib/python3.10/importlib/__init__.py", line 126, in import_module
	return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "<internal_path>/conda/miniconda3/envs/<internal_path>/lib/python3.10/site-packages/transformers/adapters/configuration.py", line 9, in <module>
	from .utils import get_adapter_config_hash, resolve_adapter_config
  File "<internal_path>/conda/miniconda3/envs/<internal_path>/lib/python3.10/site-packages/transformers/adapters/utils.py", line 22, in <module>
	from ..utils import get_from_cache, is_remote_url
ImportError: cannot import name 'get_from_cache' from 'transformers.utils' (<internal_path>/conda/miniconda3/envs/<internal_path>/lib/python3.10/site-packages/transformers/utils/__init__.py)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<internal_path>/main.py", line 21, in <module>
	from transformers.adapters import PfeifferConfig, HoulsbyConfig, ParallelConfig, \
  File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
  File <internal_path>/conda/miniconda3/envs/<internal_path>/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1083, in __getattr__
	module = self._get_module(self._class_to_module[name])
  File "<internal_path>/conda/miniconda3/envs/<internal_path>/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1095, in _get_module
	raise RuntimeError(
RuntimeError: Failed to import transformers.adapters.configuration because of the following error (look up to see its traceback):
cannot import name 'get_from_cache' from 'transformers.utils' (<internal_path>/miniconda3/envs/<internal_path>/lib/python3.10/site-packages/transformers/utils/__init__.py)

Expected behavior

I would expect to be able to load the appropriate model using AutoAdapterModel and then add a classification head to it. I would also expect to be able to use the latest transformers library (see the last stack trace above).
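
Concretely, the usage I would expect to work looks roughly like this (a sketch based on the adapter-transformers API; the model name and label count follow the sst2 setup above):

    from transformers.adapters import AutoAdapterModel

    # Load the base model with flexible-head adapter support,
    # then attach a classification head for the task
    model = AutoAdapterModel.from_pretrained("bert-base-uncased")
    model.add_classification_head("sst2", num_labels=2)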

dennis-zyska added the bug (Something isn't working) label on Dec 12, 2022
hSterz (Member) commented Dec 14, 2022

Thanks for your question.

> If I update the transformers library (any version above 4.22.0), I get a completely different error

You have both adapter-transformers and transformers installed in the same environment. adapter-transformers is a direct fork of transformers, and the two share the transformers namespace, which leads to conflicts when both are installed at once. Make sure you only have adapter-transformers installed; that should solve your problem.
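
One way to check which package currently owns the shared namespace is a quick standard-library probe (a minimal sketch; it only inspects what is installed on disk):

    import importlib.util

    # Path of whatever package the name "transformers" resolves to
    print(importlib.util.find_spec("transformers").origin)

    # adapter-transformers ships a transformers.adapters submodule;
    # vanilla transformers (at these versions) does not
    print(importlib.util.find_spec("transformers.adapters") is not None)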

Additionally, to train an adapter, make sure you use the --train_adapter argument when calling the script. Otherwise, it will fine-tune the full model.
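
For context, --train_adapter makes the example script do roughly the following before training starts (a simplified sketch of the adapter-transformers calls; the task name follows the sst2 setup above):

    # Enabled inside run_glue.py by --train_adapter (simplified):
    model.add_adapter("sst2")            # add a new, randomly initialized adapter
    model.train_adapter("sst2")          # freeze the base model; train only the adapter
    model.set_active_adapters("sst2")    # route the forward pass through the adapter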

adapter-hub-bert commented

This issue has been automatically marked as stale because it has been without activity for 90 days. This issue will be closed in 14 days unless you comment or remove the stale label.

calpt added the question (Further information is requested) label and removed the bug (Something isn't working) label on Mar 27, 2023
adapter-hub-bert commented

This issue has been automatically marked as stale because it has been without activity for 90 days. This issue will be closed in 14 days unless you comment or remove the stale label.

adapter-hub-bert commented

This issue was closed because it was stale for 14 days without any activity.

adapter-hub-bert closed this as not planned on Jul 11, 2023