Problem when running training with num_vectors_per_token>1 #7

slawomir-gilewski-wttech opened this issue Aug 30, 2023 · 1 comment


@slawomir-gilewski-wttech

Hi!
After increasing the num_vectors_per_token parameter in the config YAML file, I get the following error when I try to run training:

Traceback (most recent call last):
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/main.py", line 596, in
trainer.fit(model, data)
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 771, in fit
self._call_and_handle_interrupt(
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 724, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 812, in _fit_impl
results = self._run(model, ckpt_path=self.ckpt_path)
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1237, in _run
results = self._run_stage()
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1324, in _run_stage
return self._run_train()
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1354, in _run_train
self.fit_loop.run()
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/venv/lib/python3.10/site-packages/pytorch_lightning/loops/base.py", line 204, in run
self.advance(*args, **kwargs)
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/venv/lib/python3.10/site-packages/pytorch_lightning/loops/fit_loop.py", line 269, in advance
self._outputs = self.epoch_loop.run(self._data_fetcher)
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/venv/lib/python3.10/site-packages/pytorch_lightning/loops/base.py", line 204, in run
self.advance(*args, **kwargs)
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/venv/lib/python3.10/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 208, in advance
batch_output = self.batch_loop.run(batch, batch_idx)
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/venv/lib/python3.10/site-packages/pytorch_lightning/loops/base.py", line 204, in run
self.advance(*args, **kwargs)
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/venv/lib/python3.10/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 88, in advance
outputs = self.optimizer_loop.run(split_batch, optimizers, batch_idx)
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/venv/lib/python3.10/site-packages/pytorch_lightning/loops/base.py", line 204, in run
self.advance(*args, **kwargs)
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/venv/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 203, in advance
result = self._run_optimization(
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/venv/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 249, in _run_optimization
closure()
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/venv/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 148, in call
self._result = self.closure(*args, **kwargs)
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/venv/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 134, in closure
step_output = self._step_fn()
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/venv/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 427, in _training_step
training_step_output = self.trainer._call_strategy_hook("training_step", *step_kwargs.values())
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1766, in _call_strategy_hook
output = fn(*args, **kwargs)
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/venv/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 333, in training_step
return self.model.training_step(*args, **kwargs)
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/ldm/models/diffusion/ddpm.py", line 442, in training_step
loss, loss_dict = self.shared_step(batch)
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/perfusion/perfusion.py", line 125, in shared_step
loss = self(x, c, mask=mask)
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/perfusion/perfusion.py", line 129, in forward
encoding = self.cond_stage_model.encode(c['c_crossattn'], embedding_manager=self.embedding_manager)
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/ldm/modules/encoders/modules.py", line 285, in encode
return self(text, **kwargs)
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/ldm/modules/encoders/modules.py", line 280, in forward
z = self.transformer(input_ids=tokens, **kwargs)
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/ldm/modules/encoders/modules.py", line 259, in transformer_forward
return self.text_model(
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/ldm/modules/encoders/modules.py", line 219, in text_encoder_forward
hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids,
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/ldm/modules/encoders/modules.py", line 143, in embedding_forward
inputs_embeds = embedding_manager(input_ids, inputs_embeds)
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/azureuser/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/perfusion/embedding_manager.py", line 139, in forward
[tokenized_text[row][:col], placeholder_token.repeat(num_vectors_for_token).to(device),
AttributeError: 'int' object has no attribute 'repeat'

I figured this is probably because placeholder_token is supposed to be a tensor, so I fixed it by adding a placeholder_token = torch.tensor(placeholder_token).to(device) line to perfusion/embedding_manager.py, in the else clause of the if self.max_vectors_per_token == 1 check (line 116).

It seems to be working after that :)
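
For anyone else hitting this, here is a small self-contained sketch of what goes wrong and what the added line does (the token id, device, and vector count below are just example values; in the repo the torch.tensor(...) conversion goes into the else branch mentioned above):

    import torch

    # placeholder_token reaches embedding_manager.forward() as a plain Python int,
    # which has no .repeat(), hence the AttributeError above.
    device = "cpu"                 # example; the training device in practice
    num_vectors_for_token = 3      # i.e. num_vectors_per_token > 1
    placeholder_token = 49408      # example token id (a plain int)

    # The added line: wrap the int in a tensor before the multi-vector path uses it.
    placeholder_token = torch.tensor(placeholder_token).to(device)
    print(placeholder_token.repeat(num_vectors_for_token))   # tensor([49408, 49408, 49408])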

@euminds

euminds commented Sep 1, 2023

Hi

Have you run into this problem?

I tried the command

 python main.py \
    --name teddy \
    --base ./configs/perfusion_teddy.yaml \
    --basedir ./ckpt \
    -t True \
    --gpus 0,

but got the error below

Traceback (most recent call last):
File "/data_heat/rjt_project/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/main.py", line 461, in
model = instantiate_from_config(config.model)
File "/data_heat/rjt_project/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/main.py", line 137, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "/data_heat/rjt_project/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/perfusion/perfusion.py", line 54, in init
super().init(*args, **kwargs)
File "/data_heat/rjt_project/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/ldm/models/diffusion/ddpm.py", line 565, in init
self.instantiate_cond_stage(cond_stage_config)
File "/data_heat/rjt_project/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/ldm/models/diffusion/ddpm.py", line 640, in instantiate_cond_stage
model = instantiate_from_config(config)
File "/data_heat/rjt_project/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/ldm/util.py", line 175, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()), **kwargs)
File "/data_heat/rjt_project/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/ldm/modules/encoders/modules.py", line 120, in init
self.tokenizer = CLIPTokenizer.from_pretrained(version)
File "/home/user/miniconda3/envs/perfusion_v1/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1841, in from_pretrained
return cls._from_pretrained(
File "/home/user/miniconda3/envs/perfusion_v1/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2004, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/home/user/miniconda3/envs/perfusion_v1/lib/python3.10/site-packages/transformers/models/clip/tokenization_clip.py", line 334, in init
with open(vocab_file, encoding="utf-8") as vocab_handle:
TypeError: expected str, bytes or os.PathLike object, not NoneType

Thanks
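
From the traceback, the failure is in CLIPTokenizer.from_pretrained(version): the tokenizer's vocab file resolves to None, which typically means the version value points at a model id or local directory where the tokenizer files (vocab.json / merges.txt) can't be found. A quick standalone check, assuming the usual openai/clip-vit-large-patch14 identifier (substitute whatever your config actually sets version to), is:

    # Verify that the CLIP tokenizer files are reachable from this environment.
    from transformers import CLIPTokenizer

    tok = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
    print(tok("a photo of a teddy")["input_ids"])

If this fails the same way, the problem is the model id, cache, or network rather than the Perfusion code itself.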
