
AMD Compatibility? #52

@Derx-cs

Description

I am trying to get it to work on an AMD Radeon RX 7800 XT, but when I try to start it, xFormers throws an error that it requires CUDA.
Running "inference_online.py --acceleration none" I get:
host: 0.0.0.0
port: 7860
reload: False
mode: default
max_queue_size: 0
timeout: 0.0
safety_checker: False
taesd: True
ssl_certfile: None
ssl_keyfile: None
debug: False
acceleration: none
engine_dir: engines
config_path: ./configs/prompts/personalive_online.yaml

WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.1.0+cu121 with CUDA 1201 (you have 2.1.0+cpu)
Python 3.10.11 (you have 3.10.19)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
S:\Miniconda\envs\personalive\lib\site-packages\diffusers\models\dual_transformer_2d.py:20: FutureWarning: DualTransformer2DModel is deprecated and will be removed in version 0.29. Importing DualTransformer2DModel from diffusers.models.dual_transformer_2d is deprecated and this will be removed in a future version. Please use from diffusers.models.transformers.dual_transformer_2d import DualTransformer2DModel, instead.
deprecate("DualTransformer2DModel", "0.29", deprecation_message)
S:\Miniconda\envs\personalive\lib\site-packages\torch\_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.__get__(instance, owner)()
Some weights of the model checkpoint were not used when initializing UNet2DConditionModel:
['conv_norm_out.weight', 'conv_norm_out.bias', 'conv_out.weight', 'conv_out.bias']
Failed to enable xformers: torch.cuda.is_available() should be True but is False. xformers' memory efficient attention is only available for GPU
S:\Downloads\PersonaLive\inference_online.py:240: DeprecationWarning: on_event is deprecated, use lifespan event handlers instead.
Read more about it in the
FastAPI docs for Lifespan Events.
@self.app.on_event("shutdown")
init done

Am I missing something? Am I safe to use newer, ROCm-compatible versions, or am I out of luck because it is only available for NVIDIA GPUs with CUDA?
I am running Windows 10 Pro with extended upgrades.
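For reference, the log above shows torch `2.1.0+cpu`, i.e. a CPU-only build. A minimal sketch (assuming only that PyTorch is importable) of checking which accelerator backend the installed wheel was compiled for; note that ROCm builds of PyTorch reuse the `torch.cuda` API, so `torch.cuda.is_available()` can be True on AMD GPUs there:

```python
# Sketch: report which accelerator backend this PyTorch build was compiled for.
import torch

print("torch version:", torch.__version__)        # a "+cpu" suffix means a CPU-only build
print("cuda available:", torch.cuda.is_available())
# On ROCm builds torch.version.hip is a version string; on CUDA-only
# or CPU-only builds it is None.
print("hip runtime:", torch.version.hip)
```

With the `2.1.0+cpu` wheel from the log, `cuda available` is False and `hip runtime` is None, which is why xFormers refuses to enable its memory-efficient attention.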
