RRFVCRM: Welcome to RRVtuber

Language Documentation

中文文档 (Chinese documentation)

Introduction

A visual-capable AI that generates voice and actions, built on the RWKV model architecture

Project Prospects

  • This project can be flexibly applied to various local deployments of AI virtual anchors (VTubers) or physical robots, with the goal of keeping compute and power consumption low. It provides visual perception, emotion expression, and action generation; visual input already works, and the remaining features are still being improved.

Conventions

  • Commands in the documentation are to be executed in the project's root directory unless otherwise specified
  • python and python3 refer to the same interpreter

🛠 Preparation

Setting Up the Environment

  1. Install Python
  2. Install CUDA/ROCm and the matching version of PyTorch (a quick sanity check is sketched after this list)
  3. Install the required libraries:
pip install -r requirements.txt
  4. If you are using an AMD GPU, add the following environment variables to ~/.bashrc (gfx1100 is used as an example; you can find your GPU's model by running rocminfo):
export ROCM_PATH=/opt/rocm
export HSA_OVERRIDE_GFX_VERSION=11.0.0
  5. Add your user to the render and video groups:
sudo usermod -aG render $USERNAME
sudo usermod -aG video $USERNAME
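To confirm that the PyTorch build you installed can actually see the GPU, a generic check (not a script shipped with this repository) is enough; it works for both CUDA and ROCm builds:

```python
import torch

# Reports whether PyTorch can reach a GPU and which backend the build targets.
print("GPU available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
print("CUDA version:", torch.version.cuda, "| HIP version:", torch.version.hip)
```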

If you are an AMD user and want to enable the CUDA operator acceleration, it is a bit more troublesome: you need to modify the installed rwkv package, and it may not work.

Really want to RUN? Well ~~~

cd ~/.local/lib/python3.10/site-packages/rwkv
vim ./model.py
  • Change lines 37, 46, 472, 505 from extra_cuda_cflags=["--use_fast_math", "-O3", "--extra-device-vectorization"] to extra_cuda_cflags=["-O3", "--hipstdpar", "-xhip"]
  • Globally search for os.environ["RWKV_CUDA_ON"] = '0' and change it to os.environ["RWKV_CUDA_ON"] = '1'
python webui.py

Good luck!

If it worked, you will find a hip directory under ~/.local/lib/python3.10/site-packages/rwkv containing the converted CUDA parallel operators.
Failed? Globally search for os.environ["RWKV_CUDA_ON"] = '1' and change it back to os.environ["RWKV_CUDA_ON"] = '0'
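For reference, the rwkv package reads the RWKV_CUDA_ON (and RWKV_JIT_ON) environment variables when it is imported, so the same switch can also be flipped from whichever script creates the model, before the import. A minimal sketch, not code taken from this repository:

```python
import os

# Must be set before `from rwkv.model import RWKV` runs:
# "1" builds the custom CUDA/HIP kernels, "0" falls back to pure PyTorch.
os.environ["RWKV_CUDA_ON"] = "1"
os.environ["RWKV_JIT_ON"] = "1"

from rwkv.model import RWKV  # the import picks up the flags above
```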

📥 Download Pre-trained Weights

Pre-trained weights are stored in ./weights/

If you have enough video memory, you can try a larger pre-trained model
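No download script is provided here; if the checkpoint you want is hosted on Hugging Face, huggingface_hub can fetch it into ./weights/. The repo_id and filename below are assumptions used only as an example of an RWKV-6 checkpoint, not values taken from this project:

```python
from huggingface_hub import hf_hub_download

# Example values only; substitute the checkpoint you actually want to use.
hf_hub_download(
    repo_id="BlinkDL/rwkv-6-world",                            # assumed repository
    filename="RWKV-x060-World-1B6-v2.1-20240328-ctx4096.pth",  # assumed file name
    local_dir="weights",                                       # matches ./weights/
)
```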

📝 Modify Pre-trained Weights Path

  • Line 19 in ./models/rwkv6/dialogue.py
  • Line 19 in ./models/rwkv6/continuation.py
  • Line 17 in ./models/music/run.py
  • Line 11 in ./models/language_test.py
  • Lines 19 and 20 in ./models/visualRWKV/app/app_gpu.py (a sketch of what such a line typically looks like follows this list)
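Each of those lines loads an RWKV checkpoint. A hedged sketch of what such a line usually looks like in code built on the rwkv package; the path is a placeholder and the exact variable names in this repository may differ:

```python
from rwkv.model import RWKV

# Placeholder path: point this at the checkpoint you placed in ./weights/
# (RWKV model paths are commonly written without the ".pth" suffix).
model = RWKV(model="weights/RWKV-x060-World-1B6-v2.1-20240328-ctx4096",
             strategy="cuda fp16")
```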

🧪 Verification

  • Execute
python models/language_test.py
  • If it interacts normally, the preparation is complete

🚀 Quick Run the Language Model (available now!)

python webui.py

👀 Quick Run the Visual-RWKV Model (available now!)

python webui.py

Adjust the language model's running strategy on line 19 of models/rwkv6/dialogue.py (default: "cuda fp16")

Adjust the Visual-RWKV model's running strategy on line 24 of models/visualRWKV/app/app_gpu.py (default: "cuda fp16"). Common strategy strings are sketched below.
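The strategy string follows the rwkv package (ChatRWKV) conventions. A few common values, shown as a hedged illustration rather than settings tested with this project:

```python
# Hypothetical alternatives for the strategy argument on the lines mentioned above.
strategy = "cuda fp16"                    # default: whole model on the GPU in fp16
# strategy = "cuda fp16i8"                # int8-quantized weights, lower VRAM use
# strategy = "cuda fp16 *10 -> cpu fp32"  # first 10 layers on GPU, the rest on CPU
# strategy = "cpu fp32"                   # CPU only: slow, but works without a GPU
```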

Alic is a noob at deep learning, but it does run

📂 Project Structure

🧠 Training (not available yet!) - wait for YuChuXi, she is a lazy little fox

  • Training requires OpenSeeFace to extract facial features. After installing it, configure its path in config/openseeface.json
  • Some datasets may require automatic speech annotation with DeepSpeech

📦 Prepare Data

You can prepare the data yourself or use existing public datasets

⚙️ Data Preprocessing

  • Slice the video or audio (default: 40 s per slice at 25 FPS, matching the non-language model's 25 FPS * 1024 CTX window; see the sketch after this list)
  • Extract HuBERT features and f0
  • Extract facial features from the video
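The slice length is chosen so that one slice fits inside the model's context window; a tiny check of that arithmetic:

```python
# At 25 FPS, a 40-second slice is 25 * 40 = 1000 frames, which fits inside
# the 1024-frame (CTX) window of the non-language model.
fps, seconds, ctx = 25, 40, 1024
frames_per_slice = fps * seconds
assert frames_per_slice <= ctx, (frames_per_slice, ctx)
```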

🎶 Train T2F0

  • Wait for YuChuXi, she is a lazy little fox

🎶 Train TF02M

  • Wait for YuChuXi, she is a lazy little fox

🌟 Extensions

Try rwkv-music-demo

cd ./models/music
python ./run.py
  • The model path is set on line 17 of run.py. If it does not run properly, change strategy='cuda fp32' on line 22 to strategy='cpu fp32'

State Tuning

Refer to https://github.com/JL-er/RWKV-PEFT

rwkv-language-test

  • Go to ./models/rwkv/
  • Run python language_test.py

❓ Having Issues?

  • If you can't run webui.py, in most cases the command-line terminal cannot reach the Hugging Face website. Try using a proxy, and set it in the terminal:
export https_proxy=http://127.0.0.1:[port]
export http_proxy=http://127.0.0.1:[port]
  • parselmouth fails to install: temporarily downgrade setuptools to a version below 58.0 (for example, pip install "setuptools<58.0"), then retry

Other

Future Directions

Acknowledgements
