2026 Update! This project will undergo an overhaul after a release is made for the Linux-ComfyUI-Launcher repo. There have been significant tech developments in the year since Studio Whip began that will bring major improvements to the entire project.
AI-enhanced collaborative content production suite for movies, comics, and interactive visual novels. Create compelling stories with seamlessly integrated tools designed to enhance your creative talents.
A mostly incomplete website is under construction here, and you can see recent development proposals and activity in the GitHub issues.
- Story-Driven Platform: Create original screenplays with visual story-building tools and advanced LLM integrations.
- P2P Real-Time Collaboration: Create together remotely for free, without requiring third-party servers.
- Storyboarding: Generate images, draw sketches, and position 3D assets to create dynamic animatic storyboards linked to your script.
- Audio Editing: Sequence generated/recorded dialogue, music, and SFX within your timelines.
- Video Production: Develop storyboards into rendered scenes using integrated image and video generation models.
- Professional Color Grading: Make expressive color choices in a color-managed environment using scopes, primary/secondary adjustments, and AI-assisted tools.
- Node-Based Compositing: Combine multiple visual elements (renders, footage, effects) into final shots using a flexible node graph system.
System requirements heavily depend on the size and type of AI models you choose to run locally. You can find many models on platforms like Hugging Face and Civitai.
The table below summarizes recommended hardware specifications for different tiers of usage, focusing on local inference:
- AI Performance (AI TOPS) is measured using FP8 precision.
- CPU performance estimates use PassMark CPU Mark scores.
- Storage estimates are minimums for the Studio-Whip base install plus a few models. Your actual needs will be higher depending on the number and size of your models and project assets. NVMe SSDs are highly recommended.
| Tier | Use Case | RAM | VRAM | AI TOPS | Storage | CPU Performance |
|---|---|---|---|---|---|---|
| Entry-Level | Development, Testing | 32GB | 8GB | 250 | 32GB | 20K+ |
| Mid-Range | Education, Personal Projects | 32GB | 16GB | 500 | 128GB | 30K+ |
| High-End | Advanced Projects, Video | 64GB | 24GB+ | 1000 | 512GB | 50K+ |
| Enterprise | Fast Generation | 128GB | 96GB+ | 4000 | 1TB | 80K+ |
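To check your own machine against the table above on Linux, a quick sketch (the `nvidia-smi` query assumes an Nvidia driver is installed; the other commands are standard coreutils):

```shell
# RAM, free storage, and CPU thread count (Linux)
free -h | awk '/^Mem:/ {print "RAM: " $2}'
df -h . | awk 'NR==2 {print "Free storage here: " $4}'
echo "CPU threads: $(nproc)"
# VRAM, if an Nvidia GPU and driver are present
command -v nvidia-smi >/dev/null && \
  nvidia-smi --query-gpu=name,memory.total --format=csv,noheader || true
```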
The following table provides example model combinations suitable for each hardware tier when running locally. These are just suggestions; you can:
- Mix and match models based on your specific tasks (writing, image generation, video generation, etc.).
- Use fewer, larger models or more, smaller models depending on VRAM/RAM.
- Choose models optimized for specific hardware (e.g., INT4/FP8 quantizations if supported).
- Combine local models with cloud APIs.
- Distribute models across CPU and GPU.
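As a rough rule of thumb when choosing quantizations, a model's weight footprint is approximately its parameter count times bytes per parameter, plus runtime overhead. A back-of-the-envelope sketch (the 8B parameter count and the 20% overhead factor are illustrative assumptions, not measurements):

```shell
# Rough VRAM estimate: params (billions) x bytes/param x overhead factor
params_b=8          # e.g. an 8B-parameter model (illustrative)
bytes_per_param=1   # FP8/INT8 ~ 1 byte; INT4 ~ 0.5; FP16 ~ 2
awk -v p="$params_b" -v b="$bytes_per_param" \
  'BEGIN { printf "~%.1f GB of VRAM for weights\n", p * b * 1.2 }'
```

The same 8B model at FP16 would need roughly twice as much, which is why quantized models fit the smaller tiers.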
Hover over a model name to see its license.
| Tier | Creative Writing | Instruct | Image Generation | Video Generation |
|---|---|---|---|---|
| Entry-Level | Use the Instruct model | | | Not Practical |
| Mid-Range | | | | |
| High-End | | | | |
| Enterprise | | | | |
- Vulkan SDK: 1.3 or later
- Rust: Latest stable version (via Rustup)
- An Nvidia GPU is suggested for compatibility and inference performance, but is not strictly required.
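Before building, you can sanity-check the prerequisites from your shell. A minimal sketch; the exact version strings will differ on your machine:

```shell
# Confirm the Rust toolchain is on PATH (installed via rustup)
command -v cargo >/dev/null && cargo --version \
  || echo "cargo not found: install via rustup"
# Confirm the Vulkan SDK's shader compiler is reachable
command -v glslc >/dev/null && glslc --version \
  || echo "glslc not found: install the Vulkan SDK (1.3+)"
```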
- Clone the repository: `git clone https://github.com/<your-repo>/studio-whip.git`
- Navigate to the project: `cd studio-whip/rust`
- Build and run: `cargo run --release`
Install Windows Subsystem for Linux (WSL); this allows you to run the Linux shell script utilities located in `/rust/utilities`.
- Open PowerShell as admin and install WSL: `wsl --install`
- Find available Linux distros: `wsl --list --online`
- Install the latest Ubuntu LTS: `wsl --install -d <distro>`
- Launch the Linux distribution: press `Win+R` and run `Ubuntu`.
- Windows paths in Ubuntu are located under `/mnt/<lowercase-drive-letter>/`.
- You may need to install `dos2unix` within your Linux environment to convert Windows line endings.
  - Install it: `sudo apt update && sudo apt install dos2unix`
  - Example usage: `dos2unix llm_prompt_tool.sh`
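The line-ending conversion can be seen end to end in a small sketch (`tr` is shown as a portable stand-in that does the same thing as `dos2unix`; the `demo.sh` filename is illustrative):

```shell
# Create a script with Windows (CRLF) line endings
printf 'echo hello\r\n' > demo.sh
# Show the stray carriage return (\r) at the end of the line
od -c demo.sh | head -n 1
# Strip the carriage returns (equivalent to: dos2unix demo.sh)
tr -d '\r' < demo.sh > demo_unix.sh
od -c demo_unix.sh | head -n 1   # the \r is gone
```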
After installing the Linux Subsystem, add the Vulkan SDK's glslc compiler to your system variables:
- Press `Win + R`, type `SystemPropertiesAdvanced`, and click `Environment Variables`.
- Under "System Variables" or "User Variables," select `Path` and click `Edit`.
- Click `New` and add `C:\VulkanSDK\<version>\Bin` (replace `<version>` with your installed version).
- Click `OK` to save.
- Verify with `glslc --version` in PowerShell. It should output the compiler version.
Check out the architecture overview, modules documentation, roadmap, and `prompt_tool.sh` to get started.
No. This is complex software in early development, with partial and unimplemented features. It will take at least a year to reach plausible production use.