🎬 AI-powered YouTube Shorts automation tool using LLMs, real-time search, and text-to-speech. Create engaging short-form videos with automated research, voiceovers, and subtitles.

VideoGraphAI 🎬


An open-source AI-powered YouTube Shorts automation tool that revolutionizes content creation using graph-based agents and state-of-the-art AI technologies.

Features • Installation • Usage • Contributing • License

🌟 Overview

VideoGraphAI streamlines the creation of YouTube Shorts using advanced AI technologies. Built with Streamlit, it offers end-to-end video production capabilities from content research to final rendering. The system leverages various AI models and APIs to create engaging, relevant content automatically.

✨ Key Features

  • πŸ” Real-time Research: Automated content research using Tavily Search API
  • πŸ“ AI Script Generation: Flexible LLM compatibility (OpenAI, Groq, etc.)
  • 🎨 Dynamic Visuals: Image generation via TogetherAI (FLUX.schnell)
  • 🎀 Professional Audio: Voiceovers using F5-TTS
  • πŸ“Ί Automated Subtitles: Synchronized captions with Gentle
  • πŸ–₯️ User-Friendly Interface: Built with Streamlit for easy operation

🔄 Workflow

  1. Input → User provides topic, timeframe, and video length
  2. Research → AI researches recent events using graph agents
  3. Content Creation → Generates titles, descriptions, hashtags, and script
  4. Media Production → Creates storyboard and acquires media assets
  5. Audio & Subtitles → Generates voiceover and synchronized captions
  6. Compilation → Assembles final video with all components
  7. Delivery → Presents downloadable video through Streamlit interface
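The stages above form a linear pass over shared state, in the spirit of a graph-agent pipeline. A minimal sketch, assuming a simple state object handed from node to node; the state fields and stage names here are illustrative, not VideoGraphAI's actual internal API:

```python
from dataclasses import dataclass, field

# Hypothetical shared state passed between pipeline stages;
# field names are illustrative, not the project's real schema.
@dataclass
class VideoState:
    topic: str
    timeframe: str
    length_seconds: int
    research: list = field(default_factory=list)
    script: str = ""
    assets: list = field(default_factory=list)
    output_path: str = ""

def run_pipeline(state, stages):
    # Each stage receives the state and returns it enriched,
    # mirroring how graph-based agents hand state between nodes.
    for stage in stages:
        state = stage(state)
    return state

def research_stage(state):
    # Placeholder: a real stage would call the Tavily Search API here.
    state.research.append(f"findings about {state.topic}")
    return state
```

For example, `run_pipeline(VideoState("AI news", "past month", 60), [research_stage])` returns the state with its `research` list populated; adding the remaining stages in order yields the full workflow.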

📋 Prerequisites

  • Python 3.8+
  • FFmpeg
  • Docker (optional, recommended for Gentle server)
  • API Keys:
    • Groq API
    • Together AI API
    • Tavily Search API
    • F5-TTS (local installation)

🚀 Installation

1. Clone Repository

git clone https://github.com/mikeoller82/VideoGraphAI.git
cd VideoGraphAI

2. Environment Setup

# Option 1: Conda (Recommended)
conda create -n videographai python=3.8 pip
conda activate videographai

# Option 2: Virtual Environment
python3 -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

3. Install Dependencies

pip install -r requirements.txt

4. FFmpeg Installation

Installation instructions by OS:

Ubuntu/Debian

sudo apt update
sudo apt install ffmpeg

macOS

brew install ffmpeg

Windows

  • Download from ffmpeg.org
  • Add bin folder to system PATH

5. F5-TTS Setup

git clone https://github.com/SWivid/F5-TTS.git
cd F5-TTS
pip install -r requirements.txt
cd ..

Follow the F5-TTS documentation for torch and CUDA setup. Then place a reference WAV file in F5-TTS/src/f5_tts/infer/examples/basic (this directory lives inside your VideoGraphAI folder after the clone above); a sample of roughly 5–8 seconds is enough. For configuration, F5-TTS ships a basic.toml, but the TOML defined in the generate_voiceover function overrides it, so you can configure your preferred voice there. Any voice works as long as you supply the short WAV sample.

Subtitle Server (run this one command before starting the application)

docker run -d -p 8765:8765 lowerquality/gentle
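To confirm the Gentle container is answering before you launch the app, a small probe against the default port works. This is a convenience sketch; the URL is assumed from the docker command above:

```python
import urllib.request
import urllib.error

def gentle_is_up(url="http://localhost:8765"):
    """Return True if the Gentle server responds on its default port."""
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

If this returns False, check `docker ps` to verify the lowerquality/gentle container is running and mapped to port 8765.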

βš™οΈ Configuration

Create a .env file:

GROQ_API_KEY=your_groq_api_key
BFL_API_KEY=your_black_forest_labs_api_key
TOGETHER_API_KEY=your_together_api_key
TAVILY_API_KEY=your_tavily_api_key
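A minimal sketch of validating these keys at startup, assuming the app reads them from the environment (if you use python-dotenv, call load_dotenv() first); the actual validation in app.py may differ:

```python
import os
from typing import List

REQUIRED_KEYS = [
    "GROQ_API_KEY",
    "BFL_API_KEY",
    "TOGETHER_API_KEY",
    "TAVILY_API_KEY",
]

def missing_keys() -> List[str]:
    """Names of required API keys that are absent or empty in the environment."""
    return [k for k in REQUIRED_KEYS if not os.environ.get(k)]
```

Running a check like this right after loading the .env file lets the app fail fast with a clear message instead of erroring partway through video generation.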

πŸ“ Usage

  1. Launch the application:
streamlit run app.py
  2. Enter parameters:

    • Topic for your video
    • Time frame (past month/year/all)
    • Video length (60/120/180 seconds)
  3. Click "Generate Video" and wait for processing

🔧 Troubleshooting

Common Issues and Solutions
  • API Issues: Verify API keys in .env
  • Gentle Server: Ensure server is running on port 8765
  • FFmpeg: Confirm PATH configuration
  • Dependencies: Check virtual environment activation
  • Video Issues: Review application logs
  • UI Problems: Clear browser cache

👥 Contributing

We welcome contributions! Please follow these steps:

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit changes (git commit -m 'Add AmazingFeature')
  4. Push to branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

πŸ™ Acknowledgements

Powered By

  • Groq: LLM inference for script generation
  • F5-TTS: Advanced text-to-speech capabilities
@article{chen-etal-2024-f5tts,
    title={F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching},
    author={Chen, Yushen and Niu, Zhikang and Ma, Ziyang and Deng, Keqi and Wang, Chunhui and Zhao, Jian and Yu, Kai and Chen, Xie},
    journal={arXiv preprint arXiv:2410.06885},
    year={2024}
}

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


Made with ❤️ by the VideoGraphAI Community

⭐ Star us on GitHub
