synctoon 🎬

Synctoon is a powerful, AI-driven 2D animation tool that automatically creates engaging animated videos from text scripts and audio files. Transform your stories into professional-looking animations with synchronized lip movements, dynamic character expressions, and intelligent scene composition.

🌟 See It In Action

Check out animations created with synctoon on our YouTube channel: Daily YG Stories

✨ Features

  • 🤖 AI-Powered Animation: Leverages Google Generative AI for intelligent text analysis and animation cue extraction
  • 🎭 Automatic Lip Sync: Generates phoneme data to synchronize character mouth movements with dialogue
  • 👁️ Dynamic Character Expressions: Supports head movements, eye expressions, and body language
  • 🎨 Multi-Character Support: Handle multiple characters with distinct visual assets and personalities
  • 🎵 Audio Synchronization: Perfectly aligns animation with audio timing using transcription services
  • 🖼️ Extensible Asset System: Easy-to-use character and background asset management
  • 📹 Frame-by-Frame Generation: Creates smooth animations by compositing individual frames
  • 🔄 Automated Workflow: Complete pipeline from script to final video
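
To make the lip-sync idea concrete, here is a minimal sketch of how phoneme timings can be mapped to per-frame mouth assets at a fixed frame rate. The `Phoneme` structure, the phoneme symbols, and the asset file names are illustrative assumptions, not synctoon's actual data model:

```python
# Sketch: mapping phoneme timings to per-frame mouth shapes at a fixed fps.
from dataclasses import dataclass

FPS = 24

# Hypothetical mapping from phoneme symbol to a mouth asset name.
MOUTH_SHAPES = {"AA": "mouth_open.png", "M": "mouth_closed.png", "F": "mouth_teeth.png"}

@dataclass
class Phoneme:
    symbol: str   # e.g. "AA"
    start: float  # seconds
    end: float    # seconds

def mouth_for_frame(phonemes, frame_index, fps=FPS):
    """Return the mouth asset visible at a given frame, or a resting mouth."""
    t = frame_index / fps
    for p in phonemes:
        if p.start <= t < p.end:
            return MOUTH_SHAPES.get(p.symbol, "mouth_closed.png")
    return "mouth_rest.png"

timeline = [Phoneme("M", 0.0, 0.1), Phoneme("AA", 0.1, 0.35)]
print(mouth_for_frame(timeline, 0))   # frame at t = 0.0
print(mouth_for_frame(timeline, 5))   # frame at t = 5/24 s
```

Because each frame simply looks up the phoneme active at its timestamp, the mouth stays aligned with the audio regardless of frame rate.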

🚀 Quick Start

Prerequisites

  • Python 3.x
  • Docker (for audio transcription service)
  • Google API Key (for AI text analysis)

Installation

  1. Clone the repository:

    git clone https://github.com/Automate-Animation/synctoon.git
    cd synctoon
  2. Install dependencies:

    pip install -r requirements.txt
  3. Start the transcription service:

    cd Docker
    docker-compose up -d
  4. Configure your API key:

    • Open core/core.py
    • Replace GOOGLE_API_KEY = "..." with your actual Google API key
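
If you would rather not hard-code the key, a common alternative is to read it from an environment variable. This is a sketch only; it assumes `core/core.py` defines `GOOGLE_API_KEY` as a module-level constant:

```python
# Sketch: read the API key from the environment instead of hard-coding it,
# so the key never lands in version control.
import os

GOOGLE_API_KEY = os.environ.get("GOOGLE_API_KEY", "")

def have_key() -> bool:
    """True when a key is available; callers can fail fast otherwise."""
    return bool(GOOGLE_API_KEY)
```

Set the variable before running the tool, e.g. `export GOOGLE_API_KEY=your-key`.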

Create Your First Animation

  1. Prepare your story files (or use the provided examples):

    • Text script: example/story/your_story.txt
    • Audio file: example/story/your_story.mp3
  2. Generate the animation:

    cd core
    python create_animation.py -n "my_first_animation"
  3. Combine video with audio (final step):

    ffmpeg -i videos/my_first_animation.mp4 -i ../example/story/your_story.mp3 -c:v copy -c:a aac final_animation.mp4

🎯 Vision & Mission

Our goal is to democratize animation creation. We believe everyone should have access to powerful animation tools regardless of their technical background or budget. Synctoon is completely free and open-source, empowering storytellers, educators, content creators, and hobbyists to bring their ideas to life.

🗂️ Project Structure

synctoon/
├── core/                          # Main animation engine
│   ├── brain_requests/           # AI services integration
│   ├── image_manager/            # Character asset management
│   ├── images/                   # Character and background assets
│   │   ├── characters/          # Character image components
│   │   └── metadata/            # Asset positioning data
│   ├── utils/                    # Utility functions
│   ├── create_animation.py       # Main animation script
│   ├── frame_generator.py        # Frame composition logic
│   └── frame_to_video.py         # Video compilation
├── example/story/                # Sample scripts and audio
├── Docker/                       # Transcription service setup
└── requirements.txt              # Python dependencies

🎨 Character Customization

Synctoon supports extensive character customization:

  1. Add new characters in core/images/characters/character_X/
  2. Organize assets into folders: body/, eyes/, head/, mouth/, background/
  3. Configure positioning in metadata/metadata.json
  4. Support for emotions and expressions through asset variations
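
As a rough illustration of how positioning data might drive compositing, the snippet below reads layer information and orders assets for pasting. The key names (`x`, `y`, `layer`) are hypothetical; check `metadata/metadata.json` for the real schema:

```python
# Sketch: ordering character assets by a hypothetical "layer" field so that
# lower layers (body) are composited before higher ones (mouth).
import json

metadata = json.loads("""
{
  "character_1": {
    "body":  {"x": 100, "y": 200, "layer": 0},
    "mouth": {"x": 140, "y": 120, "layer": 2}
  }
}
""")

def paste_order(character):
    """Asset names sorted by layer, lowest first, ready for compositing."""
    parts = metadata[character]
    return sorted(parts, key=lambda name: parts[name]["layer"])

print(paste_order("character_1"))  # ['body', 'mouth']
```

Sorting by an explicit layer field keeps draw order independent of the order assets happen to appear in the JSON file.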

🔮 Future Roadmap

We're actively working on exciting new features:

  • 🖼️ Dynamic Background Elements: Add contextual images, doodles, and visual elements that enhance storytelling
  • 🌄 Real-time Background Generation: AI-powered background creation that adapts to story context
  • 🎭 Enhanced Character Library: Expanded collection of characters, emotions, and animations
  • 🎨 Visual Effects System: Particle effects, transitions, and cinematic elements
  • 📱 Web Interface: Browser-based animation creation tool
  • 🌍 Multi-language Support: International language and voice support
  • 🤝 Community Asset Marketplace: Share and discover character assets and backgrounds

👨‍💻 Connect With The Creator

Muhammad Kamal - Creator & Lead Developer

📍 Location: Pakistan (UTC+05:00)

🤝 Contributing

We welcome contributions from the community! Whether you're a developer, designer, or storyteller, there are many ways to help:

  • 🐛 Report bugs and suggest features
  • 💻 Submit code improvements and new features
  • 🎨 Create and share character assets
  • 📖 Improve documentation and tutorials
  • 🌟 Star the repository to show your support

For Developers

If you want to contribute code or understand the technical implementation details, please check our comprehensive Developer Guide, which includes:

  • 🏗️ Architecture Overview: Complete system design and data flow
  • 🔧 Code Structure: Detailed breakdown of all components
  • 🧪 Testing Guidelines: How to write and run tests
  • 🚀 Development Setup: Local development environment setup
  • 📊 API Documentation: Integration details and examples
  • 🐛 Troubleshooting: Common issues and solutions

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • Google Generative AI for text analysis capabilities
  • Gentle for audio transcription and alignment
  • The open-source community for inspiration and support

Made with ❤️ for the global storytelling community