## Boilerplate
This is a boilerplate project intended to help bootstrap a full-stack hybrid application with a Python backend and a TypeScript/React frontend. For context, we're building an auto-storyboarding AI using large language model chaining techniques: it takes a short-form directional prompt and generates long-form, full-length stories.
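The chaining idea above can be sketched in a few lines of plain Python. This is a minimal illustration of sequential prompt chaining, not the project's actual pipeline: the step templates are invented for the example, and `fake_llm` stands in for a real model call (e.g. gpt-3.5-turbo via LangChain).

```python
from typing import Callable

# A "chain" here is an ordered list of prompt templates, where each step's
# output is substituted into the next template -- the core idea behind
# sequential LLM chaining. The templates below are illustrative assumptions.
def run_chain(call_llm: Callable[[str], str], directive: str) -> str:
    steps = [
        "Expand this directive into a one-paragraph premise: {x}",
        "Break this premise into numbered scenes: {x}",
        "Write full narrative prose for these scenes: {x}",
    ]
    text = directive
    for template in steps:
        text = call_llm(template.format(x=text))
    return text

# Mock model so the sketch runs without an API key: it echoes its prompt.
def fake_llm(prompt: str) -> str:
    return f"[output of: {prompt[:40]}...]"

print(run_chain(fake_llm, "a heist on a generation ship"))
```

Swapping `fake_llm` for a real completion call turns the same loop into a working chain; libraries like LangChain wrap this pattern with templating and memory on top.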
Inference performance will vary depending on the service, but here are some rough benchmarks:
- Azure OpenAI private endpoint (what I'm using): ~24 s
- OpenAI (GPT, DALL·E) API endpoints: ~92 s
- With Stable Diffusion (instead of DALL·E): ~232 s
### Backend
- FastAPI server with Pydantic typed models for routing requests
- LangChain LLM library for sequential prompt chaining (gpt-3.5-turbo)
- Reference text transformer: DALL·E
- Reference vision transformer: ViT
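The typed request/response contract implied by the FastAPI + Pydantic stack might look like the sketch below. To keep it dependency-free, stdlib dataclasses stand in for the `pydantic.BaseModel` classes FastAPI would actually use, and the field names are assumptions rather than the real schema.

```python
from dataclasses import dataclass, field

# Illustrative request/response shapes for a storyboard endpoint. In the
# real app these would be pydantic.BaseModel subclasses that FastAPI
# validates automatically; the field names here are assumptions.
@dataclass
class StoryRequest:
    directive: str        # short-form directional prompt
    max_scenes: int = 6   # hypothetical tuning knob

    def __post_init__(self) -> None:
        # Mimics the validation Pydantic would perform on the request body.
        if not self.directive.strip():
            raise ValueError("directive must be non-empty")
        if self.max_scenes < 1:
            raise ValueError("max_scenes must be positive")

@dataclass
class StoryResponse:
    scenes: list[str] = field(default_factory=list)
    full_text: str = ""

req = StoryRequest(directive="a lighthouse keeper finds a map")
print(req.max_scenes)  # → 6
```

With Pydantic, the same shapes would also give the API automatic request parsing and OpenAPI schema generation for free.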
### Frontend
- Vite: TypeScript React bootstrapping with TypeScript support + Tailwind CSS.
- Radix UI Primitives: scalable, high-performance UI primitives.
- Redux Toolkit: global state management for inputs and reusable components.
### Roadmap

Deployment [optional]: deployed as a set of Docker containers on the Heroku cloud.

- Docker Compose: containerizes FastAPI (as a Uvicorn container) and React.
- Heroku CLI: allows deployment of containerized apps through Git.
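A minimal `docker-compose.yml` for that layout might look like the following sketch. The actual file ships with the repo; service names, ports, and build contexts here are assumptions based on the structure described above, not the real configuration.

```yaml
# Hypothetical compose file: one Uvicorn container for FastAPI, one for the
# Vite/React client. Ports and build contexts are assumptions.
services:
  backend:
    build: ./backend
    env_file: backend/.env        # holds OPENAI_API_KEY
    command: uvicorn app.main:app --host 0.0.0.0 --port 8000
    ports:
      - "8000:8000"
  frontend:
    build: ./frontend
    ports:
      - "5173:5173"
    depends_on:
      - backend
```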
- Clone this repo.

  ```sh
  git clone https://github.com/sinhaguild/storyboard-ai
  ```

- Create the `.env` file.

  ```sh
  mv backend/.env.example backend/.env
  ```

- Set your OpenAI API key in `backend/.env`.

  ```sh
  OPENAI_API_KEY='your-api-key-here'
  ```

- Run (with Docker).

  ```sh
  docker compose up -d
  ```

- Clean up.

  ```sh
  docker compose down -v
  ```
- To run locally, clone the repo and set environment variables as above.

- Run the server.

  ```sh
  cd backend
  python3 -m venv .venv
  source .venv/bin/activate
  pip install -r requirements.txt  # assuming dependencies are listed here
  uvicorn app.main:app --reload
  ```

- Run the client.

  ```sh
  cd frontend
  npm install
  npm run dev
  ```