A Kaggle-style platform for crowdsourced prompt engineering competitions.
- Competition Listing: Browse all available prompt engineering competitions
- Competition Details: View detailed information, test cases, and leaderboards
- Multi-step Competition Creation: Create competitions with custom test cases and prizes
- Prompt Submission: Submit prompts and get automated evaluation
- User Profiles: Track your submissions and created competitions
- Automated Evaluation: Prompts are evaluated against test cases using OpenAI or Anthropic APIs
- Frontend: Next.js 14 (App Router), React, TypeScript, Tailwind CSS
- UI Components: shadcn/ui
- Backend: Next.js API Routes
- Database: PostgreSQL (Neon) with Prisma ORM
- Authentication: Firebase Auth
- Storage: Firebase Storage (training/validation data)
- Encryption: AES-256-GCM for API keys
- LLM Integration: OpenAI, Anthropic, Google AI (via Vercel AI SDK)
- Node.js 18+
- PostgreSQL database
- OpenAI API key (for GPT models)
- Anthropic API key (for Claude models)
- Clone the repository:
  ```bash
  git clone <repository-url>
  cd Optoprompt
  ```
- Install dependencies:
  ```bash
  npm install
  ```
- Set up environment variables:
  ```bash
  cp .env.example .env
  ```
  Edit .env.local and add:
  - DATABASE_URL: Your PostgreSQL connection string
  - FIREBASE_SERVICE_ACCOUNT_KEY: Your Firebase service account JSON (paste the entire JSON as a single-line string)
    - Get from: Firebase Console → Project Settings → Service Accounts → Generate New Private Key
  - API_KEY_ENCRYPTION_SECRET: Generate with `openssl rand -hex 32`
  - OPENAI_API_KEY: Your OpenAI API key (optional, for GPT models)
  - ANTHROPIC_API_KEY: Your Anthropic API key (optional, for Claude models)
  - GOOGLE_GENERATIVE_AI_API_KEY: Your Google AI API key (optional, for Gemini models)
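  For reference, a minimal `.env.local` could look like this sketch (all values are placeholders):
  ```bash
  # Placeholder values; replace with your own
  DATABASE_URL="postgresql://user:password@host:5432/optoprompt"
  FIREBASE_SERVICE_ACCOUNT_KEY='{"type":"service_account","project_id":"your-project", ...}'
  API_KEY_ENCRYPTION_SECRET="<output of: openssl rand -hex 32>"
  OPENAI_API_KEY="sk-..."
  ANTHROPIC_API_KEY="sk-ant-..."
  GOOGLE_GENERATIVE_AI_API_KEY="..."
  ```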
- Set up the database:
  ```bash
  npx prisma migrate dev
  npx prisma generate
  ```
- Run the development server:
  ```bash
  npm run dev
  ```
  Open http://localhost:3000 in your browser.
- Push your code to GitHub
- Import the project to Vercel
- Set up environment variables in Vercel:
  - DATABASE_URL: Your production PostgreSQL URL (we recommend using Neon or Supabase)
  - FIREBASE_SERVICE_ACCOUNT_KEY: Paste your Firebase service account JSON as a single-line string
  - API_KEY_ENCRYPTION_SECRET: Generate with `openssl rand -hex 32`
  - OPENAI_API_KEY: Your OpenAI API key (optional)
  - ANTHROPIC_API_KEY: Your Anthropic API key (optional)
  - GOOGLE_GENERATIVE_AI_API_KEY: Your Google AI API key (optional)
- Deploy!
For production, we recommend using one of these PostgreSQL hosting services:
- Vercel Postgres: Easy integration with Vercel
- Supabase: Free tier with generous limits
- Railway: Simple PostgreSQL hosting
- Neon: Serverless PostgreSQL
After setting up your database, run migrations:
```bash
npx prisma migrate deploy
```
Security Model:
- Organizers provide their own API keys (not platform keys)
- Keys encrypted with AES-256-GCM before storage
- Validation data stored in Firebase Storage (private, Admin SDK access only)
- Training data public (participants download for testing)
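The encryption step can be sketched roughly as follows; this is a minimal sketch using Node's built-in crypto module and the API_KEY_ENCRYPTION_SECRET from setup, not necessarily the exact lib/encryption.ts implementation:

```typescript
import crypto from "node:crypto";

// 32-byte key derived from the hex secret generated with `openssl rand -hex 32`.
const key = Buffer.from(process.env.API_KEY_ENCRYPTION_SECRET!, "hex");

// Encrypt an organizer's API key before it is written to the database.
export function encryptApiKey(plaintext: string): string {
  const iv = crypto.randomBytes(12); // 96-bit IV, the recommended size for GCM
  const cipher = crypto.createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag(); // integrity tag; decryption fails if data is tampered with
  return [iv.toString("hex"), tag.toString("hex"), ciphertext.toString("hex")].join(":");
}

// Decrypt a stored key just before an evaluation run.
export function decryptApiKey(payload: string): string {
  const [ivHex, tagHex, dataHex] = payload.split(":");
  const decipher = crypto.createDecipheriv("aes-256-gcm", key, Buffer.from(ivHex, "hex"));
  decipher.setAuthTag(Buffer.from(tagHex, "hex"));
  return Buffer.concat([
    decipher.update(Buffer.from(dataHex, "hex")),
    decipher.final(),
  ]).toString("utf8");
}
```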
Evaluation Flow:
- User submits prompt → saved as PENDING
- Background job fetches validation data from Firebase
- Runs prompt against each test case using organizer's API key
- Compares outputs, calculates accuracy score
- Updates submission status to COMPLETED
- Updates competition leaderboard if score is best
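A simplified sketch of this loop (illustrative names; the actual lib/evaluation.ts may score differently):

```typescript
interface TestCase {
  input: string;
  expectedOutput: string;
}

// callModel is assumed to wrap lib/llm.ts with the organizer's decrypted key
// and the competition's chosen model.
async function evaluateSubmission(
  prompt: string,
  testCases: TestCase[],
  callModel: (prompt: string, input: string) => Promise<string>,
): Promise<number> {
  let correct = 0;
  for (const testCase of testCases) {
    const output = await callModel(prompt, testCase.input);
    // Exact-match comparison after trimming; the real scorer may normalize further.
    if (output.trim() === testCase.expectedOutput.trim()) correct++;
  }
  // Accuracy: fraction of validation cases whose output matches the expected output.
  return testCases.length > 0 ? correct / testCases.length : 0;
}
```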
Key Modules:
- `lib/evaluation.ts` - Evaluation engine
- `lib/llm.ts` - Multi-provider LLM interface
- `lib/encryption.ts` - API key encryption
- `firebase/firebaseadmin-storage.ts` - Private data access
```
├── app/
│   ├── api/                         # API routes
│   │   ├── competitions/            # Competition CRUD
│   │   └── submissions/             # Submission handling & evaluation
│   ├── competitions/                # Competition pages (list, detail, create, submit)
│   └── _components/                 # Shared components
├── lib/                             # Core utilities
│   ├── evaluation.ts                # Prompt evaluation engine
│   ├── llm.ts                       # LLM provider interface
│   ├── encryption.ts                # API key encryption
│   └── types.ts                     # Shared types
├── firebase/                        # Firebase integration
│   ├── firebasefrontend.ts          # Client SDK (auth, public storage)
│   └── firebaseadmin-storage.ts     # Admin SDK (private validation data)
└── prisma/schema.prisma             # Database schema
```
| Endpoint | Method | Auth | Description |
|---|---|---|---|
| `/api/competitions` | GET | No | List all competitions |
| `/api/competitions` | POST | Yes | Create competition |
| `/api/competitions/[id]` | GET | No | Get competition details |
| `/api/competitions/[id]` | PUT | Yes | Update competition |
| `/api/competitions/[id]` | DELETE | Yes | Delete competition |
| `/api/competitions/[id]/submit` | POST | Yes | Submit prompt |
| `/api/competitions/[id]/leaderboard` | GET | No | Get leaderboard |
| `/api/submissions/[id]` | GET | Yes | Get submission status |
| `/api/practice` | POST | Yes | Submit practice challenge |

Auth header: `Authorization: Bearer <firebase-token>`
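For example, a client-side submission call with this header might look like the following sketch (the request body shape is an assumption, not the documented schema):

```typescript
import { getAuth } from "firebase/auth";

// Submit a prompt to a competition using the signed-in user's Firebase ID token.
async function submitPrompt(competitionId: string, prompt: string) {
  const user = getAuth().currentUser;
  if (!user) throw new Error("Not signed in");
  const token = await user.getIdToken();

  const res = await fetch(`/api/competitions/${competitionId}/submit`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify({ prompt }), // body shape is assumed for illustration
  });
  if (!res.ok) throw new Error(`Submission failed: ${res.status}`);
  return res.json();
}
```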
Organizers can create competitions with:
- Title, description, and requirements
- Model selection (GPT-4, GPT-3.5, Claude variants)
- Character/token limits for prompts
- Training and validation test cases (JSON format)
- Prize pool and distribution rules
- Start and end dates
- Optional target score (competition ends early if reached)
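As a rough illustration, a creation payload could be shaped like this (field names are hypothetical and may not match the actual POST /api/competitions schema):

```typescript
// Hypothetical request shape for creating a competition; for illustration only.
interface CreateCompetitionRequest {
  title: string;
  description: string;
  requirements: string;
  model: string;                 // e.g. "gpt-4", "gpt-3.5-turbo", or a Claude variant
  maxPromptLength: number;       // character/token limit for submitted prompts
  trainingCases: { input: string; expectedOutput: string }[];   // public, downloadable
  validationCases: { input: string; expectedOutput: string }[]; // private, used for scoring
  prizePool: number;
  prizeDistribution: number[];   // e.g. [0.5, 0.3, 0.2] for the top three
  startDate: string;             // ISO 8601
  endDate: string;               // ISO 8601
  targetScore?: number;          // optional: competition ends early if reached
}
```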
- Users submit their prompts
- System runs the prompt against validation test cases
- Each test case is evaluated using the specified LLM
- Results are compared with expected outputs
- Score is calculated based on accuracy
- Leaderboard is updated
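Under the hood, running one test case through the Vercel AI SDK might look roughly like this sketch (model name and key handling are assumptions; the real lib/llm.ts may differ):

```typescript
import { generateText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";

// Run one test case: the submitted prompt becomes the system message and the
// test case input becomes the user message. The organizer's decrypted API key
// is used, never a platform key.
export async function runTestCase(
  submittedPrompt: string,
  testInput: string,
  organizerApiKey: string,
): Promise<string> {
  const openai = createOpenAI({ apiKey: organizerApiKey });
  const { text } = await generateText({
    model: openai("gpt-4o-mini"), // the organizer's selected model in practice
    system: submittedPrompt,
    prompt: testInput,
  });
  return text;
}
```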
Test cases should be uploaded as JSON arrays:
```json
[
  {
    "input": "Your input text here",
    "expectedOutput": "The expected output"
  },
  {
    "input": "Another input",
    "expectedOutput": "Another expected output"
  }
]
```
Planned features:
- Custom model support (users upload their own models)
- Automated prompting integration (DSPy)
- Advanced scoring metrics
- Email notifications
- Payment integration
- Real-time leaderboard updates
- Competition categories and tags
- Discussion forums per competition
Contributions are welcome! Please feel free to submit a Pull Request.
MIT License