(Note: The source code for this project is private. This repository serves as a detailed case study of the production backend architecture I built to power the live, hosted DomaiNudge platform.)
DomaiNudge is a fully functional, hosted AI platform that generates unique business names for real users. As the Backend & AI Developer, I was responsible for architecting and building the entire production backend to be high-performance, cost-effective, and scalable enough to serve live traffic.
- My Role: Backend & AI Developer (Powering the live React frontend)
- Status: Live Hosted Platform (Private Backend Repository)
- Live Demo: https://domainudge.com/
The goal was to build a robust backend that could automate the creative loop of name generation. It needed to be fast, smart, and cost-effective, handling complex AI prompts and providing instant, reliable feedback to a live production frontend client.
- AI-Powered Generation: Architected the backend service to leverage the `gpt-4` model from OpenAI, translating simple user requests into intelligent, context-aware prompts.
- Real-Time Domain Checks: Integrated a `checkdomain.js` utility to make live, asynchronous API calls to multiple domain registrars, checking availability for each AI-generated name instantly.
- High-Performance Caching: Implemented a `Redis` cache to store OpenAI responses, dramatically speeding up repeat queries and significantly reducing API costs under load.
- Clean REST API: Designed and built a well-structured `Node.js`/`Express` REST API to serve the live production frontend, handling all business logic, AI orchestration, and cache management.
- Production-Ready Architecture: Built the API to handle requests asynchronously, allowing the React frontend to poll for results for a smooth user flow. The entire system was designed for a production environment, managing API keys, CORS, and error handling for public-facing users.
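The real-time domain checks above fan out to several registrars at once. The sketch below shows one way that concurrent check could look; the registrar client objects and `checkAvailability` helper are hypothetical stand-ins for the actual `checkdomain.js` code, which is private.

```javascript
// Sketch of a concurrent availability check across multiple registrars.
// Each registrar client is assumed to expose an async isAvailable(domain).
async function checkAvailability(name, registrars) {
  // Query all registrars in parallel; one slow or failing registrar
  // must not block the others, so allSettled is used instead of all.
  const results = await Promise.allSettled(
    registrars.map((r) => r.isAvailable(`${name}.com`))
  );

  // Keep only the registrars that actually responded.
  const answers = results
    .filter((r) => r.status === "fulfilled")
    .map((r) => r.value);

  // Report the name as available only if every responding registrar agrees.
  return answers.length > 0 && answers.every(Boolean);
}

module.exports = { checkAvailability };
```

Using `Promise.allSettled` rather than `Promise.all` means a single registrar outage degrades the check instead of failing it outright, which matters for a public-facing endpoint.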
| Area | Technology |
|---|---|
| Backend | Node.js, Express |
| AI | OpenAI (gpt-4) |
| Database/Cache | Redis (using ioredis) |
| API/Infra | dotenv, cors, nodemon |
This project's complexity was entirely on the backend, as it was designed to serve a live application reliably.
- API Call: The live React frontend makes a `POST` request to the `/api/v1/names/generate` endpoint on the Express backend, sending the user's prompt data.
- Backend (Node.js): The `nameService` receives and validates the request.
- Challenge & Solution (Caching): My primary concern for a live application was OpenAI API cost and latency. To solve this, I implemented a Redis cache:
  - A unique cache key is generated from the user's exact request parameters.
  - `Redis` is checked for this key. On a cache hit, the stored JSON is returned instantly (sub-10ms response).
  - On a cache miss, the backend constructs a detailed prompt and queries the `gpt-4` API.
  - The AI's response is then parsed, stored in `Redis` (with a 1-hour TTL), and sent back to the frontend client.
- I chose Redis over a simple in-memory `Map` because, for a production application, it provides persistence, can be scaled independently of the API server, and is a robust, industry-standard caching solution. The `ioredis` library's clean, promise-based API kept the asynchronous cache-aside logic readable and maintainable. This was the most critical architectural decision for making the project viable from a cost perspective.
This architecture also sets the stage for several key backend features I planned for the live platform:
- User Accounts: Add a full authentication service (e.g., JWT) to allow users to save their favorite name lists and search history, requiring a new `PostgreSQL` or `MongoDB` database for user data.
- Model Selection: Expose a new API endpoint to allow clients to select different AI models (e.g., `gpt-3.5-turbo`) for faster/cheaper generation.
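If the model-selection endpoint were built, the backend should validate the client's choice against an allow-list rather than passing it straight to OpenAI. A minimal sketch of that guard (the function and constant names are hypothetical):

```javascript
// Models a client is allowed to request; anything else falls back to the
// default. The names mirror the models mentioned above.
const ALLOWED_MODELS = new Set(["gpt-4", "gpt-3.5-turbo"]);
const DEFAULT_MODEL = "gpt-4";

// Resolve the model to use for a request, ignoring unknown or missing input.
function resolveModel(requested) {
  return ALLOWED_MODELS.has(requested) ? requested : DEFAULT_MODEL;
}

module.exports = { resolveModel };
```

An allow-list keeps a malicious or buggy client from selecting an expensive or nonexistent model, which matters once model choice is client-controlled.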