Qualen turns requirements into compliant, auditable test suites in minutes. Think Jira-like workflows purpose-built for HIPAA-conscious teams: generate tests with AI, maintain end-to-end traceability, and export to your ALM tools.
- Problem: Healthcare QA is slow, spreadsheet-heavy, and difficult to audit.
- Solution: A unified workspace with AI-generated tests, Jira-style workflows, and traceability guards designed for compliance.
- Impact: 95% faster test creation, always audit-ready, and smoother ALM handoff.
- Ingest requirements (DOCX, PDF, CSV, or text).
- Generate positive/negative/boundary test cases via LLM.
- Maintain a live traceability matrix from Requirement → Tests → Evidence.
- Push curated tests to Jira/Polarion/Azure DevOps and keep IDs in sync.
- Guide teams with a conversational, context-aware QA copilot.
- Jira-style Boards & Tables: Familiar workflows for regulated teams.
- AI Test Generation: Positive, negative, and boundary suites mapped to each requirement.
- Traceability Matrix: Always-on coverage views; audit-ready reports.
- ALM Integrations: Push to Jira/ADO/Polarion and sync identifiers.
- HIPAA-Conscious Guardrails: Status badges, exportable artifacts, and explainable workflows.
- Conversational Copilot (Gemini): Context-aware chat that explains navigation and helps users get work done.
- Enterprise Data Protection Cues: Clear, visible trust signals and security-first defaults.
| Area | Capability | Status |
|---|---|---|
| Requirements | Upload DOCX/PDF/CSV/Text, manual create | Available |
| AI Generation | Positive/Negative/Boundary test suites | Available |
| Traceability | Requirement ↔ Test ↔ Evidence matrix | Available |
| ALM | Push to Jira/ADO/Polarion, sync IDs | Available (extensible) |
| Chat Copilot | Context-aware guidance | Demo on landing |
| Compliance | Audit-ready exports, status badges | Available |
- Home → "Launch Dashboard" to see the Jira-like workspace.
- Requirements → Upload a sample DOCX/PDF or paste text.
- Approve AI-generated tests for a requirement.
- Open Traceability Matrix → verify end-to-end links.
- ALM Panel → push selected tests to Jira/ADO and view synced IDs.
- Open Chat → ask "Show tests for REQ-001" or "How do I export to ALM?"
High-level design optimized for reliability, explainability, and compliance:
- Frontend: Next.js 15 (App Router), TypeScript, shadcn/ui, Tailwind CSS.
- AI: Google Gemini (configured via GEMINI_API_KEY) for generation and contextual assistance.
- Data & Analytics: BigQuery-backed analytics (columnar, scalable) for traceability insights.
- Services: Containerized workloads on Cloud Run (autoscaling, zero-to-peak elasticity).
- Networking & Security: VPC + IAM, least-privilege roles, signed URLs, encryption in transit/at rest.
- Observability: Metrics/logs/alerts with SLO-driven health checks.
- Event-driven Pipelines: Reliable orchestration for generation and syncing tasks.
Note: This repo contains the Next.js frontend and API routes. Cloud infrastructure references are outlined for clarity and can be adapted to your environment.
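As an illustration of the BigQuery-backed analytics mentioned above (a design target rather than something wired into this repo), a coverage query could be issued roughly as follows. The table layout, query, and helper name are hypothetical; the client and environment variable names follow .env.example:

```ts
import { BigQuery } from "@google-cloud/bigquery";

// Credentials come from GOOGLE_APPLICATION_CREDENTIALS_JSON or the ambient
// service account; project/dataset/table names follow .env.example.
const bigquery = new BigQuery({ projectId: process.env.BIGQUERY_PROJECT_ID });

// Hypothetical coverage query: number of generated tests per requirement.
export async function coverageByRequirement() {
  const table = `${process.env.BIGQUERY_PROJECT_ID}.${process.env.BIGQUERY_DATASET}.${process.env.BIGQUERY_TABLE_TESTS}`;
  const [rows] = await bigquery.query({
    query: `SELECT requirementId, COUNT(*) AS testCount FROM \`${table}\` GROUP BY requirementId`,
  });
  return rows as { requirementId: string; testCount: number }[];
}
```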
- Faster Time-to-Validation: Generate and approve complete suites in minutes.
- Reduced Risk: Built-in traceability and exportable evidence for audits.
- Fit to Workflow: Mirrors how teams already operate in Jira/ADO.
- Explainable AI: The copilot explains steps, states, and how to navigate the app.
- Clear separation of concerns, typed contracts, and composable UI.
- Server components handle data fetching; client components stay lean and interactive.
- Explicit API routes for upload, generation, traceability, and ALM push.
- Tailwind + shadcn/ui for consistent, accessible UI — no styled-jsx.
- Extensible integration surfaces for ALM tools and data stores.
- / — Marketing page with feature overview
- /dashboard — Jira-like board and tables
- /requirements — Uploads, creation, and detail pages
- Optional Panels/Components: RequirementCard, TestCaseTable, TraceabilityMatrix, ALMIntegrationPanel, SidebarChat, FiltersBar
Codebase directories of interest:
- src/app — Next.js routes (App Router)
- src/components — Reusable UI and feature components
- src/lib — LLM client (Gemini), utilities
- src/app/api — API routes
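For orientation, here is a minimal sketch of what the src/lib Gemini client could look like, assuming the official @google/generative-ai SDK; the file name, function name, and prompt are illustrative, not the repo's actual implementation:

```ts
// src/lib/gemini.ts (illustrative sketch, not the repo's actual client)
import { GoogleGenerativeAI } from "@google/generative-ai";

// Model and key are env-configurable, as described in the Environment Variables section.
const MODEL_ID = process.env.NEXT_PUBLIC_GEMINI_MODEL_ID ?? "gemini-2.0-flash";
const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY ?? "");

// Ask Gemini for positive, negative, and boundary test cases for a single requirement.
export async function generateTestCases(requirementText: string): Promise<string> {
  const model = genAI.getGenerativeModel({ model: MODEL_ID });
  const prompt = `Generate positive, negative, and boundary test cases for this requirement:\n\n${requirementText}`;
  const result = await model.generateContent(prompt);
  return result.response.text();
}
```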
| Area | Technology |
|---|---|
| Framework | Next.js 15 (App Router), React, TypeScript |
| UI | shadcn/ui, Tailwind CSS, lucide-react |
| Tables | TanStack Table (planned/compatible) |
| AI | Google Gemini (env-configurable model) |
| Data/Analytics | BigQuery (design target) |
| Infra (target) | Containerized, Cloud Run-style autoscaling |
Prerequisites:
- Node.js 18+
- npm/pnpm/yarn/bun
- Google Gemini API key
- Install dependencies

  ```bash
  npm install
  ```

- Copy environment template and fill values

  ```bash
  cp .env.example .env
  ```

  Then edit .env with your keys (see the Environment Variables tables below).

- Run the dev server

  ```bash
  npm run dev
  ```

  Visit http://localhost:3000
Use the provided .env.example as the source of truth. Key variables are grouped below.
| Variable | Example | Required |
|---|---|---|
| GEMINI_API_KEY | your-gemini-api-key | Yes |
| NEXT_PUBLIC_GEMINI_API_KEY | your-gemini-api-key | Optional (client demos) |
| NEXT_PUBLIC_GEMINI_MODEL_ID | gemini-2.0-flash | Yes (defaults recommended) |
| Variable | Example | Required |
|---|---|---|
| JIRA_API_TOKEN | your-jira-api-token | Optional (only if pushing to Jira) |
| JIRA_BASE_URL | https://your-domain.atlassian.net/ | Optional |
| JIRA_EMAIL | your-email@example.com | Optional |
| JIRA_PROJECT_KEY | YOURKEY | Optional |
| Variable | Example | Required |
|---|---|---|
| BIGQUERY_PROJECT_ID | your-bigquery-project-id | Optional (analytics) |
| BIGQUERY_DATASET | your-bigquery-dataset | Optional |
| BIGQUERY_TABLE_TESTS | your-tests-table | Optional |
| BIGQUERY_TABLE_REQUIREMENTS | your-requirements-table | Optional |
| GOOGLE_APPLICATION_CREDENTIALS_JSON | {"type":"service_account", ...} | Optional |
| Variable | Example | Required |
|---|---|---|
| TURSO_CONNECTION_URL | libsql://your-turso-database-url | Optional |
| TURSO_AUTH_TOKEN | your-turso-auth-token | Optional |
| Variable | Example | Required |
|---|---|---|
| BETTER_AUTH_SECRET | your-auth-secret | Optional (when auth enabled) |
Tip: Add only what you need for your demo. For a minimal run, set GEMINI_API_KEY and NEXT_PUBLIC_GEMINI_MODEL_ID.
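For example, a minimal .env for local development might contain just the following (all values are placeholders; add the optional groups only when you need them):

```env
# Required for AI generation
GEMINI_API_KEY=your-gemini-api-key
NEXT_PUBLIC_GEMINI_MODEL_ID=gemini-2.0-flash

# Optional: only if pushing tests to Jira
JIRA_API_TOKEN=your-jira-api-token
JIRA_BASE_URL=https://your-domain.atlassian.net/
JIRA_EMAIL=your-email@example.com
JIRA_PROJECT_KEY=YOURKEY
```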
| Command | Description |
|---|---|
| npm run dev | Start Next.js in development |
| npm run build | Create a production build |
| npm start | Start the production server |
| npm run lint | Lint the codebase |
Use .env.example as your source of truth. Copy only the variables you need into your platform's env settings.
Minimum required for a functional demo:
- GEMINI_API_KEY
- NEXT_PUBLIC_GEMINI_MODEL_ID (e.g., gemini-2.0-flash)
- Import the repo into Vercel
- Settings → Environment Variables → add keys from .env.example (at least the two above)
- Framework Preset: Next.js • Node.js 18+
- Build Command: vercel default (or npm run build) • Output: .vercel/output (handled automatically)
- Deploy
Tips:
- If using any server-only keys (e.g., JIRA_API_TOKEN), do NOT prefix with NEXT_PUBLIC_.
- Re-run deployment after updating env vars.
Build and run locally:

```bash
# Build production image
docker build -t qualen-app .

# Run container (map port and inject env file)
docker run --env-file .env -p 3000:3000 qualen-app
```

Then open http://localhost:3000
Deploy to your registry (example):

```bash
# Tag + push
docker tag qualen-app ghcr.io/<your-org>/qualen-app:latest
docker push ghcr.io/<your-org>/qualen-app:latest
```

Use the image on your platform (ECS, Fly.io, Render, Koyeb, etc.) and add env vars from .env.example.
```bash
# Build & push via Cloud Build
gcloud builds submit --tag gcr.io/<PROJECT_ID>/qualen-app

# Deploy to Cloud Run (managed)
gcloud run deploy qualen-app \
  --image gcr.io/<PROJECT_ID>/qualen-app \
  --platform managed \
  --region <REGION> \
  --allow-unauthenticated \
  --port 3000

# Configure environment variables (repeat per var)
gcloud run services update qualen-app \
  --region <REGION> \
  --update-env-vars GEMINI_API_KEY=your-key,NEXT_PUBLIC_GEMINI_MODEL_ID=gemini-2.0-flash
```

Deployment checklist:

- npm run build completes without errors
- All required env vars from .env.example are set in the hosting platform
- If behind a proxy, ensure port 3000 is allowed or remap via your platform config
- POST /api/upload — Upload and parse requirements (DOCX/PDF/CSV/text)
- POST /api/generate-tests — Generate tests with Gemini and persist results
- GET /api/trace — Fetch the traceability matrix
- POST /api/push-alm — Push selected tests to Jira/ADO/Polarion; return synced IDs
- REST /api/requirements — CRUD for requirements
- REST /api/tests — CRUD for test cases
Note: Routes are designed as integration points; swap or extend backing services as needed.
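To make the route contract concrete, here is a minimal sketch of what a handler such as POST /api/generate-tests could look like in the App Router. The request/response shape shown is an assumption, not the repo's actual contract:

```ts
// src/app/api/generate-tests/route.ts (illustrative sketch; payload shape is assumed)
import { NextResponse } from "next/server";

export async function POST(req: Request) {
  const { requirementId, requirementText } = await req.json();

  if (!requirementId || !requirementText) {
    return NextResponse.json(
      { error: "requirementId and requirementText are required" },
      { status: 400 }
    );
  }

  // Call the Gemini-backed generator (see the src/lib sketch above), persist the results,
  // then return the generated suite so the UI can render it for review and approval.
  const tests: unknown[] = []; // e.g. await generateTestCases(requirementText)
  return NextResponse.json({ requirementId, tests });
}
```

A client component (or server action) would then call it with fetch("/api/generate-tests", { method: "POST", body: JSON.stringify({ ... }) }).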
- Requirement: { id, title, description, sourceId, status, createdAt }
- TestCase: { id, requirementId, type (positive|negative|boundary), steps, status, linkedAlmId }
- TraceLink: Derived views mapping Requirement → TestCase → Evidence
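In TypeScript terms, the entities above map roughly to the following shapes (a sketch based on the listed fields; representations such as timestamps, steps, and evidence are assumptions):

```ts
export type TestType = "positive" | "negative" | "boundary";

export interface Requirement {
  id: string;
  title: string;
  description: string;
  sourceId: string;   // e.g. the uploaded document the requirement came from (assumed)
  status: string;
  createdAt: string;  // ISO timestamp (assumed representation)
}

export interface TestCase {
  id: string;
  requirementId: string;  // links back to Requirement.id
  type: TestType;
  steps: string[];        // assumed: ordered list of steps
  status: string;
  linkedAlmId?: string;   // populated after pushing to Jira/ADO/Polarion
}

// TraceLink is a derived view mapping Requirement → TestCase → Evidence.
export interface TraceLink {
  requirementId: string;
  testCaseId: string;
  evidenceIds: string[];  // assumed shape for attached evidence
}
```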
- HIPAA-conscious patterns: least-privilege access, encryption at rest/in transit.
- Enterprise data protection labeling in UI to reinforce trust.
- Exportable artifacts for reviews and audit trails.
- No styled-jsx anywhere; consistent, maintainable Tailwind styling.
- Powered by Gemini: no canned responses — conversational and context-aware.
- Navigation Help: "Where do I push to ALM?" or "Open traceability for REQ-001".
- Platform Explainability: clarifies states, steps, IDs, and next actions.
- Agentic Workflows: autonomous generation, linking, and exports.
- Deeper ALM Sync: bi-directional updates, status mirrors.
- Evidence Capture: inline attachments, execution logs, signatures.
- SSO & RBAC: enterprise-grade identity and authorization.
- Usage Analytics: dashboards for coverage, gaps, and release readiness.
- Issues and PRs welcome during/after the hackathon.
- Keep components small, typed, and accessible.
- Follow Tailwind + shadcn/ui conventions; avoid global overrides.
- Next.js, Tailwind, shadcn/ui, TanStack Table
- Google Gemini for LLM capabilities
- Inspiration from Jira-like workflows used by regulated teams