# Architecture
Cornerstone is a full-stack TypeScript web application deployed as a single Docker container. It serves a React SPA from a Fastify HTTP server, backed by SQLite for persistence.
```
        +-------------------+
        |   Reverse Proxy   |
        |   (HTTPS/TLS)     |
        +---------+---------+
                  |
                  | HTTP
                  v
+------------------------------+
|       Docker Container       |
|                              |
|  +-----------------------+   |
|  |    Fastify Server     |   |
|  |    (Node.js + ESM)    |   |
|  |                       |   |
|  |  /api/*  REST API     |   |
|  |  /*      Static SPA   |   |
|  +----+----------+-------+   |
|       |          |           |
|       v          v           |
|  +-------+  +----------+     |
|  |SQLite |  |  React   |     |
|  | (vol) |  |   SPA    |     |
|  +-------+  | (static) |     |
|             +----------+     |
+------------------------------+
        |
        | HTTP (proxy)
        v
+-----------------+
|  Paperless-ngx  |
|   (external)    |
+-----------------+
```
| Layer | Technology | Version |
|---|---|---|
| Server | Fastify | 5.x |
| Client | React | 19.x |
| Routing (client) | React Router | 7.x |
| Database | SQLite via better-sqlite3 | -- |
| ORM | Drizzle ORM | 0.38.x |
| Bundler (client) | Webpack | 5.x |
| Styling | CSS Modules | -- |
| Testing (unit/integration) | Jest (ts-jest) | 30.x |
| Testing (E2E) | Playwright | (TBD) |
| Language | TypeScript | ~5.9 |
| Runtime | Node.js | 24 LTS |
| Container | Docker (DHI Alpine) | -- |
See individual ADRs for detailed rationale: ADR Index
The project uses npm workspaces with three packages:
| Package | Name | Purpose |
|---|---|---|
| `shared/` | `@cornerstone/shared` | TypeScript types shared between server and client |
| `server/` | `@cornerstone/server` | Fastify REST API server |
| `client/` | `@cornerstone/client` | React SPA |
See ADR-007: Project Structure for the full directory layout.
- REST endpoints under the `/api/` prefix
- Standard error response shape across all endpoints:
  `{ "error": { "code": "RESOURCE_NOT_FOUND", "message": "Human-readable description", "details": {} } }`
- Request validation via Fastify's JSON schema (AJV)
- Pagination: Offset-based with `page` (1-indexed, default 1) and `pageSize` (default 25, max 100). The response includes `{ items: [...], pagination: { page, pageSize, totalItems, totalPages } }`. See API Contract for the full specification.
- Filtering: Query parameters per field (e.g., `?status=in_progress&assignedUserId=...`). Multiple filters are combined with AND logic. Text search via the `q` parameter.
- Sorting: `sortBy` and `sortOrder` (`asc`/`desc`) query parameters. One sort field per request.
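The pagination envelope described above can be sketched as a small helper. This is an illustrative sketch, not the server's actual code; the `paginate` name and in-memory slicing are assumptions (the real API paginates at the query level):

```typescript
// Sketch of the offset-based pagination contract: page is 1-indexed
// (default 1), pageSize defaults to 25 and is capped at 100.
interface Pagination {
  page: number;
  pageSize: number;
  totalItems: number;
  totalPages: number;
}

interface Paginated<T> {
  items: T[];
  pagination: Pagination;
}

function paginate<T>(all: T[], page = 1, pageSize = 25): Paginated<T> {
  const size = Math.min(Math.max(pageSize, 1), 100); // clamp to [1, 100]
  const p = Math.max(page, 1);                       // 1-indexed
  const totalItems = all.length;
  const totalPages = Math.max(Math.ceil(totalItems / size), 1);
  const items = all.slice((p - 1) * size, p * size);
  return { items, pagination: { page: p, pageSize: size, totalItems, totalPages } };
}
```

For example, 51 items at the default page size of 25 yield three pages, with a single item on the last page.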
See ADR-010: Authentication Architecture for detailed rationale.
Two authentication flows:
- Local authentication: Email/password login for the initial admin account (setup flow) and as a fallback. Passwords hashed with argon2id (OWASP-recommended).
- OIDC authentication: OpenID Connect via `openid-client` v6 as the primary mechanism. Supports any standard provider (Keycloak, Auth0, Okta, Google, Azure AD, Authentik). Automatic user provisioning on first login.
Session management:
- Server-side sessions stored in the `sessions` SQLite table
- Session token: 256-bit `crypto.randomBytes` hex string, delivered as an HttpOnly cookie (`cornerstone_session`)
- Cookie flags: `HttpOnly=true`, `SameSite=Strict`, `Secure=true` (configurable for dev)
- 7-day lifetime (configurable via the `SESSION_DURATION` env var)
- Lazy cleanup of expired sessions (hourly interval)
- Instant invalidation on user deactivation (all sessions deleted)
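The token and cookie parameters above can be sketched as follows. The helper and option names here are illustrative, not the server's actual identifiers; only the 256-bit hex token, the cookie flags, and the `SESSION_DURATION` default come from the design above:

```typescript
import { randomBytes } from "node:crypto";

// 256-bit random session token, hex-encoded (64 characters).
function createSessionToken(): string {
  return randomBytes(32).toString("hex"); // 32 bytes = 256 bits
}

// Default 7 days (604800 s), overridable via SESSION_DURATION.
const SESSION_DURATION = Number(process.env.SESSION_DURATION ?? 604_800);

// Cookie flags per the list above; Secure is relaxable for local dev.
const cookieOptions = {
  httpOnly: true,
  sameSite: "strict" as const,
  secure: process.env.SECURE_COOKIES !== "false",
  path: "/",
  maxAge: SESSION_DURATION, // seconds
};
```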
Route protection (Fastify hooks):
- Authentication hook (`authenticate`): Global `preHandler` on all `/api/*` routes. Reads the session cookie, validates the session, and loads the user. Exempts public routes (health, auth/me, setup, login, OIDC endpoints).
- Authorization decorator (`requireRole('admin')`): Route-level `preHandler` that checks `user.role`. Returns 403 for insufficient permissions.
Roles:
| Role | Permissions |
|---|---|
| Admin | Full access: create, edit, delete everything + manage users |
| Member | Create and edit work items, budget entries, comments |
Frontend auth:
- `AuthContext` React context with a `useAuth()` hook
- App initialization: calls `GET /api/auth/me` to determine state (setup required / login / authenticated)
- Components use `useAuth()` for user info and login/logout/setup actions
On first launch with an empty database, the application requires initial admin setup before any users can authenticate. The setup flow ensures the first user account is created securely.
Client-side flow:
- App initialization calls `GET /api/auth/me` to determine the current state
- If `setupRequired: true` is returned (no users exist), the `AuthGuard` automatically redirects to `/setup`
- The user fills out the setup form (email, display name, password)
- On submission, the client calls `POST /api/auth/setup` with the account details
- After successful setup, the user is redirected to `/login` to sign in with the new credentials
Server-side protection:
- The `POST /api/auth/setup` endpoint is only accessible when no users exist in the database
- If users already exist, it returns `403 FORBIDDEN` with error code `SETUP_COMPLETE`
- The endpoint validates password strength (minimum 12 characters)
- After creating the admin account, it returns `201 Created` with the user object (no session created)
State detection via `/api/auth/me`:
```json
{
  "user": null,
  "setupRequired": true,
  "oidcEnabled": false
}
```

- `setupRequired: true` → client redirects to `/setup`
- `user: null` (and `setupRequired: false`) → client redirects to `/login`
- `user: {...}` → client renders the authenticated app
The setup page is only accessible when `setupRequired` is true. After setup completes, the endpoint returns 403 and the client-side setup route redirects to login.
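The three-way routing decision above can be sketched as a small dispatch function. The `resolveRoute` name is hypothetical; the actual logic lives in the client's `AuthGuard`:

```typescript
// Shape of the GET /api/auth/me response.
interface AuthMeResponse {
  user: { id: string; role: string } | null;
  setupRequired: boolean;
  oidcEnabled: boolean;
}

// setupRequired wins first; then unauthenticated users go to /login;
// otherwise render the authenticated app.
function resolveRoute(me: AuthMeResponse): "/setup" | "/login" | "app" {
  if (me.setupRequired) return "/setup";
  if (me.user === null) return "/login";
  return "app";
}
```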
`config -> errorHandler -> compress -> db -> auth -> routes -> static`

The auth plugin registers after `db` (it needs database access for session lookups) and before `routes` (to protect all route handlers by default).
- SQLite stored at `/app/data/cornerstone.db` inside the container (configurable via `DATABASE_URL`)
- Volume-mounted for persistence across container restarts
- WAL (Write-Ahead Logging) mode enabled at startup for better concurrent read performance
- Schema managed via hand-written SQL migrations (see `server/src/db/migrations/`)
- Drizzle ORM provides typed query building on top of better-sqlite3
- snake_case column naming convention
The database connection lifecycle is managed by a Fastify plugin (`server/src/plugins/db.ts`):

- Startup: Opens a better-sqlite3 connection, enables WAL mode, runs pending migrations, and creates a Drizzle ORM instance
- Request handling: All routes access the database via `fastify.db` (a Drizzle ORM instance with full schema type inference)
- Raw access: The underlying better-sqlite3 connection is available via `fastify.db.$client` when needed for pragmas or raw SQL
- Shutdown: The `onClose` hook closes the connection, flushing the WAL to the main database file
The plugin is registered first in the Fastify app to guarantee the database is available before any route handler executes. If migrations fail, the plugin throws and the server does not start.
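The "runs pending migrations" startup step amounts to an ordering check against an applied-migrations ledger. A minimal sketch, assuming migrations are lexically ordered SQL files (the filenames below are hypothetical, and the real code in `server/src/plugins/db.ts` may differ):

```typescript
// Given the migration files on disk and the set already recorded as
// applied, return what still needs to run, in lexical order.
function pendingMigrations(available: string[], applied: Set<string>): string[] {
  return [...available].sort().filter((name) => !applied.has(name));
}
```

If any pending migration fails to apply, the plugin throws and server startup aborts, as described above.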
All API errors are handled by a centralized Fastify error handler plugin (ADR-009). The pattern works as follows:
- Route handlers and services throw `AppError` subclasses -- e.g., `throw new NotFoundError('User not found')`. They never construct `ApiErrorResponse` objects directly.
- The `errorHandler` plugin catches all errors and formats them into the standard `ApiErrorResponse` shape (`{ error: { code, message, details? } }`).
- AJV schema validation errors are handled automatically -- Fastify's built-in JSON schema validation produces validation errors that the plugin normalizes into `VALIDATION_ERROR` responses with field-level details.
- Unknown errors are sanitized in production -- any error that is not an `AppError` or AJV validation error is returned as `INTERNAL_ERROR` (500). In production mode, the message is replaced with a generic string to prevent information leakage.
```
Error
+-- AppError (base: code, statusCode, message, details?)
    +-- NotFoundError (NOT_FOUND, 404)
    +-- ValidationError (VALIDATION_ERROR, 400)
    +-- UnauthorizedError (UNAUTHORIZED, 401)
    +-- ForbiddenError (FORBIDDEN, 403)
    +-- ConflictError (CONFLICT, 409)
```
Error classes are defined in `server/src/errors/AppError.ts`. The `ErrorCode` type union is defined in `@cornerstone/shared` for cross-package type safety.
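The hierarchy above can be sketched like this (a simplified version: the real classes in `server/src/errors/AppError.ts` use the shared `ErrorCode` union rather than a plain string, and two subclasses are shown here for brevity):

```typescript
// Base class carrying the machine-readable code, HTTP status, and
// optional structured details that the error handler serializes.
class AppError extends Error {
  constructor(
    public readonly code: string,
    public readonly statusCode: number,
    message: string,
    public readonly details?: unknown,
  ) {
    super(message);
    this.name = new.target.name;
  }
}

class NotFoundError extends AppError {
  constructor(message: string, details?: unknown) {
    super("NOT_FOUND", 404, message, details);
  }
}

class ForbiddenError extends AppError {
  constructor(message: string, details?: unknown) {
    super("FORBIDDEN", 403, message, details);
  }
}
```

A handler can then `throw new NotFoundError('User not found')` and let the centralized error handler translate it into the `ApiErrorResponse` shape.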
- 4xx errors are logged at `warn` level (client mistakes, not server failures)
- 5xx errors are logged at `error` level (genuine server failures, suitable for alerting)
- All logging uses Fastify's `request.log` for request-scoped context
`config -> errorHandler -> db -> routes`

The error handler registers after `config` (to access `NODE_ENV` for production-mode detection) but before `routes` (to catch errors from all route handlers).
See ADR-015: Paperless-ngx Integration Architecture for detailed rationale.
Communication pattern: All Paperless-ngx API requests are proxied through the Fastify server. The browser never communicates directly with Paperless-ngx. This keeps the API token secure on the server and avoids CORS issues.
Proxy endpoints under `/api/paperless/` provide a curated subset of the Paperless-ngx API:
- `GET /api/paperless/status` -- Check whether Paperless-ngx is configured and reachable
- `GET /api/paperless/documents` -- Search/browse documents (with pagination, filtering, sorting)
- `GET /api/paperless/documents/:id` -- Single document metadata
- `GET /api/paperless/documents/:id/thumb` -- Document thumbnail (binary passthrough)
- `GET /api/paperless/documents/:id/preview` -- Document preview/PDF (binary passthrough)
- `GET /api/paperless/tags` -- List all Paperless-ngx tags
Document linking uses a polymorphic `document_links` table that stores references between Cornerstone entities (work items, household items, invoices) and Paperless-ngx document IDs. Links are managed via:
- `POST /api/document-links` -- Create a link
- `GET /api/document-links?entityType=...&entityId=...` -- List links for an entity
- `DELETE /api/document-links/:id` -- Remove a link
Configuration: Two environment variables control the integration:
| Variable | Default | Description |
|---|---|---|
| `PAPERLESS_URL` | (none) | Base URL of the Paperless-ngx instance |
| `PAPERLESS_API_TOKEN` | (none) | API authentication token for Paperless-ngx |
The integration is enabled when both variables are set. If either is missing, proxy endpoints return `503 SERVICE_UNAVAILABLE`.
API version pinning: All upstream requests include `Accept: application/json; version=5` to ensure a stable API contract.
Caching: No server-side cache in the initial implementation. With fewer than 5 users, request volume is minimal. An in-memory LRU cache can be added later without changing the API contract.
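The enablement gate and version-pinned upstream headers described above can be sketched as follows. The helper names are illustrative; the `Authorization: Token ...` scheme is Paperless-ngx's standard token auth, and the `Accept` value comes from the version-pinning note above:

```typescript
interface PaperlessConfig {
  url?: string;   // PAPERLESS_URL
  token?: string; // PAPERLESS_API_TOKEN
}

// The integration is on only when both variables are set; otherwise the
// proxy routes answer 503 SERVICE_UNAVAILABLE.
function paperlessEnabled(cfg: PaperlessConfig): boolean {
  return Boolean(cfg.url && cfg.token);
}

// Headers attached to every upstream request from the proxy.
function upstreamHeaders(cfg: Required<PaperlessConfig>): Record<string, string> {
  return {
    Authorization: `Token ${cfg.token}`,
    Accept: "application/json; version=5", // pin the upstream API version
  };
}
```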
- In production, Fastify serves the Webpack-built client from `client/dist/`
- SPA fallback: any non-`/api/` route serves `index.html`
- In development, the Webpack dev server (port 5173) proxies `/api/*` to Fastify (port 3000)
All configuration is via environment variables:
| Variable | Default | Description |
|---|---|---|
| `PORT` | `3000` | Server port |
| `HOST` | `0.0.0.0` | Server bind address |
| `DATABASE_URL` | `/app/data/cornerstone.db` | SQLite database file path |
| `LOG_LEVEL` | `info` | Pino log level |
| `NODE_ENV` | `production` | Environment (production/development) |
| Variable | Default | Description |
|---|---|---|
| `SESSION_DURATION` | `604800` | Session lifetime in seconds (default: 7 days) |
| `SECURE_COOKIES` | `true` | Set the Secure flag on cookies; set to `false` for local dev without TLS |
| `OIDC_ISSUER` | (none) | OIDC provider issuer URL |
| `OIDC_CLIENT_ID` | (none) | OIDC client ID |
| `OIDC_CLIENT_SECRET` | (none) | OIDC client secret |
| `OIDC_REDIRECT_URI` | (none) | OIDC callback URL (e.g., `https://cornerstone.example.com/api/auth/oidc/callback`) |
OIDC is enabled when all four OIDC variables are set.
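The all-four-variables rule can be sketched as a simple predicate (an illustrative helper, not the actual config plugin's code):

```typescript
// OIDC is enabled only when every one of the four variables is non-empty.
function oidcEnabled(env: Record<string, string | undefined>): boolean {
  return ["OIDC_ISSUER", "OIDC_CLIENT_ID", "OIDC_CLIENT_SECRET", "OIDC_REDIRECT_URI"]
    .every((key) => Boolean(env[key]));
}
```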
| Variable | Default | Description |
|---|---|---|
| `PAPERLESS_URL` | (none) | Base URL of the Paperless-ngx instance (e.g., `http://paperless:8000`) |
| `PAPERLESS_API_TOKEN` | (none) | API authentication token for Paperless-ngx (obtain from the Paperless-ngx admin panel) |
Paperless-ngx integration is enabled when both `PAPERLESS_URL` and `PAPERLESS_API_TOKEN` are set. If either is missing, all `/api/paperless/*` endpoints return 503.
- Single Docker container built from a multi-stage Dockerfile
- Production image uses DHI (Docker Hardened Images) Alpine with a non-root user
- SQLite data persisted via a Docker volume at `/app/data/`
- HTTPS handled by an upstream reverse proxy (nginx, Traefik, Caddy, etc.)
- Health check endpoint: `GET /api/health`
The development sandbox environment has only 4 GB of total RAM. This is insufficient to run the full Jest test suite (34+ test files across 3 workspaces) with default parallelism settings. Each jsdom worker loads React, react-dom, react-router, and testing-library, consuming approximately 200--300 MB per worker. With Jest's default worker count (number of CPU cores minus one), the suite exhausts memory and the Node.js process is killed by the kernel's OOM killer.
The following flags have been applied to all three test scripts (`test`, `test:watch`, `test:coverage`) in the root `package.json`:
| Flag | Value | Purpose |
|---|---|---|
| `--max-old-space-size` | `2048` | Caps the V8 heap at 2 GB (a Node.js flag), preventing a single process from consuming all available memory |
| `--maxWorkers` | `2` | Limits Jest to 2 parallel workers instead of auto-detecting CPU cores |
| `--workerIdleMemoryLimit` | `300MB` | Recycles any Jest worker whose memory exceeds 300 MB, preventing unbounded heap growth from jsdom leaks |
Example of the current test script:

```shell
node --max-old-space-size=2048 --experimental-vm-modules node_modules/.bin/jest --maxWorkers=2 --workerIdleMemoryLimit=300MB
```
These flags do not affect production -- they only apply to the development test runner. The `--experimental-vm-modules` flag is required for ESM support in Jest and is unrelated to the memory constraint.
- Slower test runs: With only 2 workers, the suite is spread across fewer processes. Wall-clock time is roughly 2-3x longer than it would be with `--maxWorkers=auto` on a machine with 4+ cores.
- Worker recycling overhead: The `--workerIdleMemoryLimit` flag causes Jest to terminate and respawn workers mid-run, adding per-recycle overhead (approximately 1-2 seconds each time).
- No functional impact: All tests produce the same results regardless of worker count or memory limits. Coverage numbers are unaffected.
When the sandbox memory increases (8 GB or more), revert the memory-constrained test scripts in the root `package.json` to their ideal configuration:
Current (constrained):

```json
"test": "node --max-old-space-size=2048 --experimental-vm-modules node_modules/.bin/jest --maxWorkers=2 --workerIdleMemoryLimit=300MB",
"test:watch": "node --max-old-space-size=2048 --experimental-vm-modules node_modules/.bin/jest --watch --maxWorkers=2 --workerIdleMemoryLimit=300MB",
"test:coverage": "node --max-old-space-size=2048 --experimental-vm-modules node_modules/.bin/jest --coverage --maxWorkers=2 --workerIdleMemoryLimit=300MB"
```

Target (unconstrained):

```json
"test": "node --experimental-vm-modules node_modules/.bin/jest",
"test:watch": "node --experimental-vm-modules node_modules/.bin/jest --watch",
"test:coverage": "node --experimental-vm-modules node_modules/.bin/jest --coverage"
```

Changes to make:
- Remove `--max-old-space-size=2048` from all three scripts (let V8 use its default heap limit)
- Remove `--maxWorkers=2` from all three scripts (let Jest auto-detect the optimal worker count)
- Remove `--workerIdleMemoryLimit=300MB` from all three scripts (or raise it to `512MB` if jsdom memory leaks are still a concern)
- Keep `--experimental-vm-modules` -- this is required for ESM support and is unrelated to memory
The database schema and API contract evolve incrementally as each epic is implemented. See:
- Schema -- current database schema documentation
- API Contract -- current REST API specification