fix: use directory mount for keycloak realm import #155
skywinder wants to merge 55 commits into Ushadow-io:main
Conversation
fixed tailscale-serve commit
* Perfect! Let me create a summary of what was implemented:
★ Insight ─────────────────────────────────────
The implementation follows React patterns well:
1. **Callback composition**: `onWebSocketLog` flows from Home → UnifiedStreamingPage → useAudioStreamer → logEvent('websocket', ...)
2. **Separation of concerns**: UI (ConnectionLogViewer) → State management (useConnectionLog) → Storage (AsyncStorage)
3. **Type-safe filtering**: Uses discriminated union types for ConnectionType, making per-type operations type-safe
─────────────────────────────────────────────────
## Summary
I've successfully implemented both requested features for the Omi app logs:
### 1. **Per-type log clearing** ✅
Users can now clear logs for specific connection types:
- Added `clearLogsByType(type: ConnectionType)` function to `useConnectionLog` hook
- Updated UI to show type-specific clear buttons when filtering
- When viewing "Network" logs, you'll see "Clear Network" + "Clear All" buttons
- When viewing "All" logs, you'll only see "Clear All"
- Test IDs added: `clear-{type}-logs-button` (e.g., `clear-websocket-logs-button`)
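The per-type clear reduces to a pure filter over the stored entries. A minimal sketch, assuming a simplified `LogEntry` shape (the hook's real types may differ):

```typescript
// Sketch only - the LogEntry/ConnectionType shapes here are assumptions,
// not the app's actual types.
type ConnectionType = 'websocket' | 'network' | 'bluetooth';

interface LogEntry {
  type: ConnectionType;
  message: string;
  timestamp: number;
}

// Keep every entry whose type does NOT match the one being cleared.
export function clearLogsByType(logs: LogEntry[], type: ConnectionType): LogEntry[] {
  return logs.filter(entry => entry.type !== type);
}
```

Inside the hook this would be wrapped in a state setter (and mirrored to AsyncStorage), but the filtering itself stays this simple.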
### 2. **WebSocket logging** ✅
WebSocket connection lifecycle is now fully tracked:
- **Connecting**: When initializing connection, including reconnect attempts with exponential backoff
- **Connected**: Successful connection with mode/codec details
- **Disconnected**: When connection closes (with close code and reason)
- **Error**: Connection errors, server errors, and exhausted retry attempts
**Key lifecycle events logged:**
- Initial connection attempt
- Successful connection
- Server errors (with count tracking)
- Manual disconnection
- Reconnection attempts (with attempt count)
- Failed reconnection after max attempts
- Network-triggered reconnection
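The reconnect timing mentioned above might follow a standard capped exponential backoff. A sketch under assumed base/cap values (the app's actual delays may differ):

```typescript
// Assumed values: 1s base, 30s cap, no jitter - illustration only.
export function reconnectDelayMs(attempt: number, baseMs = 1000, maxMs = 30000): number {
  // attempt 1 -> baseMs, attempt 2 -> 2*baseMs, ... capped at maxMs
  return Math.min(baseMs * 2 ** (attempt - 1), maxMs);
}
```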
**Data flow:**
```
useAudioStreamer → onLog callback →
UnifiedStreamingPage (onWebSocketLog prop) →
Home screen → logEvent('websocket', ...) →
useConnectionLog → AsyncStorage + UI
```
### Testing your conversation tracking issue
With these logs, you'll now be able to:
1. See exactly when WebSocket connections drop
2. Track reconnection attempts and whether they succeed
3. Identify patterns (e.g., drops after X minutes, specific error codes)
4. Clear old WebSocket logs to focus on current session
Open the logs page (list icon in header), filter to "WebSocket", and start a conversation to see real-time connection tracking!
* ## Your Turn! 🎯
I've created the session tracking hook with most of the logic, but left the **`endSession` function** for you to implement. This is a meaningful decision point that affects how session data is stored and displayed.
**Location:** `/ushadow/mobile/app/hooks/useSessionTracking.ts` (lines 120-130)
**What to implement:**
The logic for ending a streaming session when audio stops.
**Key decisions to make:**
1. **Duration calculation:**
- Pre-compute and store `durationSeconds` field, OR
- Let UI calculate on-demand from `startTime/endTime`?
2. **Error handling:**
- Store error message in the session object, OR
- Filter out failed sessions entirely?
3. **Session filtering:**
- Keep all sessions regardless of length, OR
- Auto-remove sessions < 5 seconds (test connections)?
- Filter out sessions with 0 bytes transferred?
**Example implementations:**
```typescript
// Option A: Simple - just set endTime
const endSession = useCallback((sessionId: string, error?: string) => {
  setSessions(prev => prev.map(session =>
    session.id === sessionId
      ? { ...session, endTime: new Date(), error }
      : session
  ));
  if (activeSession?.id === sessionId) setActiveSession(null);
}, [activeSession]);

// Option B: Calculate duration + filter invalid
const endSession = useCallback((sessionId: string, error?: string) => {
  setSessions(prev => prev
    .map(session => {
      if (session.id !== sessionId) return session;
      const endTime = new Date();
      const duration = Math.floor((endTime.getTime() - session.startTime.getTime()) / 1000);
      return { ...session, endTime, durationSeconds: duration, error };
    })
    // Drop likely test connections: ended sessions < 5 seconds with 0 bytes
    .filter(s => !(s.durationSeconds !== undefined && s.durationSeconds < 5 && s.bytesTransferred === 0)));
  if (activeSession?.id === sessionId) setActiveSession(null);
}, [activeSession]);
```
**Trade-offs to consider:**
- **Storage vs computation**: Pre-computing duration uses more storage but faster UI
- **Data completeness**: Keeping failed sessions helps debugging vs. cleaner UI
- **Test noise**: Filtering < 5s removes test connections but might hide real quick sessions
**What would work best for debugging your conversation stopping issue?** Implement the version that gives you the most useful data!
* Given the complexity of wiring this up properly through UnifiedStreamingPage, let me create a summary document showing you exactly what needs to be done:
## Summary: Sessions Tab Implementation
I've successfully implemented most of the sessions tracking feature! Here's what's complete and what needs your final touch:
### ✅ Completed
1. **Session Data Model** (`types/streamingSession.ts`)
- Type-safe session structure with source, destinations, metrics
- Helper functions for formatting duration/bytes
2. **Session Storage** (`_utils/sessionStorage.ts`)
- AsyncStorage persistence for up to 100 sessions
- CRUD operations for session management
3. **Session Tracking Hook** (`hooks/useSessionTracking.ts`)
- `startSession` - Creates new session with client-side ID
- `updateSessionStatus` - Updates with relay_status data
- `endSession` - Marks session complete with duration
- Auto-saves to AsyncStorage
4. **Relay Status Parsing** (`hooks/useAudioStreamer.ts`)
- Now parses `relay_status` messages from backend
- Exposes `onRelayStatus` callback
5. **Sessions Tab UI** (`(tabs)/sessions.tsx`)
- Beautiful card-based session list
- Filter by All/Active/Failed
- Shows duration, bytes, chunks, destinations
- Active session indicator
- Error display for failed sessions
6. **Tab Navigation** (`(tabs)/_layout.tsx`)
- Added Sessions tab with pulse icon
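One detail worth noting for the AsyncStorage persistence above: storage holds strings, so `Date` fields need reviving on load. A hedged sketch (function and field names are illustrative, not the actual `sessionStorage.ts` API):

```typescript
// Illustrative shape - the real session model has more fields.
interface StoredSession {
  id: string;
  startTime: Date;
  endTime?: Date;
}

// Cap at 100 sessions before writing, matching the limit described above.
export function serializeSessions(sessions: StoredSession[], max = 100): string {
  return JSON.stringify(sessions.slice(0, max));
}

export function deserializeSessions(raw: string | null): StoredSession[] {
  if (!raw) return [];
  // JSON.parse leaves dates as ISO strings - convert them back to Date objects.
  return (JSON.parse(raw) as StoredSession[]).map(s => ({
    ...s,
    startTime: new Date(s.startTime),
    endTime: s.endTime ? new Date(s.endTime) : undefined,
  }));
}
```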
### 🎯 What You Need to Complete
The final wiring in `UnifiedStreamingPage.tsx` needs to connect the session lifecycle to actual streaming events. Here's what to add:
**Location:** `/ushadow/mobile/app/components/streaming/UnifiedStreamingPage.tsx`
**Step 1:** Update the interface (around line 62):
```typescript
import { SessionSource as SessionSourceType } from '../../types/streamingSession';
import { RelayStatus } from '../../hooks/useAudioStreamer';
interface UnifiedStreamingPageProps {
  authToken: string | null;
  onAuthRequired?: () => void;
  onWebSocketLog?: (status: 'connecting' | 'connected' | 'disconnected' | 'error', message: string, details?: string) => void;
  onSessionStart?: (source: SessionSourceType, codec: 'pcm' | 'opus') => Promise<string>;
  onSessionUpdate?: (sessionId: string, relayStatus: RelayStatus) => void;
  onSessionEnd?: (sessionId: string, error?: string) => void;
  testID?: string;
}
```
**Step 2:** Extract props (around line 69):
```typescript
export const UnifiedStreamingPage: React.FC<UnifiedStreamingPageProps> = ({
  authToken,
  onAuthRequired,
  onWebSocketLog,
  onSessionStart,
  onSessionUpdate,
  onSessionEnd,
  testID = 'unified-streaming',
}) => {
```
**Step 3:** Add session ID tracking (around line 96):
```typescript
const currentSessionIdRef = useRef<string | null>(null);
```
**Step 4:** Update omiStreamer initialization (around line 136):
```typescript
const omiStreamer = useAudioStreamer({
  onLog: onWebSocketLog,
  onRelayStatus: (status) => {
    if (currentSessionIdRef.current) {
      onSessionUpdate?.(currentSessionIdRef.current, status);
    }
  },
});
```
**Step 5:** Call `onSessionStart` when streaming begins and `onSessionEnd` when it stops.
You'll need to find where streaming starts/stops (likely in `handleStartStreaming` and `handleStopStreaming` functions). The key logic:
```typescript
// When starting:
const sessionSource: SessionSourceType = selectedSource.type === 'omi' && selectedSource.deviceId
  ? { type: 'omi', deviceId: selectedSource.deviceId, deviceName: selectedSource.deviceName }
  : { type: 'microphone' };
const sessionId = await onSessionStart?.(sessionSource, 'pcm'); // or 'opus' based on your codec
if (sessionId) currentSessionIdRef.current = sessionId;

// When stopping (clean stop):
if (currentSessionIdRef.current) {
  onSessionEnd?.(currentSessionIdRef.current);
  currentSessionIdRef.current = null;
}

// When stopping with error:
if (currentSessionIdRef.current) {
  onSessionEnd?.(currentSessionIdRef.current, errorMessage);
  currentSessionIdRef.current = null;
}
```
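For reference, the `relay_status` handling wired up in Step 4 could parse incoming frames along these lines. The message shape is an assumption inferred from the fields described earlier (destinations with connection status), not a documented contract:

```typescript
// Assumed message shape - verify against the backend's actual payload.
export interface RelayStatus {
  destinations: { name: string; connected: boolean }[];
}

export function parseRelayStatus(raw: string): RelayStatus | null {
  try {
    const msg = JSON.parse(raw);
    if (msg?.type !== 'relay_status') return null; // some other frame type
    return { destinations: msg.destinations ?? [] };
  } catch {
    return null; // not JSON at all
  }
}
```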
* Excellent! Everything is wired up. Let me create a comprehensive summary:
## ✅ Complete Implementation Summary
I've successfully implemented **comprehensive session tracking** for your Omi app to help debug conversation stopping issues!
### What Was Built
#### 1. **Per-Type Log Clearing** ✅
- Added `clearLogsByType()` function to `useConnectionLog` hook
- Updated `ConnectionLogViewer` UI with type-specific clear buttons
- When viewing filtered logs (e.g., "WebSocket"), you'll see both "Clear WebSocket" and "Clear All" buttons
#### 2. **WebSocket Lifecycle Logging** ✅
All WebSocket events are now tracked in the connection logs:
- **Connecting**: Initial connection + reconnection attempts with exponential backoff delays
- **Connected**: Successful connection with mode/codec details
- **Disconnected**: Connection closed (with close code and reason)
- **Error**: Server errors, connection errors, exhausted retry attempts
#### 3. **Sessions Tab** ✅ (New!)
A dedicated tab showing detailed session history with:
**Session Metrics:**
- Duration (to spot patterns like "always stops after 90 seconds")
- Data transferred (bytes + chunk count)
- Source (Phone Mic vs specific OMI device)
- Destinations (Chronicle, Mycelia, etc. with connection status)
- Codec used (PCM vs Opus)
- Network type (WiFi vs cellular)
- Error messages for failed sessions
**UI Features:**
- Filter by All/Active/Failed sessions
- Delete individual sessions or clear all history
- Visual indicators for active sessions
- Color-coded destination chips (green = connected, red = failed)
- Timestamp for each session
- Link to Chronicle conversation (when available)
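The All/Active/Failed filter reduces to a small pure function. A sketch with assumed field semantics (no `endTime` means active, `error` set means failed):

```typescript
type SessionFilter = 'all' | 'active' | 'failed';

// Illustrative subset of the session model - field names are assumptions.
interface SessionLike {
  endTime?: Date;
  error?: string;
}

export function filterSessions<T extends SessionLike>(sessions: T[], filter: SessionFilter): T[] {
  switch (filter) {
    case 'active':
      return sessions.filter(s => !s.endTime); // still streaming
    case 'failed':
      return sessions.filter(s => !!s.error);  // ended with an error
    default:
      return sessions;
  }
}
```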
### Session Lifecycle Tracking
Sessions are automatically created and tracked:
★ Insight ─────────────────────────────────────
**Session lifecycle matches WebSocket lifecycle perfectly:**
1. **Session starts** when streaming begins (button pressed)
2. **Relay status updates** captured when backend sends destination info
3. **Session ends** in three scenarios:
- Clean stop (user manually stops)
- Error on start (connection fails immediately)
- Permanent failure (reconnection attempts exhausted)
This gives you **complete visibility** into why conversations stop!
─────────────────────────────────────────────────
### Key Implementation Details
**Files Modified:**
- `types/streamingSession.ts` - Session data model
- `_utils/sessionStorage.ts` - AsyncStorage persistence
- `hooks/useSessionTracking.ts` - Session management hook
- `hooks/useAudioStreamer.ts` - Added relay_status parsing + onRelayStatus callback
- `(tabs)/sessions.tsx` - New Sessions tab UI
- `(tabs)/_layout.tsx` - Added Sessions tab to navigation
- `(tabs)/index.tsx` - Wire up session callbacks
- `components/streaming/UnifiedStreamingPage.tsx` - Session lifecycle integration
**Data Flow:**
```
User starts streaming
↓
UnifiedStreamingPage calls onSessionStart()
↓
useSessionTracking creates session with client-side ID
↓
WebSocket connects, backend sends relay_status
↓
useAudioStreamer calls onRelayStatus()
↓
useSessionTracking updates session with destination info
↓
Connection drops, reconnection fails
↓
useEffect detects error + !retrying + !streaming
↓
UnifiedStreamingPage calls onSessionEnd(error)
↓
Session marked complete with error message
↓
Visible in Sessions tab for debugging!
```
### Debugging Your Conversation Issues
With this implementation, you can now:
1. **Open Sessions tab** - See all streaming sessions with duration and status
2. **Filter by Failed** - Focus on sessions that ended with errors
3. **Look for patterns**:
- Do conversations always stop after X minutes?
- Do failures correlate with network type (WiFi vs cellular)?
- Are specific destinations failing more than others?
- What are the error messages?
4. **Cross-reference with Connection Logs** - Click logs icon to see ...
Log the service_name -> docker_service_name mapping to debug why chronicle-backend is getting labeled incorrectly.
- Removed full git clone of chronicle
- Added as submodule pointing to Ushadow-io/chronicle
- Pinned to commit c170a02d (current state)
- Configured upstream remote for syncing with SimpleOpenSoftware/chronicle
- Preserved stashed uncommitted changes (can be retrieved later)

- Add .githooks/post-checkout to auto-configure sparse checkout
- Chronicle excludes extras/mycelia/ (prevents circular dependency)
- Mycelia excludes friend/ (prevents circular dependency)
- Add setup script: scripts/setup-repo.sh
- Update .gitmodules to include mycelia submodule
- All paths are relative and work in any clone location

Setup for new clones:
```
git clone --recursive <repo>
cd <repo>
git config core.hooksPath .githooks
.githooks/post-checkout
```

- Add scripts/build-push-images.sh for multi-arch builds
- Add make chronicle-push and make mycelia-push targets
- Support custom tags: make chronicle-push TAG=v1.0.0
- Build for linux/amd64 and linux/arm64
- Push to ghcr.io/ushadow-io registry
- Add documentation in docs/BUILDING_IMAGES.md

Usage:
```
make chronicle-push
make mycelia-push
make chronicle-push TAG=v2.0.0
```

- Add openmemory submodule from https://github.com/Ushadow-io/mem0
- Add to git hooks for automatic configuration
- Add openmemory-push target to Makefile
- Add openmemory to build-push-images.sh script
- Builds openmemory-server image from openmemory/server/

Usage:
```
make openmemory-push
make openmemory-push TAG=v1.0.0
```

Images:
- ghcr.io/ushadow-io/openmemory-server:TAG
- **Service detection**: UI services identified by "ui" in the name (case-insensitive)
- **Worker grouping**: Workers identified by "worker" in their name, then matched to API services by base name
- **Visual hierarchy**: Workers shown indented under their API service with a visual border
- **Tab structure**: Two sub-tabs (API & Workers, UI Services) with counts displayed
─────────────────────────────────────────────────
## Summary
I've successfully reorganized the instance screen to split services into **API/Workers** and **UI** tabs as requested. Here's what changed:
### Changes Made to `ServicesTab.tsx`:
1. **Added Sub-Tab Navigation**:
- Created two sub-tabs: "API & Workers" and "UI Services"
- Each tab shows a count of services in that category
- Icons distinguish between Server (API) and Monitor (UI) services
2. **Service Categorization**:
- **UI Services**: Filtered to a separate tab - any service with "UI" or "ui" in its title
- **API Services**: Everything else goes to the API & Workers tab
3. **Worker Grouping**:
- Workers (services containing "worker" in their name) are automatically grouped with their corresponding API services
- Workers are displayed indented below their API service with a visual border on the left
- The grouping logic matches workers to APIs by removing "worker" from the name and finding the base service
4. **Visual Improvements**:
- Clean tab navigation at the top of the services section
- Workers visually nested under their API services using indentation and a left border
- Each tab shows contextual empty states if no services are present
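The categorization and worker-to-API matching described above can be sketched as a pure function (the names and exact normalization are assumptions; the component's real logic may differ):

```typescript
interface Service {
  name: string;
}

interface ApiGroup {
  api: Service;
  workers: Service[];
}

const isUi = (s: Service) => /ui/i.test(s.name);
const isWorker = (s: Service) => /worker/i.test(s.name);
// "chronicle-worker" -> "chronicle": strip the worker suffix to find the base service.
const baseName = (s: Service) => s.name.replace(/[-_ ]?worker/i, '').trim().toLowerCase();

export function groupServices(services: Service[]): { apiGroups: ApiGroup[]; uiServices: Service[] } {
  const uiServices = services.filter(isUi);
  const apis = services.filter(s => !isUi(s) && !isWorker(s));
  const workers = services.filter(s => !isUi(s) && isWorker(s));
  const apiGroups = apis.map(api => ({
    api,
    workers: workers.filter(w => baseName(w) === api.name.trim().toLowerCase()),
  }));
  return { apiGroups, uiServices };
}
```

Note that the substring check for "ui" is intentionally loose, matching the case-insensitive rule stated above; a name like "builder" would also match, which is a known trade-off of name-based detection.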
### How It Works:
**API & Workers Tab**:
```
┌─ Chronicle API Service
│
├─ Chronicle Worker (indented, grouped with Chronicle API)
│
├─ Mycelia API Service
│
└─ Mycelia Worker (indented, grouped with Mycelia API)
```
**UI Services Tab**:
```
┌─ Admin UI Service
│
└─ Dashboard UI Service
```
The implementation ensures that even when deploying services separately, users see them logically grouped in the UI for easier management and understanding of the system architecture.
* Excellent! The implementation looks good. Let me create a summary document to explain the changes:
★ Insight ─────────────────────────────────────
The feature flag implementation follows a graceful degradation pattern:
- **Default off**: The new split services view is opt-in via feature flag
- **Zero breaking changes**: When disabled, users see the exact same UI they're used to
- **Clean switch**: A simple boolean check determines which view to render
- **Backend controlled**: The flag is managed server-side, allowing runtime toggling without deployments
─────────────────────────────────────────────────
## Summary
I've successfully added the `split_services` feature flag to control the new service organization view. Here's what was implemented:
### Changes Made:
1. **ServicesTab.tsx** (`ushadow/frontend/src/components/services/ServicesTab.tsx`):
- Added `splitServicesEnabled?: boolean` prop (defaults to `false`)
- Implemented a **legacy view fallback**: When the flag is disabled, the component renders the original single-grid view
- When enabled, it shows the new split view with API/Workers and UI tabs
2. **ServiceConfigsPage.tsx** (`ushadow/frontend/src/pages/ServiceConfigsPage.tsx`):
- Passed the feature flag to ServicesTab: `splitServicesEnabled={isEnabled('split_services')}`
- The flag is read from the existing feature flag context
### How It Works:
```typescript
// When split_services flag is OFF (default):
// ✓ Shows all services in a single grid (original behavior)
// When split_services flag is ON:
// ✓ Shows API & Workers tab with grouped services
// ✓ Shows UI Services tab separately
// ✓ Workers are visually nested under their API services
```
### Backend Configuration Needed:
To enable this feature, you'll need to add the `split_services` feature flag to your backend feature flags configuration. The flag should be defined with:
```python
{
    "split_services": {
        "enabled": False,  # Set to True to enable
        "description": "Split services into API/Workers and UI tabs with worker grouping",
        "type": "boolean"
    }
}
```
### Benefits of This Approach:
- **Safe rollout**: Start with the flag disabled, test with specific users/environments first
- **Quick rollback**: If issues arise, disable the flag without code changes
- **A/B testing**: Can compare user engagement with both views
- **Gradual migration**: Give teams time to adjust to the new layout
The implementation maintains 100% backward compatibility while allowing you to toggle the new feature on/off at runtime!
* Excellent! The feature flag has been successfully added. Let me create a summary of what was done:
## Summary
I've successfully added the `split_services` feature flag to your existing feature flags configuration system. Here's what was changed:
### Changes Made:
**`config/feature_flags.yaml`**:
```yaml
# Split Services View - Organize services into API/Workers and UI tabs
split_services:
  enabled: false
  description: "Split services into API & Workers and UI Services tabs with automatic worker grouping"
  type: release
```
### Feature Flag Details:
- **Name**: `split_services`
- **Default State**: `disabled` (enabled: false)
- **Type**: `release` (user-facing feature)
- **Description**: Clear explanation of what the feature does
### How It Works:
The feature flag system in your codebase supports **hot-reloading**, which means:
1. **No restart required**: Changes to `feature_flags.yaml` are picked up automatically
2. **Runtime toggling**: You can enable/disable the feature through the API or by editing the YAML file
3. **Gradual rollout**: Enable for specific environments or user groups
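On the frontend side, the runtime check described above could look like this. The flag-map shape is an assumption about the `/api/feature-flags/status` response, not its documented schema, and `isFlagEnabled` is an illustrative name rather than the app's actual `isEnabled` helper:

```typescript
type FlagMap = Record<string, { enabled: boolean }>;

// Unknown flags default to off, so a missing backend entry can't break the UI.
export function isFlagEnabled(flags: FlagMap, name: string): boolean {
  return flags[name]?.enabled ?? false;
}
```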
### Enabling the Feature:
To enable the split services view, simply change:
```yaml
split_services:
  enabled: true  # Change from false to true
```
The system will automatically pick up the change, and users will see the new split view with:
- **API & Workers** tab (with workers grouped under their APIs)
- **UI Services** tab (for all UI-related services)
When disabled (current state), users see the original single-grid view with all services together.
★ Insight ─────────────────────────────────────
The implementation follows a **feature flag best practice pattern**:
- Frontend code checks `isEnabled('split_services')` at runtime
- Backend serves the flag state via `/api/feature-flags/status`
- YAML configuration provides a single source of truth
- The graceful degradation ensures zero breaking changes when disabled
─────────────────────────────────────────────────
The feature is now ready for testing! You can enable it whenever you're ready to try out the new split services view.
Complete custom Keycloak theme for login and registration pages with:
- Centered layout with gradient "Ushadow" brand text (green→purple)
- Purple/green radial glow background matching frontend design
- Rounded input fields (10px border-radius) with proper dark styling
- Green primary button with glow effect
- Single-column form layout for registration page
- Fixed password field white outline and inline required asterisks
- Semi-transparent card with backdrop blur
- Responsive design with mobile support

Frontend login page updated to match Keycloak OAuth pages:
- Form-based design with email/password fields
- Same dark theme and geometric background pattern
- Blue primary button and green register link
- Consistent styling across authentication flow

Infrastructure:
- Added Keycloak service to docker-compose.infra.yml
- Theme mounted from ushadow/frontend/keycloak-theme/
- Connected to Postgres for session storage
- Auto-imports realm configuration on startup

Theme files:
- ushadow/frontend/keycloak-theme/login/resources/css/login.css
- ushadow/frontend/keycloak-theme/login/theme.properties
- ushadow/frontend/keycloak-theme/login/resources/img/logo.png
- docs/KEYCLOAK_THEMING_GUIDE.md

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Replace hardcoded Keycloak admin credentials with environment variables:
- KEYCLOAK_ADMIN (defaults to 'admin' for dev)
- KEYCLOAK_ADMIN_PASSWORD (defaults to 'admin' for dev)
- KEYCLOAK_PORT (defaults to 8081)
- KEYCLOAK_MGMT_PORT (defaults to 9000)

Created .env.example template with:
- All required Keycloak configuration
- Security warnings about changing defaults in production
- Clear documentation for each variable

This prevents credentials from being committed to git and allows different environments to use their own secure credentials.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
# Conflicts: # ushadow/frontend/src/components/services/ServicesTab.tsx
Adds complete Keycloak OAuth2/OIDC authentication:

Frontend:
- KeycloakAuthContext: OAuth flow with token management
- TokenManager: PKCE support, token refresh, logout
- OAuthCallback: Handle OAuth redirect and token exchange
- ServiceTokenManager: Cross-service token generation

Backend:
- keycloak_admin.py: Admin API integration
- keycloak_auth.py: OAuth token validation
- token_bridge.py: Convert Keycloak tokens to service tokens
- keycloak_user_sync.py: Sync Keycloak users to MongoDB

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
This reverts commit dd21556.
Implements three methods for configuring Keycloak redirect URIs:

1. **Automatic Registration** (Recommended):
   - Backend auto-registers redirect URIs on startup
   - Detects PORT_OFFSET, TAILSCALE_HOSTNAME, FRONTEND_URL
   - Merges with existing URIs (safe for multi-worktree)
   - Non-blocking, logs warnings if Keycloak unavailable

2. **Manual Script**:
   - scripts/register_keycloak_redirects.py
   - Register specific URIs on-demand
   - Useful for production deployments

3. **Admin Console**:
   - Manual configuration via Keycloak UI
   - Documented in KEYCLOAK_URL_CONFIGURATION.md

Files:
- keycloak_startup.py: Auto-registration logic
- register_keycloak_redirects.py: Manual registration script
- main.py: Calls keycloak_startup during lifespan
- keycloak_admin router: Added to API
- KEYCLOAK_URL_CONFIGURATION.md: Complete documentation

This enables multi-worktree development without manual Keycloak config.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
…-io#152)

Implements the complete Keycloak OAuth 2.0 authentication flow with PKCE for federated single sign-on (SSO). Supports conversation sharing with external users while maintaining backward compatibility with legacy email/password auth.

Frontend:
- Add KeycloakAuthContext with OAuth PKCE flow (login, register, logout)
- Implement OAuthCallback component for code-to-token exchange
- Store tokens in sessionStorage (cleared on tab close)
- Implement automatic token refresh (60s before expiry)
- Clear the authorization code from the URL to prevent replay attacks
- Redesign LoginPage with a Keycloak sign-in button
- Add "Create account" registration link (routes to Keycloak registration)
- Implement hybrid logout (detects Keycloak vs legacy auth)
- Update Layout component with a unified logout handler
- Update axios interceptor to send Keycloak tokens in the Authorization header
- Fall back to legacy JWT tokens for backward compatibility

Backend:
- Add keycloak_id field to the User model for SSO identity mapping
- Support both legacy (email/password) and Keycloak users in the same database
- Replace get_current_user with get_current_user_hybrid
- Accept both legacy JWT and Keycloak OIDC tokens
- Validate Keycloak tokens (issuer, expiration)
- Extract user info from token claims (email, name, sub)
- Implement automatic Keycloak → service token conversion for proxied services
- Sync Keycloak users to MongoDB (just-in-time provisioning)
- Generate Chronicle-compatible JWTs with MongoDB ObjectIds
- Support audiences: ["ushadow", "chronicle"]
- Add token bridging to /api/services/{name}/proxy endpoints
- Automatically convert Keycloak tokens before forwarding to Chronicle
- Maintain backward compatibility with legacy tokens
- Add automatic redirect URI registration on startup
- Implement Keycloak admin API integration (user management, realm config)
- Add keycloak-admin router with user CRUD operations

Configuration:
- Enable Keycloak by default
- Configure internal and external URLs
- Set realm: ushadow
- Configure client IDs: ushadow-backend, ushadow-frontend
- KEYCLOAK_URL: internal container URL
- KEYCLOAK_PUBLIC_URL: external user-facing URL
- KEYCLOAK_REALM: realm name
- KEYCLOAK_ADMIN_USER/PASSWORD: admin credentials

Security:
- PKCE (Proof Key for Code Exchange) for the OAuth flow
- CSRF protection via the state parameter
- Tokens stored in sessionStorage (auto-cleared on tab close)
- Authorization code single-use enforcement
- Proper SSO logout (terminates the Keycloak session)
- Keycloak token validation (issuer, expiration, audience)

Breaking changes: none — maintains full backward compatibility with legacy auth. Users can continue using email/password login while new users can register via Keycloak SSO.

Migration:
1. Existing users: continue using email/password
2. New users: register via Keycloak
3. Existing users can link a Keycloak account (auto-linked on first SSO login)

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
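The PKCE half of the flow is standard RFC 7636 machinery: the client generates a random `code_verifier`, sends its SHA-256 `code_challenge` with the authorization request, and reveals the verifier only at token exchange. A minimal stdlib sketch of the pair generation (the commit's actual implementation lives in the frontend; this is illustrative):

```python
import base64
import hashlib
import secrets


def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # 32 random bytes -> 43-char base64url verifier, inside the 43-128 char spec range
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    # S256 method: challenge = BASE64URL(SHA256(verifier)), without '=' padding
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge
```

Even if the authorization code is intercepted, it cannot be redeemed without the verifier, which never leaves the client until the token request.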
- Add ingress enable/disable checkbox
- Auto-configure ingress based on cluster settings
- Auto-generate hostname from service name
- Allow hostname customization with validation
- Send ingress spec to backend deployment API

Part of the Tailscale MagicDNS + Ingress automation feature.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
- Add ingress configuration fields to the KubernetesCluster model
- Add PATCH endpoint for updating cluster settings
- Auto-configure ingress hostnames based on cluster settings
- Add jwt-decode dependency for token handling

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
# Conflicts:
#	ushadow/frontend/package-lock.json
- Create auth_helpers utility for dual Keycloak/legacy support
- Update all routers to handle the Keycloak user dict format
- Auto-detect Tailscale hostname using TailscaleManager
- Fix AttributeError when accessing user.id/user.email

Changes:
- New: src/utils/auth_helpers.py (get_user_id, get_user_email, get_user_name)
- Updated: tailscale.py, chat.py, memories.py, share.py, unodes.py
- Enhanced: keycloak_startup.py to auto-detect Tailscale URLs

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
- Convert Keycloak tokens to service tokens before forwarding
- Enables Chronicle to accept authentication via audio relay
- Fixes ObjectId parse error for Keycloak UUIDs

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
- Automatically add ?codec=pcm or ?codec=opus based on the audio source
- mic (device microphone) → pcm
- omi (hardware device) → opus
- Fixes division-by-zero error in Chronicle audio processing

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
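The source-to-codec mapping above can be sketched as a small URL helper (function and constant names are hypothetical; the real change lives in the relay code):

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

# Device microphone streams raw PCM; the Omi hardware device streams Opus
CODEC_BY_SOURCE = {"mic": "pcm", "omi": "opus"}


def with_codec(ws_url: str, source: str) -> str:
    """Append codec=... to the streaming URL based on the audio source."""
    codec = CODEC_BY_SOURCE.get(source)
    if codec is None:
        return ws_url  # unknown source: leave the URL untouched
    parts = urlparse(ws_url)
    query = dict(parse_qsl(parts.query))
    query["codec"] = codec
    return urlunparse(parts._replace(query=urlencode(query)))
```

Tagging the codec on the URL lets Chronicle pick the right decoder up front instead of inferring it from the byte stream (the likely cause of the division-by-zero on misdetected frames).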
- Replaced references to `quick-start.sh` with `go.sh` and `dev.sh` for clarity in setup instructions.
- Updated README and setup documentation to reflect the new script usage for environment setup and service startup.
- Adjusted docker-compose instructions to align with the new script structure.
Backend:
- Created dashboard models (ActivityEvent, DashboardStats, DashboardData)
- Implemented DashboardService to fetch conversations & memories from Chronicle
- Added /api/dashboard endpoint with conversation and memory limits
- Registered dashboard router in main.py
Frontend:
- Created dashboardApi client with TypeScript interfaces
- Implemented useDashboardData hook with 30s auto-refresh
- Updated Dashboard.tsx with real-time data:
* Conversation and memory counts (clickable stats)
* Combined activity feed showing recent conversations & memories
* Timestamp formatting ("2m ago", "Yesterday", etc.)
* Loading, error, and empty states
* "Start Chat" button navigates to /chat
* "View Conversations" button navigates to /conversations
Dashboard now shows actual Chronicle data and auto-refreshes every 30 seconds.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
# Conflicts:
# ushadow/backend/main.py
- Added a section on starting infrastructure only using `make infra-up`.
- Clarified use cases for running the ushadow app separately and managing infrastructure services.
- Provided concise commands for starting and stopping infrastructure.
- Fix merge conflict in .env.example
- Add scripts/sync-env.py to detect and apply missing variables
- Add make env-sync and env-sync-apply targets

Co-authored-by: Cursor <cursoragent@cursor.com>
Co-authored-by: Stuart Alexander <thestumonkey@gmail.com>
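The detection half of a sync-env script boils down to diffing the keys of two dotenv files. A minimal sketch (function names hypothetical; see scripts/sync-env.py for the real tool, which also applies the missing variables):

```python
def parse_env(text: str) -> dict[str, str]:
    """Parse KEY=VALUE lines from dotenv-style text, ignoring comments and blanks."""
    env: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env


def missing_vars(example_text: str, env_text: str) -> list[str]:
    """Keys declared in .env.example but absent from .env, in declaration order."""
    example, env = parse_env(example_text), parse_env(env_text)
    return [key for key in example if key not in env]
```

Reporting in `.env.example` declaration order makes the output easy to paste straight into the local `.env`.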
Single-file bind mounts fail on Docker Desktop for macOS with a "not a directory" error. Mount the parent directory instead.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
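The fix amounts to a compose volume along these lines (image tag and relative paths are illustrative; see compose/docker-compose.infra.yml for the real definition):

```yaml
services:
  keycloak:
    image: quay.io/keycloak/keycloak:24.0
    command: start-dev --import-realm
    volumes:
      # Mount the whole config directory, not realm-export.json itself:
      # single-file bind mounts hit overlayfs "not a directory" errors
      # on Docker Desktop for macOS.
      - ../config/keycloak:/opt/keycloak/data/import:ro
```

Keycloak imports every realm JSON it finds under `/opt/keycloak/data/import` when started with `--import-realm`, so the directory mount behaves the same as the single-file mount did, minus the error.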
|
| GitGuardian id | GitGuardian status | Secret | Commit | Filename | |
|---|---|---|---|---|---|
| 26721854 | Triggered | Username Password | 97c913e | compose/docker-compose.infra.yml | View secret |
🛠 Guidelines to remediate hardcoded secrets
- Understand the implications of revoking this secret by investigating where it is used in your code.
- Replace and store your secret safely. Learn the best practices here.
- Revoke and rotate this secret.
- If possible, rewrite git history. Rewriting git history is not a trivial act. You might completely break other contributing developers' workflow and you risk accidentally deleting legitimate data.
To avoid such incidents in the future, consider:
- following these best practices for managing and storing secrets, including API keys and other credentials
- installing secret detection on pre-commit to catch secrets before they leave your machine and ease remediation.
🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process. You are seeing this because you or someone else with access to this repository has authorized GitGuardian to scan your pull request.
@thestumonkey please change the base branch to dev. It always refers to main because of the settings in the repo.
Summary
- Previously, `realm-export.json` was bind-mounted as a single file, which fails on Docker Desktop for macOS
- Mount the parent directory (`config/keycloak`) to `/opt/keycloak/data/import` instead, which avoids the overlayfs "not a directory" error

Test plan
- `docker compose -f compose/docker-compose.infra.yml --profile infra up -d keycloak`: container starts without mount errors
- Realm import still runs via `--import-realm`
- `make keycloak-reset-realm` to confirm manual realm management still works