
Thestumonkey patch 1 #165

Open

thestumonkey wants to merge 164 commits into main from thestumonkey-patch-1

Conversation

@thestumonkey
Member

No description provided.

thestumonkey and others added 30 commits January 28, 2026 09:11
fixed tailscale-serve commit
* Perfect! Let me create a summary of what was implemented:

★ Insight ─────────────────────────────────────
The implementation follows React patterns well:
1. **Callback composition**: `onWebSocketLog` flows from Home → UnifiedStreamingPage → useAudioStreamer → logEvent('websocket', ...)
2. **Separation of concerns**: UI (ConnectionLogViewer) → State management (useConnectionLog) → Storage (AsyncStorage)
3. **Type-safe filtering**: Uses discriminated union types for ConnectionType, making per-type operations type-safe
─────────────────────────────────────────────────

## Summary

I've successfully implemented both requested features for the Omi app logs:

### 1. **Per-type log clearing** ✅

Users can now clear logs for specific connection types:
- Added `clearLogsByType(type: ConnectionType)` function to `useConnectionLog` hook
- Updated UI to show type-specific clear buttons when filtering
- When viewing "Network" logs, you'll see "Clear Network" + "Clear All" buttons
- When viewing "All" logs, you'll only see "Clear All"
- Test IDs added: `clear-{type}-logs-button` (e.g., `clear-websocket-logs-button`)
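The core of per-type clearing can be sketched as a pure filter over the stored entries. This is a simplified illustration: the `ConnectionType` union and `LogEntry` shape here are assumptions, not the app's exact types.

```typescript
// Simplified, hypothetical shapes for illustration.
type ConnectionType = 'websocket' | 'network' | 'bluetooth';

interface LogEntry {
  type: ConnectionType;
  message: string;
  timestamp: number;
}

// Pure core of clearLogsByType: keep every entry whose type differs.
function clearLogsByType(logs: LogEntry[], type: ConnectionType): LogEntry[] {
  return logs.filter(entry => entry.type !== type);
}
```

Inside the hook, this filter result would be written back through the state setter and persisted to AsyncStorage.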

### 2. **WebSocket logging** ✅

WebSocket connection lifecycle is now fully tracked:
- **Connecting**: When initializing connection, including reconnect attempts with exponential backoff
- **Connected**: Successful connection with mode/codec details
- **Disconnected**: When connection closes (with close code and reason)
- **Error**: Connection errors, server errors, and exhausted retry attempts

**Key lifecycle events logged:**
- Initial connection attempt
- Successful connection
- Server errors (with count tracking)
- Manual disconnection
- Reconnection attempts (with attempt count)
- Failed reconnection after max attempts
- Network-triggered reconnection
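The reconnection delays mentioned above follow standard exponential backoff. A minimal sketch — the base delay and cap values here are assumptions, not the app's actual constants:

```typescript
// Hypothetical backoff: double the base delay per attempt, up to a cap.
function reconnectDelayMs(attempt: number, baseMs = 1000, maxMs = 30000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}
```

Each reconnection attempt would log its computed delay alongside the attempt count, making backoff behavior visible in the connection logs.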

**Data flow:**
```
useAudioStreamer → onLog callback →
UnifiedStreamingPage (onWebSocketLog prop) →
Home screen → logEvent('websocket', ...) →
useConnectionLog → AsyncStorage + UI
```

### Testing your conversation tracking issue

With these logs, you'll now be able to:
1. See exactly when WebSocket connections drop
2. Track reconnection attempts and whether they succeed
3. Identify patterns (e.g., drops after X minutes, specific error codes)
4. Clear old WebSocket logs to focus on current session

Open the logs page (list icon in header), filter to "WebSocket", and start a conversation to see real-time connection tracking!

* ## Your Turn! 🎯

I've created the session tracking hook with most of the logic, but left the **`endSession` function** for you to implement. This is a meaningful decision point that affects how session data is stored and displayed.

**Location:** `/ushadow/mobile/app/hooks/useSessionTracking.ts` (lines 120-130)

**What to implement:**
The logic for ending a streaming session when audio stops.

**Key decisions to make:**

1. **Duration calculation:**
   - Pre-compute and store `durationSeconds` field, OR
   - Let UI calculate on-demand from `startTime/endTime`?

2. **Error handling:**
   - Store error message in the session object, OR
   - Filter out failed sessions entirely?

3. **Session filtering:**
   - Keep all sessions regardless of length, OR
   - Auto-remove sessions < 5 seconds (test connections)?
   - Filter out sessions with 0 bytes transferred?

**Example implementations:**

```typescript
// Option A: Simple - just set endTime
const endSession = useCallback((sessionId: string, error?: string) => {
  setSessions(prev => prev.map(session =>
    session.id === sessionId
      ? { ...session, endTime: new Date(), error }
      : session
  ));
  if (activeSession?.id === sessionId) setActiveSession(null);
}, [activeSession]);

// Option B: Calculate duration + drop trivial sessions
const endSession = useCallback((sessionId: string, error?: string) => {
  setSessions(prev => prev.flatMap(session => {
    if (session.id !== sessionId) return [session];
    const endTime = new Date();
    const duration = Math.floor((endTime.getTime() - session.startTime.getTime()) / 1000);

    // Drop sessions < 5 seconds with 0 bytes (test connections)
    if (duration < 5 && session.bytesTransferred === 0) return [];

    return [{ ...session, endTime, durationSeconds: duration, error }];
  }));

  if (activeSession?.id === sessionId) setActiveSession(null);
}, [activeSession]);
```

**Trade-offs to consider:**
- **Storage vs computation**: Pre-computing duration uses more storage but faster UI
- **Data completeness**: Keeping failed sessions helps debugging vs. cleaner UI
- **Test noise**: Filtering < 5s removes test connections but might hide real quick sessions

**What would work best for debugging your conversation stopping issue?** Implement the version that gives you the most useful data!

* Given the complexity of wiring this up properly through UnifiedStreamingPage, let me create a summary document showing you exactly what needs to be done:

## Summary: Sessions Tab Implementation

I've successfully implemented most of the sessions tracking feature! Here's what's complete and what needs your final touch:

### ✅ Completed

1. **Session Data Model** (`types/streamingSession.ts`)
   - Type-safe session structure with source, destinations, metrics
   - Helper functions for formatting duration/bytes

2. **Session Storage** (`_utils/sessionStorage.ts`)
   - AsyncStorage persistence for up to 100 sessions
   - CRUD operations for session management
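The 100-session cap can be kept with a newest-first insert that trims the list before persisting. This is a sketch of the idea, not the file's exact code:

```typescript
const MAX_SESSIONS = 100;

// Prepend the new session, then trim the list to the cap before saving.
function addSessionCapped<T>(sessions: T[], next: T, max = MAX_SESSIONS): T[] {
  return [next, ...sessions].slice(0, max);
}
```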

3. **Session Tracking Hook** (`hooks/useSessionTracking.ts`)
   - `startSession` - Creates new session with client-side ID
   - `updateSessionStatus` - Updates with relay_status data
   - `endSession` - Marks session complete with duration
   - Auto-saves to AsyncStorage

4. **Relay Status Parsing** (`hooks/useAudioStreamer.ts`)
   - Now parses `relay_status` messages from backend
   - Exposes `onRelayStatus` callback

5. **Sessions Tab UI** (`(tabs)/sessions.tsx`)
   - Beautiful card-based session list
   - Filter by All/Active/Failed
   - Shows duration, bytes, chunks, destinations
   - Active session indicator
   - Error display for failed sessions

6. **Tab Navigation** (`(tabs)/_layout.tsx`)
   - Added Sessions tab with pulse icon

### 🎯 What You Need to Complete

The final wiring in `UnifiedStreamingPage.tsx` needs to connect the session lifecycle to actual streaming events. Here's what to add:

**Location:** `/ushadow/mobile/app/components/streaming/UnifiedStreamingPage.tsx`

**Step 1:** Update the interface (around line 62):
```typescript
import { SessionSource as SessionSourceType } from '../../types/streamingSession';
import { RelayStatus } from '../../hooks/useAudioStreamer';

interface UnifiedStreamingPageProps {
  authToken: string | null;
  onAuthRequired?: () => void;
  onWebSocketLog?: (status: 'connecting' | 'connected' | 'disconnected' | 'error', message: string, details?: string) => void;
  onSessionStart?: (source: SessionSourceType, codec: 'pcm' | 'opus') => Promise<string>;
  onSessionUpdate?: (sessionId: string, relayStatus: RelayStatus) => void;
  onSessionEnd?: (sessionId: string, error?: string) => void;
  testID?: string;
}
```

**Step 2:** Extract props (around line 69):
```typescript
export const UnifiedStreamingPage: React.FC<UnifiedStreamingPageProps> = ({
  authToken,
  onAuthRequired,
  onWebSocketLog,
  onSessionStart,
  onSessionUpdate,
  onSessionEnd,
  testID = 'unified-streaming',
}) => {
```

**Step 3:** Add session ID tracking (around line 96):
```typescript
const currentSessionIdRef = useRef<string | null>(null);
```

**Step 4:** Update omiStreamer initialization (around line 136):
```typescript
const omiStreamer = useAudioStreamer({
  onLog: onWebSocketLog,
  onRelayStatus: (status) => {
    if (currentSessionIdRef.current) {
      onSessionUpdate?.(currentSessionIdRef.current, status);
    }
  },
});
```

**Step 5:** Call `onSessionStart` when streaming begins and `onSessionEnd` when it stops.

You'll need to find where streaming starts/stops (likely in `handleStartStreaming` and `handleStopStreaming` functions). The key logic:

```typescript
// When starting:
const sessionSource: SessionSourceType = selectedSource.type === 'omi' && selectedSource.deviceId
  ? { type: 'omi', deviceId: selectedSource.deviceId, deviceName: selectedSource.deviceName }
  : { type: 'microphone' };

const sessionId = await onSessionStart?.(sessionSource, 'pcm'); // or 'opus' based on your codec
if (sessionId) currentSessionIdRef.current = sessionId;

// When stopping (clean stop):
if (currentSessionIdRef.current) {
  onSessionEnd?.(currentSessionIdRef.current);
  currentSessionIdRef.current = null;
}

// When stopping with error:
if (currentSessionIdRef.current) {
  onSessionEnd?.(currentSessionIdRef.current, errorMessage);
  currentSessionIdRef.current = null;
}
```


* Excellent! Everything is wired up. Let me create a comprehensive summary:

## ✅ Complete Implementation Summary

I've successfully implemented **comprehensive session tracking** for your Omi app to help debug conversation stopping issues!

### What Was Built

#### 1. **Per-Type Log Clearing** ✅
- Added `clearLogsByType()` function to `useConnectionLog` hook
- Updated `ConnectionLogViewer` UI with type-specific clear buttons
- When viewing filtered logs (e.g., "WebSocket"), you'll see both "Clear WebSocket" and "Clear All" buttons

#### 2. **WebSocket Lifecycle Logging** ✅
All WebSocket events are now tracked in the connection logs:
- **Connecting**: Initial connection + reconnection attempts with exponential backoff delays
- **Connected**: Successful connection with mode/codec details
- **Disconnected**: Connection closed (with close code and reason)
- **Error**: Server errors, connection errors, exhausted retry attempts

#### 3. **Sessions Tab** ✅ (New!)
A dedicated tab showing detailed session history with:

**Session Metrics:**
- Duration (to spot patterns like "always stops after 90 seconds")
- Data transferred (bytes + chunk count)
- Source (Phone Mic vs specific OMI device)
- Destinations (Chronicle, Mycelia, etc. with connection status)
- Codec used (PCM vs Opus)
- Network type (WiFi vs cellular)
- Error messages for failed sessions

**UI Features:**
- Filter by All/Active/Failed sessions
- Delete individual sessions or clear all history
- Visual indicators for active sessions
- Color-coded destination chips (green = connected, red = failed)
- Timestamp for each session
- Link to Chronicle conversation (when available)
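The All/Active/Failed filter reduces to a small predicate over each session. The `SessionLike` shape here is an assumption (an active session is modeled as one with no `endTime` yet):

```typescript
type SessionFilter = 'all' | 'active' | 'failed';

// Minimal assumed shape: an active session has no endTime yet.
interface SessionLike {
  endTime?: Date;
  error?: string;
}

function matchesFilter(session: SessionLike, filter: SessionFilter): boolean {
  switch (filter) {
    case 'active': return session.endTime === undefined;
    case 'failed': return session.error !== undefined;
    default: return true;
  }
}
```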

### Session Lifecycle Tracking

Sessions are automatically created and tracked:

★ Insight ─────────────────────────────────────
**Session lifecycle matches WebSocket lifecycle perfectly:**
1. **Session starts** when streaming begins (button pressed)
2. **Relay status updates** captured when backend sends destination info
3. **Session ends** in three scenarios:
   - Clean stop (user manually stops)
   - Error on start (connection fails immediately)
   - Permanent failure (reconnection attempts exhausted)

This gives you **complete visibility** into why conversations stop!
─────────────────────────────────────────────────

### Key Implementation Details

**Files Modified:**
- `types/streamingSession.ts` - Session data model
- `_utils/sessionStorage.ts` - AsyncStorage persistence
- `hooks/useSessionTracking.ts` - Session management hook
- `hooks/useAudioStreamer.ts` - Added relay_status parsing + onRelayStatus callback
- `(tabs)/sessions.tsx` - New Sessions tab UI
- `(tabs)/_layout.tsx` - Added Sessions tab to navigation
- `(tabs)/index.tsx` - Wire up session callbacks
- `components/streaming/UnifiedStreamingPage.tsx` - Session lifecycle integration

**Data Flow:**
```
User starts streaming
  ↓
UnifiedStreamingPage calls onSessionStart()
  ↓
useSessionTracking creates session with client-side ID
  ↓
WebSocket connects, backend sends relay_status
  ↓
useAudioStreamer calls onRelayStatus()
  ↓
useSessionTracking updates session with destination info
  ↓
Connection drops, reconnection fails
  ↓
useEffect detects error + !retrying + !streaming
  ↓
UnifiedStreamingPage calls onSessionEnd(error)
  ↓
Session marked complete with error message
  ↓
Visible in Sessions tab for debugging!
```

### Debugging Your Conversation Issues

With this implementation, you can now:

1. **Open Sessions tab** - See all streaming sessions with duration and status
2. **Filter by Failed** - Focus on sessions that ended with errors
3. **Look for patterns**:
   - Do conversations always stop after X minutes?
   - Do failures correlate with network type (WiFi vs cellular)?
   - Are specific destinations failing more than others?
   - What are the error messages?

4. **Cross-reference with Connection Logs** - Click logs icon to see ...
Log the service_name -> docker_service_name mapping to debug
why chronicle-backend is getting labeled incorrectly.
- Removed full git clone of chronicle
- Added as submodule pointing to Ushadow-io/chronicle
- Pinned to commit c170a02d (current state)
- Configured upstream remote for syncing with SimpleOpenSoftware/chronicle
- Preserved stashed uncommitted changes (can be retrieved later)
- Add .githooks/post-checkout to auto-configure sparse checkout
- Chronicle excludes extras/mycelia/ (prevents circular dependency)
- Mycelia excludes friend/ (prevents circular dependency)
- Add setup script: scripts/setup-repo.sh
- Update .gitmodules to include mycelia submodule
- All paths are relative and work in any clone location

Setup for new clones:
  git clone --recursive <repo>
  cd <repo>
  git config core.hooksPath .githooks
  .githooks/post-checkout
- Add scripts/build-push-images.sh for multi-arch builds
- Add make chronicle-push and make mycelia-push targets
- Support custom tags: make chronicle-push TAG=v1.0.0
- Build for linux/amd64 and linux/arm64
- Push to ghcr.io/ushadow-io registry
- Add documentation in docs/BUILDING_IMAGES.md

Usage:
  make chronicle-push
  make mycelia-push
  make chronicle-push TAG=v2.0.0
- Add openmemory submodule from https://github.com/Ushadow-io/mem0
- Add to git hooks for automatic configuration
- Add openmemory-push target to Makefile
- Add openmemory to build-push-images.sh script
- Builds openmemory-server image from openmemory/server/

Usage:
  make openmemory-push
  make openmemory-push TAG=v1.0.0

Images:
  - ghcr.io/ushadow-io/openmemory-server:TAG
★ Insight ─────────────────────────────────────
- **Service detection**: UI services identified by "ui" in the name (case-insensitive)
- **Worker grouping**: Workers identified by "worker" in their name, then matched to API services by base name
- **Visual hierarchy**: Workers shown indented under their API service with a visual border
- **Tab structure**: Two sub-tabs (API & Workers, UI Services) with counts displayed
─────────────────────────────────────────────────

## Summary

I've successfully reorganized the instance screen to split services into **API/Workers** and **UI** tabs as requested. Here's what changed:

### Changes Made to `ServicesTab.tsx`:

1. **Added Sub-Tab Navigation**:
   - Created two sub-tabs: "API & Workers" and "UI Services"
   - Each tab shows a count of services in that category
   - Icons distinguish between Server (API) and Monitor (UI) services

2. **Service Categorization**:
   - **UI Services**: Filtered to a separate tab - any service with "UI" or "ui" in its title
   - **API Services**: Everything else goes to the API & Workers tab

3. **Worker Grouping**:
   - Workers (services containing "worker" in their name) are automatically grouped with their corresponding API services
   - Workers are displayed indented below their API service with a visual border on the left
   - The grouping logic matches workers to APIs by removing "worker" from the name and finding the base service

4. **Visual Improvements**:
   - Clean tab navigation at the top of the services section
   - Workers visually nested under their API services using indentation and a left border
   - Each tab shows contextual empty states if no services are present
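The worker-to-API matching described in step 3 can be sketched as a pure grouping function. The `Service` shape and the service names used below are illustrative assumptions, not the component's exact types:

```typescript
interface Service {
  name: string;
}

// Strip "worker" (and surrounding separators) to get the base service name.
function baseName(name: string): string {
  return name.toLowerCase().replace(/[-_ ]?worker[-_ ]?/g, '').trim();
}

// Map each API service name to the workers that share its base name.
function groupWorkers(services: Service[]): Map<string, Service[]> {
  const isWorker = (s: Service) => s.name.toLowerCase().includes('worker');
  const apis = services.filter(s => !isWorker(s));
  const groups = new Map<string, Service[]>(
    apis.map(a => [a.name, []] as [string, Service[]])
  );
  for (const worker of services.filter(isWorker)) {
    const api = apis.find(a => baseName(a.name) === baseName(worker.name));
    if (api) groups.get(api.name)!.push(worker);
  }
  return groups;
}
```

Workers with no matching API would simply not appear in any group; the real component may need a fallback bucket for those.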

### How It Works:

**API & Workers Tab**:
```
Chronicle API Service
└─ Chronicle Worker (indented, grouped with Chronicle API)

Mycelia API Service
└─ Mycelia Worker (indented, grouped with Mycelia API)
```

**UI Services Tab**:
```
Admin UI Service
Dashboard UI Service
```

The implementation ensures that even when deploying services separately, users see them logically grouped in the UI for easier management and understanding of the system architecture.

* Excellent! The implementation looks good. Let me create a summary document to explain the changes:

★ Insight ─────────────────────────────────────
The feature flag implementation follows a graceful degradation pattern:
- **Default off**: The new split services view is opt-in via feature flag
- **Zero breaking changes**: When disabled, users see the exact same UI they're used to
- **Clean switch**: A simple boolean check determines which view to render
- **Backend controlled**: The flag is managed server-side, allowing runtime toggling without deployments
─────────────────────────────────────────────────

## Summary

I've successfully added the `split_services` feature flag to control the new service organization view. Here's what was implemented:

### Changes Made:

1. **ServicesTab.tsx** (`ushadow/frontend/src/components/services/ServicesTab.tsx`):
   - Added `splitServicesEnabled?: boolean` prop (defaults to `false`)
   - Implemented a **legacy view fallback**: When the flag is disabled, the component renders the original single-grid view
   - When enabled, it shows the new split view with API/Workers and UI tabs

2. **ServiceConfigsPage.tsx** (`ushadow/frontend/src/pages/ServiceConfigsPage.tsx`):
   - Passed the feature flag to ServicesTab: `splitServicesEnabled={isEnabled('split_services')}`
   - The flag is read from the existing feature flag context

### How It Works:

```typescript
// When split_services flag is OFF (default):
// ✓ Shows all services in a single grid (original behavior)

// When split_services flag is ON:
// ✓ Shows API & Workers tab with grouped services
// ✓ Shows UI Services tab separately
// ✓ Workers are visually nested under their API services
```

### Backend Configuration Needed:

To enable this feature, you'll need to add the `split_services` feature flag to your backend feature flags configuration. The flag should be defined with:

```python
{
  "split_services": {
    "enabled": False,  # Set to True to enable
    "description": "Split services into API/Workers and UI tabs with worker grouping",
    "type": "boolean"
  }
}
```

### Benefits of This Approach:

- **Safe rollout**: Start with the flag disabled, test with specific users/environments first
- **Quick rollback**: If issues arise, disable the flag without code changes
- **A/B testing**: Can compare user engagement with both views
- **Gradual migration**: Give teams time to adjust to the new layout

The implementation maintains 100% backward compatibility while allowing you to toggle the new feature on/off at runtime!

* Excellent! The feature flag has been successfully added. Let me create a summary of what was done:

## Summary

I've successfully added the `split_services` feature flag to your existing feature flags configuration system. Here's what was changed:

### Changes Made:

**`config/feature_flags.yaml`**:
```yaml
# Split Services View - Organize services into API/Workers and UI tabs
split_services:
  enabled: false
  description: "Split services into API & Workers and UI Services tabs with automatic worker grouping"
  type: release
```

### Feature Flag Details:

- **Name**: `split_services`
- **Default State**: `disabled` (enabled: false)
- **Type**: `release` (user-facing feature)
- **Description**: Clear explanation of what the feature does

### How It Works:

The feature flag system in your codebase supports **hot-reloading**, which means:

1. **No restart required**: Changes to `feature_flags.yaml` are picked up automatically
2. **Runtime toggling**: You can enable/disable the feature through the API or by editing the YAML file
3. **Gradual rollout**: Enable for specific environments or user groups

### Enabling the Feature:

To enable the split services view, simply change:
```yaml
split_services:
  enabled: true  # Change from false to true
```

The system will automatically pick up the change, and users will see the new split view with:
- **API & Workers** tab (with workers grouped under their APIs)
- **UI Services** tab (for all UI-related services)

When disabled (current state), users see the original single-grid view with all services together.

★ Insight ─────────────────────────────────────
The implementation follows a **feature flag best practice pattern**:
- Frontend code checks `isEnabled('split_services')` at runtime
- Backend serves the flag state via `/api/feature-flags/status`
- YAML configuration provides a single source of truth
- The graceful degradation ensures zero breaking changes when disabled
─────────────────────────────────────────────────

The feature is now ready for testing! You can enable it whenever you're ready to try out the new split services view.
feat: Add Keycloak OAuth theme matching Ushadow design system

Complete custom Keycloak theme for login and registration pages with:
- Centered layout with gradient "Ushadow" brand text (green→purple)
- Purple/green radial glow background matching frontend design
- Rounded input fields (10px border-radius) with proper dark styling
- Green primary button with glow effect
- Single-column form layout for registration page
- Fixed password field white outline and inline required asterisks
- Semi-transparent card with backdrop blur
- Responsive design with mobile support

Frontend login page updated to match Keycloak OAuth pages:
- Form-based design with email/password fields
- Same dark theme and geometric background pattern
- Blue primary button and green register link
- Consistent styling across authentication flow

Infrastructure:
- Added Keycloak service to docker-compose.infra.yml
- Theme mounted from ushadow/frontend/keycloak-theme/
- Connected to Postgres for session storage
- Auto-imports realm configuration on startup

Theme files:
- ushadow/frontend/keycloak-theme/login/resources/css/login.css
- ushadow/frontend/keycloak-theme/login/theme.properties
- ushadow/frontend/keycloak-theme/login/resources/img/logo.png
- docs/KEYCLOAK_THEMING_GUIDE.md

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
security: Move Keycloak credentials to environment variables

Replace hardcoded Keycloak admin credentials with environment variables:
- KEYCLOAK_ADMIN (defaults to 'admin' for dev)
- KEYCLOAK_ADMIN_PASSWORD (defaults to 'admin' for dev)
- KEYCLOAK_PORT (defaults to 8081)
- KEYCLOAK_MGMT_PORT (defaults to 9000)

Created .env.example template with:
- All required Keycloak configuration
- Security warnings about changing defaults in production
- Clear documentation for each variable

This prevents credentials from being committed to git and allows
different environments to use their own secure credentials.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
# Conflicts:
#	ushadow/frontend/src/components/services/ServicesTab.tsx
feat: Add Keycloak OAuth implementation

Adds complete Keycloak OAuth2/OIDC authentication:

Frontend:
- KeycloakAuthContext: OAuth flow with token management
- TokenManager: PKCE support, token refresh, logout
- OAuthCallback: Handle OAuth redirect and token exchange
- ServiceTokenManager: Cross-service token generation

Backend:
- keycloak_admin.py: Admin API integration
- keycloak_auth.py: OAuth token validation
- token_bridge.py: Convert Keycloak tokens to service tokens
- keycloak_user_sync.py: Sync Keycloak users to MongoDB

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
@gitguardian

gitguardian bot commented Feb 22, 2026

⚠️ GitGuardian has uncovered 4 secrets following the scan of your pull request.

Please consider investigating the findings and remediating the incidents. Failure to do so may lead to compromising the associated services or software components.

🔎 Detected hardcoded secrets in your pull request

| GitGuardian id | Status | Secret | Commit | Filename |
| --- | --- | --- | --- | --- |
| 26721854 | Triggered | Username Password | 97c913e | compose/docker-compose.infra.yml |
| 27396401 | Triggered | Generic Password | b47b315 | ushadow/mobile/.env.local.example |
| 26807594 | Triggered | Username Password | d20e5e5 | setup/run.py |
| 27374642 | Triggered | Generic Password | 252c314 | robot_tests/resources/setup/test_env.py |
🛠 Guidelines to remediate hardcoded secrets
  1. Understand the implications of revoking this secret by investigating where it is used in your code.
  2. Replace and store your secrets safely, following secret-management best practices.
  3. Revoke and rotate these secrets.
  4. If possible, rewrite git history. Rewriting git history is not a trivial act: you might completely break other contributing developers' workflows, and you risk accidentally deleting legitimate data.

