
add analytics dashboard with 7 charts and 3 API endpoints #6

Merged: 9cb14c1ec0 merged 1 commit into master from analytics on Feb 19, 2026
Conversation

@9cb14c1ec0 (Owner) commented Feb 19, 2026

Backend: volume (time-bucketed counts by level/source), top (top N sources/messages/users), and heatmap (source x level grid) endpoints with raw SQL for partition pruning. All gated behind get_team_member().

Frontend: ECharts-based dashboard at /teams/:teamId/analytics with log volume stacked bar, error rate area, level donut, top sources bar, top error messages table, logs per user bar, and source x level heatmap. Includes time range picker (24h/7d/30d/custom) and navigation links from Dashboard and LogsView.

Summary by CodeRabbit

  • New Features
    • Added comprehensive analytics dashboard with visualizations including log volume trends, error rate tracking, level breakdown, top sources, error messages, and user activity
    • Added source-level heatmap for detailed insights
    • Added time range picker with preset options (24h, 7d, 30d) and custom range support
    • Added Analytics navigation buttons to dashboard and logs views

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@coderabbitai bot commented Feb 19, 2026

📝 Walkthrough

A complete analytics dashboard feature is added with three backend FastAPI endpoints (volume, top, heatmap) computing aggregated log statistics, paired with frontend components including a time range picker, chart visualizations using ECharts, and composables for data fetching and chart rendering.

Changes

Cohort / File(s) Summary
Backend Analytics API
backend/app/api/__init__.py, backend/app/api/analytics.py
New analytics module with three FastAPI endpoints: analytics_volume (bucketed log counts), analytics_top (top values for source/message/user_id), and analytics_heatmap (source vs. level matrix). Includes time range validation, team access checks, and dynamic SQL aggregations.
Backend Analytics Schemas
backend/app/schemas/__init__.py, backend/app/schemas/analytics.py
New Pydantic response models: VolumeBucket, VolumeResponse, TopItem, TopResponse, HeatmapCell, HeatmapResponse for serializing analytics query results.
Frontend Dependencies
frontend/package.json
Added echarts, vue-echarts, axios, pinia, and @mdi/font to support charting and data management. Consolidated duplicate dependency entries.
Frontend API Client Types
frontend/src/api/client.ts
Added TypeScript interfaces mirroring backend schemas: VolumeBucket, VolumeResponse, TopItem, TopResponse, HeatmapCell, HeatmapResponse.
Frontend Time Range Component
frontend/src/components/TimeRangePicker.vue
New Vue 3 component providing preset time ranges (24h, 7d, 30d) and custom range selection, emitting time range and bucket interval on change.
Frontend Analytics Composables
frontend/src/composables/useAnalytics.ts, frontend/src/composables/useChartOptions.ts
useAnalytics fetches all analytics data (volume, top sources/errors/users, heatmap) in parallel. useChartOptions computes seven reactive chart options (volume bar, error rate line, level donut, top sources/users bar charts, heatmap, and errors table).
Frontend ECharts Configuration
frontend/src/plugins/echarts.ts
Plugin registering ECharts renderers and chart types (Bar, Line, Pie, Heatmap) and exporting VChart component for Vue templates.
Frontend Analytics Dashboard & Routing
frontend/src/router/index.ts, frontend/src/views/AnalyticsView.vue
New route /teams/:teamId/analytics and AnalyticsView component rendering a multi-panel dashboard with volume, error rate, level breakdown, top sources/errors/users, and heatmap charts. Includes team name fetching and range picker integration.
Frontend Navigation Updates
frontend/src/views/Dashboard.vue, frontend/src/views/LogsView.vue
Added analytics navigation buttons to both Dashboard (viewAnalytics function) and LogsView header, enabling quick access to the analytics dashboard.
Frontend Build Optimization
frontend/vite.config.ts
Added manualChunks configuration to isolate echarts and vue-echarts into a separate build chunk.
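The manualChunks entry described above might look roughly like this (a sketch, not the PR's exact config; the chunk name and grouping are assumptions):

```typescript
// vite.config.ts — sketch of splitting the charting libraries into their own chunk
import { defineConfig } from 'vite'

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        manualChunks: {
          // keep the heavy charting libraries out of the main app bundle
          echarts: ['echarts', 'vue-echarts'],
        },
      },
    },
  },
})
```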

Sequence Diagram

sequenceDiagram
    participant User as Frontend User
    participant UI as AnalyticsView Component
    participant Composable as useAnalytics Composable
    participant API as Backend API
    participant DB as Database

    User->>UI: Opens analytics dashboard / selects time range
    activate UI
    UI->>Composable: Call fetchAll(range, bucket)
    activate Composable
    Composable->>API: POST /teams/{id}/analytics/volume
    Composable->>API: POST /teams/{id}/analytics/top (sources)
    Composable->>API: POST /teams/{id}/analytics/top (errors)
    Composable->>API: POST /teams/{id}/analytics/top (users)
    Composable->>API: POST /teams/{id}/analytics/heatmap
    
    par Parallel API Calls
        API->>DB: Query bucketed log counts
        API->>DB: Query top sources
        API->>DB: Query top error messages
        API->>DB: Query top user_ids
        API->>DB: Query source/level aggregates
    end
    
    DB-->>API: Return aggregated results
    API-->>Composable: VolumeResponse
    API-->>Composable: TopResponse (sources)
    API-->>Composable: TopResponse (errors)
    API-->>Composable: TopResponse (users)
    API-->>Composable: HeatmapResponse
    deactivate Composable
    
    Composable->>UI: Update reactive state
    UI->>UI: Compute chart options via useChartOptions
    UI->>UI: Render VChart components with options
    deactivate UI
    UI-->>User: Display volume, level, error rate, heatmap charts

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰✨ A Chart-Maker's Delight
Charts and buckets, oh what a sight!
Volume dancing through day and night,
Heatmaps glowing with source and level,
Analytics dashboard, a structured marvel!
— Your friendly CodeRabbit, hopping with joy 🐇📊

🚥 Pre-merge checks | ✅ 2 passed | ❌ 1 warning

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: docstring coverage is 0.00%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)
  • Description Check: skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: the PR title accurately and concisely summarizes the main changes: adding an analytics dashboard with 7 charts and 3 new API endpoints, which directly aligns with the comprehensive changeset across backend and frontend.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@9cb14c1ec0 (Owner, Author) commented:
@coderabbitai review

@coderabbitai bot commented Feb 19, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (7)
frontend/src/composables/useChartOptions.ts (2)

44-56: Quadratic lookup when building volume series data.

For each (level, bucket) pair, find() scans the entire data.buckets array. With 30-day hourly data (720 buckets × 5 levels × 3600 entries ≈ 13M comparisons), this can cause a noticeable stall. Pre-index with a Map for O(1) lookups.

♻️ Proposed fix
     const bucketSet = [...new Set(data.buckets.map(b => b.bucket))].sort()
     const xLabels = bucketSet.map(formatBucket)

+    // Pre-index: "bucket|level" → count
+    const index = new Map<string, number>()
+    for (const b of data.buckets) {
+      index.set(`${b.bucket}|${b.level}`, b.count)
+    }
+
     // Group by level
     const series = ALL_LEVELS.map(level => {
       const counts = bucketSet.map(bucket => {
-        const entry = data.buckets.find(b => b.bucket === bucket && b.level === level)
-        return entry?.count ?? 0
+        return index.get(`${bucket}|${level}`) ?? 0
       })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/composables/useChartOptions.ts` around lines 44 - 56, The
current series builder (involving ALL_LEVELS, bucketSet, and data.buckets) does
an O(n*m) lookup using data.buckets.find for every (level, bucket) pair causing
quadratic cost; fix it by pre-indexing data.buckets into a Map keyed by a
combined bucket+level string (or nested Map) before constructing series, then
replace the find calls inside the series generation with O(1) Map lookups so the
series creation loop (the function that builds series) uses the Map to get
entry.count (falling back to 0) and still applies LEVEL_COLORS[level] and
stack/type as before.

76-84: Same quadratic pattern in error-rate computation.

entries = data.buckets.filter(b => b.bucket === bucket) inside the bucketSet.map loop results in the same O(B × N) scan. The pre-built Map from the volume chart (or a separate grouping by bucket) would resolve this as well.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/composables/useChartOptions.ts` around lines 76 - 84, The rates
calculation loops bucketSet.map and calls data.buckets.filter for each bucket,
causing O(B×N) work; instead pre-group data.buckets once (e.g., build a Map
keyed by bucket or a reducer that accumulates total and error counts per bucket)
and then compute rates by looking up the bucket's aggregated totals in that Map
inside the bucketSet.map (update the rates variable to use the precomputed group
to derive total and errors rather than filtering entries each iteration).
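A minimal sketch of that pre-grouping, assuming the `VolumeBucket` shape from this PR's API client; the `errorRates` helper name and the set of levels counted as errors are assumptions for illustration:

```typescript
interface VolumeBucket {
  bucket: string
  level?: string
  count: number
}

// Assumed set of levels that count as errors.
const ERROR_LEVELS = new Set(['error', 'critical'])

// One pass to aggregate totals per bucket, then O(1) lookups:
// O(B + N) instead of the O(B × N) filter-per-bucket scan.
function errorRates(buckets: VolumeBucket[], bucketSet: string[]): number[] {
  const totals = new Map<string, { total: number; errors: number }>()
  for (const b of buckets) {
    const agg = totals.get(b.bucket) ?? { total: 0, errors: 0 }
    agg.total += b.count
    if (b.level !== undefined && ERROR_LEVELS.has(b.level)) agg.errors += b.count
    totals.set(b.bucket, agg)
  }
  return bucketSet.map(bucket => {
    const agg = totals.get(bucket)
    return agg && agg.total > 0 ? (agg.errors / agg.total) * 100 : 0
  })
}
```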
frontend/src/composables/useAnalytics.ts (1)

29-45: Promise.all loses all results on single endpoint failure.

If any one of the five API calls fails, the catch block fires and all reactive refs remain null/stale—every chart shows "No data" even if four out of five endpoints succeeded. Consider Promise.allSettled so partial results can still be displayed.

♻️ Sketch using Promise.allSettled
-      const [volRes, srcRes, errRes, usrRes, hmRes] = await Promise.all([
+      const [volRes, srcRes, errRes, usrRes, hmRes] = await Promise.allSettled([
         api.get(`/teams/${teamId}/analytics/volume`, {
           params: { ...params, bucket, split_by: 'level' },
         }),
         ...
       ])
-      volume.value = volRes.data
-      topSources.value = srcRes.data
-      topErrors.value = errRes.data
-      topUsers.value = usrRes.data
-      heatmap.value = hmRes.data
+      volume.value = volRes.status === 'fulfilled' ? volRes.value.data : null
+      topSources.value = srcRes.status === 'fulfilled' ? srcRes.value.data : null
+      topErrors.value = errRes.status === 'fulfilled' ? errRes.value.data : null
+      topUsers.value = usrRes.status === 'fulfilled' ? usrRes.value.data : null
+      heatmap.value = hmRes.status === 'fulfilled' ? hmRes.value.data : null
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/composables/useAnalytics.ts` around lines 29 - 45, The current
Promise.all call in useAnalytics (which assigns [volRes, srcRes, errRes, usrRes,
hmRes]) causes all results to be discarded if any single request fails; change
to Promise.allSettled and iterate the settled results to set each reactive ref
(volume, sources, errors, users, heatmap) only when the corresponding promise is
fulfilled and assign or log the specific reason when rejected so partial data
still renders; use the same request order to map settled[index] ->
volRes/srcRes/errRes/usrRes/hmRes and update the existing refs accordingly.
frontend/src/components/TimeRangePicker.vue (1)

37-40: Extract TimeRange to a shared types location.

TimeRange is defined identically in both TimeRangePicker.vue (line 37) and useAnalytics.ts (line 8). Add it to @/api/client.ts alongside other API-related types and import from there to maintain a single source of truth.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/TimeRangePicker.vue` around lines 37 - 40, TimeRange
is duplicated; extract the interface from TimeRangePicker.vue and
useAnalytics.ts into the shared API types file by adding and exporting interface
TimeRange in `@/api/client.ts`, then replace the local definitions in
TimeRangePicker.vue and useAnalytics.ts with an import { TimeRange } from
'@/api/client.ts'; ensure the exported name matches and update any type
references to the shared TimeRange to maintain a single source of truth.
frontend/src/views/Dashboard.vue (1)

73-76: Dead CSS — .cursor-pointer is no longer applied to any element.

The card click-to-navigate was removed, so this rule is now unreferenced.

🧹 Cleanup
 <style scoped>
-.cursor-pointer {
-  cursor: pointer;
-}
 </style>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/views/Dashboard.vue` around lines 73 - 76, Remove the now-unused
CSS rule `.cursor-pointer` from the scoped style block in Dashboard.vue; locate
the `.cursor-pointer { cursor: pointer; }` declaration inside the <style scoped>
section and delete it to clean up dead CSS since no elements reference that
class any more.
backend/app/api/analytics.py (2)

46-80: Ruff S608 SQL injection warnings are false positives — add suppression comments to clarify intent.

  • trunc is resolved from BUCKET_SQL[bucket] where bucket is Literal["hour","day","week"] — only three hardcoded SQL fragments are reachable.
  • col is resolved from a static whitelist dict keyed on Literal["source","message","user_id"].
  • placeholders are $N positional markers; values are always passed via the params list.

No actual user-controlled string is interpolated. Adding # noqa: S608 with a brief justification on each f-string query keeps the intent explicit and stops the linter from re-flagging on future CI runs.

✏️ Example suppression (apply to all five query sites)
         rows = await conn.execute_query_dict(
             f"""
             SELECT {trunc} AS bucket, count(*) AS count
             FROM logs
             WHERE team_id = $1 AND timestamp >= $2 AND timestamp <= $3
             GROUP BY bucket ORDER BY bucket
-            """,
+            """,  # noqa: S608 — `trunc` is from a hardcoded whitelist dict, not user input

Also applies to: 111-135, 171-181

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/api/analytics.py` around lines 46 - 80, The SQL queries use
f-strings (e.g., the f"""...{trunc}...""" passed to conn.execute_query_dict) and
Ruff flags them as S608 even though the interpolated values are safe; replace
the linter noise by adding a suppression comment "# noqa: S608" to each f-string
query site (including those where you interpolate trunc from BUCKET_SQL, col
from the static whitelist, and positional placeholders) with a short
justification like "trunc/col value from hardcoded whitelist; parameters passed
separately", and ensure you apply this to all five query locations (the blocks
constructing rows and calling execute_query_dict that then build VolumeBucket
objects) so the linter knows these are false positives.

15-23: Consider capping the maximum queryable time range.

_default_range has no upper-bound guard. A request with bucket=hour spanning a full year emits 8,760 buckets × N series in a single query with no row-count safety net. Consider rejecting (HTTP 400) requests where to − from exceeds a reasonable maximum (e.g., 90 days for hourly, 2 years for daily) and/or adding a PostgreSQL statement_timeout for these connections.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/api/analytics.py` around lines 15 - 23, The _default_range helper
currently has no upper-bound on to_time − from_time, which allows extremely
large queries; add range validation and reject oversized ranges with a 400.
Implement either (a) extend _default_range to accept a bucket parameter (e.g.,
"hour"/"day") and enforce caps (e.g., max 90 days for hourly, 2 years for daily)
and raise fastapi.HTTPException(status_code=400, detail=...) when exceeded, or
(b) add a new validate_time_range(from_time, to_time, bucket) function called by
the endpoint before executing SQL that enforces the same caps; also optionally
set a PostgreSQL statement_timeout on the DB connection for safety. Ensure you
reference and update callers that use _default_range to perform this check so
oversized requests are rejected early.

Comment on lines +58 to +80
buckets = [VolumeBucket(bucket=str(r["bucket"]), count=r["count"]) for r in rows]
elif split_by == "level":
rows = await conn.execute_query_dict(
f"""
SELECT {trunc} AS bucket, level, count(*) AS count
FROM logs
WHERE team_id = $1 AND timestamp >= $2 AND timestamp <= $3
GROUP BY bucket, level ORDER BY bucket, level
""",
[str(team.id), start, end],
)
buckets = [VolumeBucket(bucket=str(r["bucket"]), level=r["level"], count=r["count"]) for r in rows]
else:
rows = await conn.execute_query_dict(
f"""
SELECT {trunc} AS bucket, source, count(*) AS count
FROM logs
WHERE team_id = $1 AND timestamp >= $2 AND timestamp <= $3
GROUP BY bucket, source ORDER BY bucket, source
""",
[str(team.id), start, end],
)
buckets = [VolumeBucket(bucket=str(r["bucket"]), source=r["source"], count=r["count"]) for r in rows]

⚠️ Potential issue | 🟡 Minor

Use .isoformat() instead of str() for the bucket datetime.

str(datetime_obj) produces "2024-01-01 00:00:00+00:00" (space separator), whereas ISO 8601 requires a T. ECharts time-axis parsing and JavaScript's new Date() are more reliably compatible with the T-separator form from .isoformat().

✏️ Proposed fix (all three bucket construction sites)
-        buckets = [VolumeBucket(bucket=str(r["bucket"]), count=r["count"]) for r in rows]
+        buckets = [VolumeBucket(bucket=r["bucket"].isoformat(), count=r["count"]) for r in rows]
-        buckets = [VolumeBucket(bucket=str(r["bucket"]), level=r["level"], count=r["count"]) for r in rows]
+        buckets = [VolumeBucket(bucket=r["bucket"].isoformat(), level=r["level"], count=r["count"]) for r in rows]
-        buckets = [VolumeBucket(bucket=str(r["bucket"]), source=r["source"], count=r["count"]) for r in rows]
+        buckets = [VolumeBucket(bucket=r["bucket"].isoformat(), source=r["source"], count=r["count"]) for r in rows]
🧰 Tools
🪛 Ruff (0.15.1)

[error] 61-66: Possible SQL injection vector through string-based query construction

(S608)


[error] 72-77: Possible SQL injection vector through string-based query construction

(S608)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/api/analytics.py` around lines 58 - 80, The bucket datetime
strings are built with str(r["bucket"]) which yields a space-separated
timestamp; update all three VolumeBucket constructions (the list comprehensions
that create buckets for the default case, the "level" split, and the "source"
split) to call r["bucket"].isoformat() instead of str(r["bucket"]) so the bucket
values use ISO8601 with the "T" separator and are parsed correctly by
JS/ECharts.

Comment on lines +80 to +89
function emitCustom() {
if (customFrom.value && customTo.value) {
const from = new Date(customFrom.value).toISOString()
const to = new Date(customTo.value).toISOString()
const diffMs = new Date(to).getTime() - new Date(from).getTime()
const diffDays = diffMs / (1000 * 60 * 60 * 24)
const bucket = diffDays <= 2 ? 'hour' : diffDays <= 14 ? 'day' : 'week'
emit('change', { from, to }, bucket)
}
}

⚠️ Potential issue | 🟡 Minor

Missing from < to validation in custom range.

If the user enters a from date after the to date, diffMs becomes negative, diffDays is negative, and the bucket will silently default to 'hour'. The API call will receive an inverted range, likely returning empty or unexpected results with no user feedback.

🛡️ Proposed fix
 function emitCustom() {
   if (customFrom.value && customTo.value) {
     const from = new Date(customFrom.value).toISOString()
     const to = new Date(customTo.value).toISOString()
+    if (from >= to) return  // silently ignore or show a validation error
     const diffMs = new Date(to).getTime() - new Date(from).getTime()
     const diffDays = diffMs / (1000 * 60 * 60 * 24)
     const bucket = diffDays <= 2 ? 'hour' : diffDays <= 14 ? 'day' : 'week'
     emit('change', { from, to }, bucket)
   }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/TimeRangePicker.vue` around lines 80 - 89, The
emitCustom function currently computes diffDays and bucket even when customFrom
>= customTo, causing negative diffs and wrong bucket; update emitCustom (using
customFrom.value, customTo.value, from, to, diffMs) to validate that new
Date(customFrom.value).getTime() < new Date(customTo.value).getTime() before
computing bucket and calling emit('change', ...); on invalid ranges either swap
the values, or reject/notify the user (e.g., emit an 'invalid-range' or set a
validation flag) and do not call emit('change') — ensure the bucket selection
runs only after the valid from/to check.

Comment on lines +165 to +173
onMounted(async () => {
try {
const response = await api.get(`/admin/teams/${teamId}`)
teamName.value = response.data.name
} catch {
teamName.value = 'Unknown Team'
}
picker.value?.emitInitial()
})

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Scripts executed against the repository (9cb14c1ec0/SimpleLogs) to verify this finding:

# Search for admin teams endpoint definition
rg -n 'admin.*teams' --type=py -C3

# Search for regular teams endpoints
rg -n '@.*route.*teams|def.*get.*team' --type=py -C5

# Look for get_team_member usage and what it provides
rg -n 'get_team_member' --type=py -C5

# Check for endpoint permission decorators/checks
rg -n 'require_admin|admin_required|require_team_member' --type=py -C3

# Check the full get_team_member function and what it returns
rg -n 'async def get_team_member' -A 20 --type=py

# Look for any non-admin /teams endpoints in the routes
rg -n '@router' backend/app/api/analytics.py backend/app/api/logs.py -A 1

# Check if Team model or responses include team name
rg -n 'class Team|class TeamResponse' --type=py -A 10

# Check if analytics endpoints return team info or just data
rg -n 'response_model' backend/app/api/analytics.py -B 3

# Check the VolumeResponse and other response schemas to see if they include team info
rg -n 'class VolumeResponse|class TopResponse|class HeatmapResponse' backend/app/schemas --type=py -A 8


Team name fetch uses /admin/teams/ endpoint—non-admin members will see "Unknown Team".

The analytics endpoints are gated behind team membership (get_team_member()), meaning regular members can view analytics. However, fetching the team name at line 167 uses /admin/teams/${teamId}, which requires admin privileges. Non-admin users will hit a 403, silently fall into the catch block, and always see "Unknown Team Analytics" in the header.

Create a new public endpoint (e.g., GET /{team_id}/info) using get_team_member() to return team metadata, or include the team name in one of the existing analytics responses.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/views/AnalyticsView.vue` around lines 165 - 173, The frontend
calls api.get(`/admin/teams/${teamId}`) inside the onMounted block to populate
teamName, but that admin-only endpoint causes non-admins to get a 403 and see
"Unknown Team"; instead add a public team-info endpoint that uses
get_team_member() (e.g., GET /{team_id}/info) which returns team metadata (name)
for any team member, then update the frontend to call that new endpoint (replace
api.get(`/admin/teams/${teamId}`) with api.get(`/${teamId}/info`) or the
existing analytics response that includes name) so teamName.value is correctly
populated for non-admin members.

@9cb14c1ec0 9cb14c1ec0 merged commit 264d51f into master Feb 19, 2026
3 checks passed
@9cb14c1ec0 9cb14c1ec0 deleted the analytics branch February 19, 2026 16:27