add analytics dashboard with 7 charts and 3 API endpoints #6
9cb14c1ec0 merged 1 commit into master from
Conversation
Backend: volume (time-bucketed counts by level/source), top (top N sources/messages/users), and heatmap (source x level grid) endpoints with raw SQL for partition pruning. All gated behind get_team_member().
Frontend: ECharts-based dashboard at /teams/:teamId/analytics with log volume stacked bar, error rate area, level donut, top sources bar, top error messages table, logs per user bar, and source x level heatmap. Includes time range picker (24h/7d/30d/custom) and navigation links from Dashboard and LogsView.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
📝 Walkthrough
A complete analytics dashboard feature is added with three backend FastAPI endpoints (volume, top, heatmap) computing aggregated log statistics, paired with frontend components including a time range picker, chart visualizations using ECharts, and composables for data fetching and chart rendering.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant User as Frontend User
    participant UI as AnalyticsView Component
    participant Composable as useAnalytics Composable
    participant API as Backend API
    participant DB as Database
    User->>UI: Opens analytics dashboard / selects time range
    activate UI
    UI->>Composable: Call fetchAll(range, bucket)
    activate Composable
    Composable->>API: POST /teams/{id}/analytics/volume
    Composable->>API: POST /teams/{id}/analytics/top (sources)
    Composable->>API: POST /teams/{id}/analytics/top (errors)
    Composable->>API: POST /teams/{id}/analytics/top (users)
    Composable->>API: POST /teams/{id}/analytics/heatmap
    par Parallel API Calls
        API->>DB: Query bucketed log counts
        API->>DB: Query top sources
        API->>DB: Query top error messages
        API->>DB: Query top user_ids
        API->>DB: Query source/level aggregates
    end
    DB-->>API: Return aggregated results
    API-->>Composable: VolumeResponse
    API-->>Composable: TopResponse (sources)
    API-->>Composable: TopResponse (errors)
    API-->>Composable: TopResponse (users)
    API-->>Composable: HeatmapResponse
    deactivate Composable
    Composable->>UI: Update reactive state
    UI->>UI: Compute chart options via useChartOptions
    UI->>UI: Render VChart components with options
    deactivate UI
    UI-->>User: Display volume, level, error rate, heatmap charts
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Poem
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing Touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

@coderabbitai review

✅ Actions performed
Review triggered.
Actionable comments posted: 3
🧹 Nitpick comments (7)
frontend/src/composables/useChartOptions.ts (2)
44-56: Quadratic lookup when building volume series data.
For each `(level, bucket)` pair, `find()` scans the entire `data.buckets` array. With 30-day hourly data (720 buckets × 5 levels × 3600 entries ≈ 13M comparisons), this can cause a noticeable stall. Pre-index with a `Map` for O(1) lookups.

♻️ Proposed fix
```diff
 const bucketSet = [...new Set(data.buckets.map(b => b.bucket))].sort()
 const xLabels = bucketSet.map(formatBucket)
+
+// Pre-index: "bucket|level" → count
+const index = new Map<string, number>()
+for (const b of data.buckets) {
+  index.set(`${b.bucket}|${b.level}`, b.count)
+}
+
 // Group by level
 const series = ALL_LEVELS.map(level => {
   const counts = bucketSet.map(bucket => {
-    const entry = data.buckets.find(b => b.bucket === bucket && b.level === level)
-    return entry?.count ?? 0
+    return index.get(`${bucket}|${level}`) ?? 0
   })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/composables/useChartOptions.ts` around lines 44 - 56, The current series builder (involving ALL_LEVELS, bucketSet, and data.buckets) does an O(n*m) lookup using data.buckets.find for every (level, bucket) pair causing quadratic cost; fix it by pre-indexing data.buckets into a Map keyed by a combined bucket+level string (or nested Map) before constructing series, then replace the find calls inside the series generation with O(1) Map lookups so the series creation loop (the function that builds series) uses the Map to get entry.count (falling back to 0) and still applies LEVEL_COLORS[level] and stack/type as before.
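Outside the Vue composable, the indexing pattern can be sketched in isolation. The `VolumeBucket` shape and `buildSeries` helper below are illustrative stand-ins, not the PR's actual code:

```typescript
// Hypothetical shapes/names for illustration; the real composable differs.
interface VolumeBucket { bucket: string; level: string; count: number }

// Build one series of counts per level: O(N) index build, then O(1) lookups,
// instead of an O(N) find() scan for every (level, bucket) pair.
function buildSeries(buckets: VolumeBucket[], levels: string[]): Map<string, number[]> {
  const bucketSet = [...new Set(buckets.map(b => b.bucket))].sort()
  const index = new Map<string, number>()
  for (const b of buckets) {
    index.set(`${b.bucket}|${b.level}`, b.count)
  }
  const series = new Map<string, number[]>()
  for (const level of levels) {
    series.set(level, bucketSet.map(bucket => index.get(`${bucket}|${level}`) ?? 0))
  }
  return series
}
```

Building the `Map` once costs O(N); each subsequent lookup is O(1), so total work drops from O(levels × buckets × N) to O(N + levels × buckets).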
76-84: Same quadratic pattern in error-rate computation.
`entries = data.buckets.filter(b => b.bucket === bucket)` inside the `bucketSet.map` loop results in the same O(B × N) scan. The pre-built `Map` from the volume chart (or a separate grouping by bucket) would resolve this as well.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/composables/useChartOptions.ts` around lines 76 - 84, The rates calculation loops bucketSet.map and calls data.buckets.filter for each bucket, causing O(B×N) work; instead pre-group data.buckets once (e.g., build a Map keyed by bucket or a reducer that accumulates total and error counts per bucket) and then compute rates by looking up the bucket's aggregated totals in that Map inside the bucketSet.map (update the rates variable to use the precomputed group to derive total and errors rather than filtering entries each iteration).

frontend/src/composables/useAnalytics.ts (1)
29-45: `Promise.all` loses all results on single endpoint failure.
If any one of the five API calls fails, the catch block fires and all reactive refs remain `null`/stale, so every chart shows "No data" even if four out of five endpoints succeeded. Consider `Promise.allSettled` so partial results can still be displayed.

♻️ Sketch using Promise.allSettled
```diff
-    const [volRes, srcRes, errRes, usrRes, hmRes] = await Promise.all([
+    const [volRes, srcRes, errRes, usrRes, hmRes] = await Promise.allSettled([
       api.get(`/teams/${teamId}/analytics/volume`, {
         params: { ...params, bucket, split_by: 'level' },
       }),
       ...
     ])
-    volume.value = volRes.data
-    topSources.value = srcRes.data
-    topErrors.value = errRes.data
-    topUsers.value = usrRes.data
-    heatmap.value = hmRes.data
+    volume.value = volRes.status === 'fulfilled' ? volRes.value.data : null
+    topSources.value = srcRes.status === 'fulfilled' ? srcRes.value.data : null
+    topErrors.value = errRes.status === 'fulfilled' ? errRes.value.data : null
+    topUsers.value = usrRes.status === 'fulfilled' ? usrRes.value.data : null
+    heatmap.value = hmRes.status === 'fulfilled' ? hmRes.value.data : null
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/composables/useAnalytics.ts` around lines 29 - 45, The current Promise.all call in useAnalytics (which assigns [volRes, srcRes, errRes, usrRes, hmRes]) causes all results to be discarded if any single request fails; change to Promise.allSettled and iterate the settled results to set each reactive ref (volume, sources, errors, users, heatmap) only when the corresponding promise is fulfilled and assign or log the specific reason when rejected so partial data still renders; use the same request order to map settled[index] -> volRes/srcRes/errRes/usrRes/hmRes and update the existing refs accordingly.frontend/src/components/TimeRangePicker.vue (1)
37-40: Extract `TimeRange` to a shared types location.
`TimeRange` is defined identically in both `TimeRangePicker.vue` (line 37) and `useAnalytics.ts` (line 8). Add it to `@/api/client.ts` alongside other API-related types and import from there to maintain a single source of truth.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/TimeRangePicker.vue` around lines 37 - 40, TimeRange is duplicated; extract the interface from TimeRangePicker.vue and useAnalytics.ts into the shared API types file by adding and exporting interface TimeRange in `@/api/client.ts`, then replace the local definitions in TimeRangePicker.vue and useAnalytics.ts with an import { TimeRange } from '@/api/client.ts'; ensure the exported name matches and update any type references to the shared TimeRange to maintain a single source of truth.

frontend/src/views/Dashboard.vue (1)
73-76: Dead CSS: `.cursor-pointer` is no longer applied to any element.
The card click-to-navigate was removed, so this rule is now unreferenced.
🧹 Cleanup
```diff
 <style scoped>
-.cursor-pointer {
-  cursor: pointer;
-}
 </style>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/views/Dashboard.vue` around lines 73 - 76, Remove the now-unused CSS rule `.cursor-pointer` from the scoped style block in Dashboard.vue; locate the `.cursor-pointer { cursor: pointer; }` declaration inside the <style scoped> section and delete it to clean up dead CSS since no elements reference that class any more.

backend/app/api/analytics.py (2)
46-80: Ruff S608 SQL injection warnings are false positives; add suppression comments to clarify intent.
- `trunc` is resolved from `BUCKET_SQL[bucket]` where `bucket` is `Literal["hour","day","week"]`: only three hardcoded SQL fragments are reachable.
- `col` is resolved from a static whitelist dict keyed on `Literal["source","message","user_id"]`.
- placeholders are `$N` positional markers; values are always passed via the params list.

No actual user-controlled string is interpolated. Adding `# noqa: S608` with a brief justification on each f-string query keeps the intent explicit and stops the linter from re-flagging on future CI runs.

✏️ Example suppression (apply to all five query sites)

```diff
     rows = await conn.execute_query_dict(
         f"""
         SELECT {trunc} AS bucket, count(*) AS count
         FROM logs
         WHERE team_id = $1 AND timestamp >= $2 AND timestamp <= $3
         GROUP BY bucket ORDER BY bucket
-        """,
+        """,  # noqa: S608 - `trunc` is from a hardcoded whitelist dict, not user input
```

Also applies to: 111-135, 171-181
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/api/analytics.py` around lines 46 - 80, The SQL queries use f-strings (e.g., the f"""...{trunc}...""" passed to conn.execute_query_dict) and Ruff flags them as S608 even though the interpolated values are safe; replace the linter noise by adding a suppression comment "# noqa: S608" to each f-string query site (including those where you interpolate trunc from BUCKET_SQL, col from the static whitelist, and positional placeholders) with a short justification like "trunc/col value from hardcoded whitelist; parameters passed separately", and ensure you apply this to all five query locations (the blocks constructing rows and calling execute_query_dict that then build VolumeBucket objects) so the linter knows these are false positives.
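The whitelist property can also be demonstrated in isolation. This TypeScript sketch mirrors the idea with hypothetical names (`BUCKET_SQL` and `bucketExpr` here are illustrative, not the backend's actual code):

```typescript
// Hypothetical mirror of the backend's whitelist: the lookup key is constrained,
// so only these three hardcoded SQL fragments can ever be interpolated.
type Bucket = 'hour' | 'day' | 'week'

const BUCKET_SQL: Record<Bucket, string> = {
  hour: "date_trunc('hour', timestamp)",
  day: "date_trunc('day', timestamp)",
  week: "date_trunc('week', timestamp)",
}

// Reject anything outside the whitelist before it gets near a query string.
function bucketExpr(bucket: string): string {
  if (!Object.prototype.hasOwnProperty.call(BUCKET_SQL, bucket)) {
    throw new Error(`unsupported bucket: ${bucket}`)
  }
  return BUCKET_SQL[bucket as Bucket]
}
```

Because arbitrary input can never select a fragment, interpolating the looked-up value into an f-string (or template literal) is not an injection vector, which is exactly why the S608 findings are noise here.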
15-23: Consider capping the maximum queryable time range.
`_default_range` has no upper-bound guard. A request with `bucket=hour` spanning a full year emits 8,760 buckets × N series in a single query with no row-count safety net. Consider rejecting (HTTP 400) requests where `to − from` exceeds a reasonable maximum (e.g., 90 days for hourly, 2 years for daily) and/or adding a PostgreSQL `statement_timeout` for these connections.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/api/analytics.py` around lines 15 - 23, The _default_range helper currently has no upper-bound on to_time − from_time, which allows extremely large queries; add range validation and reject oversized ranges with a 400. Implement either (a) extend _default_range to accept a bucket parameter (e.g., "hour"/"day") and enforce caps (e.g., max 90 days for hourly, 2 years for daily) and raise fastapi.HTTPException(status_code=400, detail=...) when exceeded, or (b) add a new validate_time_range(from_time, to_time, bucket) function called by the endpoint before executing SQL that enforces the same caps; also optionally set a PostgreSQL statement_timeout on the DB connection for safety. Ensure you reference and update callers that use _default_range to perform this check so oversized requests are rejected early.
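The cap check itself is small. A sketch of the validation logic (TypeScript for illustration; the limits are placeholder values taken from the suggestion above, not the PR's code):

```typescript
// Illustrative caps: 90 days for hourly buckets, 2 years for daily,
// a generous ceiling for weekly. Tune to the deployment's data volume.
const MAX_RANGE_DAYS: Record<'hour' | 'day' | 'week', number> = {
  hour: 90,
  day: 730,
  week: 3650,
}

const DAY_MS = 24 * 60 * 60 * 1000

// True only for a positive range within the cap for the requested bucket size;
// the caller would map `false` to an HTTP 400.
function isRangeAllowed(fromMs: number, toMs: number, bucket: 'hour' | 'day' | 'week'): boolean {
  const days = (toMs - fromMs) / DAY_MS
  return days > 0 && days <= MAX_RANGE_DAYS[bucket]
}
```

Rejecting early keeps a runaway request from ever reaching the database; a `statement_timeout` then acts as a second line of defense for queries that pass the cap but are still slow.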
```python
        buckets = [VolumeBucket(bucket=str(r["bucket"]), count=r["count"]) for r in rows]
    elif split_by == "level":
        rows = await conn.execute_query_dict(
            f"""
            SELECT {trunc} AS bucket, level, count(*) AS count
            FROM logs
            WHERE team_id = $1 AND timestamp >= $2 AND timestamp <= $3
            GROUP BY bucket, level ORDER BY bucket, level
            """,
            [str(team.id), start, end],
        )
        buckets = [VolumeBucket(bucket=str(r["bucket"]), level=r["level"], count=r["count"]) for r in rows]
    else:
        rows = await conn.execute_query_dict(
            f"""
            SELECT {trunc} AS bucket, source, count(*) AS count
            FROM logs
            WHERE team_id = $1 AND timestamp >= $2 AND timestamp <= $3
            GROUP BY bucket, source ORDER BY bucket, source
            """,
            [str(team.id), start, end],
        )
        buckets = [VolumeBucket(bucket=str(r["bucket"]), source=r["source"], count=r["count"]) for r in rows]
```
Use `.isoformat()` instead of `str()` for the bucket datetime.
`str(datetime_obj)` produces `"2024-01-01 00:00:00+00:00"` (space separator), whereas ISO 8601 requires a `T`. ECharts time-axis parsing and JavaScript's `new Date()` are more reliably compatible with the `T`-separator form from `.isoformat()`.
✏️ Proposed fix (all three bucket construction sites)

```diff
-buckets = [VolumeBucket(bucket=str(r["bucket"]), count=r["count"]) for r in rows]
+buckets = [VolumeBucket(bucket=r["bucket"].isoformat(), count=r["count"]) for r in rows]

-buckets = [VolumeBucket(bucket=str(r["bucket"]), level=r["level"], count=r["count"]) for r in rows]
+buckets = [VolumeBucket(bucket=r["bucket"].isoformat(), level=r["level"], count=r["count"]) for r in rows]

-buckets = [VolumeBucket(bucket=str(r["bucket"]), source=r["source"], count=r["count"]) for r in rows]
+buckets = [VolumeBucket(bucket=r["bucket"].isoformat(), source=r["source"], count=r["count"]) for r in rows]
```

🧰 Tools
🪛 Ruff (0.15.1)
[error] 61-66: Possible SQL injection vector through string-based query construction (S608)
[error] 72-77: Possible SQL injection vector through string-based query construction (S608)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/api/analytics.py` around lines 58 - 80, The bucket datetime
strings are built with str(r["bucket"]) which yields a space-separated
timestamp; update all three VolumeBucket constructions (the list comprehensions
that create buckets for the default case, the "level" split, and the "source"
split) to call r["bucket"].isoformat() instead of str(r["bucket"]) so the bucket
values use ISO8601 with the "T" separator and are parsed correctly by
JS/ECharts.
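If the backend fix lands later, a defensive client-side normalizer is a cheap interim measure. A sketch (`toIsoT` is a hypothetical helper, not part of the PR):

```typescript
// Normalize "2024-01-01 00:00:00+00:00" to ISO 8601's "T" form before handing
// it to new Date() or ECharts. Some engines parse the space-separated form,
// others reject it; the "T" form is reliable everywhere.
// Strings already containing "T" pass through unchanged.
function toIsoT(ts: string): string {
  return ts.includes('T') ? ts : ts.replace(' ', 'T')
}
```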
```ts
function emitCustom() {
  if (customFrom.value && customTo.value) {
    const from = new Date(customFrom.value).toISOString()
    const to = new Date(customTo.value).toISOString()
    const diffMs = new Date(to).getTime() - new Date(from).getTime()
    const diffDays = diffMs / (1000 * 60 * 60 * 24)
    const bucket = diffDays <= 2 ? 'hour' : diffDays <= 14 ? 'day' : 'week'
    emit('change', { from, to }, bucket)
  }
}
```
Missing `from < to` validation in custom range.
If the user enters a from date after the to date, `diffMs` becomes negative, `diffDays` is negative, and the bucket will silently default to `'hour'`. The API call will receive an inverted range, likely returning empty or unexpected results with no user feedback.
🛡️ Proposed fix

```diff
 function emitCustom() {
   if (customFrom.value && customTo.value) {
     const from = new Date(customFrom.value).toISOString()
     const to = new Date(customTo.value).toISOString()
+    if (from >= to) return // silently ignore or show a validation error
     const diffMs = new Date(to).getTime() - new Date(from).getTime()
     const diffDays = diffMs / (1000 * 60 * 60 * 24)
     const bucket = diffDays <= 2 ? 'hour' : diffDays <= 14 ? 'day' : 'week'
     emit('change', { from, to }, bucket)
   }
 }
```
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
function emitCustom() {
  if (customFrom.value && customTo.value) {
    const from = new Date(customFrom.value).toISOString()
    const to = new Date(customTo.value).toISOString()
    if (from >= to) return // silently ignore or show a validation error
    const diffMs = new Date(to).getTime() - new Date(from).getTime()
    const diffDays = diffMs / (1000 * 60 * 60 * 24)
    const bucket = diffDays <= 2 ? 'hour' : diffDays <= 14 ? 'day' : 'week'
    emit('change', { from, to }, bucket)
  }
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/components/TimeRangePicker.vue` around lines 80 - 89, The
emitCustom function currently computes diffDays and bucket even when customFrom
>= customTo, causing negative diffs and wrong bucket; update emitCustom (using
customFrom.value, customTo.value, from, to, diffMs) to validate that new
Date(customFrom.value).getTime() < new Date(customTo.value).getTime() before
computing bucket and calling emit('change', ...); on invalid ranges either swap
the values, or reject/notify the user (e.g., emit an 'invalid-range' or set a
validation flag) and do not call emit('change') — ensure the bucket selection
runs only after the valid from/to check.
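Factoring the guard and bucket selection into a pure helper also makes the behavior unit-testable. A sketch (`pickBucket` is hypothetical, not in the PR):

```typescript
// Returns the bucket size for a valid range, or null when the range is empty,
// inverted, or unparseable, so the caller can skip emitting 'change'.
function pickBucket(fromIso: string, toIso: string): 'hour' | 'day' | 'week' | null {
  const diffMs = new Date(toIso).getTime() - new Date(fromIso).getTime()
  if (!Number.isFinite(diffMs) || diffMs <= 0) return null
  const diffDays = diffMs / (1000 * 60 * 60 * 24)
  return diffDays <= 2 ? 'hour' : diffDays <= 14 ? 'day' : 'week'
}
```

`emitCustom` would then emit only when `pickBucket` returns a non-null value, which covers both the inverted-range and unparseable-date cases in one check.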
```ts
onMounted(async () => {
  try {
    const response = await api.get(`/admin/teams/${teamId}`)
    teamName.value = response.data.name
  } catch {
    teamName.value = 'Unknown Team'
  }
  picker.value?.emitInitial()
})
```
🧩 Analysis chain

🏁 Script executed:

```shell
# Search for admin teams endpoint definition
rg -n 'admin.*teams' --type=py -C3
```

Repository: 9cb14c1ec0/SimpleLogs
Length of output: 47

🏁 Script executed:

```shell
# Search for regular teams endpoints
rg -n '@.*route.*teams|def.*get.*team' --type=py -C5
```

Repository: 9cb14c1ec0/SimpleLogs
Length of output: 9056

🏁 Script executed:

```shell
# Look for get_team_member usage and what it provides
rg -n 'get_team_member' --type=py -C5
```

Repository: 9cb14c1ec0/SimpleLogs
Length of output: 7298

🏁 Script executed:

```shell
# Check for endpoint permission decorators/checks
rg -n 'require_admin|admin_required|require_team_member' --type=py -C3
```

Repository: 9cb14c1ec0/SimpleLogs
Length of output: 47

🏁 Script executed:

```shell
# Check the full get_team_member function and what it returns
rg -n 'async def get_team_member' -A 20 --type=py
```

Repository: 9cb14c1ec0/SimpleLogs
Length of output: 1384

🏁 Script executed:

```shell
# Look for any non-admin /teams endpoints in the routes
rg -n '@router' backend/app/api/analytics.py backend/app/api/logs.py -A 1
```

Repository: 9cb14c1ec0/SimpleLogs
Length of output: 1157

🏁 Script executed:

```shell
# Check if Team model or responses include team name
rg -n 'class Team|class TeamResponse' --type=py -A 10
```

Repository: 9cb14c1ec0/SimpleLogs
Length of output: 3737

🏁 Script executed:

```shell
# Check if analytics endpoints return team info or just data
rg -n 'response_model' backend/app/api/analytics.py -B 3
```

Repository: 9cb14c1ec0/SimpleLogs
Length of output: 484

🏁 Script executed:

```shell
# Check the VolumeResponse and other response schemas to see if they include team info
rg -n 'class VolumeResponse|class TopResponse|class HeatmapResponse' backend/app/schemas --type=py -A 8
```

Repository: 9cb14c1ec0/SimpleLogs
Length of output: 1254
Team name fetch uses the `/admin/teams/` endpoint, so non-admin members will see "Unknown Team".
The analytics endpoints are gated behind team membership (`get_team_member()`), meaning regular members can view analytics. However, fetching the team name at line 167 uses `/admin/teams/${teamId}`, which requires admin privileges. Non-admin users will hit a 403, silently fall into the catch block, and always see "Unknown Team Analytics" in the header.
Create a new public endpoint (e.g., `GET /{team_id}/info`) using `get_team_member()` to return team metadata, or include the team name in one of the existing analytics responses.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/views/AnalyticsView.vue` around lines 165 - 173, The frontend
calls api.get(`/admin/teams/${teamId}`) inside the onMounted block to populate
teamName, but that admin-only endpoint causes non-admins to get a 403 and see
"Unknown Team"; instead add a public team-info endpoint that uses
get_team_member() (e.g., GET /{team_id}/info) which returns team metadata (name)
for any team member, then update the frontend to call that new endpoint (replace
api.get(`/admin/teams/${teamId}`) with api.get(`/${teamId}/info`) or the
existing analytics response that includes name) so teamName.value is correctly
populated for non-admin members.
Summary by CodeRabbit