
Add staged tasks CRUD + daily scores endpoints (Day 2) #5375

Merged: beastoin merged 11 commits into collab/5302-integration from collab/5302-kai-staged-tasks-scores on Mar 5, 2026

Conversation

beastoin (Collaborator) commented on Mar 5, 2026

Summary

Port staged tasks (7 endpoints) and daily scores (2 endpoints) from Rust desktop backend to Python, as part of Day 2 migration (#5302).

Endpoints added

  • POST /v1/staged-tasks — create with case-insensitive dedup
  • GET /v1/staged-tasks — list filtered by completed=false, ordered by relevance_score ASC + created_at DESC tie-break
  • DELETE /v1/staged-tasks/{id} — idempotent hard-delete (matches Rust)
  • PATCH /v1/staged-tasks/batch-scores — batch update relevance scores
  • POST /v1/staged-tasks/promote — promote top-ranked task to action_items (max 5 active AI tasks, [screen] prefix/suffix dedup)
  • GET /v1/daily-score — daily completion score (due_at filter)
  • GET /v1/scores — daily + weekly + overall scores with default_tab selection

Key behaviors matching Rust

  • Create dedup: case-insensitive description match, skips deleted tasks
  • List: completed=false Firestore filter, client-side deleted skip, created_at DESC tie-break
  • Delete: idempotent (no 404 for missing items)
  • Weekly score: uses created_at range (not due_at)
  • Promote: strips [screen] prefix/suffix for dedup comparison
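The dedup rules above can be sketched as a small normalization helper. This is a hypothetical illustration (the function name and exact normalization order in the PR may differ), but it captures the described behavior: whitespace trim, [screen] prefix/suffix stripping, and case-insensitive comparison.

```python
def normalize_for_dedup(description: str) -> str:
    # Hypothetical helper illustrating the dedup rules described above:
    # trim whitespace, strip a [screen] prefix/suffix, and lowercase
    # so description comparisons are case-insensitive.
    text = description.strip()
    if text.lower().startswith('[screen]'):
        text = text[len('[screen]'):].strip()
    if text.lower().endswith('[screen]'):
        text = text[:-len('[screen]')].strip()
    return text.lower()

print(normalize_for_dedup('  [screen] Buy milk '))  # buy milk
print(normalize_for_dedup('Buy Milk [screen]'))     # buy milk
```

Two staged tasks would then be treated as duplicates whenever their normalized forms are equal.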

Review cycle changes

  • Round 1: Fixed 6 Rust-parity issues (dedup, filters, idempotent delete, weekly field)
  • Round 2: Added created_at DESC tie-break ordering, moved in-function imports to top level
  • Round 3: Approved (PR_APPROVED_LGTM)

Test coverage (50 tests)

  • 8 model validation tests
  • 16 staged task endpoint tests (incl. [screen] dedup, boundary caps, promote at 4 active)
  • 8 score endpoint tests
  • 18 DB-layer unit tests (dedup logic, completed/deleted filtering, scoring field verification, idempotent delete)

All 50 tests pass. Tests are registered in test.sh.

Test plan

  • 50 unit tests pass (pytest tests/unit/test_staged_tasks.py -v)
  • DB-layer tests verify Firestore query fields (daily=due_at, weekly=created_at)
  • DB-layer tests verify dedup logic (case-insensitive, whitespace trim, skip deleted)
  • Boundary tests: description 2000/2001, limit 0/1/501, offset -1
  • [screen] prefix/suffix normalization tested in promote flow
  • Codex reviewer approved (3 rounds)
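The boundary values in the test plan imply validation limits along these lines. The constraint names and exact bounds here are assumptions inferred from the 2000/2001 and 0/1/501 test points, not read from the code:

```python
MAX_DESCRIPTION_LEN = 2000  # inferred from the 2000/2001 boundary tests

def validate_staged_task_params(description: str, limit: int = 100, offset: int = 0) -> None:
    # Hypothetical sketch of the boundaries the tests probe:
    # raise on out-of-range inputs, accept the boundary values.
    if len(description) > MAX_DESCRIPTION_LEN:
        raise ValueError('description too long')
    if not 1 <= limit <= 500:  # 0 and 501 rejected, 1 accepted
        raise ValueError('limit out of range')
    if offset < 0:  # offset -1 rejected
        raise ValueError('offset must be non-negative')

validate_staged_task_params('x' * 2000, limit=1, offset=0)  # passes at the boundaries
```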

Closes part of #5302

🤖 Generated with Claude Code

greptile-apps bot (Contributor) commented on Mar 5, 2026

Greptile Summary

This PR ports the desktop staged tasks CRUD and daily/weekly/overall score endpoints from a Rust backend to Python/FastAPI, adding 7 new endpoints and a clean database abstraction layer in database/staged_tasks.py. The implementation is well-structured and the test coverage (30 tests) is thorough, but there are three correctness issues worth fixing before merge:

  1. In-function imports (rule violation + dead code): `from datetime import date as date_type` (line 229) and `from datetime import timedelta` (line 255) are both imported inside endpoint function bodies, violating the project's no-in-function-imports rule. The `date_type` alias is never referenced, making it dead code.

  2. Timezone-naive `datetime.now().date()`: Both score endpoints (lines 237 and 263) use `datetime.now().date()` to determine "today" when constructing UTC boundary strings (`T00:00:00Z`/`T23:59:59.999Z`). This can silently mis-classify items near midnight depending on the server's timezone offset; `datetime.now(timezone.utc).date()` should be used instead.

  3. Unhandled `NotFound` on batch score update: `batch.update()` in Firestore (line 94, `database/staged_tasks.py`) raises `google.cloud.exceptions.NotFound` at commit time if any document in the batch doesn't exist. The router has no error handling around this call, so a single invalid ID in the request produces an unhandled 500 error instead of a 400/404 response.

Confidence Score: 2/5

  • Contains three concrete correctness bugs (timezone-naive date boundaries, unhandled Firestore exception, import rule violations) that should be fixed before merge.
  • The PR has solid structure and good test coverage, but three verified bugs warrant fixes: (1) in-function imports violate project rules and one is dead code, (2) timezone-naive datetime.now().date() in both score endpoints causes silent date-boundary mismatches with UTC Firestore timestamps depending on server TZ, (3) unhandled NotFound exception in batch update produces unhandled 500s on invalid document IDs instead of 400/404. These are all straightforward fixes but important for correctness.
  • backend/routers/staged_tasks.py (in-function imports at lines 229, 255; timezone issues at lines 237, 263) and backend/database/staged_tasks.py (unhandled NotFound at lines 94-95)

Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant Router as routers/staged_tasks.py
    participant DB as database/staged_tasks.py
    participant Firestore

    Note over Client,Firestore: POST /v1/staged-tasks/promote
    Client->>Router: POST /v1/staged-tasks/promote
    Router->>DB: get_active_ai_action_items(uid)
    DB->>Firestore: query action_items where from_staged=true, completed=false
    Firestore-->>DB: active items
    DB-->>Router: active_items list

    alt len(active_items) >= 5
        Router-->>Client: {promoted: false, reason: "max 5"}
    else
        Router->>DB: get_staged_tasks(uid, limit=20)
        DB->>Firestore: query staged_tasks ORDER BY relevance_score ASC LIMIT 21
        Firestore-->>DB: staged items
        DB-->>Router: staged_items

        loop for each staged task
            Router->>Router: normalize description, check against existing_descriptions + seen_descriptions
            alt duplicate
                Router->>Router: add to duplicate_ids
            else first non-duplicate
                Router->>Router: set as selected_task
            end
        end

        opt duplicate_ids not empty
            Router->>DB: delete_staged_tasks_batch(uid, duplicate_ids)
            DB->>Firestore: batch.delete() for each duplicate
        end

        alt selected_task is None
            Router-->>Client: {promoted: false, reason: "all duplicates"}
        else
            Router->>DB: promote_staged_task(uid, selected_task)
            DB->>Firestore: action_items.add(action_item_data with from_staged=true)
            Firestore-->>DB: new action item doc ref
            DB-->>Router: action_item with id

            Router->>DB: delete_staged_task(uid, selected_task.id)
            DB->>Firestore: staged_tasks/{id}.delete()

            Router-->>Client: {promoted: true, promoted_task: {...}}
        end
    end
```

Last reviewed commit: 67e6f59

Comment on lines +229 to +255
```python
    from datetime import date as date_type

    if date:
        try:
            parsed = datetime.strptime(date, '%Y-%m-%d').date()
        except ValueError:
            raise HTTPException(status_code=400, detail='Invalid date format, use YYYY-MM-DD')
    else:
        parsed = datetime.now().date()

    date_str = parsed.strftime('%Y-%m-%d')
    due_start = f'{date_str}T00:00:00Z'
    due_end = f'{date_str}T23:59:59.999Z'

    completed, total = staged_tasks_db.get_action_items_for_daily_score(uid, due_start, due_end)
    score = (completed / total * 100.0) if total > 0 else 0.0

    return DailyScoreResponse(score=score, completed_tasks=completed, total_tasks=total, date=date_str)


@router.get('/v1/scores', response_model=ScoresResponse, tags=['scores'])
def get_scores(
    date: Optional[str] = Query(default=None, description='Date in YYYY-MM-DD format'),
    uid: str = Depends(auth.get_current_user_uid),
):
    """Get daily, weekly, and overall scores with default tab selection."""
    from datetime import timedelta
```

In-function imports violate project rules. The `date as date_type` import on line 229 is also never used (dead code).

Both `timedelta` (line 255) and `date_type` (line 229) should be moved to the top-level imports alongside the existing `from datetime import datetime` on line 13. The unused `date_type` alias should be removed entirely.

Suggested change:

```diff
-    from datetime import date as date_type
-
     if date:
         try:
             parsed = datetime.strptime(date, '%Y-%m-%d').date()
         except ValueError:
             raise HTTPException(status_code=400, detail='Invalid date format, use YYYY-MM-DD')
     else:
         parsed = datetime.now().date()

     date_str = parsed.strftime('%Y-%m-%d')
     due_start = f'{date_str}T00:00:00Z'
     due_end = f'{date_str}T23:59:59.999Z'

     completed, total = staged_tasks_db.get_action_items_for_daily_score(uid, due_start, due_end)
     score = (completed / total * 100.0) if total > 0 else 0.0

     return DailyScoreResponse(score=score, completed_tasks=completed, total_tasks=total, date=date_str)


 @router.get('/v1/scores', response_model=ScoresResponse, tags=['scores'])
 def get_scores(
     date: Optional[str] = Query(default=None, description='Date in YYYY-MM-DD format'),
     uid: str = Depends(auth.get_current_user_uid),
 ):
     """Get daily, weekly, and overall scores with default tab selection."""
-    from datetime import timedelta
```

And at the top of the module, widen the existing import on line 13:

```diff
-from datetime import datetime
+from datetime import datetime, timedelta, timezone
```

This removes the in-function imports on lines 229 and 255.

Context Used: Rule from dashboard - Backend Python import rules - no in-function imports, follow module hierarchy (source)

```python
        except ValueError:
            raise HTTPException(status_code=400, detail='Invalid date format, use YYYY-MM-DD')
    else:
        parsed = datetime.now().date()
```

Timezone-naive datetime.now().date() will use the server's local timezone instead of UTC. Since the boundary strings due_start/due_end are formatted as UTC (with Z suffix) and compared against Firestore timestamps stored in UTC, the date can be off by a day depending on the server's timezone offset.

Example: Server in EST (UTC-5), local time 11 PM Jan 5 = 4 AM UTC Jan 6. datetime.now().date() returns Jan 5, but the UTC date is actually Jan 6, so the query gets the wrong day's items.

Suggested change:

```diff
-        parsed = datetime.now().date()
+        parsed = datetime.now(timezone.utc).date()
```
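The off-by-one-day failure mode is easy to reproduce with a fixed instant. The EST offset below is chosen only for illustration; any non-UTC server timezone produces the same class of mismatch near midnight:

```python
from datetime import datetime, timezone, timedelta

# 04:00 UTC on Jan 6 is 23:00 on Jan 5 in EST (UTC-5).
instant = datetime(2026, 1, 6, 4, 0, tzinfo=timezone.utc)
est = timezone(timedelta(hours=-5))

local_date = instant.astimezone(est).date()  # what a naive now().date() sees on an EST server
utc_date = instant.date()                    # what the UTC boundary strings expect

print(local_date)  # 2026-01-05
print(utc_date)    # 2026-01-06
```

A query built from `local_date` would fetch the wrong day's items even though both sides nominally use UTC timestamps.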

```python
        except ValueError:
            raise HTTPException(status_code=400, detail='Invalid date format, use YYYY-MM-DD')
    else:
        parsed = datetime.now().date()
```

Same timezone-naive issue as line 237: datetime.now().date() uses server local timezone instead of UTC, causing date boundary mismatches with UTC-formatted Firestore timestamps.

Suggested change:

```diff
-        parsed = datetime.now().date()
+        parsed = datetime.now(timezone.utc).date()
```

Comment on lines +94 to +95
```python
        batch.update(doc_ref, {'relevance_score': item['relevance_score'], 'updated_at': now})
    batch.commit()
```

Firestore's batch.update() raises google.cloud.exceptions.NotFound at commit time if any document in the batch does not exist. There is no error handling in the router or here (line 132 in routers/staged_tasks.py), so a single invalid task ID in the request will produce an unhandled 500 error instead of a 400/404.

Use batch.set(..., merge=True) to upsert instead, or wrap batch.commit() in a try/except that translates NotFound to a 400/404 response.

Suggested change:

```diff
-        batch.update(doc_ref, {'relevance_score': item['relevance_score'], 'updated_at': now})
+        batch.set(doc_ref, {'relevance_score': item['relevance_score'], 'updated_at': now}, merge=True)
     batch.commit()
```
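The try/except alternative mentioned above could look like the sketch below. Stand-in exception classes are defined locally so the sketch runs without google-cloud-firestore or FastAPI installed; in the real code they would be `google.cloud.exceptions.NotFound` and `fastapi.HTTPException`:

```python
class NotFound(Exception):
    """Stand-in for google.cloud.exceptions.NotFound."""

class HTTPException(Exception):
    """Stand-in for fastapi.HTTPException."""
    def __init__(self, status_code: int, detail: str):
        super().__init__(detail)
        self.status_code = status_code
        self.detail = detail

def commit_batch_scores(batch) -> None:
    # Translate a missing-document failure at commit time into a 404
    # response instead of letting it surface as an unhandled 500.
    try:
        batch.commit()
    except NotFound as e:
        raise HTTPException(status_code=404, detail='One or more staged tasks not found') from e

class FakeBatch:
    """Minimal fake: commit() raises as Firestore does when a batched update targets a missing doc."""
    def commit(self):
        raise NotFound('no document to update')

try:
    commit_batch_scores(FakeBatch())
except HTTPException as exc:
    print(exc.status_code)  # 404
```

The `batch.set(..., merge=True)` suggestion avoids the exception entirely but silently creates documents for unknown IDs; the try/except keeps the strict-existence semantics while returning a client-facing error.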

beastoin (Collaborator, Author) commented on Mar 5, 2026

Sub-PR Complete — All Checkpoints Passed

Checkpoint Status

  • CP0-CP4: Exploration, Codex consult
  • CP5: Implementation complete
  • CP6: PR created
  • CP7: Reviewer approved (3 rounds)
  • CP8: Tester approved (2 rounds, mutation-checked)

Test evidence

50 passed in 0.54s

Coverage summary

  • 8 model validation tests
  • 16 endpoint tests (incl. [screen] dedup, boundary caps)
  • 8 score endpoint tests
  • 18 DB-layer unit tests (dedup, filter, scoring fields, idempotent delete)

Ready for merge into collab/5302-integration trunk.

by AI for @beastoin

@beastoin beastoin merged commit f84b1c5 into collab/5302-integration Mar 5, 2026
1 check passed
@beastoin beastoin deleted the collab/5302-kai-staged-tasks-scores branch March 5, 2026 09:37