Walkthrough

This PR introduces three new Soroban gamification contracts (Quest, Referral, Streak) for Stellar, with comprehensive web-app integration including NFT minting, quest tracking, and reputation systems. It also adds extensive developer documentation for Stellar/Soroban and the Trustless Work APIs.

Changes
Sequence Diagram(s)

sequenceDiagram
actor User
participant WebApp as Web App
participant QuestAPI as Quest API
participant QuestContract as Quest Contract
participant ReputationContract as Reputation Contract
participant NFTContract as NFT Contract
User->>WebApp: Complete Donation/Action
WebApp->>QuestAPI: POST /api/quests/progress
QuestAPI->>QuestContract: simulate record_progress(user, quest_id, value)
QuestContract-->>QuestAPI: success/on-chain update
QuestAPI->>QuestAPI: Update DB quest progress
alt Quest Completed
QuestAPI->>ReputationContract: Award reward_points
ReputationContract->>ReputationContract: Update user level/tier
alt User Tier Changed
ReputationContract-->>ReputationContract: Emit LevelUp event
WebApp->>NFTContract: Check/Mint or Evolve NFT
NFTContract->>NFTContract: Update metadata & points
NFTContract-->>WebApp: Return evolved NFT
end
end
WebApp-->>User: Quest completed + NFT status

sequenceDiagram
actor Admin
participant Scripts as Deploy Scripts
participant Stellar as Stellar Network
participant Contract as On-Chain Contract
participant Storage as Contract Storage
Admin->>Scripts: ./deploy-quest.sh --network testnet
Scripts->>Scripts: Build quest contract (Cargo)
Scripts->>Stellar: Upload WASM
Stellar-->>Scripts: WASM hash
Scripts->>Contract: Deploy contract instance (admin, reputation_contract)
Contract->>Storage: Initialize storage
Storage-->>Contract: Initialized
Scripts->>Contract: Verify admin (get_admin)
Contract-->>Scripts: Admin confirmed
Scripts->>Scripts: Save deployment info + role grant commands
Scripts-->>Admin: Deployment complete + next steps
Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Consider allocating time for careful review of: (1) contract logic across the three new implementations, particularly the authorization flows and state transitions; (2) the NFT upgrade mechanics and points-attribute handling in the Reputation contract; (3) the gamification workflow integration in the web app's fire-and-forget async processing; and (4) the escrow upsert logic refactoring with contribution linking.

Possibly related PRs
Suggested labels
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)
✅ Passed checks (2 passed)
Actionable comments posted: 104
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
apps/contract/scripts/deploy-reputation.sh (1)
156-165: ⚠️ Potential issue | 🟠 Major

Consider passing the raw NFT contract ID without literal quotes.

The current pattern wraps the contract ID with literal single quotes, which would be passed as-is to Stellar's contract deploy CLI. This causes the contract ID argument to include quote characters, preventing correct parsing. Other contract parameters in the deployment scripts (e.g., --admin, --reputation_contract, --auth_contract) use simple variable expansion without literal wrapping, and Stellar's CLI expects bare StrKey addresses.

Proposed adjustment:

- --nft_contract '"'"$NFT_CONTRACT_ID"'"')
+ --nft_contract "$NFT_CONTRACT_ID")

apps/web/app/actions/escrow/save-escrow-contract.ts (1)
174-190: ⚠️ Potential issue | 🟠 Major

Upsert currently resets current_state to NEW.

Upsert is a solid choice for idempotency, but always setting current_state: 'NEW' can regress an existing escrow's lifecycle. Consider preserving the existing state on updates and only defaulting to NEW on insert.

🔧 Preserve existing state on updates:

+ const { data: existingEscrow } = await supabase
+   .from('escrow_contracts')
+   .select('current_state')
+   .eq('contract_id', params.contractId)
+   .maybeSingle()
+ const currentState = existingEscrow?.current_state ?? 'NEW'
  const { data: upsertedEscrow, error: escrowUpsertError } = await supabase
    .from('escrow_contracts')
    .upsert(
      {
        contract_id: params.contractId,
        engagement_id: engagementId,
        project_id: params.projectId,
        contribution_id: contributionId,
        payer_address: payerAddress,
        receiver_address: receiverAddress,
        amount: totalAmount,
        platform_fee: platformFee,
-       current_state: 'NEW',
+       current_state: currentState,
        updated_at: new Date().toISOString(),
      },
🤖 Fix all issues with AI agents
In @.agents/skills/stellar-dev/advanced-patterns.md:
- Around line 62-64: In the upgrade function, add a brief comment next to the
call that retrieves the admin
(env.storage().instance().get(&DataKey::Admin).unwrap()) to note that unwrap()
will panic if the Admin key is not set and that this example assumes proper
initialization of Admin; reference the function name upgrade, type Env, and
DataKey::Admin so readers understand the assumption and expected initialization
pattern.
In @.agents/skills/stellar-dev/common-pitfalls.md:
- Around line 520-529: Update the "Debugging Checklist" section to use varied
action verbs for each list item (the header "Debugging Checklist" and the items
currently starting with "Check" 1–8) so each step is more scannable; for
example, replace duplicates with distinct verbs like "Verify network", "Confirm
source account funded", "Ensure sequence number is current", "Simulate Soroban
tx", "Validate trustlines for asset transfers", "Confirm TTL for contract data",
"Ensure correct account signing", and "Review logs for error details" (use
similar varied verbs that fit the existing items).
- Around line 439-445: The comment referencing a "fresh blockhash" is misleading
for Stellar; update the text around the retry logic (the comment above the if
that checks error.response?.data?.extras?.result_codes?.transaction and the line
mentioning rebuildTransaction/signedXdr) to say you should refresh the
transaction's sequence number or adjust timebounds (or both) and rebuild the
XDR, e.g., "Rebuild with fresh sequence number or updated timebounds and retry"
so maintainers see Stellar‑specific guidance instead of "blockhash."
In @.agents/skills/stellar-dev/frontend-stellar-sdk.md:
- Around line 349-354: The polling loop that repeatedly calls
rpc.getTransaction(response.hash) using getResponse can hang indefinitely and
overload the RPC; modify the logic in this polling section to implement a
bounded wait and backoff by adding a max attempt count or a deadline (e.g.,
maxAttempts or timeoutDeadline) and an exponential or capped backoff between
retries, and if the deadline/attempts are exceeded throw or return a clear error
indicating the transaction did not appear; ensure you still await
rpc.getTransaction(response.hash) inside the retry loop and include the chosen
timeout/backoff variables in the retry condition to stop the loop safely.
- Around line 426-456: The SendPayment component references
config.networkPassphrase but never imports config and also assumes error is an
Error; fix by importing the config object (used by buildPaymentTx/sign) at the
top of the file and by guarding the catch error before reading message (e.g.,
check if error is instance of Error or has a message property) so the catch in
SendPayment safely sets status; update references around buildPaymentTx, sign,
submitTransaction and the catch block accordingly.
In @.agents/skills/stellar-dev/standards-reference.md:
- Around line 11-19: Add a "Last reviewed" date near the "Maintenance note"
heading so readers can see when the links/status guidance was last verified;
update the .agents/skills/stellar-dev/standards-reference.md file by inserting a
short line such as "Last reviewed: YYYY-MM-DD" directly below the "##
Maintenance note" heading and ensure it is updated whenever the linked SEPs/CAPs
are rechecked.
In @.agents/skills/stellar-dev/stellar-assets.md:
- Line 305: Two level-4 Markdown headings (marked with "####") within this
document should be normalized to level-3 headings ("###") to satisfy MD001 and
maintain consistent section hierarchy; locate the flagged "####" headings
(including the one around line 305 and the other around line 331) and change the
leading hashes from four to three so they become "###", then run the linter to
confirm the warning is resolved.
In @.agents/skills/stellar-dev/testing.md:
- Around line 516-531: Update the fenced code block under the "Test
Organization" header so the opening fence includes the language tag `text`
(change the leading ``` to ```text) to satisfy markdownlint and improve
readability; locate the tree diagram block (the triple-backtick fence
surrounding the project/ ├── src/ ... Cargo.toml lines) and add the `text` tag
to the opening fence only.
In @.agents/skills/trustless-work-dev/.gitignore:
- Around line 1-5: Add a trailing newline to the .gitignore file for POSIX
compliance and extend the ignore patterns to cover environment file variants and
build outputs by adding entries such as *.env.* and dist/ or build/ (or
whichever build output directory this skill produces); update the existing
.gitignore block (node_modules/, .DS_Store, *.log, .env, .env.local) to include
these additional patterns and ensure the file ends with a newline.
In @.agents/skills/trustless-work-dev/CODE_OF_CONDUCT.md:
- Around line 74-103: Add blank lines before and after each numbered section
heading (e.g., the "### 1. Correction", "### 2. Warning", "### 3. Temporary
Ban", and "### 4. Permanent Ban" headings) so Markdown lint rule MD022 is
satisfied; edit the CODE_OF_CONDUCT.md content around those headings to ensure
there is an empty line above each "### ..." heading and an empty line after the
heading line (before the following paragraph) to keep rendering consistent
across tools.
In @.agents/skills/trustless-work-dev/CONTRIBUTING.md:
- Around line 37-51: Add a blank line after each Markdown heading to satisfy the
linter: insert an empty line immediately after the "Getting Started" heading and
after the "**Fork the repository**" and "**Create a branch**" subheadings so
that each heading is followed by a blank line before the next content block.
In @.agents/skills/trustless-work-dev/LICENSE:
- Around line 1-202: The LICENSE file currently contains the full Apache 2.0
text but doesn't state the scope of its applicability; add a root-level
clarification (either a top-level README, NOTICE, or a short header in this
LICENSE) that explicitly states whether the Apache-2.0 applies only to the
.agents/skills/trustless-work-dev skill bundle or to the entire repository;
update the file referenced as LICENSE (and the repository README or NOTICE) to
include a clear sentence such as "This license applies to: [scope]" and ensure
the project copyright line (Copyright 2026 Trustless Work) remains accurate and
consistent.
In @.agents/skills/trustless-work-dev/SKILL.md:
- Around line 35-55: Add a blank line before and after each level-3 heading (the
lines starting with "### REST API Documentation", "### React SDK Documentation",
and "### Blocks SDK Documentation") in the Reference Files section so there is
an empty line separating those headings from surrounding text and lists to
satisfy markdownlint and improve readability; update the SKILL.md content around
those three "###" headings accordingly.
In @.agents/skills/trustless-work-dev/skills/api/core-concepts.md:
- Around line 52-74: Add a blank line immediately before the fenced JSON code
block that begins with "```json" (the example showing the "roles" array) so the
fenced code block is separated from the preceding paragraph; locate the fenced
block containing the "roles" JSON and insert a single empty line above the
opening ```json to satisfy markdown linting.
- Around line 85-91: Update the "API Authentication" section to reconcile the
two header conventions by explicitly stating which endpoints require
Authorization: Bearer YOUR_API_KEY versus which require x-api-key (referencing
the single-release-escrow.md behavior) and add guidance on when to use each;
also change the example code block to include a language identifier (e.g.,
```http) so the header snippet is consistently formatted.
In @.agents/skills/trustless-work-dev/skills/api/single-release-escrow.md:
- Around line 12-21: Documentation is inconsistent about authentication:
single-release-escrow.md and multi-release-escrow.md require x-api-key while
core-concepts.md shows Authorization: Bearer; update core-concepts.md to use the
x-api-key header consistently (replace occurrences of "Authorization: Bearer
YOUR_API_KEY" and related examples at the referenced locations with "x-api-key:
YOUR_API_KEY") or, if both schemes are supported, add a clear section that
explicitly states both are valid and when to use each, and update the examples
at lines ~140, ~155, ~166, and ~190 to match that clarified guidance.
In @.agents/skills/trustless-work-dev/skills/api/trustlines.md:
- Around line 98-123: The example function establishTrustline uses
TransactionBuilder and Network but they are not imported; update the import
statement that currently brings in Server, Asset, Keypair, Operation from
'stellar-sdk' to also include TransactionBuilder and Network so the symbols used
in establishTrustline are resolved (ensure the import list contains
TransactionBuilder and Network alongside Server, Asset, Keypair, Operation).
- Around line 157-171: The TRUSTLINES constant is using a multiplier value for
the decimals field (decimals: 10000000) which conflicts with SUPPORTED_ASSETS
that uses a decimal count (decimals: 7); update TRUSTLINES entries (the objects
in TRUSTLINES) to use the same decimal count representation as SUPPORTED_ASSETS
(e.g., set decimals: 7 for USDC and EURC) so both constants use the same
semantics for the decimals property (refer to TRUSTLINES and SUPPORTED_ASSETS to
locate the entries).
- Around line 38-53: The document uses inconsistent representation for token
precision: change the asset entries shown (the USDC and EURC objects in
trustlines.md) to use a consistent schema used elsewhere (e.g., replace
decimals: 10000000 with decimals: 7 and add multiplier: 10000000 or scale:
10000000), and update the SUPPORTED_ASSETS examples to match this same schema so
all references (e.g., the USDC/EURC objects and SUPPORTED_ASSETS) use decimals
as the count and multiplier/scale as the conversion factor.
In @.agents/skills/trustless-work-dev/skills/api/types.md:
- Around line 20-38: Update the markdown heading "Multi Release Milestone
Payload" to use a hyphenized form "Multi-Release Milestone Payload" for
consistency with other doc headings; leave the TypeScript type name
MultiReleaseMilestonePayload unchanged and ensure any cross-references or
anchors that rely on the heading are adjusted if necessary.
- Around line 579-663: The distributions field is currently typed as a
single-element tuple which forces exactly one item; update the distributions
property on SingleReleaseResolveDisputePayload,
MultiReleaseResolveDisputePayload, and WithdrawRemainingFundsPayload to be an
array type (i.e., use a repeated item type for the object with address and
amount) so callers can supply any number of distribution objects; locate and
change the distributions declarations inside those three types to the array form
for consistency and correct semantics.
In @.agents/skills/trustless-work-dev/skills/blocks/providers.md:
- Around line 221-233: The example GlobalProvider is using React.ReactNode
inline which conflicts with the repo's verbatimModuleSyntax setting; update the
example to add an explicit type-only import (e.g., import type { ReactNode }
from 'react') and change the GlobalProvider signature to use ReactNode for the
children prop (refer to GlobalProvider and children in the snippet) so the
example follows the project's type-only import pattern.
In @.agents/skills/trustless-work-dev/skills/blocks/vibe-coding.md:
- Around line 237-244: Add a language specifier to the fenced code blocks in the
markdown (the prompt templates currently starting at lines 237, 248, and 263) to
satisfy MD040; change the opening backticks from ``` to something like ```text
or ```markdown for each of those blocks so static analysis recognizes the
language—ensure you update all three occurrences (the blocks beginning at 237,
248, and 263) and keep the block contents unchanged.
- Line 6: The sentence "Export this page to PDF or Markdown and feed it to your
copilot (Cursor, Claude, etc)" is missing the period after "etc" for American
English; update that exact line in
.agents/skills/trustless-work-dev/skills/blocks/vibe-coding.md to read "...
(Cursor, Claude, etc.)" so the parenthetical ends with "etc." and adjust spacing
if needed.
In @.agents/skills/trustless-work-dev/skills/react-sdk/hooks-reference.md:
- Around line 726-763: The example imports useFundEscrow and FundEscrowPayload
but the body uses generic symbols useSomeEndpoint and SomeEndpointPayload,
causing a mismatch; update the example to be consistent by either replacing the
imports with generic ones (remove useFundEscrow and FundEscrowPayload from
imports and import/use generic hooks/types that match
useSomeEndpoint/SomeEndpointPayload) or change the example body to use
useFundEscrow and FundEscrowPayload (replace useSomeEndpoint -> useFundEscrow
and SomeEndpointPayload -> FundEscrowPayload), keeping other referenced symbols
(useSendTransaction, signTransaction, sendTransaction, walletAddress) unchanged
so the example compiles and reads correctly.
- Around line 36-86: The example hook useInitializeEscrowForm is missing the
signTransaction import and a walletAddress definition, causing undefined
references; update the example to import signTransaction (from the wallet or SDK
module that provides it) alongside useInitializeEscrow and useSendTransaction,
and define or accept walletAddress (e.g., from props, context, or a useWallet
hook) before using it in the onSubmit flow so signTransaction and walletAddress
are defined when calling signTransaction({ unsignedTransaction, address:
walletAddress || "" }); also consider adding a short "Prerequisites" note at the
top listing required imports/hooks (signTransaction and wallet access) so other
examples can follow the same pattern.
In @.agents/skills/trustless-work-dev/skills/react-sdk/react-sdk.md:
- Around line 89-97: The markdown has fenced code blocks without blank lines
around them causing MD031 warnings; update the file to insert a blank line
immediately before and after each fenced block (e.g., around the import block
and the Usage block) so that code sections like the import of
useInitializeEscrow and the declarations of InitializeSingleReleaseEscrowPayload
/ InitializeMultiReleaseEscrowPayload are surrounded by a blank line above and
below each ```tsx fence.
In @.agents/skills/trustless-work-dev/skills/react-sdk/vibe-coding.md:
- Line 3: Replace the unhyphenated token "GPT 5" with "GPT-5" in the prompt text
(look for the literal string "GPT 5" in vibe-coding.md) to follow grammatical
consistency and static analysis recommendations; ensure any other occurrences in
the same document (or within the prompt templates used by the VibeCoding/React
SDK skill) are updated to "GPT-5" as well.
In @.gitignore:
- Around line 109-113: The .gitignore currently covers specific .txt deployment
artifacts; extend those entries to also ignore the same filenames without the
.txt extension by adding wildcard patterns for
apps/contract/*-deployment-info-*, apps/contract/*-service-account-setup*,
apps/contract/complete-deployment-summary-*, and
apps/contract/backend-service-account-setup* so non-.txt outputs (or other
extensions) from deployment tooling are also ignored.
In `@apps/contract/contracts/quest/src/lib.rs`:
- Around line 273-282: The function award_quest_reward currently takes an unused
parameter _quest; either remove this parameter from the signature or explicitly
use/document it—best fix: remove the unused parameter from award_quest_reward
and from any call sites so the signature becomes fn award_quest_reward(e: &Env,
user: &Address) and update the call to
ReputationClient::record_quest_completion(e, &reputation_contract,
&quest_contract, user) unchanged; if you prefer to keep it for future use,
replace _quest with quest: &QuestDefinition and add a comment explaining why
it’s retained (or reference it in a log) so the unused-parameter pattern is
intentional.
- Around line 110-150: Add validation in create_quest to reject zero reward
points: after the existing target_value == 0 check, add a guard that panics with
Error::InvalidQuestParams when reward_points == 0 (referencing the reward_points
parameter and the create_quest function). This ensures quests with reward_points
== 0 are rejected; keep the same panic macro (panic_with_error!(e,
Error::InvalidQuestParams)) so behavior aligns with the existing validation.
In `@apps/contract/contracts/quest/src/reputation_client.rs`:
- Around line 35-38: The match currently swallows all errors by mapping every
non-Ok(Ok(points)) case to None; change the API to surface failures by either
(A) returning Result<Option<u32>, ReputationError> from the function that
contains the match (introduce a ReputationError enum, map the outer and inner
Err variants to distinct ReputationError variants, and return Ok(Some(points)) /
Ok(None) / Err(e) accordingly), or (B) keep the current Option<u32> return but
emit a diagnostic event/log on failures inside the match (use the existing
logging/event API in this crate and include context and the original error when
matching the _ => arm) so callers can still treat it as optional while
diagnostics capture the cause. Target the function that owns the shown match and
update its signature/usages and error mapping/logging sites.
- Around line 24-26: The code uses a magic number (5) for
EventType::QuestCompletion when calling the reputation contract (see the
event_type variable used before calling record_event); extract this literal into
a clearly named constant (e.g., const QUEST_COMPLETION_EVENT: u32 = 5) declared
near the top of reputation_client.rs (or next to the related functions) and
replace let event_type: u32 = 5; with a reference to that constant so the intent
is explicit and maintainable.
In `@apps/contract/contracts/quest/src/storage.rs`:
- Around line 80-92: add_user_completed_quest currently pushes quest IDs into
the user's completed list with no duplicate checks or size bounds, allowing
unbounded Vec growth; update this function to first load the list via
get_user_completed_quests, return early if quest_id already exists, and enforce
a fixed cap (e.g., const MAX_COMPLETED_QUESTS) — on overflow either drop the
oldest entry or reject the new entry before push_back; make sure to persist the
updated Vec to StorageKey::UserCompletedQuests only when modified and keep the
TTL extension logic consistent with the mutation behavior.
- Around line 94-103: Update the doc comment on get_and_increment_quest_id to
explicitly state that the function increments the stored counter and returns the
newly allocated quest ID (i.e., the incremented value, starting at 1), so
callers know it does not return the previous/current count; mention
StorageKey::NextQuestId and that the function both reads and sets that storage
key to clarify semantics.
In `@apps/contract/contracts/quest/src/test.rs`:
- Around line 9-38: The test test_create_quest should call env.mock_all_auths()
to ensure deterministic/mocked authorization during Soroban tests; add a call to
env.mock_all_auths() immediately after Env::default() (before registering the
contract and calling Quest::__constructor or create_quest) so the subsequent
calls (Quest::__constructor, quest_contract.create_quest,
quest_contract.get_quest) run with mocked auth.
In `@apps/contract/contracts/referral/src/lib.rs`:
- Around line 279-288: The function award_referral_reward currently takes an
unused `_reward_points` parameter; either remove that parameter from
award_referral_reward and update all callers to stop passing points (since
get_reputation_contract and ReputationClient::record_successful_referral imply
the reputation contract decides points), or modify the
ReputationClient::record_successful_referral call to accept and forward the
points (update the client's signature and its implementations) so
`_reward_points` is actually used; locate award_referral_reward,
get_reputation_contract, and ReputationClient::record_successful_referral to
apply the chosen change and update all callers accordingly.
- Around line 258-263: The code only increments stats.active_referrals when
reward_points > 0, which misses cases where a referral becomes
ReferralStatus::Active due to total_donations >= 3 but reward_points == 0;
modify the logic around updating stats in the block that computes reward_points
and status so that you detect a status transition (e.g., if previous
record.status != ReferralStatus::Active and new status ==
ReferralStatus::Active) and increment stats.active_referrals in that case;
update the spot that calls set_referrer_stats (and the variables named stats,
reward_points, record, ReferralStatus::Active) to apply this additional
increment so active_referrals is accurate.
In `@apps/contract/contracts/referral/src/reputation_client.rs`:
- Around line 1-40: The record_successful_referral implementation duplicates the
streak contract's ReputationClient; extract the shared logic into a common
module/crate (e.g., a shared ReputationClient helper) to avoid duplication: move
the try_invoke_contract call and result handling into a reusable function that
accepts the Contract Address, Symbol method name (record_event), caller, user,
and an event_type parameter (currently event_type = 2 for successful referral),
and update both ReputationClient::record_successful_referral and the streak
variant to call that shared function instead of repeating the same code.
In `@apps/contract/contracts/referral/src/storage.rs`:
- Around line 85-97: The add_referrer_referral function should avoid inserting
duplicate referred addresses: fetch the current referrals via
get_referrer_referrals(referrer) and check (e.g., contains or manual loop)
whether referred is already present before calling referrals.push_back and
persisting to StorageKey::ReferrerReferrals; if you want to limit growth also
enforce an upper bound (e.g., max referrals) and return/skip when exceeded;
update callers (e.g., create_referral) only if needed for semantics but keep
this function defensive.
- Around line 39-56: The code uses the magic number 518400 for TTL in
set_referral_record; extract this into a module-level constant (e.g.,
PERSISTENT_TTL) and replace every literal 518400 in this file with that constant
to improve maintainability; update references in set_referral_record where
extend_ttl is called (and anywhere else in this file that repeats 518400) and
keep function names/getter keys unchanged (get_referral_record,
set_referral_record, StorageKey::ReferralRecord).
In `@apps/contract/contracts/referral/src/test.rs`:
- Around line 9-33: Add env.mock_all_auths() at the start of
test_create_referral to mock authorizations so calls like
referral_contract.__constructor, grant_role, and create_referral succeed in the
test harness; locate the test function test_create_referral in test.rs and call
env.mock_all_auths() immediately after Env::default() (or before any contract
interactions). Also consider adding separate tests that call create_referral
with unauthorized signer, self-referral inputs, and duplicate referrals to cover
negative cases, using the same pattern but without mocking auths or with
expected error assertions.
In `@apps/contract/contracts/reputation/src/nft_client.rs`:
- Around line 91-322: The u32_to_string, build_three_digit_string,
build_four_digit_string and build_five_digit_string functions produce
rounded/fallback outputs making on-chain metadata inaccurate; replace the lookup
logic with a deterministic conversion using the no_std-compatible itoa crate:
add itoa to Cargo.toml, import itoa::Buffer, and in u32_to_string create a
Buffer, call buffer.format(num) to get an exact decimal &str, then return
String::from_str(e, that_str); remove or delegate the helper builders
(build_three_digit_string/build_four_digit_string/build_five_digit_string) so
all u32 values map exactly to their decimal string representation.
In `@apps/contract/contracts/streak/Cargo.toml`:
- Around line 1-21: The Cargo.toml for the "streak" crate currently lists
license = "MIT" which may not match the workspace standard; update the crate
metadata in this file (the [package] section for name = "streak") to align with
the repository policy by either setting the exact SPDX identifier used across
the workspace or adding a license-file entry that points to the repository
LICENSE (e.g., add license-file = "LICENSE" or replace license with the
canonical SPDX string used by other crates) so the crate metadata is
unambiguous.
In `@apps/contract/contracts/streak/src/lib.rs`:
- Around line 189-198: The function award_streak_bonus contains an unused
parameter _bonus_points; remove this parameter from the function signature (and
corresponding callers) to match the pattern used in referral and quest
contracts, or if you intend to use it later, keep the parameter but document its
future purpose; update the signature in lib.rs (award_streak_bonus) and any
places that call ReputationClient::record_streak_donation/streak contract
invocation to reflect the removed/retained parameter so there are no unused
variable warnings or mismatches.
- Around line 109-187: The streak calculation in record_donation uses
e.ledger().timestamp() (current_time) to compute time_since_last but then writes
donation_timestamp into streak.last_donation_timestamp, causing inconsistent
baselines and allowing future timestamps; fix by validating donation_timestamp
at the start of record_donation (ensure donation_timestamp <= current_time and
within a reasonable window) and then use the validated donation_ts variable
consistently for both time_since_last comparisons and storing
streak.last_donation_timestamp (or alternatively drop the donation_timestamp
parameter and always use current_time); update checks around time_since_last,
previous_streak and bonus_points accordingly in record_donation, and ensure any
published events (StreakBrokenEvent, StreakUpdatedEvent) reflect the validated
timestamp.
In `@apps/contract/contracts/streak/src/reputation_client.rs`:
- Around line 35-38: The match on result currently swallows all errors with `_
=> None`; update the failure arm(s) to log or emit an event before returning
None so integration issues are visible in production. Specifically, in the
`match result { Ok(Ok(points)) => Some(points), _ => None }` block (the `match
result` handling in this reputation call), replace the catch-all `_ => None`
with explicit failure arms (e.g., `Ok(Err(e))` and `Err(e)`) or a single
wildcard that logs the `result`/error via the project's logging/tracing facility
(e.g., `warn!`/`error!`) and then returns `None`, so failures are recorded but
still return None as before.
- Around line 24-26: Replace the hardcoded magic number in reputation_client.rs
by extracting it into a named constant or enum variant (e.g., STREAK_DONATION or
EventType::StreakDonation) and use that identifier instead of let event_type:
u32 = 1; so the meaning is self-documenting and maintainable alongside other
event types (e.g., quest/referral); update the reference in the call to the
reputation contract's record_event function to use the new constant/enum.
In `@apps/contract/contracts/streak/src/storage.rs`:
- Around line 46-56: The TTL in set_user_streak may not match its comment: extend_ttl is called with 518_400 while the inline comment says "30 days". Note, however, that extend_ttl arguments are ledger counts rather than seconds; at roughly 5 seconds per ledger, 518_400 ledgers is approximately 30 days, so the value and comment may already agree (and the proposed 2_592_000 would be 30 days only if the unit were seconds; as ledgers it is roughly 150 days). Verify the intended unit, then update the extend_ttl call on StorageKey::UserStreak(user.clone(), streak.period) and/or the comment next to it in set_user_streak so the value, its unit, and the stated duration are explicitly consistent.
In `@apps/contract/contracts/streak/src/test.rs`:
- Around line 8-38: Update the test setup in test_create_streak to call
env.mock_all_auths() before any contract calls to mock authorization, and expand
the suite by adding separate tests that exercise StreakPeriod::Monthly, a
multi-donation continuation case (repeat calls to Streak::record_donation for
the same user increasing current_streak), a streak-break case (advance the
ledger timestamp past the grace period then call record_donation to assert
current_streak resets), and a longest_streak scenario (build up a streak, break
it, then rebuild and assert longest_streak preserved) — locate these changes
around the existing test_create_streak and the methods record_donation,
get_streak, and the StreakPeriod enum to implement and assert the new behaviors.
In `@apps/contract/scripts/deploy-quest.sh`:
- Line 71: The shell conditional uses a redundant redirection pattern; update
the command in deploy-quest.sh (the `if stellar keys address bran ...; then`
line) to remove the unnecessary `2>&1` so it uses a single correct redirection
(either `&>/dev/null` to drop both stdout/stderr or the POSIX form `> /dev/null
2>&1`) to simplify and clarify the intent.
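Either single form silences both streams on its own; the sketch below demonstrates with `ls` on a missing path, since the real `stellar keys address bran` check requires the CLI:

```shell
# POSIX form: redirect stdout to /dev/null, then duplicate stderr onto stdout.
# Appending another `2>&1` after `&>/dev/null` (as in the script) is redundant.
if ls /nonexistent/path >/dev/null 2>&1; then
  result="found"
else
  result="missing"
fi
echo "$result"
```

`&>/dev/null` is the equivalent bash-only shorthand.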
- Line 126: Update the interactive prompt in deploy-quest.sh to use read -r so
backslashes in user input are not treated as escape sequences; locate the read
invocation (the prompt that assigns to the variable confirm) and change it to
use the -r flag (preserving the prompt text and variable name) so inputs
containing backslashes are read literally.
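The difference `-r` makes is easy to demonstrate with a here-string (a stand-in for the interactive prompt):

```shell
# Without -r, read consumes backslashes as escape characters.
read plain <<< 'C:\Users\bran'
# With -r, the input is preserved literally.
read -r confirm <<< 'C:\Users\bran'
echo "plain=$plain"
echo "confirm=$confirm"
```

Here `plain` ends up as `C:Usersbran` while `confirm` keeps the backslashes intact.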
- Around line 176-180: The verification condition in deploy-quest.sh is too
permissive: replace the current OR logic using VERIFIED_ADMIN and ADMIN_ADDRESS
with a stricter check that requires the admin address to match (VERIFIED_ADMIN
contains ADMIN_ADDRESS) and that VERIFIED_ADMIN is not the literal
"VERIFICATION_FAILED"; update the if condition that currently references
VERIFIED_ADMIN and ADMIN_ADDRESS so it uses a logical AND between the
containment check and the failure check to only echo "✅ Contract admin
verified!" when both are true.
In `@apps/contract/scripts/deploy-referral.sh`:
- Line 126: Change the read invocation to use the -r flag to prevent backslash
interpretation: update the prompt line that assigns to the variable confirm (the
read -p "Are you sure you want to continue? (yes/no): " confirm) to use read -r
-p so backslashes aren’t treated as escape sequences while preserving the same
prompt and variable name.
- Around line 176-180: The verification currently uses an OR which passes for
any VERIFIED_ADMIN not equal to "VERIFICATION_FAILED", causing false positives;
update the check so it only succeeds when VERIFIED_ADMIN contains the
ADMIN_ADDRESS and is not "VERIFICATION_FAILED" (i.e., replace the logical OR
with a logical AND between the two checks for VERIFIED_ADMIN and ADMIN_ADDRESS),
using the existing VERIFIED_ADMIN and ADMIN_ADDRESS variables and preserving the
same string/quote style.
- Line 71: The shell conditional invoking the CLI command "stellar keys address
bran" uses a redundant redirection sequence "&>/dev/null 2>&1"; simplify by
removing the duplicate redirection so only a single redirection that discards
both stdout and stderr is used (update the conditional that checks the output of
"stellar keys address bran" to use a single redirection form).
In `@apps/contract/scripts/deploy-reputation.sh`:
- Around line 131-139: The script currently uses cd to enter the
contracts/reputation directory before running stellar contract build; replace
that pattern by invoking the stellar contract build command with the
--manifest-path option pointing at the Reputation contract's manifest (i.e., run
stellar contract build --manifest-path <reputation-manifest>) and remove the cd
and subsequent cd .. lines so the script is cwd‑independent and complies with
the no-cd rule; keep the existing error handling around the stellar contract
build invocation.
In `@apps/contract/scripts/deploy-streak.sh`:
- Line 126: Change the interactive prompt to use read with the -r flag to avoid
backslash interpretation: update the read invocation that assigns to the
variable confirm (the line using read -p and the variable confirm) to use read
-r so user input containing backslashes is preserved.
- Around line 176-180: The verification currently succeeds if VERIFIED_ADMIN
contains ADMIN_ADDRESS OR if VERIFIED_ADMIN is anything other than
"VERIFICATION_FAILED", which lets unexpected outputs pass; change the logic in
deploy-streak.sh so verification only succeeds when VERIFIED_ADMIN is not
"VERIFICATION_FAILED" AND it actually contains (or equals) ADMIN_ADDRESS. Locate
the check using the VERIFIED_ADMIN and ADMIN_ADDRESS variables and replace the
OR condition with an AND check that verifies VERIFIED_ADMIN !=
"VERIFICATION_FAILED" and VERIFIED_ADMIN contains (or equals) ADMIN_ADDRESS to
ensure strict verification.
- Line 71: The conditional invoking the CLI "stellar keys address bran" uses
redundant redirection "&>/dev/null 2>&1"; update the if statement that runs
"stellar keys address bran" to remove the redundancy and use a single
redirection form (either "&>/dev/null" or "> /dev/null 2>&1") so it reliably
silences both stdout and stderr without duplicating redirection.
In `@apps/contract/scripts/grant-recorder-role.sh`:
- Line 52: The script uses pipes (e.g., the get_admin output being piped into
grep -o 'G[A-Z0-9]\{55\}' | head -1 and other occurrences like tail -1 | tr -d
and tee /tmp/...) which violates the project guideline to avoid pipes; modify
each occurrence (including the instances around lines 90-91, 104, 118-119, 141,
147, 152-154, 158-160, 185-186, 188-189) to write intermediate command output to
temporary files and then run the single consumer commands against those files
(for example, redirect get_admin output to a tmp file, run grep -o on that file
and use sed or awk to extract the first match, and replace tee usage with
redirection to a temp file plus a cp/mv if needed), ensuring you update the
commands referenced (the get_admin invocation, grep -o 'G[A-Z0-9]\{55\}', head,
tail, tr, and tee usages) to operate on files rather than via pipes and clean up
temp files afterwards.
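The pipe-free pattern can be sketched as follows; `printf` simulates the `get_admin` invocation's output, and the temp-file names are illustrative:

```shell
# Stage 1: capture command output to a temp file (real script: stellar contract invoke ... -- get_admin).
printf 'admin: GABCDEFGHIJKLMNOPQRSTUVWXYZ234567ABCDEFGHIJKLMNOPQRSTUVW\n' > /tmp/get-admin.out

# Stage 2: run grep against the file instead of piping into it.
grep -o 'G[A-Z0-9]\{55\}' /tmp/get-admin.out > /tmp/admin-matches.txt

# Stage 3: take the first match with read (replaces `| head -1`).
read -r contract_admin < /tmp/admin-matches.txt
echo "$contract_admin"

# Clean up intermediates.
rm -f /tmp/get-admin.out /tmp/admin-matches.txt
```

`tee` usages can be replaced the same way: redirect to the log file with `> file 2>&1`, then `cat` or `grep` the file separately when the output is also needed on screen.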
- Around line 209-217: Update the summary reporting in grant-recorder-role.sh to
reflect actual outcomes instead of unconditionally showing success; have
grant_and_verify return/exit with a status (or set a boolean) for each
invocation (e.g., for Streak, Referral, Quest) and collect those results into
variables (like STREAK_OK, REFERRAL_OK, QUEST_OK) then print the final summary
using those variables to show a checkmark for success and an X or "failed" for
failures; ensure you reference the grant_and_verify calls and the final echo
block so the summary logic uses the recorded results rather than assuming all
grants succeeded.
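One way to wire this up, with `grant_and_verify` stubbed out for illustration (the real function performs the contract invocation and returns its exit status):

```shell
# Stub: pretend only the Quest grant failed.
grant_and_verify() { [ "$1" != "Quest" ]; }

STREAK_OK=false; REFERRAL_OK=false; QUEST_OK=false
grant_and_verify "Streak"   && STREAK_OK=true
grant_and_verify "Referral" && REFERRAL_OK=true
grant_and_verify "Quest"    && QUEST_OK=true

# Summary reflects recorded results instead of unconditional success.
for pair in "Streak:$STREAK_OK" "Referral:$REFERRAL_OK" "Quest:$QUEST_OK"; do
  name="${pair%%:*}"; ok="${pair##*:}"
  if [ "$ok" = true ]; then echo "✅ $name"; else echo "❌ $name failed"; fi
done
```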
- Around line 48-52: Split declaration from assignment for variables capturing
output of `stellar contract invoke` (e.g., `contract_admin`): first declare the
variable, then run the command and capture its stdout/stderr into a temporary
variable, check the command's exit status ($?) and only assign the cleaned
output (grep/head/echo result) to `contract_admin` if the command succeeded; if
it failed, propagate or handle the error instead of masking it. Apply the same
pattern to all other variables that currently combine declaration and command
substitution from `stellar contract invoke` so failures are not hidden.
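The masking effect is easy to show with `false` standing in for a failing `stellar contract invoke`:

```shell
# Combined declaration+assignment: $? reflects `local` (success), hiding the failure.
masked() { local out=$(false); echo $?; }

# Declaration first, assignment second: $? reflects the command itself.
preserved() { local out; out=$(false); echo $?; }

echo "masked=$(masked) preserved=$(preserved)"
```

`masked` prints `0` even though the command failed; `preserved` prints `1`, so the script can branch on the real outcome.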
- Around line 106-107: The script currently checks the previous command's exit
status using the literal test `if [[ $? -eq 0 ]]; then`; change this to test the
grant command directly by using the grant command itself as the conditional
(i.e., use `if <grant-command>; then` so no intervening commands can alter
`$?`). Locate the `if [[ $? -eq 0 ]]; then` snippet in the
grant-recorder-role.sh flow surrounding the admin role grant and replace the
two-step success check with a single direct conditional using the command that
performs the grant, preserving the existing success and failure echo/log
branches.
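Sketch of the direct-conditional form, with `grant_cmd` as a hypothetical stand-in for the grant invocation:

```shell
grant_cmd() { return 0; }   # stand-in for the real grant command

# The command itself is the condition: nothing can clobber $? in between.
if grant_cmd; then
  grant_status="granted"
else
  grant_status="failed"
fi
echo "$grant_status"
```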
In `@apps/contract/scripts/setup-gamification-contracts.sh`:
- Around line 141-145: Replace the uses of pipes (tee and grep) with direct file
redirection and separate reads: for the Streak role grant command that currently
pipes to /tmp/streak-grant.log via tee, redirect stdout/stderr to that file
using > and 2>&1, then check the command's exit status ($?) to decide
success/failure and perform grep on the file separately (grep -i
"already\|exists\|granted" /tmp/streak-grant.log || true) for diagnostics; apply
the same refactor to the similar blocks at the other occurrences (the blocks
around the references you noted) so no pipes (| or tee) are used and all output
is written and inspected via files and exit codes instead.
- Line 80: The conditional invoking the command `stellar keys address bran
&>/dev/null 2>&1` contains a redundant redirection; update the `if` test that
calls `stellar keys address bran` to remove the extra `2>&1` so it uses a single
correct combined redirection (e.g., `&>/dev/null`) to silence both stdout and
stderr; modify the `if` line accordingly in the script to eliminate the
redundant token.
In `@apps/contract/scripts/sync-quests-to-chain.sh`:
- Line 63: The if-statement uses redundant redirection ("if stellar keys address
bran &>/dev/null 2>&1; then"); update the command in the condition (in
sync-quests-to-chain.sh) to use a single redirection form (either "&>/dev/null"
or ">/dev/null 2>&1") so stdout and stderr are suppressed without duplication;
change the occurrence within the conditional that invokes "stellar keys address
bran" accordingly.
- Around line 104-136: Replace the piped invocations with redirected output
files and subsequent non-piped processing: for the grant_role call (the stellar
contract invoke that currently uses "2>&1 | tee /tmp/quest-admin-grant.log")
redirect both stdout and stderr to /tmp/quest-admin-grant.log (e.g. >
/tmp/quest-admin-grant.log 2>&1) and then check GRANT_RESULT as you already do;
for the has_role call that uses "2>&1 | grep -o '[0-9]' | head -1 || echo '0'",
redirect its output to a temp file (e.g. /tmp/quest-has-role.log), then extract
the first digit without pipes by running grep -o '[0-9]' /tmp/quest-has-role.log
> /tmp/quest-has-role-digits.log and then read the first line with head -1 <
/tmp/quest-has-role-digits.log or by using read to assign HAS_ROLE from the
file, defaulting to "0" if empty; update references to GRANT_RESULT, HAS_ROLE,
/tmp/quest-admin-grant.log and the stellar invocations (-- grant_role and --
has_role) accordingly.
In `@apps/web/app/(routes)/admin/gamification/page.tsx`:
- Line 10: The QueryClient is created with default staleTime (0) causing
hydrated SSR data to refetch on mount; modify the QueryClient instantiation (the
symbol QueryClient) to include a non-zero staleTime—either by setting
defaultOptions.queries.staleTime to a small value (e.g. 30s–1min) or by passing
staleTime when prefetching the specific queries you hydrate—so the client
doesn't immediately refetch Supabase data on first render.
In `@apps/web/app/actions/escrow/save-escrow-contract.ts`:
- Around line 134-172: The current logic reuses the latest contribution
(existingContribution) which can misattribute escrow to donors; update the
check/create flow in save-escrow-contract to either (A) query contributions with
an escrow-specific flag (e.g., .eq('contribution_type','escrow') or
.eq('is_escrow', true) when selecting existingContribution) or (B) always create
a dedicated escrow placeholder instead of reusing the latest donor contribution
by inserting a new row via supabase.from('contributions').insert(...) that
includes a clear escrow identifier (e.g., contribution_type: 'escrow' or
is_escrow: true) so the created record (newContribution) is unambiguously an
escrow record and does not affect donor attribution or reporting.
- Around line 108-133: The code currently falls back to placeholder values
(totalAmount = 1, payerAddress dummy address) which can persist invalid data;
update save-escrow-contract to validate required escrow inputs and fail fast
instead of using placeholders: run validateNotEmpty on engagementId (from
engagementId / projectId), validateAmount on totalAmount (compute from
escrowData.milestones or amount and throw/return error if invalid), and
validateAddress on payerAddress and receiverAddress (derived from
escrowData.roles/milestones); remove the hardcoded fallback values and
return/throw a clear validation error when any check fails (apply checks where
engagementId, totalAmount, payerAddress, receiverAddress, and platformFee are
computed).
- Around line 59-103: The action save-escrow-contract is missing standardized
input validation and auth enforcement; migrate this server action to use
next-safe-action with a Zod (or equivalent) schema that validates
params.projectId, userId, and the escrowData structure (including totalAmount as
a positive number and engagementId as a non-empty string) before any
supabaseServiceRole calls; keep the existing admin bypass and permission logic
(profile lookup, project lookup, project_members check using
supabaseServiceRole) but run it after schema/auth is enforced, and replace
implicit fallbacks (the placeholder totalAmount = 1 and engagementId =
`project-${params.projectId}`) with explicit validation rules and explicit error
responses when those fields are missing or invalid.
In `@apps/web/app/api/contributions/create/route.ts`:
- Around line 166-174: Replace the current console.log telemetry that prints raw
wallet addresses and user identifiers with privacy-safe debug logging: in the
block that reads/sets userStellarAddress (and where devices is inspected and in
the catch), change console.log to the app's debug/logger level and
redact/truncate userStellarAddress and any user IDs before logging (e.g., show
only first 4 and last 4 chars or replace with a hashed token). Ensure the catch
log for errors in the devices lookup also avoids printing full PII and instead
logs a masked address or a non-PII identifier plus the error message.
- Around line 129-131: Add a rate limit check at the start of the POST handler
before triggering the gamification pipeline: implement a per-user or per-IP
limiter (token-bucket or sliding window, or reuse existing app limiter) and
validate the incoming request, returning HTTP 429 when the limit is exceeded;
place this check just before you call gamificationUpdates (so handlers like
gamificationUpdates remain fire-and-forget and non-blocking), and ensure the
limiter uses the request context (user id or IP) and is async-safe so it won't
block the rest of the request flow.
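A minimal in-memory sliding-window limiter illustrating the shape of the check. This is a sketch only: the function name, limit, and window are illustrative, and a production deployment should reuse the app's shared (e.g. Redis-backed) limiter so state survives across serverless instances:

```typescript
type WindowState = { timestamps: number[] };

const windows = new Map<string, WindowState>();

// Returns true when the caller should be rejected with HTTP 429.
const isRateLimited = (
  key: string,            // user id, or IP for unauthenticated requests
  limit: number,          // max requests per window
  windowMs: number,       // window length in milliseconds
  now: number = Date.now(),
): boolean => {
  const state = windows.get(key) ?? { timestamps: [] };
  // Drop entries that have aged out of the window.
  state.timestamps = state.timestamps.filter((t) => now - t < windowMs);
  if (state.timestamps.length >= limit) {
    windows.set(key, state);
    return true;
  }
  state.timestamps.push(now);
  windows.set(key, state);
  return false;
};
```

In the handler this would sit before any expensive work, e.g. `if (isRateLimited(userId, 10, 60_000)) return NextResponse.json({ error: 'Too Many Requests' }, { status: 429 })`.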
- Around line 135-138: Pass the user's Supabase JWT into the background Supabase
client so RLS and auth context persist: when creating the client in the
gamification flow, call createSupabaseServerClient with
session.supabaseAccessToken (use the existing session variable and keep userId
logging intact) so asynchronous work after the response still executes under the
correct user context and avoids empty reads or privilege mismatches.
- Around line 192-202: Update the createMockRequest function signature to
include an explicit return type of NextRequest; change the declaration to
annotate the return type (e.g., const createMockRequest = (body: Record<string,
unknown>): NextRequest => ...) and keep the body creating a new Request cast to
NextRequest so callers relying on Request/NextRequest methods remain compatible.
- Around line 356-360: Replace the any cast by importing the generated user_nfts
type from your centralized Supabase types (e.g. import type { user_nfts } from
'@services/supabase') and use the typed Supabase call instead of (svcClient as
any).from('user_nfts'); for example call svcClient.from<user_nfts>('user_nfts')
or use .returns<user_nfts>() on the query so the query and existingNFT variable
are strongly typed; remove the eslint/type cast lines afterwards.
- Around line 394-398: The current fire-and-forget call to gamificationUpdates()
can be terminated in serverless environments; replace the direct promise call
with Next.js's scheduling API by invoking after(...) to schedule
gamificationUpdates to run after the response completes (keep the existing error
handling inside the scheduled task). Locate the block that currently logs and
calls gamificationUpdates() and change it to schedule gamificationUpdates via
after so the gamification workflow reliably runs post-response while preserving
the console.error handling for any thrown errors.
In `@apps/web/app/api/nfts/[address]/route.ts`:
- Around line 125-128: The sequential per-token ownership loop that builds nfts
and uses maxTokensToCheck is causing slow RPCs; change it to batch-parallel
checks by introducing a BATCH_SIZE constant (e.g., 10), grouping token indices
into batches, invoking the existing per-token ownership RPC call in parallel for
each batch with Promise.allSettled (or Promise.all) and then processing
successful results into nfts; ensure you still cap work at maxTokensToCheck and
handle/retry or skip failed promises so the endpoint remains responsive.
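The batching loop can be sketched as follows; `checkOwnership` is a stand-in for the route's per-token RPC call:

```typescript
const BATCH_SIZE = 10;

// Runs ownership checks in parallel batches; failed checks are skipped
// rather than failing the whole request.
const collectOwned = async (
  maxTokensToCheck: number,
  checkOwnership: (tokenId: number) => Promise<boolean>,
): Promise<number[]> => {
  const owned: number[] = [];
  for (let start = 0; start < maxTokensToCheck; start += BATCH_SIZE) {
    const batch = Array.from(
      { length: Math.min(BATCH_SIZE, maxTokensToCheck - start) },
      (_, i) => start + i,
    );
    const results = await Promise.allSettled(
      batch.map(async (tokenId) => ({ tokenId, isOwner: await checkOwnership(tokenId) })),
    );
    for (const r of results) {
      if (r.status === 'fulfilled' && r.value.isOwner) owned.push(r.value.tokenId);
    }
  }
  return owned;
};
```

One batch is in flight at a time, so the RPC endpoint sees at most `BATCH_SIZE` concurrent requests while total latency drops roughly by that factor.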
- Around line 61-65: The code is using type assertions to any to call private
methods on SmartWalletTransactionService (seen in route.ts where (txService as
any).getFundingAccount() is used); add public methods on
SmartWalletTransactionService such as public async simulateTransaction(tx:
Transaction): Promise<SimulationResult> and public async
getSourceAccountForSimulation(): Promise<Account> (or a public getFundingAccount
wrapper) in smart-wallet-transactions.ts, then replace all five occurrences of
the (txService as any) casts in apps/web/app/api/nfts/[address]/route.ts to call
these new public methods (e.g., txService.getSourceAccountForSimulation() and
txService.simulateTransaction(tx)) to remove any casts and restore proper
typing.
- Around line 21-35: The GET handler currently only checks presence of address
but not its format and lacks rate limiting; update the GET function to validate
the Stellar address format (e.g., verify that params.address is a valid Stellar
public key by pattern, length, and checksum) and return a 400 with NextResponse.json when
invalid, and implement rate limiting at the start of GET (per-IP or per-key) to
reject excessive requests (429) before doing work; reference the GET function,
the params/address extraction, and NextResponse.json for where to add the
validation and rate-limit checks.
In `@apps/web/app/api/nfts/evolve/route.ts`:
- Around line 203-246: The stats block currently sets totalDonations to
contributions?.length but omits the totalAmount expected by buildNFTMetadata;
compute a numeric totalAmount by summing the contributions' amount values
(coerce to Number and handle null/undefined), keep totalDonations as the count
if needed, and include totalAmount in the returned object so buildNFTMetadata
receives the same shape as the mint route (refer to the contributions result,
totalDonations, and the buildNFTMetadata consumer to ensure consistency).
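The summing step could look like this (a sketch; the row shape and helper name are illustrative):

```typescript
type ContributionRow = { amount: number | string | null };

// Coerces each amount defensively: null/undefined/non-numeric rows count as 0.
const summarizeContributions = (contributions: ContributionRow[] | null) => {
  const rows = contributions ?? [];
  const totalAmount = rows.reduce((sum, c) => sum + (Number(c.amount) || 0), 0);
  return { totalDonations: rows.length, totalAmount };
};
```

Returning both fields from one helper keeps the evolve route's stats shape aligned with what the mint route passes to `buildNFTMetadata`.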
- Around line 31-33: The handler currently allows body.user_id to override
session.user.id; change this to enforce self-only by default: derive userId from
session.user.id and only accept body.user_id if the session user has an admin
role (check session.user.role or call Supabase to verify RBAC) before assigning;
additionally validate the incoming payload (required fields, types, allowed
ranges) from req.json() and reject requests with invalid shapes or unauthorized
user_id overrides, and use the same unique symbols (body.user_id,
session.user.id, req.json()) when locating and updating the logic.
- Around line 24-29: This POST handler (exported POST in route.ts) lacks rate
limiting for a public, state-changing API; add a rate limit check at the start
of the POST flow after session validation (or immediately before sensitive
writes) using the project's rate-limiter utility (or implement one if missing)
so repeated calls from the same user/IP are throttled; specifically, call the
rate limiter with the requester identity (session.user.id or req.ip) and handle
rejected requests by returning NextResponse.json({ error: 'Too Many Requests' },
{ status: 429 }) before proceeding with on-chain writes in the POST function.
In `@apps/web/app/api/nfts/mint/route.ts`:
- Around line 96-118: Currently the code generates image/metadata using
tokenId=0 (generateTierSVG(tier, 0) and buildNFTMetadata(tier, 0,...)) which
produces incorrect on‑chain metadata; change the flow to mint first to obtain
the real tokenId (invoke the mint call you already use for onchain minting and
capture the returned tokenId/receipt), then regenerate the SVG with
generateTierSVG(tier, tokenId), upload it via uploadFileToIPFS (update
imageUri/imageIpfsHash), rebuild metadata with buildNFTMetadata(tier, tokenId,
stats, imageUri), upload the metadata to IPFS (metadataIpfsHash) and finally
update the on‑chain token URI (e.g., setTokenURI/updateTokenMetadata using the
captured tokenId) so the chain points to the correct token-specific assets
instead of the placeholder 0.
- Around line 25-30: Add rate limiting to the POST route handler to throttle
requests before proceeding with minting: in the POST(req: NextRequest) function,
perform a rate-limit check (e.g., IP or user-id based) immediately after
extracting the requestor (before or right after getServerSession) and return a
429 NextResponse.json when the limit is exceeded; implement the limiter using
the project's shared middleware/utility (or a Redis-backed token-bucket/Leaky
Bucket) and reference the handler name POST and session check
getServerSession/NextResponse.json so callers are denied before any mint logic
runs.
- Around line 31-35: The endpoint currently allows any authenticated caller to
override user_id and stellar_address via body, so change the logic to only
accept body.user_id and body.stellar_address when the session has an admin role
(e.g., check session.user.role === 'admin' or verify via Supabase RBAC)
otherwise force userId = session.user.id and stellarAddress = null/unchanged;
additionally validate and sanitize the payload after await req.json() (ensure
user_id is the expected type/format, stellar_address matches your address
regex/length rules) and return a 403 when a non-admin attempts to override these
fields or a 400 for invalid input. Use the existing variables body, userId,
session.user.id and stellarAddress to implement these checks and early returns.
- Around line 206-264: Extract getUserStats into a shared helper file and
convert it to a typed const arrow function using RORO (e.g. const getUserStats =
async ({ supabase, userId }: { supabase:
import('@packages/lib/types').TypedSupabaseClient; userId: string; }):
Promise<UserStats> => { ... }), define an explicit UserStats interface for the
return shape, run independent Supabase queries in parallel with Promise.all to
improve performance, and export the helper for reuse from the evolve route;
ensure all type imports use "import type" and the function parameters/return
types are explicitly annotated.
In `@apps/web/app/api/nfts/user/route.ts`:
- Around line 43-47: Extract the numeric multipliers used to compute impactScore
into named constants (e.g., an IMPACT_WEIGHTS object with DONATION,
QUEST_COMPLETION, STREAK_DAY, REFERRAL keys) and update the impactScore
calculation to multiply totals by those constants instead of raw numbers; update
any related tests/comments and ensure the new constants are exported or scoped
appropriately where impactScore is computed (look for the impactScore variable
in route.ts).
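A sketch of the extracted weights (the numeric values here are illustrative, not taken from the route):

```typescript
// Named weights replace the raw multipliers in the impactScore calculation.
const IMPACT_WEIGHTS = {
  DONATION: 10,
  QUEST_COMPLETION: 25,
  STREAK_DAY: 5,
  REFERRAL: 15,
} as const;

const computeImpactScore = (totals: {
  totalDonations: number;
  questsCompleted: number;
  streakDays: number;
  referralCount: number;
}): number =>
  totals.totalDonations * IMPACT_WEIGHTS.DONATION +
  totals.questsCompleted * IMPACT_WEIGHTS.QUEST_COMPLETION +
  totals.streakDays * IMPACT_WEIGHTS.STREAK_DAY +
  totals.referralCount * IMPACT_WEIGHTS.REFERRAL;
```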
- Around line 8-56: Add an explicit return type to the getUserStats function
signature (e.g., Promise<{ impactScore: number; totalDonations: number;
questsCompleted: number; streakDays: number; referralCount: number }>) so
callers and IDEs get correct typings; update the function declaration for
getUserStats to include that Promise-returning type (or create a named
interface/type like UserStats and use Promise<UserStats>), keeping the existing
parameter types and the returned object shape (impactScore, totalDonations,
questsCompleted, streakDays, referralCount).
In `@apps/web/app/api/quests/progress/route.ts`:
- Around line 11-12: Replace the exported async function POST declaration with a
const arrow export and add explicit TypeScript types: change the POST handler to
an exported const (POST) defined as an async arrow function that accepts req:
NextRequest and returns a typed Response (e.g., Promise<NextResponse> or
NextResponse depending on sync/async usage); update the signature and any return
points inside the function accordingly to satisfy type checking while keeping
the same handler logic.
- Around line 68-72: Declare a reusable TypeScript interface for the contract
result shape and use it instead of the inline union type for the variable
contractResult; e.g., create an interface named ContractResult with fields
success: boolean, completed?: boolean, error?: string and update the variable
declaration contractResult: ContractResult | null = null and any other places
that construct or check this object (e.g., in the route handler functions) to
use the new interface for consistent typing.
- Around line 67-141: The contract call block that uses
GamificationContractService.updateQuestProgress should be moved after the
server-side quest validation: check quest.is_active and quest.expires_at (the
existing checks that currently run after the contract call) before any
interaction with the blockchain; if either check fails, return the 400
NextResponse immediately and do not invoke updateQuestProgress or create
GamificationContractService. Ensure you reference the same symbols
(quest.is_active, quest.expires_at,
GamificationContractService.updateQuestProgress,
process.env.SOROBAN_PRIVATE_KEY, and stellarAddress) when relocating the logic
so the contract call only runs for validated, non-expired active quests.
- Around line 13-16: Add per-user rate limiting to this route by creating a
RateLimiter instance and calling its increment method after validating the
session (use getServerSession and nextAuthOption to get session.user.id); check
the returned value (blocked/isBlocked) and return a 429 JSON response when
blocked. Use a stable key like 'quest_progress' when calling
RateLimiter.increment(session.user.id, 'quest_progress') to match other uses
(see auth/confirm/route.ts) and ensure the handler exits early on a blocked
response. Make sure to import RateLimiter and keep the session check (if
(!session?.user?.id) ...) before invoking the limiter.
- Around line 13-16: The handler currently only checks authentication; add an
authorization check to enforce that only service or recorder accounts can call
this endpoint by inspecting session.user.role (or session.user.roles) after
getServerSession(nextAuthOption) and before proceeding; if the role is not
"service" or "recorder" return a NextResponse.json({ error: 'Forbidden' }, {
status: 403 }). Mirror the pattern used in projects/[slug]/members/route.ts (use
the same role names or shared constants if available) and keep the existing
unauthorized (401) check for missing session.user.id.
- Around line 31-35: The current devices query in
apps/web/app/api/quests/progress/route.ts uses the Supabase client directly
instead of DrizzleORM; to fix, update the query to use the Drizzle repository
pattern (as shown in webauthn.database.ts and stellar-passkey.service.ts) by
importing the Drizzle client and querying the devices table through the Drizzle
schema, and before converting quest-related logic to Drizzle add the missing
quest_definitions table to the Drizzle schema file
(packages/drizzle/src/data/schema.ts) so quest_definitions is defined and the
full repository pattern can be applied consistently.
- Around line 18-29: The handler currently destructures quest_id/progress_value
and does a loose undefined check; update the destructuring to use camelCase
(questId, progressValue, userAddress) and validate that questId and
progressValue are numbers before calling the contract service: coerce/parse them
(Number or parseInt), ensure Number.isFinite(value) and value meets expected
range (e.g., non-negative, within max if applicable), and return a 400 JSON
error for invalid inputs; keep using session.user.id for RLS and pass the
validated numeric values to the contract call (the functions/variables to
change: the destructuring in route.ts, validation logic around
questId/progressValue, and the subsequent call that sends these values to the
contract service).
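The coerce-and-validate step might be sketched like this (`parseQuestProgress` is a hypothetical helper name; range limits would come from the quest definition):

```typescript
// Returns camelCase, validated numbers, or null when the body is unusable
// (caller responds with a 400 JSON error).
const parseQuestProgress = (
  body: Record<string, unknown>,
): { questId: number; progressValue: number } | null => {
  const questId = Number(body.quest_id);
  const progressValue = Number(body.progress_value);
  if (!Number.isFinite(questId) || !Number.isInteger(questId) || questId < 0) return null;
  if (!Number.isFinite(progressValue) || progressValue < 0) return null;
  return { questId, progressValue };
};
```

Only the validated numeric values then flow through to the contract call.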
In `@apps/web/app/api/quests/route.ts`:
- Around line 99-108: Extract the JSON body as before but map incoming
snake_case fields to camelCase locals (e.g., map quest_type -> questType,
target_value -> targetValue, reward_points -> rewardPoints, expires_at ->
expiresAt, contract_address -> contractAddress) so internal variables follow
camelCase while still writing the original snake_case names to the DB layer;
update any subsequent references in this handler (route.ts) to use the new
camelCase locals and ensure the DB insert/serialize step continues to send
snake_case keys.
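Destructuring with renames gives camelCase locals in one step (the body values below are illustrative):

```typescript
const body = {
  title: 'First donation',
  quest_type: 'donation',
  target_value: 1,
  reward_points: 50,
  expires_at: '2025-12-31T00:00:00Z',
  contract_address: 'C_PLACEHOLDER', // illustrative placeholder
};

// snake_case → camelCase at the boundary.
const {
  quest_type: questType,
  target_value: targetValue,
  reward_points: rewardPoints,
  expires_at: expiresAt,
  contract_address: contractAddress,
} = body;

// The DB layer still receives snake_case keys.
const insertRow = {
  quest_type: questType,
  target_value: targetValue,
  reward_points: rewardPoints,
  expires_at: expiresAt,
  contract_address: contractAddress,
};
```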
- Around line 170-172: Replace the inline anonymous type for onChainResult with
a named interface to improve reuse and consistency: define an interface (e.g.,
OnChainResult) describing { success: boolean; questId?: number; error?: string }
and update the onChainResult declaration to use that interface; ensure the
interface name and the onChainResult variable are used where this shape appears
so other functions (e.g., any handlers that consume onChainResult) can also
reference the type.
- Around line 78-80: Convert the exported function declarations POST (and
similarly GET) into const arrow exports with explicit TypeScript types: replace
the function POST(req: NextRequest) { ... } with export const POST = async (req:
NextRequest): Promise<Response> => { ... } (and do the same for GET), preserving
existing internal logic (e.g., session retrieval via
getServerSession(nextAuthOption)); this aligns with repo conventions and
improves IDE/type checking.
- Around line 117-125: The current pattern that computes quest_id via the
supabase query grabbing max quest_id and then (max+1) is race-prone; replace it
with an atomic DB-backed generator: update the quest_definitions.quest_id column
to use GENERATED ALWAYS AS IDENTITY or a sequence (DEFAULT
nextval('quest_id_seq')) so inserts get an atomic id, or if you must keep manual
assignment implement a transaction that locks the max row (SELECT quest_id FROM
quest_definitions ORDER BY quest_id DESC LIMIT 1 FOR UPDATE) and perform the
insert inside that transaction or call a Postgres RPC that returns
nextval('quest_id_seq') via supabase before constructing the insert; change the
code around the supabase select(...).order(...).limit(1).single() and the
quest_id assignment to use the chosen atomic mechanism.
- Around line 23-31: The route currently queries Supabase directly using the
service-role client (imported as supabase) against the quest_definitions table;
replace that inline DB call with a DrizzleORM repository function (e.g.,
QuestRepository.getActiveQuests or questsRepo.getAllActive) that encapsulates
the query, error handling, and use of the service-role client so the route
(route.ts) simply imports and calls the repository method to fetch active
quests; implement the repository to use Drizzle's schema for quest_definitions,
return typed results or throw/return errors consistently, and update the route
to handle the repository response instead of using
supabase.from(...).select(...).
- Around line 16-21: Add per-user/IP rate limiting to the GET and POST handlers
by invoking the existing RateLimiter (Upstash Redis-backed) at the start of each
handler (before calling getServerSession and before any service-role client
queries). Use request IP or session.user.id as the key (fall back to IP for
unauthenticated requests), check the limiter quota, and if exceeded return
NextResponse.json({ error: 'Too Many Requests' }, { status: 429 }) early. Ensure
the limiter instance/config used matches the project's RateLimiter API and that
rate checks occur before expensive operations (references: GET, POST,
RateLimiter, getServerSession, nextAuthOption).
- Around line 163-167: The expires_at string is not validated before converting
to a Unix timestamp, allowing invalid dates to produce NaN in expiresAtUnix and
leak to on-chain state; update the API route handling (look for the expires_at
variable and the expiresAtUnix computation in route.ts) to validate the incoming
expires_at value (use the same zod schema pattern used elsewhere: e.g., add a
zod refinement or parse step that requires a valid ISO/date string) and if
invalid return a 4xx error, otherwise safely convert to Unix seconds (ensuring
Math.floor(new Date(...).getTime()/1000) will not produce NaN).
```typescript
// Fetch NFTs owned by this address
// Note: This is inefficient - ideally use an indexer
const nfts = []
const maxTokensToCheck = Math.min(totalSupply, 100) // Limit to 100 for performance
```
🧹 Nitpick | 🔵 Trivial
Performance concern acknowledged—consider parallel fetching.
The sequential loop for ownership checks (up to 100 RPC calls) will be slow. While the comment correctly notes an indexer should be used for production, we could improve the MVP by batching or parallelizing requests:
♻️ Parallel ownership checking example

```typescript
// Batch check ownership in parallel (e.g., 10 at a time)
const BATCH_SIZE = 10;
for (let i = 0; i < maxTokensToCheck; i += BATCH_SIZE) {
  const batch = Array.from(
    { length: Math.min(BATCH_SIZE, maxTokensToCheck - i) },
    (_, idx) => checkTokenOwnership(i + idx)
  );
  const results = await Promise.allSettled(batch);
  // Process results...
}
```

🤖 Prompt for AI Agents
}🤖 Prompt for AI Agents
In `@apps/web/app/api/nfts/[address]/route.ts` around lines 125 - 128, The
sequential per-token ownership loop that builds nfts and uses maxTokensToCheck
is causing slow RPCs; change it to batch-parallel checks by introducing a
BATCH_SIZE constant (e.g., 10), grouping token indices into batches, invoking
the existing per-token ownership RPC call in parallel for each batch with
Promise.allSettled (or Promise.all) and then processing successful results into
nfts; ensure you still cap work at maxTokensToCheck and handle/retry or skip
failed promises so the endpoint remains responsive.
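A self-contained version of the batching sketch above, with a stubbed `checkTokenOwnership` standing in for the real per-token RPC (the stub and `collectOwnedTokens` are illustrative names, not the route's actual helpers):

```typescript
// Stub for the real per-token ownership RPC: resolves to the token id
// when "owned" (even ids here), null otherwise.
async function checkTokenOwnership(tokenId: number): Promise<number | null> {
  return tokenId % 2 === 0 ? tokenId : null
}

const BATCH_SIZE = 10

async function collectOwnedTokens(maxTokensToCheck: number): Promise<number[]> {
  const owned: number[] = []
  for (let i = 0; i < maxTokensToCheck; i += BATCH_SIZE) {
    // Fire off up to BATCH_SIZE ownership checks in parallel.
    const batch = Array.from(
      { length: Math.min(BATCH_SIZE, maxTokensToCheck - i) },
      (_, idx) => checkTokenOwnership(i + idx),
    )
    const results = await Promise.allSettled(batch)
    // Keep successful, owned tokens; skip failed promises so the
    // endpoint stays responsive even when individual RPCs error out.
    for (const r of results) {
      if (r.status === 'fulfilled' && r.value !== null) owned.push(r.value)
    }
  }
  return owned
}
```

`Promise.allSettled` (rather than `Promise.all`) is the key choice here: one failed RPC in a batch no longer rejects the whole batch.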
```typescript
const { data: contributions } = await supabase
  .from('contributions')
  .select('amount')
  .eq('contributor_id', userId)

const totalDonations = contributions?.length ?? 0

const { data: quests } = await supabase
  .from('user_quest_progress')
  .select('id')
  .eq('user_id', userId)
  .eq('is_completed', true)

const questsCompleted = quests?.length ?? 0

const { data: streaks } = await supabase
  .from('user_streaks')
  .select('current_streak')
  .eq('user_id', userId)
  .order('current_streak', { ascending: false })
  .limit(1)

const streakDays = streaks?.[0]?.current_streak ?? 0

const { data: referrals } = await supabase
  .from('referral_records')
  .select('id')
  .eq('referrer_id', userId)

const referralCount = referrals?.length ?? 0

const impactScore =
  totalDonations * 10 +
  questsCompleted * 25 +
  streakDays * 5 +
  referralCount * 15

return {
  impactScore,
  totalDonations,
  questsCompleted,
  streakDays,
  referralCount,
}
```
Good stats aggregation—consider aligning its shape with mint to avoid metadata gaps.
buildNFTMetadata receives stats, but this helper omits totalAmount that the mint route provides. If metadata expects it, values become undefined. Consider adding totalAmount here for consistency.
🤖 Prompt for AI Agents
In `@apps/web/app/api/nfts/evolve/route.ts` around lines 203 - 246, The stats
block currently sets totalDonations to contributions?.length but omits the
totalAmount expected by buildNFTMetadata; compute a numeric totalAmount by
summing the contributions' amount values (coerce to Number and handle
null/undefined), keep totalDonations as the count if needed, and include
totalAmount in the returned object so buildNFTMetadata receives the same shape
as the mint route (refer to the contributions result, totalDonations, and the
buildNFTMetadata consumer to ensure consistency).
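As a sketch of that fix (assuming the `contributions` rows returned by the Supabase query above), the total can be computed defensively so null or non-numeric amounts never produce `NaN`:

```typescript
type ContributionRow = { amount: number | string | null }

// Sum contribution amounts defensively: coerce each value to Number and
// skip null/undefined or non-numeric values instead of propagating NaN.
function sumContributionAmounts(contributions: ContributionRow[] | null): number {
  return (contributions ?? []).reduce((total, row) => {
    const amount = Number(row?.amount)
    return Number.isFinite(amount) ? total + amount : total
  }, 0)
}
```

The stats object can then include `totalAmount` alongside `totalDonations`, matching the shape the mint route passes to `buildNFTMetadata`.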
@coderabbitai, create a new issue for this one to be addressed by the Open Source Drips X Stellar Wave
```typescript
// Generate and upload tier image to IPFS via Pinata
let imageUri = ''
let imageIpfsHash = ''
try {
  const svgContent = generateTierSVG(tier, 0) // Token ID 0 placeholder; will be updated
  const svgBuffer = Buffer.from(svgContent, 'utf-8')
  const uploadResult = await uploadFileToIPFS(
    svgBuffer,
    `kindfi-kinder-${tier}-${userId.slice(0, 8)}.svg`,
    'image/svg+xml',
  )
  imageUri = uploadResult.ipfsUrl
  imageIpfsHash = uploadResult.ipfsHash
  console.log('[NFT Mint] Image uploaded to IPFS:', imageUri)
} catch (err) {
  console.warn('[NFT Mint] Failed to upload image to Pinata, using placeholder:', err)
  imageUri = `https://kindfi.org/images/nft-${tier}.svg`
}

// Upload metadata JSON to IPFS as backup
let metadataIpfsHash = ''
const nftMetadataJSON = buildNFTMetadata(tier, 0, stats, imageUri)
try {
```
Nice IPFS fallback—consider avoiding the tokenId=0 placeholder in metadata/image.
If generateTierSVG or buildNFTMetadata embed tokenId, minting with 0 yields incorrect on‑chain metadata. Consider minting first to obtain the real tokenId, then regenerating metadata + image and updating on‑chain metadata.
🤖 Prompt for AI Agents
In `@apps/web/app/api/nfts/mint/route.ts` around lines 96 - 118, Currently the
code generates image/metadata using tokenId=0 (generateTierSVG(tier, 0) and
buildNFTMetadata(tier, 0,...)) which produces incorrect on‑chain metadata;
change the flow to mint first to obtain the real tokenId (invoke the mint call
you already use for onchain minting and capture the returned tokenId/receipt),
then regenerate the SVG with generateTierSVG(tier, tokenId), upload it via
uploadFileToIPFS (update imageUri/imageIpfsHash), rebuild metadata with
buildNFTMetadata(tier, tokenId, stats, imageUri), upload the metadata to IPFS
(metadataIpfsHash) and finally update the on‑chain token URI (e.g.,
setTokenURI/updateTokenMetadata using the captured tokenId) so the chain points
to the correct token-specific assets instead of the placeholder 0.
@coderabbitai, create a new issue for this one to be addressed by the Open Source Drips X Stellar Wave
```typescript
const impactScore =
  totalDonations * 10 +
  questsCompleted * 25 +
  streakDays * 5 +
  referralCount * 15
```
🧹 Nitpick | 🔵 Trivial
Consider extracting impact score weights to constants.
The multipliers (10, 25, 5, 15) determine how activities translate to impact score. Extracting these as named constants would make the scoring logic more transparent and easier to tune:
♻️ Suggested constants

```typescript
const IMPACT_WEIGHTS = {
  DONATION: 10,
  QUEST_COMPLETION: 25,
  STREAK_DAY: 5,
  REFERRAL: 15,
} as const;

const impactScore =
  totalDonations * IMPACT_WEIGHTS.DONATION +
  questsCompleted * IMPACT_WEIGHTS.QUEST_COMPLETION +
  streakDays * IMPACT_WEIGHTS.STREAK_DAY +
  referralCount * IMPACT_WEIGHTS.REFERRAL;
```

🤖 Prompt for AI Agents
In `@apps/web/app/api/nfts/user/route.ts` around lines 43 - 47, Extract the
numeric multipliers used to compute impactScore into named constants (e.g., an
IMPACT_WEIGHTS object with DONATION, QUEST_COMPLETION, STREAK_DAY, REFERRAL
keys) and update the impactScore calculation to multiply totals by those
constants instead of raw numbers; update any related tests/comments and ensure
the new constants are exported or scoped appropriately where impactScore is
computed (look for the impactScore variable in route.ts).
@coderabbitai, create a new issue for this one to be addressed by the Open Source Drips X Stellar Wave
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Create GitHub issue for extracting impact score weights to constants
gh issue create \
  --repo kindfi-org/kindfi \
  --title "Refactor: Extract impact score weights to named constants in NFT user route" \
  --body "## Description
In the NFT user route handler, the impact score calculation currently uses hardcoded numeric multipliers (10, 25, 5, 15) which makes the scoring logic less transparent and harder to maintain.

## Location
File: \`apps/web/app/api/nfts/user/route.ts\` (lines 43-47)

## Request
Extract these magic numbers into named constants to improve code readability and make the scoring weights easier to adjust in the future.

## Context
This issue was identified during code review and should be addressed as part of the **Open Source Drips X Stellar Wave** initiative.

## References
- PR: https://github.com/kindfi-org/kindfi/pull/784
- Review Comment: https://github.com/kindfi-org/kindfi/pull/784#discussion_r2785855684
- Requested by: \`@Bran18\`" \
  --assignee Bran18

echo "Issue created successfully!"
```
Length of output: 75
@Bran18, I've created issue #786 to track the refactoring of impact score weights into named constants. The issue has been assigned to you and includes references to this PR and review comment for context.
````markdown
### Test Organization

```
project/
├── src/
│   └── lib.rs
├── tests/
│   ├── common/
│   │   └── mod.rs        # Shared test utilities
│   ├── unit/
│   │   ├── mod.rs
│   │   └── transfer.rs
│   └── integration/
│       └── full_flow.rs
└── Cargo.toml
```
````
Nice directory layout example—consider adding a language tag to the fence.
Adding `text` keeps markdownlint happy and improves readability.
📝 Suggested tweak

````diff
-```
+```text
 project/
 ├── src/
 │   └── lib.rs
````
🧰 Tools
🪛 markdownlint-cli2 (0.20.0)
[warning] 518-518: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In @.agents/skills/stellar-dev/testing.md around lines 516 - 531, Update the
fenced code block under the "Test Organization" header so the opening fence
includes the language tag `text` (change the leading ``` to ```text) to satisfy
markdownlint and improve readability; locate the tree diagram block (the
triple-backtick fence surrounding the project/ ├── src/ ... Cargo.toml lines)
and add the `text` tag to the opening fence only.
#### Multi Release Milestone Payload

```typescript
export type MultiReleaseMilestonePayload = {
  /**
   * Text describing the function of the milestone
   */
  description: string;
  /**
   * Amount to be transferred upon completion of this milestone
   */
  amount: number;
  /**
   * Address where milestone proceeds will be sent to
   */
  receiver: string;
};
```
Nice structure—consider using “Multi‑Release” consistently.
The headings read cleaner and align with the other doc naming when hyphenated (e.g., “Multi‑Release”).
✏️ Suggested heading tweak

```diff
-#### Multi Release Milestone Payload
+#### Multi-Release Milestone Payload
```

🧰 Tools
🪛 LanguageTool
[grammar] ~20-~20: Use a hyphen to join words.
Context: ... description: string; }; #### Multi Release Milestone Payload typescript...
(QB_NEW_EN_HYPHEN)
🤖 Prompt for AI Agents
In @.agents/skills/trustless-work-dev/skills/api/types.md around lines 20 - 38,
Update the markdown heading "Multi Release Milestone Payload" to use a
hyphenized form "Multi-Release Milestone Payload" for consistency with other doc
headings; leave the TypeScript type name MultiReleaseMilestonePayload unchanged
and ensure any cross-references or anchors that rely on the heading are adjusted
if necessary.
```typescript
export type SingleReleaseResolveDisputePayload = {
  /**
   * ID (address) that identifies the escrow contract
   */
  contractId: string;

  /**
   * Address in charge of resolving disputes within the escrow.
   */
  disputeResolver: string;

  /**
   * Distributions of the escrow amount to the receivers.
   */
  distributions: [
    {
      /**
       * Address of the receiver
       */
      address: string;
      /**
       * Amount to be transferred to the receiver. All the amount must be equal to the total amount of the escrow.
       */
      amount: number;
    },
  ];
};
```

#### Multi Release Resolve Dispute Payload

```typescript
export type MultiReleaseResolveDisputePayload = {
  /**
   * ID (address) that identifies the escrow contract
   */
  contractId: string;

  /**
   * Address in charge of resolving disputes within the escrow.
   */
  disputeResolver: string;

  /**
   * Distributions of the escrow amount to the receivers.
   */
  distributions: [
    {
      address: string;
      amount: number;
    },
  ];

  /**
   * Index of the milestone to be resolved
   */
  milestoneIndex: string;
};
```

### Withdraw Remaining Funds Payload

```typescript
export type WithdrawRemainingFundsPayload = {
  /**
   * ID (address) that identifies the escrow contract
   */
  contractId: string;

  /**
   * Address in charge of resolving disputes within the escrow.
   */
  disputeResolver: string;

  /**
   * Distributions of the escrow amount to the receivers.
   */
  distributions: [
    {
      address: string;
      amount: number;
    },
  ];
};
```
🧩 Analysis chain
🏁 Script executed:
```shell
# First, let's check if the file exists and get its size
ls -la .agents/skills/trustless-work-dev/skills/api/types.md 2>&1 || echo "File not found"
```

Repository: kindfi-org/kindfi
Length of output: 164

🏁 Script executed:

```shell
# Get the total lines in the file
wc -l .agents/skills/trustless-work-dev/skills/api/types.md 2>&1 || echo "Cannot count lines"
```

Repository: kindfi-org/kindfi
Length of output: 119

🏁 Script executed:

```shell
# Read the specific lines mentioned in the review (579-663)
sed -n '579,663p' .agents/skills/trustless-work-dev/skills/api/types.md 2>&1 | head -100
```

Repository: kindfi-org/kindfi
Length of output: 1647

🏁 Script executed:

```shell
# Search for other distribution fields in the file
grep -n "distributions:" .agents/skills/trustless-work-dev/skills/api/types.md
```

Repository: kindfi-org/kindfi
Length of output: 129

🏁 Script executed:

```shell
# Count total instances of the tuple pattern in the file
grep -n "\[" .agents/skills/trustless-work-dev/skills/api/types.md | grep -A 2 "distributions" | head -20
```

Repository: kindfi-org/kindfi
Length of output: 185

🏁 Script executed:

```shell
# Get a broader view - search for all type definitions with distributions
rg -A 8 "distributions:" .agents/skills/trustless-work-dev/skills/api/types.md | head -80
```

Repository: kindfi-org/kindfi
Length of output: 526
Consider refactoring distributions as an array type for better API semantics.
The `distributions` field is currently defined as a fixed-length tuple (`[ { ... } ]`), which in TypeScript represents a tuple with exactly one element. Since the field name is plural and the API context suggests supporting multiple distributions, this should be an array type instead. Changing to `{ address: string; amount: number }[]` will correctly express that any number of distributions can be provided, improving clarity for API consumers.
This pattern appears in three types: `SingleReleaseResolveDisputePayload`, `MultiReleaseResolveDisputePayload`, and `WithdrawRemainingFundsPayload`. All three would benefit from this consistency update.
✅ Suggested type correction

```diff
-  distributions: [
-    {
-      address: string;
-      amount: number;
-    },
-  ];
+  distributions: {
+    address: string;
+    amount: number;
+  }[];
```
🧰 Tools
🪛 LanguageTool
[grammar] ~608-~608: Use a hyphen to join words.
Context: ...: number; }, ]; }; #### Multi Release Resolve Dispute Payload type...
(QB_NEW_EN_HYPHEN)
🤖 Prompt for AI Agents
In @.agents/skills/trustless-work-dev/skills/api/types.md around lines 579 -
663, The distributions field is currently typed as a single-element tuple which
forces exactly one item; update the distributions property on
SingleReleaseResolveDisputePayload, MultiReleaseResolveDisputePayload, and
WithdrawRemainingFundsPayload to be an array type (i.e., use a repeated item
type for the object with address and amount) so callers can supply any number of
distribution objects; locate and change the distributions declarations inside
those three types to the array form for consistency and correct semantics.
```text
You are an expert Next.js + Trustless Work Escrow Blocks engineer.
Follow these rules:
- Use files and paths exactly as provided here.
- Keep provider order identical to the guide.
- Prefer code that compiles with no TODOs.
- When an escrow type is single vs multi, import the matching actions.
```
🧹 Nitpick | 🔵 Trivial
Consider adding language specifiers to fenced code blocks.
Static analysis (MD040) flags that code blocks at lines 237, 248, and 263 should have a language specified. These appear to be prompt templates, which could use `text` or `markdown` as the language.
♻️ Proposed fix

````diff
-```
+```text
 You are an expert Next.js + Trustless Work Escrow Blocks engineer.
````

Also applies to: 248-259, 263-273
🧰 Tools
🪛 markdownlint-cli2 (0.20.0)
[warning] 237-237: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In @.agents/skills/trustless-work-dev/skills/blocks/vibe-coding.md around lines
237 - 244, Add a language specifier to the fenced code blocks in the markdown
(the prompt templates currently starting at lines 237, 248, and 263) to satisfy
MD040; change the opening backticks from ``` to something like ```text or
```markdown for each of those blocks so static analysis recognizes the
language—ensure you update all three occurrences (the blocks beginning at 237,
248, and 263) and keep the block contents unchanged.
```typescript
export async function POST(req: NextRequest) {
  try {
    const session = await getServerSession(nextAuthOption)
```
🧹 Nitpick | 🔵 Trivial
Handler logic is solid; consider const arrow exports with explicit types for TS consistency.
This aligns with repo conventions and improves IDE tooling (apply similarly to GET).
Example refactor

```diff
-export async function POST(req: NextRequest) {
+export const POST = async (req: NextRequest): Promise<NextResponse> => {
   // ...
-}
+}
```

🤖 Prompt for AI Agents
In `@apps/web/app/api/quests/route.ts` around lines 78 - 80, Convert the exported
function declarations POST (and similarly GET) into const arrow exports with
explicit TypeScript types: replace the function POST(req: NextRequest) { ... }
with export const POST = async (req: NextRequest): Promise<Response> => { ... }
(and do the same for GET), preserving existing internal logic (e.g., session
retrieval via getServerSession(nextAuthOption)); this aligns with repo
conventions and improves IDE/type checking.
```typescript
const body = await req.json()
const {
  quest_type,
  name,
  description,
  target_value,
  reward_points,
  expires_at,
  contract_address,
} = body
```
🧹 Nitpick | 🔵 Trivial
Clear payload extraction; consider mapping request fields to camelCase locals.
This keeps internal variables idiomatic while still persisting snake_case to the DB boundary.
As per coding guidelines 'Use camelCase naming convention for variables, method parameters, and properties (e.g., rpcUrl, networkPassphrase, contractId)'.
🤖 Prompt for AI Agents
In `@apps/web/app/api/quests/route.ts` around lines 99 - 108, Extract the JSON
body as before but map incoming snake_case fields to camelCase locals (e.g., map
quest_type -> questType, target_value -> targetValue, reward_points ->
rewardPoints, expires_at -> expiresAt, contract_address -> contractAddress) so
internal variables follow camelCase while still writing the original snake_case
names to the DB layer; update any subsequent references in this handler
(route.ts) to use the new camelCase locals and ensure the DB insert/serialize
step continues to send snake_case keys.
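The rename can happen directly in the destructuring; a minimal sketch using the handler's field names (the wrapper function and sample values are illustrative):

```typescript
// Map snake_case request fields to camelCase locals in one step,
// while keeping the DB boundary on snake_case keys.
function mapQuestPayload(body: {
  quest_type: string
  name: string
  description: string
  target_value: number
  reward_points: number
  expires_at: string
  contract_address: string
}) {
  const {
    quest_type: questType,
    name,
    description,
    target_value: targetValue,
    reward_points: rewardPoints,
    expires_at: expiresAt,
    contract_address: contractAddress,
  } = body

  // Row shape for the insert: snake_case keys, camelCase sources.
  const insertRow = {
    quest_type: questType,
    name,
    description,
    target_value: targetValue,
    reward_points: rewardPoints,
    expires_at: expiresAt,
    contract_address: contractAddress,
  }

  return { questType, targetValue, rewardPoints, expiresAt, contractAddress, insertRow }
}
```

Destructuring with rename keeps the mapping in one place, so no separate conversion helper is needed.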
```typescript
// Get next quest_id (simple increment from max)
const { data: maxQuest } = await supabase
  .from('quest_definitions')
  .select('quest_id')
  .order('quest_id', { ascending: false })
  .limit(1)
  .single()

const quest_id = (maxQuest?.quest_id || 0) + 1
```
🧩 Analysis chain
🏁 Script executed:
```shell
cat -n apps/web/app/api/quests/route.ts
```

Repository: kindfi-org/kindfi
Length of output: 8512

🏁 Script executed:

```shell
find . -type f \( -name "*.sql" -o -name "*migration*" -o -name "*schema*" \) | head -20
```

Repository: kindfi-org/kindfi
Length of output: 1380

🏁 Script executed:

```shell
# Search for quest_definitions table definition
git ls-files | xargs grep -l "quest_definitions" | head -10
```

Repository: kindfi-org/kindfi
Length of output: 1747

🏁 Script executed:

```shell
cat services/supabase/migrations/20260207000000_create_quest_streak_referral_tables.sql
```

Repository: kindfi-org/kindfi
Length of output: 8202
Consider implementing atomic quest_id generation to prevent race conditions.
The current max + 1 approach is vulnerable to concurrent requests, which can result in duplicate ID attempts and on-chain mismatches. Two simultaneous admin requests would fetch the same maximum quest_id, calculate the same new ID, and cause the second insert to fail with a UNIQUE constraint violation and a 500 error to the user.
We could enhance this by using PostgreSQL's built-in mechanisms:
- Use `GENERATED ALWAYS AS IDENTITY` on the `quest_id` column for automatic, atomic incrementation
- Alternatively, create a `SEQUENCE` and reference it via `DEFAULT nextval('quest_id_seq')`
- If manual control is needed, wrap the lookup and insert in a transaction with `SELECT ... FOR UPDATE` locking to serialize concurrent attempts
This ensures consistent quest_id generation regardless of concurrent admin activity.
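A sketch of the first option as a follow-up migration (assuming `quest_definitions.quest_id` is currently a plain integer column; not verified against the actual migration):

```sql
-- Hypothetical follow-up migration: make quest_id an identity column so
-- inserts receive an atomic, sequence-backed id.
ALTER TABLE quest_definitions
  ALTER COLUMN quest_id ADD GENERATED ALWAYS AS IDENTITY;

-- Align the identity sequence with existing rows so new ids don't collide.
SELECT setval(
  pg_get_serial_sequence('quest_definitions', 'quest_id'),
  COALESCE((SELECT MAX(quest_id) FROM quest_definitions), 0) + 1,
  false
);
```

With this in place, the route can drop the `select ... order ... limit(1)` lookup entirely and let the insert return the generated id.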
🤖 Prompt for AI Agents
In `@apps/web/app/api/quests/route.ts` around lines 117 - 125, The current pattern
that computes quest_id via the supabase query grabbing max quest_id and then
(max+1) is race-prone; replace it with an atomic DB-backed generator: update the
quest_definitions.quest_id column to use GENERATED ALWAYS AS IDENTITY or a
sequence (DEFAULT nextval('quest_id_seq')) so inserts get an atomic id, or if
you must keep manual assignment implement a transaction that locks the max row
(SELECT quest_id FROM quest_definitions ORDER BY quest_id DESC LIMIT 1 FOR
UPDATE) and perform the insert inside that transaction or call a Postgres RPC
that returns nextval('quest_id_seq') via supabase before constructing the
insert; change the code around the supabase
select(...).order(...).limit(1).single() and the quest_id assignment to use the
chosen atomic mechanism.
```typescript
// Convert expires_at to Unix timestamp (0 for no expiration)
const expiresAtUnix =
  expires_at && expires_at !== ''
    ? Math.floor(new Date(expires_at).getTime() / 1000)
    : 0
```
🧩 Analysis chain
🏁 Script executed:
```shell
# First, let's read the file and see the context around lines 163-167
wc -l apps/web/app/api/quests/route.ts
```

Repository: kindfi-org/kindfi
Length of output: 97

🏁 Script executed:

```shell
# Read the file to see the exact code and surrounding context
cat -n apps/web/app/api/quests/route.ts | sed -n '150,180p'
```

Repository: kindfi-org/kindfi
Length of output: 1286

🏁 Script executed:

```shell
# Search for how expiresAtUnix is used throughout the file
rg -n "expiresAtUnix" apps/web/app/api/quests/route.ts -A 3 -B 1
```

Repository: kindfi-org/kindfi
Length of output: 406

🏁 Script executed:

```shell
# Check if there's any input validation on expires_at earlier in the handler
rg -n "expires_at" apps/web/app/api/quests/route.ts | head -20
```

Repository: kindfi-org/kindfi
Length of output: 286

🏁 Script executed:

```shell
# Look at the beginning of the file to see if there's request validation or schema
cat -n apps/web/app/api/quests/route.ts | head -140
```

Repository: kindfi-org/kindfi
Length of output: 4832

🏁 Script executed:

```shell
# Check if there's any zod/validation schema for the request body
rg -n "schema|Schema|validate|Validate|zod" apps/web/app/api/quests/route.ts | head -20
```

Repository: kindfi-org/kindfi
Length of output: 43

🏁 Script executed:

```shell
# Look at the full context around the POST handler to see request parsing
cat -n apps/web/app/api/quests/route.ts | sed -n '1,130p'
```

Repository: kindfi-org/kindfi
Length of output: 4538

🏁 Script executed:

```shell
# Look further to see the complete on-chain sync code and how expiresAtUnix is used
cat -n apps/web/app/api/quests/route.ts | sed -n '163,210p'
```

Repository: kindfi-org/kindfi
Length of output: 1940

🏁 Script executed:

```shell
# Check if there's any schema validation library being used in the codebase
rg -l "zod|schema" apps/web/app/api/ | head -10
```

Repository: kindfi-org/kindfi
Length of output: 225

🏁 Script executed:

```shell
# Look for any utility functions or validators for date handling
rg -n "isValidDate|validateDate|parseDate" apps/web/app/api/ -r "files"
```

Repository: kindfi-org/kindfi
Length of output: 43

🏁 Script executed:

```shell
# Let's check the comments validation to see the pattern used in this codebase
cat apps/web/app/api/comments/validation.ts
```

Repository: kindfi-org/kindfi
Length of output: 4313

🏁 Script executed:

```shell
# Check if any of the other API routes validate dates
rg -n "expires_at|Date|validation" apps/web/app/api/waitlist/route.ts -A 2 -B 2
```

Repository: kindfi-org/kindfi
Length of output: 43

🏁 Script executed:

```shell
# Test what Math.floor(NaN) actually returns to verify the concern
node -e "console.log('Math.floor(NaN):', Math.floor(NaN)); console.log('new Date(\"invalid\").getTime():', new Date('invalid').getTime()); console.log('Math.floor(new Date(\"invalid\").getTime() / 1000):', Math.floor(new Date('invalid').getTime() / 1000))"
```

Repository: kindfi-org/kindfi
Length of output: 170
Validate expires_at format to prevent NaN propagation to on-chain sync.
An invalid expires_at string produces NaN when converted, which then leaks into the on-chain contract state. This violates the guideline to validate all incoming API requests. Consider adding explicit date validation before conversion, consistent with the zod patterns used elsewhere in your API routes.
Suggested validation

```diff
-  // Convert expires_at to Unix timestamp (0 for no expiration)
-  const expiresAtUnix =
-    expires_at && expires_at !== ''
-      ? Math.floor(new Date(expires_at).getTime() / 1000)
-      : 0
+  // Convert expires_at to Unix timestamp (0 for no expiration)
+  const expiresAtDate = expires_at ? new Date(expires_at) : null
+  if (expiresAtDate && Number.isNaN(expiresAtDate.getTime())) {
+    return NextResponse.json(
+      { error: 'Invalid expires_at' },
+      { status: 400 },
+    )
+  }
+  const expiresAtUnix = expiresAtDate
+    ? Math.floor(expiresAtDate.getTime() / 1000)
+    : 0
```
🤖 Prompt for AI Agents
In `@apps/web/app/api/quests/route.ts` around lines 163-167: the expires_at
string is not validated before converting to a Unix timestamp, allowing invalid
dates to produce NaN in expiresAtUnix and leak to on-chain state; update the API
route handling (look for the expires_at variable and the expiresAtUnix
computation in route.ts) to validate the incoming expires_at value (use the same
zod schema pattern used elsewhere: e.g., add a zod refinement or parse step that
requires a valid ISO/date string) and if invalid return a 4xx error, otherwise
safely convert to Unix seconds (ensuring Math.floor(new
Date(...).getTime()/1000) will not produce NaN).
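For reference, the suggested guard behaves like this minimal standalone sketch (the `toExpiresAtUnix` helper name is illustrative, not part of the actual route):

```typescript
// Standalone sketch of the suggested guard: convert an optional expires_at
// string to Unix seconds, returning null for invalid dates instead of
// letting NaN propagate into on-chain state.
function toExpiresAtUnix(expiresAt: string | null | undefined): number | null {
	if (!expiresAt) return 0 // no expiration
	const date = new Date(expiresAt)
	if (Number.isNaN(date.getTime())) return null // caller should respond with a 400
	return Math.floor(date.getTime() / 1000)
}

console.log(toExpiresAtUnix('2025-01-01T00:00:00Z')) // 1735689600
console.log(toExpiresAtUnix('')) // 0
console.log(toExpiresAtUnix('not-a-date')) // null
```

Without the `null` branch, `Math.floor(new Date('not-a-date').getTime() / 1000)` evaluates to `NaN`, which is exactly what the script execution above confirmed.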
```typescript
let onChainResult: { success: boolean; questId?: number; error?: string } | null =
	null
const questContractAddress =
```
🧹 Nitpick | 🔵 Trivial
On-chain response handling is clear; consider an interface for the result shape.
A named interface improves reuse and keeps object typing consistent.
Example interface

```diff
+interface OnChainResult {
+	success: boolean
+	questId?: number
+	error?: string
+}

-let onChainResult: { success: boolean; questId?: number; error?: string } | null = null
+let onChainResult: OnChainResult | null = null
```

🤖 Prompt for AI Agents
In `@apps/web/app/api/quests/route.ts` around lines 170-172: replace the inline
anonymous type for onChainResult with a named interface to improve reuse and
consistency: define an interface (e.g., OnChainResult) describing { success:
boolean; questId?: number; error?: string } and update the onChainResult
declaration to use that interface; ensure the interface name and the
onChainResult variable are used where this shape appears so other functions
(e.g., any handlers that consume onChainResult) can also reference the type.
Single-file AI context guide for Trustless Work Escrow Blocks with Next.js. Built for agent workflows.

{% hint style="info" %}
Export this page to PDF or Markdown and feed it to your copilot (Cursor, Claude, etc).
🧹 Nitpick | 🔵 Trivial
Consider adding a period after "etc" for American English consistency.
Static analysis suggests "etc." requires a period in American English.
♻️ Proposed fix

```diff
-Export this page to PDF or Markdown and feed it to your copilot (Cursor, Claude, etc).
+Export this page to PDF or Markdown and feed it to your copilot (Cursor, Claude, etc.).
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```markdown
Export this page to PDF or Markdown and feed it to your copilot (Cursor, Claude, etc.).
```
🧰 Tools
🪛 LanguageTool
[style] ~6-~6: In American English, abbreviations like “etc.” require a period.
Context: ...eed it to your copilot (Cursor, Claude, etc). {% endhint %} ## What This File Is ...
(ETC_PERIOD)
🤖 Prompt for AI Agents
In @.agents/skills/trustless-work-dev/skills/blocks/vibe-coding.md at line 6:
the sentence "Export this page to PDF or Markdown and feed it to your copilot
(Cursor, Claude, etc)" is missing the period after "etc" for American English;
update that exact line in
.agents/skills/trustless-work-dev/skills/blocks/vibe-coding.md to read "...
(Cursor, Claude, etc.)" so the parenthetical ends with "etc." and adjust spacing
if needed.
```typescript
import {
  useInitializeEscrow,
  useSendTransaction,
} from "@trustless-work/escrow/hooks";
import {
  InitializeMultiReleaseEscrowPayload,
  InitializeSingleReleaseEscrowPayload
} from "@trustless-work/escrow/types";

export const useInitializeEscrowForm = () => {
  const { deployEscrow } = useInitializeEscrow();
  const { sendTransaction } = useSendTransaction();

  const onSubmit = async (
    payload: InitializeSingleReleaseEscrowPayload | InitializeMultiReleaseEscrowPayload
  ) => {
    try {
      // Step 1: Get unsigned transaction
      const { unsignedTransaction } = await deployEscrow(
        payload,
        "multi-release" // or "single-release"
      );

      if (!unsignedTransaction) {
        throw new Error("Unsigned transaction is missing from deployEscrow response.");
      }

      // Step 2: Sign transaction
      const signedXdr = await signTransaction({
        unsignedTransaction,
        address: walletAddress || "",
      });

      if (!signedXdr) {
        throw new Error("Signed transaction is missing.");
      }

      // Step 3: Send transaction
      const data = await sendTransaction(signedXdr);

      if (data.status === "SUCCESS") {
        toast.success("Escrow Created");
        // data contains: { status, message, contractId, escrow }
      }
    } catch (error: unknown) {
      // Handle error
    }
  };
};
```
🧹 Nitpick | 🔵 Trivial
Consider adding missing imports and variable definitions for completeness.
The complete example demonstrates the 3-step pattern nicely. However, we could enhance this by including the missing signTransaction function import and walletAddress definition to make the example fully self-contained and copy-paste ready:
- `signTransaction` is used at line 65 but not imported
- `walletAddress` is referenced at line 67 but not defined
📝 Suggested enhancement

```diff
 import {
   useInitializeEscrow,
   useSendTransaction,
 } from "@trustless-work/escrow/hooks";
 import {
   InitializeMultiReleaseEscrowPayload,
   InitializeSingleReleaseEscrowPayload
 } from "@trustless-work/escrow/types";
+import { useWallet, signTransaction } from "@trustless-work/wallet"; // or appropriate wallet hook

 export const useInitializeEscrowForm = () => {
   const { deployEscrow } = useInitializeEscrow();
   const { sendTransaction } = useSendTransaction();
+  const { walletAddress } = useWallet(); // or however wallet context is accessed
```

This pattern repeats across other complete examples (useFundEscrow, useChangeMilestoneStatus, etc.), so consider adding a "Prerequisites" section at the top documenting the required wallet setup.
🤖 Prompt for AI Agents
In @.agents/skills/trustless-work-dev/skills/react-sdk/hooks-reference.md around
lines 36-86: the example hook useInitializeEscrowForm is missing the
signTransaction import and a walletAddress definition, causing undefined
references; update the example to import signTransaction (from the wallet or SDK
module that provides it) alongside useInitializeEscrow and useSendTransaction,
and define or accept walletAddress (e.g., from props, context, or a useWallet
hook) before using it in the onSubmit flow so signTransaction and walletAddress
are defined when calling signTransaction({ unsignedTransaction, address:
walletAddress || "" }); also consider adding a short "Prerequisites" note at the
top listing required imports/hooks (signTransaction and wallet access) so other
examples can follow the same pattern.
```typescript
import {
  useFundEscrow,
  useSendTransaction,
} from "@trustless-work/escrow/hooks";
import { FundEscrowPayload } from "@trustless-work/escrow/types";

export const useSomeEndpointForm = () => {
  const { someFunction } = useSomeEndpoint();
  const { sendTransaction } = useSendTransaction();

  const onSubmit = async (payload: SomeEndpointPayload) => {
    try {
      // Step 1: Get unsigned transaction
      const { unsignedTransaction } = await someFunction(payload, "multi-release");

      // Step 2: Sign transaction
      const signedXdr = await signTransaction({
        unsignedTransaction,
        address: walletAddress || "",
      });

      if (!signedXdr) {
        throw new Error("Signed transaction is missing.");
      }

      // Step 3: Send transaction
      const data = await sendTransaction(signedXdr);

      // Handle response
      if (data.status === "SUCCESS") {
        toast.success("Operation successful");
      }
    } catch (error: unknown) {
      // Handle error
    }
  };
};
```
Correct the import mismatch in the useSendTransaction complete example.
There's a small inconsistency here: the imports include useFundEscrow and FundEscrowPayload, but the example code uses the generic useSomeEndpoint and SomeEndpointPayload. We could enhance clarity by either:
- making the example fully generic (removing the specific imports), or
- using the actual `useFundEscrow` throughout.
📝 Option 1: Make fully generic

```diff
-import {
-  useFundEscrow,
-  useSendTransaction,
-} from "@trustless-work/escrow/hooks";
-import { FundEscrowPayload } from "@trustless-work/escrow/types";
+import {
+  useSomeEndpoint,
+  useSendTransaction,
+} from "@trustless-work/escrow/hooks";
+import { SomeEndpointPayload } from "@trustless-work/escrow/types";
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
import {
  useSomeEndpoint,
  useSendTransaction,
} from "@trustless-work/escrow/hooks";
import { SomeEndpointPayload } from "@trustless-work/escrow/types";

export const useSomeEndpointForm = () => {
  const { someFunction } = useSomeEndpoint();
  const { sendTransaction } = useSendTransaction();

  const onSubmit = async (payload: SomeEndpointPayload) => {
    try {
      // Step 1: Get unsigned transaction
      const { unsignedTransaction } = await someFunction(payload, "multi-release");

      // Step 2: Sign transaction
      const signedXdr = await signTransaction({
        unsignedTransaction,
        address: walletAddress || "",
      });

      if (!signedXdr) {
        throw new Error("Signed transaction is missing.");
      }

      // Step 3: Send transaction
      const data = await sendTransaction(signedXdr);

      // Handle response
      if (data.status === "SUCCESS") {
        toast.success("Operation successful");
      }
    } catch (error: unknown) {
      // Handle error
    }
  };
};
```
🤖 Prompt for AI Agents
In @.agents/skills/trustless-work-dev/skills/react-sdk/hooks-reference.md around
lines 726-763: the example imports useFundEscrow and FundEscrowPayload but the
body uses generic symbols useSomeEndpoint and SomeEndpointPayload, causing a
mismatch; update the example to be consistent by either replacing the imports
with generic ones (remove useFundEscrow and FundEscrowPayload from imports and
import/use generic hooks/types that match useSomeEndpoint/SomeEndpointPayload)
or change the example body to use useFundEscrow and FundEscrowPayload (replace
useSomeEndpoint -> useFundEscrow and SomeEndpointPayload -> FundEscrowPayload),
keeping other referenced symbols (useSendTransaction, signTransaction,
sendTransaction, walletAddress) unchanged so the example compiles and reads
correctly.
**Import:**

```tsx
import { useInitializeEscrow } from '@trustless-work/escrow/hooks';
import {
  InitializeSingleReleaseEscrowPayload,
  InitializeMultiReleaseEscrowPayload
} from '@trustless-work/escrow/types';
```
🧹 Nitpick | 🔵 Trivial
Consider adding blank lines around fenced code blocks for better markdown compliance.
Static analysis flags MD031 (blanks-around-fences) warnings throughout. Adding a blank line before and after each code block improves readability and ensures consistent rendering across markdown parsers.
♻️ Example fix pattern

````diff
 **Import:**
+
 ```tsx
 import { useInitializeEscrow } from '@trustless-work/escrow/hooks';
 ...
 ```
+
 **Usage:**
+
 ```tsx
 const { initializeEscrow, isPending, isError, isSuccess } = useInitializeEscrow();
 ...
 ```
````
🧰 Tools
🪛 markdownlint-cli2 (0.20.0)
[warning] 91-91: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
🤖 Prompt for AI Agents
In @.agents/skills/trustless-work-dev/skills/react-sdk/react-sdk.md around lines
89-97: the markdown has fenced code blocks without blank lines around them,
causing MD031 warnings; update the file to insert a blank line immediately
before and after each fenced block (e.g., around the import block and the Usage
block) so that code sections like the import of useInitializeEscrow and the
declarations of InitializeSingleReleaseEscrowPayload /
InitializeMultiReleaseEscrowPayload are surrounded by a blank line above and
below each ```tsx fence.
```rust
#[only_role(caller, "admin")]
pub fn create_quest(
    e: &Env,
    caller: Address,
    quest_type: QuestType,
    name: String,
    description: String,
    target_value: u32,
    reward_points: u32,
    expires_at: u64,
) -> u32 {
    if target_value == 0 {
        panic_with_error!(e, Error::InvalidQuestParams);
    }

    let quest_id = get_and_increment_quest_id(e);

    let quest = QuestDefinition {
        quest_id,
        quest_type,
        name: name.clone(),
        description,
        target_value,
        reward_points,
        expires_at,
        is_active: true,
    };

    set_quest_definition(e, &quest);

    QuestCreatedEvent {
        quest_id,
        quest_type,
        name,
        reward_points,
    }
    .publish(e);

    Self::extend_instance_ttl(e);
    quest_id
}
```
🧹 Nitpick | 🔵 Trivial
Consider validating `reward_points` in quest creation.
The function validates that `target_value > 0` but doesn't validate `reward_points`. A quest with `reward_points = 0` would be valid but perhaps unintentional. Consider whether zero-reward quests are a valid use case or should be rejected.
💡 Optional validation if zero-reward quests are unintended

```diff
 if target_value == 0 {
     panic_with_error!(e, Error::InvalidQuestParams);
 }
+
+// Optional: reject zero-reward quests if unintended
+if reward_points == 0 {
+    panic_with_error!(e, Error::InvalidQuestParams);
+}
```

🤖 Prompt for AI Agents
In `@apps/contract/contracts/quest/src/lib.rs` around lines 110-150: add
validation in create_quest to reject zero reward points: after the existing
target_value == 0 check, add a guard that panics with Error::InvalidQuestParams
when reward_points == 0 (referencing the reward_points parameter and the
create_quest function). This ensures quests with reward_points == 0 are
rejected; keep the same panic macro (panic_with_error!(e,
Error::InvalidQuestParams)) so behavior aligns with the existing validation.
```shell
if [[ -z "$SOURCE" ]]; then
  case $NETWORK in
    testnet)
      if stellar keys address bran &>/dev/null 2>&1; then
```
🧹 Nitpick | 🔵 Trivial
Consider simplifying the redundant redirection.
Same pattern as noted in deploy-referral.sh: `&>/dev/null` already redirects both streams, so the trailing `2>&1` is redundant:
♻️ Suggested simplification

```diff
-    if stellar keys address bran &>/dev/null 2>&1; then
+    if stellar keys address bran &>/dev/null; then
```

🤖 Prompt for AI Agents
In `@apps/contract/scripts/sync-quests-to-chain.sh` at line 63: the if-statement
uses redundant redirection ("if stellar keys address bran &>/dev/null 2>&1;
then"); update the command in the condition (in sync-quests-to-chain.sh) to use
a single redirection form (either "&>/dev/null" or ">/dev/null 2>&1") so stdout
and stderr are suppressed without duplication; change the occurrence within the
conditional that invokes "stellar keys address bran" accordingly.
```shell
stellar contract invoke \
  --network "$NETWORK" \
  --source "$SOURCE" \
  --id "$QUEST_CONTRACT_ID" \
  -- grant_role \
  --account "$ADMIN_ADDRESS" \
  --role 'admin' \
  --caller "$ADMIN_ADDRESS" 2>&1 | tee /tmp/quest-admin-grant.log

GRANT_RESULT=$?
if [[ $GRANT_RESULT -eq 0 ]]; then
  echo "✅ Admin role granted successfully"
else
  echo "⚠️ Admin role grant returned non-zero exit code"
  echo "Checking if admin role is already granted..."
  HAS_ROLE=$(stellar contract invoke \
    --network "$NETWORK" \
    --source "$SOURCE" \
    --id "$QUEST_CONTRACT_ID" \
    -- has_role \
    --account "$ADMIN_ADDRESS" \
    --role 'admin' 2>&1 | grep -o '[0-9]' | head -1 || echo "0")

  if [[ "$HAS_ROLE" != "0" ]]; then
    echo "✅ Admin role is already granted (exit code: $HAS_ROLE)"
  else
    echo "❌ Admin role is not granted. Error details:"
    cat /tmp/quest-admin-grant.log
    echo ""
    echo "This may indicate that the admin address needs different permissions."
    echo "Attempting to create quests anyway..."
  fi
fi
```
🛠️ Refactor suggestion | 🟠 Major
Consider removing pipe usage to align with coding guidelines.
The coding guidelines specify avoiding pipes (`|`) in shell scripts. Lines 111 and 125 use pipes that could be refactored. We could improve compliance by using intermediate files or alternative approaches:
♻️ Suggested refactor for line 111

```diff
-stellar contract invoke \
-  --network "$NETWORK" \
-  --source "$SOURCE" \
-  --id "$QUEST_CONTRACT_ID" \
-  -- grant_role \
-  --account "$ADMIN_ADDRESS" \
-  --role 'admin' \
-  --caller "$ADMIN_ADDRESS" 2>&1 | tee /tmp/quest-admin-grant.log
+stellar contract invoke \
+  --network "$NETWORK" \
+  --source "$SOURCE" \
+  --id "$QUEST_CONTRACT_ID" \
+  -- grant_role \
+  --account "$ADMIN_ADDRESS" \
+  --role 'admin' \
+  --caller "$ADMIN_ADDRESS" > /tmp/quest-admin-grant.log 2>&1
+cat /tmp/quest-admin-grant.log
```

♻️ Suggested refactor for line 125
```diff
-  HAS_ROLE=$(stellar contract invoke \
-    --network "$NETWORK" \
-    --source "$SOURCE" \
-    --id "$QUEST_CONTRACT_ID" \
-    -- has_role \
-    --account "$ADMIN_ADDRESS" \
-    --role 'admin' 2>&1 | grep -o '[0-9]' | head -1 || echo "0")
+  HAS_ROLE_OUTPUT=$(stellar contract invoke \
+    --network "$NETWORK" \
+    --source "$SOURCE" \
+    --id "$QUEST_CONTRACT_ID" \
+    -- has_role \
+    --account "$ADMIN_ADDRESS" \
+    --role 'admin' 2>&1 || echo "0")
+  HAS_ROLE=$(echo "$HAS_ROLE_OUTPUT" | grep -o '[0-9]' | head -1)
+  HAS_ROLE=${HAS_ROLE:-0}
```

As per coding guidelines: "Do not use cd, &&, | or pipes in shell actions."
🤖 Prompt for AI Agents
In `@apps/contract/scripts/sync-quests-to-chain.sh` around lines 104-136:
replace the piped invocations with redirected output files and subsequent
non-piped processing: for the grant_role call (the stellar contract invoke that
currently uses "2>&1 | tee /tmp/quest-admin-grant.log") redirect both stdout and
stderr to /tmp/quest-admin-grant.log (e.g. > /tmp/quest-admin-grant.log 2>&1)
and then check GRANT_RESULT as you already do; for the has_role call that uses
"2>&1 | grep -o '[0-9]' | head -1 || echo '0'", redirect its output to a temp
file (e.g. /tmp/quest-has-role.log), then extract the first digit without pipes
by running grep -o '[0-9]' /tmp/quest-has-role.log >
/tmp/quest-has-role-digits.log and then read the first line with head -1 <
/tmp/quest-has-role-digits.log or by using read to assign HAS_ROLE from the
file, defaulting to "0" if empty; update references to GRANT_RESULT, HAS_ROLE,
/tmp/quest-admin-grant.log and the stellar invocations (-- grant_role and --
has_role) accordingly.
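The pipe-free pattern described above can be tried in isolation with a small self-contained sketch (the file paths and the sample log line below are made up for the demo, not taken from the deploy scripts):

```shell
# Demo of pipe-free digit extraction: capture output to a file, then
# process it with redirections only. Paths and content are illustrative.
printf 'simulation failed, exit code was 1\n' > /tmp/demo-output.log

# Extract all digits into a second file instead of piping through grep | head.
grep -o '[0-9]' /tmp/demo-output.log > /tmp/demo-digits.log

# Read only the first line (first digit), defaulting to 0 when the file is empty.
read -r HAS_ROLE < /tmp/demo-digits.log || HAS_ROLE=""
HAS_ROLE=${HAS_ROLE:-0}
echo "HAS_ROLE=$HAS_ROLE"
```

Since `read` consumes only the first line, it replaces `head -1` here, and the `${HAS_ROLE:-0}` expansion replaces the `|| echo "0"` fallback.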
```typescript
// Check if user is a platform admin
const { data: profileData } = await supabaseServiceRole
	.from('profiles')
	.select('role')
	.eq('id', userId)
	.single()

const isPlatformAdmin = profileData?.role === 'admin'

// If user is platform admin, skip project permission checks
if (!isPlatformAdmin) {
	// Verify user has permission to update escrow for this project
	// Check if user is the project owner or has editor role
	const { data: project, error: projectError } = await supabaseServiceRole
		.from('projects')
		.select('id, kindler_id')
		.eq('id', params.projectId)
		.single()

	if (projectError || !project) {
		return { success: false, error: 'Project not found' }
	}

	// Check if user is the project owner
	const isOwner = project.kindler_id === userId

	// Check if user is a project member with editor role
	const { data: memberData } = await supabaseServiceRole
		.from('project_members')
		.select('role')
		.eq('project_id', params.projectId)
		.eq('user_id', userId)
		.in('role', ['core', 'admin', 'editor'])
		.single()

	const hasEditorRole = !!memberData

	if (!isOwner && !hasEditorRole) {
		return {
			success: false,
			error:
				'Forbidden: You do not have permission to update escrow for this project',
		}
	}
}
```
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
🏁 Script executed:

```shell
head -120 apps/web/app/actions/escrow/save-escrow-contract.ts
```

Repository: kindfi-org/kindfi
Length of output: 3695

🏁 Script executed:

```shell
# Check if next-safe-action is used elsewhere in the project
rg "next-safe-action" apps/web --type ts --type tsx -A 2 | head -50
```

Repository: kindfi-org/kindfi
Length of output: 88

🏁 Script executed:

```shell
# Check the entire actions directory to see patterns
fd "\.ts$" apps/web/app/actions --type f | head -20
```

Repository: kindfi-org/kindfi
Length of output: 224

🏁 Script executed:

```shell
# Look for existing server action patterns in the codebase
rg "use server" apps/web --type ts --type tsx -B 2 -A 5 | head -60
```

Repository: kindfi-org/kindfi
Length of output: 88

🏁 Script executed:

```shell
cat package.json | grep -i "next-safe-action"
```

Repository: kindfi-org/kindfi
Length of output: 43

🏁 Script executed:

```shell
# Check for any usage of createAction or next-safe-action patterns
rg "createAction|initializeAction|createServerAction" apps/web --type ts -B 2 -A 3 | head -40
```

Repository: kindfi-org/kindfi
Length of output: 43

🏁 Script executed:

```shell
# Check the auth.ts action file to see patterns
head -80 apps/web/app/actions/auth.ts
```

Repository: kindfi-org/kindfi
Length of output: 2459

🏁 Script executed:

```shell
# Search more broadly for safe action imports
rg "from.*safe|import.*safe" apps/web --type ts | head -20
```

Repository: kindfi-org/kindfi
Length of output: 556

🏁 Script executed:

```shell
# Check Next.js version and other dependencies
head -50 apps/web/package.json | grep -A 10 "dependencies"
```

Repository: kindfi-org/kindfi
Length of output: 517

🏁 Script executed:

```shell
# Check if there's any mention of next-safe-action in docs or config
rg "next-safe-action|safe.*action" . --type md --type json | head -20
```

Repository: kindfi-org/kindfi
Length of output: 43
Consider `next-safe-action` for input validation standardization across server actions.
The permission checks and admin bypass logic here are clean and well-structured. However, per coding guidelines, server actions should leverage `next-safe-action` for consistent input validation and auth enforcement. While the project hasn't widely adopted this pattern yet, migrating this action would align with security best practices and enable schema-based input validation (especially useful for the complex `escrowData` parameter structure).
As a lower-priority improvement: ensure that placeholder fallback values like `1` for `totalAmount` and `project-${params.projectId}` for `engagementId` have explicit validation to prevent incomplete escrow configurations from persisting.
🤖 Prompt for AI Agents
In `@apps/web/app/actions/escrow/save-escrow-contract.ts` around lines 59-103:
the action save-escrow-contract is missing standardized input validation and
auth enforcement; migrate this server action to use next-safe-action with a Zod
(or equivalent) schema that validates params.projectId, userId, and the
escrowData structure (including totalAmount as a positive number and
engagementId as a non-empty string) before any supabaseServiceRole calls; keep
the existing admin bypass and permission logic (profile lookup, project lookup,
project_members check using supabaseServiceRole) but run it after schema/auth is
enforced, and replace implicit fallbacks (the placeholder totalAmount = 1 and
engagementId = `project-${params.projectId}`) with explicit validation rules and
explicit error responses when those fields are missing or invalid.
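The fail-fast validation described above can be sketched standalone; this is a minimal illustration under assumptions (the `validateEscrowInput` helper and the `EscrowInputSketch` shape are hypothetical, not the project's actual schema or code):

```typescript
// Hypothetical sketch of explicit guards replacing the silent fallbacks:
// instead of substituting amount = 1 or a generated engagementId, reject
// the input with a clear error.
interface EscrowInputSketch {
	engagementId?: string
	amount?: number
	milestones?: { amount: number }[]
}

type ValidationResult =
	| { success: true; totalAmount: number }
	| { success: false; error: string }

function validateEscrowInput(input: EscrowInputSketch): ValidationResult {
	// Sum milestone amounts for multi-release; otherwise use the flat amount.
	const totalAmount = input.milestones
		? input.milestones.reduce((sum, m) => sum + m.amount, 0)
		: (input.amount ?? 0)
	// Fail fast instead of persisting a placeholder minimum amount of 1.
	if (!Number.isFinite(totalAmount) || totalAmount <= 0) {
		return { success: false, error: 'Invalid escrow amount' }
	}
	// Fail fast instead of generating a placeholder engagement id.
	if (!input.engagementId || input.engagementId.trim() === '') {
		return { success: false, error: 'Missing engagementId' }
	}
	return { success: true, totalAmount }
}

console.log(validateEscrowInput({ engagementId: 'e-1', milestones: [{ amount: 5 }, { amount: 7 }] }))
// { success: true, totalAmount: 12 }
console.log(validateEscrowInput({ engagementId: 'e-1' }))
// { success: false, error: 'Invalid escrow amount' }
```

In a `next-safe-action` migration, checks like these would move into the action's input schema, so invalid payloads are rejected before any Supabase call runs.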
```typescript
// Prepare escrow data
const engagementId =
	params.escrowData?.engagementId ||
	params.engagementId ||
	`project-${params.projectId}`

let escrowContractUuid: string

// Calculate amount: for multi-release, sum milestone amounts; for single-release, use amount
const totalAmount = params.escrowData?.milestones
	? params.escrowData.milestones.reduce((sum, m) => sum + m.amount, 0)
	: params.escrowData?.amount || 1 // Minimum amount to satisfy constraint

if (existingEscrow?.id) {
	// Use existing record UUID
	escrowContractUuid = existingEscrow.id
} else {
	// Create new escrow_contracts record - always create if it doesn't exist
	// Use escrowData if provided, otherwise use minimal defaults

	// Determine payer and receiver addresses - use escrowData if available, otherwise use placeholder
	const payerAddress =
		params.escrowData?.roles?.serviceProvider ||
		params.escrowData?.roles?.approver ||
		'G000000000000000000000000000000000000000' // Placeholder Stellar address

	const receiverAddress =
		params.escrowData?.milestones && params.escrowData.milestones.length > 0
			? params.escrowData.milestones[0]?.receiver
			: params.escrowData?.receiver ||
				params.escrowData?.roles?.serviceProvider ||
				payerAddress

	const platformFee = params.escrowData?.platformFee ?? 0
```
Good milestone aggregation; avoid persisting placeholder amounts/addresses.
Nice to see milestone sums handled. However, the fallback amount of `1` and the dummy Stellar address can write invalid data and skew downstream totals. Consider failing fast when required escrow fields are missing and validating amount/address inputs before persisting.
✅ Suggested validation guard

```diff
-	const totalAmount = params.escrowData?.milestones
-		? params.escrowData.milestones.reduce((sum, m) => sum + m.amount, 0)
-		: params.escrowData?.amount || 1 // Minimum amount to satisfy constraint
+	const totalAmount = params.escrowData?.milestones
+		? params.escrowData.milestones.reduce((sum, m) => sum + m.amount, 0)
+		: params.escrowData?.amount
+	if (!totalAmount || !validateAmount(totalAmount)) {
+		return { success: false, error: 'Invalid escrow amount' }
+	}

-	const payerAddress =
-		params.escrowData?.roles?.serviceProvider ||
-		params.escrowData?.roles?.approver ||
-		'G000000000000000000000000000000000000000' // Placeholder Stellar address
+	const payerAddress =
+		params.escrowData?.roles?.approver ||
+		params.escrowData?.roles?.serviceProvider
+	if (!payerAddress || !validateAddress(payerAddress)) {
+		return { success: false, error: 'Invalid payer address' }
+	}

 	const receiverAddress =
 		params.escrowData?.milestones && params.escrowData.milestones.length > 0
 			? params.escrowData.milestones[0]?.receiver
 			: params.escrowData?.receiver ||
 				params.escrowData?.roles?.serviceProvider ||
 				payerAddress
+	if (!receiverAddress || !validateAddress(receiverAddress)) {
+		return { success: false, error: 'Invalid receiver address' }
+	}
```

🤖 Prompt for AI Agents
In `@apps/web/app/actions/escrow/save-escrow-contract.ts` around lines 108-133:
the code currently falls back to placeholder values (totalAmount = 1,
payerAddress dummy address) which can persist invalid data; update
save-escrow-contract to validate required escrow inputs and fail fast instead of
using placeholders: run validateNotEmpty on engagementId (from engagementId /
projectId), validateAmount on totalAmount (compute from escrowData.milestones or
amount and throw/return error if invalid), and validateAddress on payerAddress
and receiverAddress (derived from escrowData.roles/milestones); remove the
hardcoded fallback values and return/throw a clear validation error when any
check fails (apply checks where engagementId, totalAmount, payerAddress,
receiverAddress, and platformFee are computed).
const platformFee = params.escrowData?.platformFee ?? 0
let contributionId: string

// Check if a contribution exists for this project, or create a placeholder
const { data: existingContribution } = await supabase
  .from('contributions')
  .select('id')
  .eq('project_id', params.projectId)
  .order('created_at', { ascending: false })
  .limit(1)
  .maybeSingle()

if (existingContribution?.id) {
  contributionId = existingContribution.id
} else {
  // Create a placeholder contribution for the escrow
  // This represents the project's escrow setup, not an actual contribution
  const { data: newContribution, error: contribError } = await supabase
    .from('contributions')
    .insert({
      project_id: params.projectId,
      contributor_id: userId, // Use current user as placeholder
      amount: totalAmount,
    })
    .select('id')
    .single()

  if (contribError || !newContribution?.id) {
    console.error(
      '❌ Failed to create contribution record:',
      contribError,
    )
    return {
      success: false,
      error: `Failed to create contribution record: ${contribError?.message || 'Unknown error'}`,
    }
  }

  contributionId = newContribution.id
}

// Create the escrow_contracts record
// Note: current_state enum values are: NEW, FUNDED, ACTIVE, COMPLETED, DISPUTED, CANCELLED
const { data: newEscrow, error: createError } = await supabase
  .from('escrow_contracts')
  .insert({
Consider isolating escrow linkage from donor contributions.
Good idea to ensure a contribution exists. Reusing the latest project contribution can inadvertently link the escrow to a donor record, which affects attribution and reporting. Consider filtering for an escrow-specific placeholder (e.g., a source/type flag) or always creating a dedicated escrow contribution.
🤖 Prompt for AI Agents
In `@apps/web/app/actions/escrow/save-escrow-contract.ts` around lines 134-172:
the current logic reuses the latest contribution (existingContribution), which
can misattribute escrow to donors; update the check/create flow in
save-escrow-contract to either (A) query contributions with an escrow-specific
flag (e.g., .eq('contribution_type','escrow') or .eq('is_escrow', true) when
selecting existingContribution) or (B) always create a dedicated escrow
placeholder instead of reusing the latest donor contribution by inserting a new
row via supabase.from('contributions').insert(...) that includes a clear escrow
identifier (e.g., contribution_type: 'escrow' or is_escrow: true) so the created
record (newContribution) is unambiguously an escrow record and does not affect
donor attribution or reporting.
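Option (B) from the prompt above — always creating a dedicated escrow placeholder — can be sketched by tagging the inserted row with an escrow-specific type. The `contribution_type` column is an assumed schema addition here, not an existing column in the PR:

```typescript
// Assumed schema extension: a discriminator column that separates escrow
// placeholders from real donor contributions.
interface EscrowContributionRow {
  project_id: string
  contributor_id: string
  amount: number
  contribution_type: 'escrow' | 'donation'
}

// Build a dedicated escrow placeholder instead of reusing the latest
// donor contribution, so attribution and reporting stay clean.
function buildEscrowPlaceholder(
  projectId: string,
  userId: string,
  totalAmount: number,
): EscrowContributionRow {
  return {
    project_id: projectId,
    contributor_id: userId, // placeholder owner, not a donor
    amount: totalAmount,
    contribution_type: 'escrow',
  }
}
```

With this in place, the existing-contribution lookup would add a `.eq('contribution_type', 'escrow')` filter so a donor row can never be picked up as the escrow link.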
Summary by CodeRabbit
New Features
Documentation