Presenter: Sergey Kurdin
Senior Developer at Charles River Labs (Apollo SA project)
- 30+ years building software
- Built and shipped products at Marriott.com, Ski.com, It.com, Amazon, NIH, and multiple startups
- PasteBar App Maintainer - Free, Open Source Clipboard Manager for Mac & Windows (1.7k★ on GitHub)
- Foundation: Why AI coding now & mindset shift
- Understanding AI: How LLMs work & context windows
- Best Practices: Prompting patterns & planning-first approach
- Git Workflow: Safe version control & incremental commits
- CLI-first Agents: Common commands & workflow (tool-agnostic)
- Quality Control: Testing, reviews & catching bad patterns
- Limits & Safety: AI security guardrails
- Human Skills: What matters MORE with AI
- Key Takeaways: Start small, verify everything
- Syntax highlighting → Better readability
- Linting & type checking → Fewer mistakes
- ChatGPT 3.5 → Natural language Q&A
- GitHub Copilot → Inline autocomplete
- Claude Code, Codex CLI, Cursor, Windsurf, Gemini Code Assist
- Understand repo structure & context
- Planning first, then implementation
- Plan → apply small step → test → iterate (repo-aware, reviewable edits)
- Less typing, more steering
- Fast refactors and unit tests in minutes; iterate on implementation variants
- Faster documentation and code reviews
- Learns and follows your repo's patterns
- Produces code aligned with project style
✓ You guide, review, and accept
✓ AI accelerates mechanics
✓ You own design and code quality
💡 Pro Tip: Generate and reuse an AGENTS.md file with repo-specific instructions
- GitHub Copilot, Codeium, TabNine
- Inline suggestions while typing
- ChatGPT, Claude, Gemini
- Architecture and design decisions
- Q&A and code generation via chat
- Claude Code, Codex CLI, Cursor, Windsurf, Gemini Code Assist
- Understand entire repository structure
- Plan → Review → Implement workflow
- Direct file editing with reviewable changes
✓ Focus on patterns, not brands — most agents support similar flows
- Build systems that solve problems
- Shift from coder → architect
- AI learns from your repo and suggests solutions; you guide and control
- Your new responsibilities:
- Define the problem clearly
- Curate context carefully
- Review changes thoroughly
- Enforce quality gates
- Result: fewer keystrokes, fewer typing mistakes, more impact
- AI = Large Language Models trained on text/code patterns
- They're NOT databases or compilers
- They predict the next token given prior tokens
- Pattern recall & style mimicry
- Fast scaffolding
- Broad knowledge base
- Hallucinations (confident but wrong)
- No true runtime awareness
- Tokenization → Probabilities → Decoding
- Output quality depends on:
- Prompt clarity
- Curated context
- Explicit constraints
- Temperature controls creativity vs safety
- Outputs can vary—build, run, and test to validate
✓ Best Practice: Ask for plans first, then for implementation
✓ Keep changes smaller and under control
✓ Compile, run, manual test, unit tests
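To make the temperature point concrete, here is a minimal sketch of setting it explicitly in an API call, assuming the `openai` npm package and an `OPENAI_API_KEY` in the environment (model name and prompt are illustrative):

```typescript
// Sketch: request more deterministic output for a code task by lowering temperature.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from process.env

const completion = await client.chat.completions.create({
  model: "gpt-4o",   // illustrative model choice
  temperature: 0.2,  // low = safer, more repeatable output for code
  messages: [
    { role: "system", content: "You are a senior TypeScript engineer." },
    { role: "user", content: "Write a pure function that dedupes a string array." },
  ],
});

console.log(completion.choices[0].message.content);
```

Even at low temperature the output can differ between runs, which is why the build/run/test loop above still applies.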
- Model context windows vary by provider (hundreds of thousands of tokens)
- Effective usable window is always smaller than advertised
- Training data: General text vs code repositories
- Architecture: base vs instruction-tuned vs code-specialized; plus fine-tuning
- Optimization: Next-token prediction; instruction-tuning and tool use enable task completion
- General models: Great explanations, weaker syntax
- Code models: Strong patterns, weaker natural language
✓ Code models trained on millions of patterns
✓ Choose model based on task: coding vs explaining
- Models only see a limited window
- Too much context → truncation of critical code
- Provide only minimal relevant files plus explicit acceptance criteria and constraints
- Current error messages & stack traces
- Files you're editing & their imports
- Related type definitions & interfaces
- Relevant test files
- Project config (package.json, tsconfig)
✓ Mantra: Curate → Confirm → Constrain → Verify
- LLMs are always confident, even when wrong
- No built-in "I don't know" mechanism
- Inventing APIs that don't exist
- Using deprecated methods confidently
- Creating strange-sounding features
- Cross-check with your knowledge and official docs; assume unfamiliar APIs are suspect
- Extra caution with new libraries, new APIs
- Track hallucination patterns and ask for clarification or start over
- Decompose tasks → Write acceptance criteria
- Constrain scope → Request small changes
- Iterate quickly → Run, test, review each step
- Treat AI like a junior developer
- Ask to plan first
- Ask why each change is needed
- Request alternatives
- Trust, but verify: compile → run → test before commit
- Clarify your intent, better understand the problem
- Define constraints
- Provide relevant context only: files, imports, components, types, etc.
- Ask for plans first, review then implement
- Request smaller changes
- Review small changes, revert early, and iterate
- Review all changes, ask AI to explain
- Use lint, typecheck, build and test
- Manual test, unit tests
- Security review
- Performance check
✓ Workflow: Plan → Implement → Test → Review
- Structure: Role • Context • Task • Constraints • Verification
- Be explicit about files/scope, runtime/versions, and safety expectations
- Prefer plan → apply for multi-step work; keep each step small and reviewable
- Keep outputs focused and actionable; prefer commands or concise status summaries
Example Prompt (Refactor, File Edits Applied):
Role: Senior TypeScript engineer.
Context: Node 20, Jest; repo uses src/ and test/.
Target: src/auth/token.ts#getUserToken duplicates retry/backoff logic.
Task: Extract retry/backoff into src/utils/retry.ts and reuse it in getUserToken
without changing external behavior.
Constraints:
• Keep public signatures stable
• Do not change unrelated modules
• Update test/auth/token.spec.ts to cover 429/503 with exponential backoff (max 3 attempts)
Verification:
• build passes (Node 20)
• npm run typecheck
• npm test -- test/auth/token.spec.ts
Out of scope:
• Do not modify unrelated modules or configuration files
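If the agent follows this prompt, the extracted helper might look something like the sketch below. This is a hypothetical shape, not the presentation's reference solution; names and defaults are assumptions:

```typescript
// src/utils/retry.ts: hypothetical sketch of the extracted retry/backoff helper.
export interface RetryOptions {
  maxAttempts?: number;                     // default 3, per the acceptance criteria
  baseDelayMs?: number;                     // first delay; doubles each retry
  isRetryable?: (err: unknown) => boolean;  // which failures are worth retrying
}

const defaultRetryable = (err: unknown): boolean => {
  const status = (err as { status?: number })?.status;
  return status === 429 || status === 503;
};

export async function withRetry<T>(
  fn: () => Promise<T>,
  { maxAttempts = 3, baseDelayMs = 100, isRetryable = defaultRetryable }: RetryOptions = {},
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === maxAttempts || !isRetryable(err)) throw err;
      // Exponential backoff: 100ms, 200ms, 400ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
  throw lastError; // unreachable; satisfies the compiler
}
```

`getUserToken` would then wrap its fetch in `withRetry(() => fetchToken(userId))`, keeping its public signature stable as the constraints require.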
- Start new branch: `git checkout -b feature/ai-refactor`
- Ensure clean directory: `git status` shows no changes
- Ask AI to create a plan; review it, then approve or request improvements
- Make small, focused, reviewable changes
- Stage incrementally: `git add -p` (review each hunk)
- Test locally: manual test, lint, type check, compile, unit tests
- Good → commit: `git commit -m "[AI]: Extract retry logic"`
- Bad → revert: `git restore` or `git revert`, then retry with a better prompt
- Move to next task; repeat for multi-step work
git checkout -b feature/ai-refactor
git add -p # Review hunks interactively
git commit -m "[AI]: Extract retry logic"
git restore --staged .  # Unstage if needed (avoid reset --hard)

Use AI to plan large features upfront → Break into manageable subtasks → Reduce risk
- Current state, requirements, constraints. Include UI screenshots, wireframes, diagrams, etc.
"As a product architect, summarize this feature spec: [details]"
- Break into smaller subtasks
"Create plan for [feature]: subtasks, dependencies, save as Markdown [feature-name]-plan.md"
- Approve the plan or request changes before implementation
"Start implementation of subtask 1 from [feature-name]-plan.md"
- `git switch -c ai/<feature>` — Create dedicated AI branch
- `git add -p` — Stage specific hunks
- `git restore --staged .` — Unstage to retry
- `git diff --staged` — Verify changes
- `git commit -m "[AI] checkpoint: desc"` — Checkpoint each step
- `git revert HEAD` — Safe undo
- `git log --grep="[AI]"` — Track AI commits
- `git rebase -i` — Clean history before PR
- pre-commit hooks — Auto-lint/format before commit
- ⚠️ Avoid `git reset --hard` — it discards uncommitted work permanently
AI code needs extra testing — it makes creative mistakes
- Define invariants, auto-generate diverse inputs
- AI often misses edge cases — good unit tests help to expose them
- Verify test quality by introducing small bugs
- If tests don't catch the bugs, they need improvement
- Especially important for AI-generated code
- Use AI against itself
"Find 3 ways to break this function, including edge cases"- AI suggests: huge inputs, duplicates, non-comparable types
- Model Context Protocol (MCP) enables AI to interact with Chrome
- AI can navigate, click, fill forms, verify UI behavior, check console logs and errors
- Example:
github.com/hangwin/mcp-chrome
AI code → Human tests | Human reviews → AI tests
- Balance automation with human judgment in the middle to avoid false positives
- Always run existing tests before committing AI changes
- If AI wrote it, add at least one property-based test or mutation-killing assertion
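One way to read "mutation-killing assertion": pin exact boundary behavior so a small code mutation fails the test. A sketch against the hypothetical `withRetry` helper from earlier:

```typescript
// Mutation-killing assertion: pins the exact attempt count, so an off-by-one
// change in the retry loop (e.g. `<` vs `<=`) fails the test instead of
// passing silently.
import { withRetry } from "../src/utils/retry"; // hypothetical module

it("gives up after exactly 3 attempts on persistent 503s", async () => {
  let calls = 0;
  const alwaysFails = async (): Promise<never> => {
    calls++;
    throw Object.assign(new Error("unavailable"), { status: 503 });
  };

  await expect(
    withRetry(alwaysFails, { maxAttempts: 3, baseDelayMs: 1 }),
  ).rejects.toThrow("unavailable");

  expect(calls).toBe(3); // not 2, not 4: kills boundary mutations
});
```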
AI code often needs optimization — combine performance checks with AI reviews
- Inefficient database queries
- Too many nested loops
- Blocking I/O operations
- Memory leaks in handlers
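The nested-loop item above is among the most common of these in practice. A hedged before/after sketch with hypothetical `User`/`Order` types:

```typescript
interface User { id: string; name: string }
interface Order { userId: string; total: number }

// Before: AI-generated nested loops, O(n * m) time.
function totalsSlow(users: User[], orders: Order[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const user of users) {
    for (const order of orders) {
      if (order.userId === user.id) {
        totals.set(user.id, (totals.get(user.id) ?? 0) + order.total);
      }
    }
  }
  return totals;
}

// After: index user ids once, then a single pass over orders, O(n + m) time.
function totalsFast(users: User[], orders: Order[]): Map<string, number> {
  const userIds = new Set(users.map((u) => u.id));
  const totals = new Map<string, number>();
  for (const order of orders) {
    if (userIds.has(order.userId)) {
      totals.set(order.userId, (totals.get(order.userId) ?? 0) + order.total);
    }
  }
  return totals;
}
```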
- Ask for risk analysis and efficiency
- Request missing test cases
- Generate PR descriptions
- Review for security, performance, edge cases
"Find 3 issues with this code and suggest fixes"
✓ When creating a PR, ask: "Write a concise PR description from these changes: [summary]"
Protect customer data, secrets, and IP while integrating AI safely
- Use placeholders like `[API_KEY]` and replace them post-generation
- Configure AI tools to auto-redact sensitive patterns
- Use synthetic data: `user_1@example.com` instead of real emails
- Use the `[AI]` commit prefix for transparency
- Never paste tokens into prompts
- Use environment variables or secret management tools
- Verify licenses for generated code and suggested dependencies
- Ensure compliance with your organization's policies
- Use only enterprise or on-prem models for private repos; avoid public AI services for proprietary code
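For the "never paste tokens" rule above, a minimal sketch of the environment-variable approach (service and variable names are hypothetical):

```typescript
// Read the secret from the environment (or a secret manager) at runtime;
// prompts and committed code only ever see the placeholder [API_KEY].
const apiKey = process.env.PAYMENTS_API_KEY;
if (!apiKey) {
  throw new Error("PAYMENTS_API_KEY is not set; configure it via your secret manager");
}

const headers = { Authorization: `Bearer ${apiKey}` };
```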
Automation, consistency, CI/CD-friendly
- Scriptable: Wrap prompts in bash/python scripts
- Portable: Works in terminals, servers, remote
- Standardize: Share configs & aliases team-wide
- Plan / Review / Diff / Apply
- Mention files or folders
- Search or fuzzy-find files
- Resume sessions, compact/summarize
✓ Flow: Plan → Diff preview → Apply small step → Test → Commit
- `/init` — Generate AGENTS.md / set context
- `/status` — Show settings
- `/model` — Pick model/effort level
- `/new` or `/resume` — Session control
- `/plan` — Propose steps
- `/diff` — Preview changes
- `/apply` — Apply edits
- `/review` — Critique changes
- `/mention` — Add files to context
- `@` — Fuzzy file search
- CLI agents automatically detect and use `AGENTS.md` files for context
- Use multiple AGENTS.md files for different repo areas:
  - `backend/AGENTS.md` — API-specific patterns & rules
  - `frontend/AGENTS.md` — UI conventions & components
  - `libs/AGENTS.md` — Component library guidelines
- Each file provides domain-specific context to guide AI behavior
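A sketch of what a `backend/AGENTS.md` might contain; the structure and rules here are illustrative assumptions, not a prescribed format:

```markdown
# AGENTS.md (backend)

## Stack
- Node 20, TypeScript strict mode, Express, Jest

## Conventions
- Route handlers live in src/routes/; business logic in src/services/
- All external calls go through src/utils/retry.ts

## Rules for AI agents
- Plan first; keep each change small and reviewable
- Never modify src/db/migrations/ without explicit approval
- Run `npm run typecheck && npm test` before proposing a commit
```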
$ claude   # or: codex
→ Plan retry logic with exponential backoff; save to retry-plan.md
$ cat retry-plan.md
← Approve before applying
$ claude   # or: codex
→ Implement step 1 from retry-plan.md; run tests after changes
$ git add -p
$ git commit -m "[AI]: Implement retry logic step 1"
$ npm test
$ git push -u origin ai/retry-logic
→ Open PR for review
- Batch similar tasks
- All tests, then all refactors
- Save successful prompts in team knowledge base
- Build context once, use many times
- When AI gets stuck: Clear context, simplify request, provide working example
- 15-minute rule:
- If stuck after 15 minutes → change approach or start over
💡 Tip: Provide a minimal snippet instead of the entire file when requesting a specific change
- Vague variable names: `data`, `item`, `result`
- Deeply nested ifs or loops
- Overly "clever" one-liners
- Unnecessary layers of indirection that obscure control flow
- No error boundaries
- Hardcoded values
- Catching and swallowing errors without logging or remediation
- Too many console.logs and comments
- Unnecessary type assertions
- Unused or redundant imports
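Two of the smells above, swallowed errors and hardcoded values, side by side with a fix (the `saveUser` example is hypothetical):

```typescript
// Smell: hardcoded URL, and the catch block swallows the failure silently.
async function saveUserBad(user: { id: string }): Promise<void> {
  try {
    await fetch("https://api.internal/users", {
      method: "POST",
      body: JSON.stringify(user),
    });
  } catch {
    // error silently ignored: failures disappear without a trace
  }
}

// Fix: configuration from the environment; errors are logged and rethrown
// so an upstream error boundary can handle them.
const API_BASE = process.env.API_BASE ?? "http://localhost:3000";

async function saveUser(user: { id: string }): Promise<void> {
  try {
    const res = await fetch(`${API_BASE}/users`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(user),
    });
    if (!res.ok) throw new Error(`saveUser failed: HTTP ${res.status}`);
  } catch (err) {
    console.error("saveUser error", { userId: user.id, err });
    throw err;
  }
}
```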
- System design (AI can't architect)
- Code review (now far more important)
- Specification writing = new programming
- Domain expertise = your differentiator
- Debugging intuition
💡 Strong fundamentals matter MORE, not less
- AI is a tool, not a replacement for developers
- Small, verifiable changes win
- Context is everything, use it wisely
- Don't trust—verify
- Your skills become MORE valuable, focus on patterns
- Plan, guide, review, and accept—you stay in control
- Start small. Ship safely. Measure impact.
Planning:
"Create plan for [feature]. Save to [feature].md for review"
Bugfix:
"Fix [bug] in [file]. Include test."
Refactor:
"Extract [logic] to [target dir]."
Testing:
"Add tests for [file] edge cases."
Docs:
"Document [API] with examples."
- Start with clean dir
- Have AI make one small change
- `git add -p` → review
- Test locally first
- Pass? → commit
- Fail? → `git restore --staged`
- Iterate with a better prompt
Golden Rule: Smaller changes = Easy rollbacks
AI Coding Best Practices & Patterns for Modern Development
Sergey Kurdin
CRL: sergey.kurdin@crl.com
LinkedIn: linkedin.com/in/kurdin
GitHub: @sergeykurdin
Project: PasteBar - Free Clipboard Manager for Mac & Windows
This presentation material is for educational purposes. Please contact the author for usage rights.