Principal Engineer building cloud-native distributed systems and intelligent platforms.
I work at the intersection of:
- Cloud Infrastructure
- Applied AI / LLM Systems
- Distributed Systems
- Research-Driven Engineering
My work focuses on AI system reliability, Retrieval-Augmented Generation (RAG), and developer tooling for modern platforms.
| Project | Type | Description |
|---|---|---|
| rag-evidence-coverage-evaluator | AI Evaluation Framework | Framework for evaluating grounding quality and evidence coverage in Retrieval-Augmented Generation (RAG) systems |
| rag-chain-of-logic-coverage | AI Evaluation Framework | Evaluates reasoning validity and logical consistency in RAG pipelines |
| orthogonal-low-rank-editing | AI Research Implementation | Implementation of subspace collision analysis and orthogonal low-rank updates for neural knowledge editing |
| amorphous-intelligence-experiments | AI Research Experiments | Experiments demonstrating reasoning divergence and epistemic drift in local LLMs |
| bigocheck | Python CLI Library | Zero-dependency empirical Big-O complexity checker with CLI assertions and pytest integration |
| pdperf | Static Analysis Tool | Static performance linter that detects slow Pandas anti-patterns |
| promptspecj | Java Developer Library | Compile-time prompt contracts for JVM AI applications ("OpenAPI for prompts") |
| schemaglow | Data Engineering Tool | Human-friendly schema diff and visualization tool |
| tracemap | Network Visualization Tool | Terminal UI traceroute visualizer with ASCII network maps |
| mcp-shield-pii | AI Security Middleware | PII detection and protection layer for MCP servers |
| mcp-egress-guard | AI Security Middleware | Security policy enforcement layer for outbound MCP traffic |
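To illustrate the idea behind an empirical Big-O checker like bigocheck, here is a minimal sketch: time a function at several input sizes and fit the slope of log(time) vs. log(n). The function and parameter names are hypothetical for illustration, not bigocheck's actual API.

```python
import math
import timeit

def estimate_growth_order(func, sizes, repeats=5):
    """Estimate the empirical growth order of `func` by fitting the
    least-squares slope of log(time) against log(n)."""
    times = []
    for n in sizes:
        data = list(range(n))
        # time `repeats` calls on an input of size n
        t = timeit.timeit(lambda d=data: func(d), number=repeats)
        times.append(t)
    xs = [math.log(n) for n in sizes]
    ys = [math.log(t) for t in times]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    # slope in log-log space approximates the polynomial degree:
    # ~1 for O(n), ~2 for O(n^2)
    return sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
           sum((x - mean_x) ** 2 for x in xs)
```

A linear-time function such as `sum` should produce a slope near 1; a nested-loop quadratic function, near 2. Real tools add noise handling and repeated trials on top of this core idea.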
My research focuses on evaluation and reliability of AI systems, including:
- Retrieval-Augmented Generation evaluation
- LLM reasoning validation
- Knowledge editing in neural networks
- AI observability and reliability
Selected research directions:
- Evidence Coverage Evaluator for RAG Systems
- Chain-of-Logic Coverage Evaluator
- Subspace Collisions in Knowledge Editing
- RAG as a Scientific Instrument
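As a sketch of what evidence-coverage evaluation means in practice, the toy function below scores the fraction of answer sentences that are lexically grounded in at least one retrieved passage. This is a simplified token-overlap proxy for illustration only; it is not the actual evaluator's metric or API.

```python
def evidence_coverage(answer_sentences, passages, threshold=0.5):
    """Fraction of answer sentences supported by at least one passage,
    using token overlap as a crude stand-in for entailment."""
    passage_tokens = [set(p.lower().split()) for p in passages]
    supported = 0
    for sentence in answer_sentences:
        tokens = set(sentence.lower().split())
        if not tokens:
            continue
        # supported if enough of the sentence's tokens appear
        # in a single retrieved passage
        if any(len(tokens & pt) / len(tokens) >= threshold
               for pt in passage_tokens):
            supported += 1
    return supported / max(len(answer_sentences), 1)
```

Production evaluators replace the overlap heuristic with entailment models or LLM judges, but the structure — per-sentence grounding checks aggregated into a coverage score — is the same.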
- Cloud & Infrastructure: AWS • ECS • EKS • Lambda • Terraform • Infrastructure as Code • Distributed Systems Architecture
- Backend: Java • Spring Boot • Python • Event-Driven Architectures
- AI/ML: Large Language Models • RAG pipelines • Vector databases • AI evaluation frameworks
I'm interested in collaborating on:
- AI system evaluation frameworks
- distributed systems research
- developer productivity tools
- open-source AI infrastructure
GitHub: https://github.com/adwantg
