The Swiss Army Knife of Offline AI. Chat, speak, and generate images - privacy first, zero internet. Download an LLM and use it on your mobile device; no data ever leaves your phone. Supports text-to-text, vision, and text-to-image models.
AI-powered disaster response platform with offline-first architecture using Gemma 3n. Provides computer vision hazard detection, voice analysis with emergency keywords, PDF report generation, and multi-user coordination - all working without internet access.
A personal demo project for Flutter + ONNX Runtime integration, not related to any company work. A comprehensive on-device face recognition SDK for Flutter.
On-device AI framework inspired by LangChain, purpose-built for mobile. Build, compose, and extend AI features using platform-native models — all processing happens locally on the device.
CoreML conversion of all-MiniLM-L6-v2 with a full SwiftUI demo, tokenizer implementation, model resources, and conversion script for easy on-device text embeddings.
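The step that turns all-MiniLM-L6-v2's per-token outputs into a single sentence embedding is mask-aware mean pooling followed by L2 normalisation. A minimal NumPy sketch of that pooling step, with toy numbers standing in for real model outputs:

```python
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    """Average token embeddings, ignoring padded positions."""
    mask = attention_mask[:, None].astype(float)       # (tokens, 1)
    summed = (token_embeddings * mask).sum(axis=0)     # mask zeroes out padding
    counts = mask.sum(axis=0).clip(min=1e-9)           # number of real tokens
    return summed / counts

def l2_normalize(v):
    """Unit-length vector, so dot product equals cosine similarity."""
    return v / np.linalg.norm(v)

# Toy data: 4 tokens x 3 dims; the last token is padding and must be ignored.
tokens = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [1.0, 1.0, 0.0],
                   [9.0, 9.0, 9.0]])   # padding junk, masked out
mask = np.array([1, 1, 1, 0])

embedding = l2_normalize(mean_pool(tokens, mask))
```

The same pooling logic runs after the CoreML model's forward pass; with unit-norm vectors, ranking by dot product is equivalent to ranking by cosine similarity.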
AI-powered recipe extraction using Apple Vision OCR and LLM reasoning. Converts photos of recipes into structured data with allergen and FODMAP analysis.
Offline-first AI-powered medical triage app that analyzes symptoms in real time to provide first-aid guidance, emergency severity assessment, and validated emergency protocols without requiring an internet connection.
Ojas is a production-grade Android application that measures heart rate from a live camera feed using remote photoplethysmography (rPPG). The app leverages Arm Neon SIMD for signal processing and NNAPI for AI-powered signal refinement, achieving real-time performance on mobile devices.
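The core rPPG idea is that the pulse causes tiny periodic changes in skin colour, so the dominant frequency of the averaged green channel inside the physiological band (roughly 0.7-4 Hz, i.e. 42-240 BPM) is the heart rate. A simplified frequency-domain sketch of that step (illustrative only, not Ojas's actual Neon/NNAPI pipeline):

```python
import numpy as np

def estimate_bpm(green_trace, fps, lo_hz=0.7, hi_hz=4.0):
    """Estimate heart rate from a green-channel intensity trace.

    Detrend, take the FFT, and pick the strongest peak inside the
    plausible heart-rate band.
    """
    x = np.asarray(green_trace, float)
    x = x - x.mean()                                  # remove DC offset
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= lo_hz) & (freqs <= hi_hz)        # 42-240 BPM window
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic 10 s trace at 30 fps: a 1.2 Hz (72 BPM) pulse plus sensor noise.
fps = 30
t = np.arange(10 * fps) / fps
rng = np.random.default_rng(0)
trace = 0.05 * np.sin(2 * np.pi * 1.2 * t) + 0.01 * rng.standard_normal(t.size)

bpm = estimate_bpm(trace, fps)
```

Real pipelines add face/ROI tracking, motion rejection, and temporal filtering on top of this; the band-limited peak search is the recoverable heart-rate signal underneath.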
A research project for natively embedding llama.cpp into Kotlin, Swift, and Desktop environments. Enables autonomous, on-device multimodal agents with system-level tool execution (Shell/File), VLM support, and secure web accessibility.
This is a demo mobile application that runs AI models completely on-device using React Native ExecuTorch and react-native-rag. The goal of this project was to explore how modern LLMs and RAG systems can run locally on mobile devices without cloud APIs.
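The RAG side of such an app boils down to: embed the query, rank stored chunks by similarity, and prepend the top matches to the LLM prompt. A dependency-free sketch with a toy bag-of-words "embedding" standing in for a real on-device embedding model:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words vector; a real app would call an on-device
    embedding model here instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Batteries drain faster in cold weather",
    "The camera supports night mode photography",
    "Restart the phone to clear memory leaks",
]
question = "why does my battery drain"
context = retrieve(question, docs, k=1)

# The retrieved context is stuffed into the prompt handed to the local LLM.
prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: " + question
```

Swapping the toy `embed` for a proper embedding model and the sort for a vector index is what libraries like react-native-rag handle; the retrieve-then-prompt shape stays the same.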
SmartNotes is an offline AI Android app built with Jetpack Compose that summarises notes, retrieves related context, and speaks the results aloud using TensorFlow Lite and Android's native TTS engine.
This is the start of a multi-part series where I explore Google’s new GenAI features in ML Kit by building a real Android app from scratch, using Jetpack Compose and MVVM. It’s beginner-friendly, practical, and a bit fun too.