Project: 1 Creative App - With You #47

@AravindhaSamy-Synergech

Description

Track

Creative Apps (GitHub Copilot)

Project Name

With You

GitHub Username

@Aravindha-samy

Repository URL

https://github.com/Aravindha-samy/With-You

Project Description

With You - Project Description

"With You – When memory fades, presence remains."

Overview

With You is an AI-powered cognitive support system designed for individuals with early-stage Alzheimer's disease and their caregivers. It's not a reminder app—it's a persistent emotional memory companion that protects identity, relationships, emotional safety, and personal story continuity.

Core Philosophy

With You operates on an identity-first AI architecture that prioritizes:

  • Identity: Preserving who the person is
  • Relationships: Maintaining connections with loved ones
  • Emotional Safety: Protecting dignity and reducing anxiety
  • Personal Story Continuity: Ensuring life narrative remains intact

The Cognitive Mesh Architecture

Instead of a single chatbot, With You uses a multi-agent system where specialized AI agents work together quietly in the background. Each agent has a clear responsibility:

🌞 Aurora - The Orchestrator

Role: Central coordinator and traffic controller

Aurora decides:

  • What the patient is asking
  • Which agent should respond
  • Whether the patient sounds anxious
  • If caregiver alert is needed

Aurora directs intelligence but does not store memory.
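The routing role described above can be sketched in a few lines. This is an illustrative sketch only: the agent names come from the project, but the `classify_intent` keyword heuristic and the `AGENT_ROUTES` mapping are assumptions standing in for the real Azure OpenAI analysis.

```python
# Hypothetical Aurora-style dispatch. In the real system the intent and
# emotion come from Azure OpenAI; here a naive keyword check stands in.

AGENT_ROUTES = {
    "orientation": "Harbor",    # "Where am I?", "What day is it?"
    "relationship": "Roots",    # "Who is this person?"
    "distress": "Solace",       # anxiety, fear, confusion
    "schedule": "Harbor",       # visits and upcoming events
}

def classify_intent(utterance: str) -> str:
    """Naive stand-in for the LLM intent/emotion classifier."""
    text = utterance.lower()
    if any(w in text for w in ("scared", "afraid", "help me")):
        return "distress"
    if any(w in text for w in ("where am i", "what day", "what time")):
        return "orientation"
    if any(w in text for w in ("who is", "who's", "related")):
        return "relationship"
    return "schedule"

def route(utterance: str) -> str:
    """Aurora-style dispatch: pick an agent, never store the utterance."""
    return AGENT_ROUTES[classify_intent(utterance)]
```

The key property is that Aurora only selects a destination; retrieval and memory live in the specialized agents.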

🏠 Harbor - The Orientation Agent

Role: Keeps the patient grounded

Harbor answers:

  • What day is it?
  • Where am I?
  • Who is visiting today?
  • What is happening next?

Harbor handles repeated questions gently and increases reassurance if confusion rises, protecting stability and safety.

👨‍👩‍👧 Roots - Identity & Relationship Agent

Role: Protects relational identity

Roots answers:

  • Who is this person?
  • How am I related to them?
  • What is our shared history?

Roots maintains family structure, emotional importance of relationships, and shared memories—protecting familiarity and belonging.

💬 Solace - Emotional Intelligence Agent

Role: Detects and regulates emotional distress

Solace monitors:

  • Tone of voice
  • Word patterns
  • Anxiety signals
  • Repetition frequency

When distress is detected, Solace calms the patient, activates Calm Mode, and alerts caregivers if needed—protecting emotional safety and dignity.
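One plausible shape for Solace's monitoring is a simple distress score that combines word patterns with repetition frequency. The marker weights and the 0.6 alert threshold below are illustrative assumptions, not the project's actual parameters.

```python
# Hypothetical distress scoring; the real system detects emotion via
# Azure OpenAI. Weights and threshold here are illustrative assumptions.

ANXIETY_MARKERS = {"scared": 0.5, "lost": 0.4, "alone": 0.3, "confused": 0.3}

def anxiety_score(utterance: str, repetition_count: int) -> float:
    """Combine word-pattern signals with repetition frequency."""
    text = utterance.lower()
    score = sum(w for marker, w in ANXIETY_MARKERS.items() if marker in text)
    score += min(repetition_count * 0.1, 0.3)  # repeated questions add anxiety
    return min(score, 1.0)

def should_alert_caregiver(score: float, threshold: float = 0.6) -> bool:
    """Escalate to the caregiver only when distress stays high."""
    return score >= threshold
```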

🧠 Echo - Memory Layer Agent

Role: Long-term memory preservation

Echo:

  • Stores conversation history
  • Tracks emotional trends
  • Detects cognitive patterns
  • Identifies increasing repetition

Echo enables predictive anxiety detection and longitudinal emotional modeling, protecting continuity over time.
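Echo's repetition detection could be as simple as a sliding window over recent questions. The window size and data structure below are assumptions for illustration, not the project's actual implementation.

```python
# Hypothetical sketch of Echo's repetition tracking using a sliding window.
from collections import Counter, deque

class RepetitionTracker:
    """Keep a sliding window of questions and flag frequent repeats."""

    def __init__(self, window: int = 10, repeat_threshold: int = 3):
        self.recent = deque(maxlen=window)
        self.repeat_threshold = repeat_threshold

    def log(self, question: str) -> bool:
        """Record a question; return True once it repeats often enough."""
        key = question.strip().lower()
        self.recent.append(key)
        return Counter(self.recent)[key] >= self.repeat_threshold
```

A rising repeat rate is exactly the longitudinal signal that feeds predictive anxiety detection.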

👩‍⚕️ Guardian - Caregiver Agent

Role: The caregiver co-pilot

Guardian provides:

  • Daily cognitive summary
  • Emotional trend insights
  • Escalation alerts
  • Simple analytics dashboard

Guardian helps caregivers act early instead of reacting late.

🌅 Legacy - Story Continuity Agent

Role: Maintains life narrative

Legacy gently reinforces personal history and prevents identity erosion. When the patient references their past, Legacy completes the narrative with dignity and respect.

Operating Modes

Mode A - Structured Navigation (Button Mode)

Used when the patient taps predefined buttons like "Family Photos" or "Where am I?"

Characteristics:

  • Fast and stable
  • Controlled and low-risk
  • No advanced AI reasoning needed
  • Best for moderate cognitive stages

Flow:

  1. Patient taps button
  2. Aurora routes directly to correct agent
  3. Agent retrieves stored information
  4. Response shown/spoken
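The deterministic flow above needs no LLM call at all. A minimal sketch, with button ids, handler names, and stored answers all assumed for illustration:

```python
# Hypothetical button-mode dispatch: each button maps straight to a
# stored answer, bypassing AI reasoning entirely.

STORED_FACTS = {
    "location": "You're at home in Chennai. You're safe.",
    "today_visits": "Anna is coming today at 5 PM.",
}

BUTTON_HANDLERS = {
    "where_am_i": lambda: STORED_FACTS["location"],      # handled by Harbor
    "who_visits": lambda: STORED_FACTS["today_visits"],  # handled by Harbor
}

def handle_button(button_id: str) -> str:
    """Route a button press directly to its agent handler."""
    return BUTTON_HANDLERS[button_id]()
```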

Mode B - Free Speech Mode

Used when patient speaks naturally: "Is Anna coming today?"

Characteristics:

  • Emotional understanding
  • Anxiety detection
  • Intelligent routing
  • Context awareness
  • Predictive intelligence

Flow:

  1. Voice converted to text (Azure Speech)
  2. Aurora analyzes meaning and emotion (Azure OpenAI)
  3. Aurora selects correct agent
  4. Agent retrieves memory from database
  5. Response delivered and optionally spoken back
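The five steps above can be sketched end to end, with the Azure Speech and Azure OpenAI calls replaced by placeholder functions (everything in this block is an illustrative assumption about the pipeline's shape):

```python
# Hypothetical free-speech pipeline; the cloud calls are stubbed out.

def speech_to_text(audio: bytes) -> str:
    """Stand-in for Azure Speech; returns a fixed transcript here."""
    return "Is Anna coming today?"

def analyze(text: str) -> dict:
    """Stand-in for the Azure OpenAI intent/emotion analysis."""
    return {"intent": "schedule", "anxiety": 0.1}

def retrieve_answer(intent: str) -> str:
    """Stand-in for an agent's database lookup."""
    return "Anna is coming today at 5 PM." if intent == "schedule" else ""

def free_speech_turn(audio: bytes) -> str:
    text = speech_to_text(audio)                 # step 1: voice -> text
    analysis = analyze(text)                     # step 2: meaning + emotion
    return retrieve_answer(analysis["intent"])   # steps 3-5 condensed
```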

Technology Stack

AI & NLP

  • Azure OpenAI: Intent understanding, natural language processing, emotion detection
  • Azure Speech Services: Voice-to-text and text-to-speech capabilities

Backend

  • FastAPI: Python-based REST API server
  • Python: Core programming language
  • SQLite: Relational database for structured data storage

Frontend

  • Next.js: React-based web framework
  • TypeScript: Type-safe frontend development
  • Tailwind CSS: Styling and responsive design

Storage

  • SQLite Database: Stores memory, relationships, events, and cognitive metrics
  • Azure Blob Storage: Stores photos, audio files, and media assets

Database Design

1. Users Table

Stores basic patient profile, age, diagnosis stage, and caregiver contact information.

2. Relationships Table

Stores family members, relationship types (daughter, son, spouse), descriptions, importance levels, and photo references.

3. Events Table

Stores daily schedule, appointments, visits, and routine reminders.

4. Interactions Table

Stores conversation history, question frequency, emotional tone markers, and repetition counts.

5. Cognitive Metrics Table

Stores trend data including orientation frequency, anxiety averages, repetition patterns, and escalation flags.
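A plausible SQLite rendering of the five stores described above might look like this. The column names are assumptions inferred from the descriptions, not the project's actual schema.

```python
# Hypothetical schema sketch for the five stores; column names are assumed.
import sqlite3

SCHEMA = """
CREATE TABLE users (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    age INTEGER,
    diagnosis_stage TEXT,
    caregiver_contact TEXT
);
CREATE TABLE relationships (
    id INTEGER PRIMARY KEY,
    user_id INTEGER REFERENCES users(id),
    name TEXT NOT NULL,
    relation TEXT,            -- daughter, son, spouse, ...
    importance INTEGER,
    photo_ref TEXT
);
CREATE TABLE events (
    id INTEGER PRIMARY KEY,
    user_id INTEGER REFERENCES users(id),
    title TEXT,
    starts_at TEXT            -- ISO-8601 timestamp
);
CREATE TABLE interactions (
    id INTEGER PRIMARY KEY,
    user_id INTEGER REFERENCES users(id),
    utterance TEXT,
    emotional_tone TEXT,
    repetition_count INTEGER DEFAULT 0
);
CREATE TABLE cognitive_metrics (
    id INTEGER PRIMARY KEY,
    user_id INTEGER REFERENCES users(id),
    anxiety_avg REAL,
    orientation_frequency INTEGER,
    escalation_flag INTEGER DEFAULT 0
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
```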

User Experience

Patient Interface

  • Large, simple buttons for easy interaction
  • Voice-enabled natural conversation
  • Calm Mode for anxiety reduction
  • Family photo browsing
  • Emergency contacts access
  • No harsh corrections or reminders of forgetfulness

Caregiver Dashboard

  • Add/manage family member profiles
  • View cognitive trends and insights
  • Monitor emotional patterns
  • Receive escalation alerts
  • Access conversation summaries
  • Track orientation and memory metrics

Key Differentiators

  1. Identity-First Design: Protects dignity and personal identity above all
  2. Predictive Anxiety Detection: Intervenes before distress escalates
  3. No Harsh Corrections: Never says "you forgot" or "I told you before"
  4. Multi-Agent Intelligence: Specialized agents for different needs
  5. Longitudinal Tracking: Monitors cognitive patterns over time
  6. Dual Interface: Separate, optimized experiences for patients and caregivers
  7. Emotional Safety: Continuous monitoring and gentle reassurance
  8. Story Continuity: Actively prevents identity erosion

Behavioral Guardrails

The system follows strict ethical guidelines:

  • Never assumes cognitive decline
  • Never provides medical diagnosis
  • Never corrects harshly or dismisses emotions
  • Never argues with confusion or distorted memory
  • Always validates feelings and provides reassurance
  • Always maintains dignity and respect
  • Always prioritizes emotional safety over accuracy
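Guardrails like these are typically enforced twice: once in the system prompt and once as a post-generation filter. Both the prompt wording and the banned-phrase list below are illustrative assumptions, not the project's actual prompts.

```python
# Hypothetical guardrail enforcement: a shared prompt fragment plus a
# post-generation filter on the drafted response.

GUARDRAIL_PROMPT = (
    "You support a person living with early-stage Alzheimer's. "
    "Never say the user forgot something or that you told them before. "
    "Never diagnose. Always validate feelings and reassure gently."
)

BANNED_PHRASES = ("you forgot", "i told you before", "as i said earlier")

def passes_guardrails(response: str) -> bool:
    """Reject any draft response containing a forbidden correction."""
    text = response.lower()
    return not any(phrase in text for phrase in BANNED_PHRASES)
```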

Sample User Scenarios

Scenario 1: Orientation Question (Button Mode)

Patient: Taps "Where am I?" button

System Flow:

  1. Aurora receives structured request
  2. Routes to Harbor
  3. Harbor retrieves location data
  4. Response: "You're at home in Chennai. You moved here in 2018. You're safe."

Scenario 2: Relationship Question (Voice Mode)

Patient: "Is Anna coming today?"

System Flow:

  1. Azure Speech converts voice to text
  2. Aurora analyzes intent and emotion
  3. Routes to Harbor for event information
  4. Harbor checks schedule
  5. Response: "Anna is coming today at 5 PM."

Scenario 3: Anxiety Detection

Patient: "I don't know where I am... I feel scared."

System Flow:

  1. Aurora detects high anxiety score
  2. Routes to Solace immediately
  3. Solace checks recent interaction patterns via Echo
  4. Provides calming response: "You're at home. You're safe. I'm here with you."
  5. May trigger Calm Mode (music/photos)
  6. May alert caregiver if anxiety persists

Development Status

The project consists of:

  • Backend: Python FastAPI application with agent implementations
  • Frontend: Next.js application with patient and caregiver interfaces
  • Database: SQLite schema with proper relationships
  • Agent System: Seven specialized agents (Aurora, Harbor, Roots, Solace, Echo, Guardian, Legacy)

Project Structure

WithYou/
├── backend/          # FastAPI server and agent implementations
│   ├── app/
│   │   ├── agents/   # AI agent modules
│   │   ├── api/      # REST API endpoints
│   │   └── model/    # Database models
│   ├── database.py   # Database setup
│   └── main.py       # Application entry point
├── frontend/         # Next.js application
│   ├── app/          # Pages (patient/caregiver interfaces)
│   ├── components/   # React components
│   └── lib/          # Utilities and API client
└── foundry/          # Agent design documentation
    ├── agents and prompts.md
    ├── about with you app text.md
    └── user and category flow.md

Mission

With You exists to ensure that when memory fades, presence remains. It's about maintaining the essence of who someone is, preserving their relationships, and protecting their emotional wellbeing—allowing them to feel safe, connected, and valued throughout their journey with Alzheimer's disease.

Demo Video or Screenshots

README: https://github.com/Aravindha-samy/With-You

Primary Programming Language

Python

Key Technologies Used

Next.js
Python

Submission Type

Team (2-4 members)

Team Members

@Cshiva2603 - Full-stack Developer
@santhoshkotti - Full-stack Developer

Submission Requirements

  • My project meets the track-specific challenge requirements
  • My repository includes a comprehensive README.md with setup instructions
  • My code does not contain hardcoded API keys or secrets
  • I have included demo materials (video or screenshots)
  • My project is my own work with proper attribution for any third-party code
  • I agree to the Code of Conduct
  • I have read and agree to the Disclaimer
  • My submission does NOT contain any confidential, proprietary, or sensitive information
  • I confirm I have the rights to submit this content and grant the necessary licenses

Quick Setup Summary

To run the "With You" project, you'll need to set up both the Python backend and the Next.js frontend.

Backend Setup (FastAPI)

  1. Navigate to the backend directory:

    cd WithYou/backend
  2. Create and activate a Python virtual environment:

    # For Windows
    python -m venv .venv
    .\.venv\Scripts\Activate.ps1
    
    # For macOS/Linux
    python3 -m venv .venv
    source .venv/bin/activate
  3. Install dependencies:

    pip install -r requirements.txt
  4. Seed the database with initial data:

    python seed_data.py
  5. Run the backend server:

    uvicorn main:app --reload

    The backend will be running at http://127.0.0.1:8000.

Frontend Setup (Next.js)

  1. Navigate to the frontend directory in a new terminal:

    cd WithYou/frontend
  2. Install dependencies using pnpm:

    pnpm install
  3. Run the frontend development server:

    pnpm dev

    The frontend will be running at http://localhost:3000.

  4. Open the application:
    Open your web browser and navigate to http://localhost:3000.

Technical Highlights

1. The "Cognitive Mesh": A Multi-Agent System for Emotional Safety

The core of our implementation is the Cognitive Mesh, a multi-agent architecture where each AI agent has a distinct, specialized role. Instead of a monolithic chatbot, we created a team of agents (Aurora, Harbor, Roots, Solace, Echo, Guardian, Legacy) that collaborate to provide holistic cognitive support.

Technical Decision: We chose this architecture to enforce a separation of concerns at the AI level. This allows for:

  • Targeted Prompt Engineering: Each agent's system prompt is highly focused, leading to more accurate and safer responses. For example, Solace is an expert in de-escalation, while Harbor is an expert in providing grounding information.
  • Maintainability and Scalability: It's easier to debug, refine, or replace a single agent without affecting the entire system.
  • Safety and Guardrails: The orchestrator, Aurora, acts as a central routing and safety layer, assessing intent and emotion before dispatching to a specialized agent. This prevents inappropriate or emotionally jarring responses.

We are most proud of how this architecture directly serves the project's primary goal: protecting the user's emotional safety and dignity.

2. Identity-First AI and Proactive De-escalation

Our system is built on an identity-first principle. The AI is designed to never correct the user harshly or remind them of their memory loss.

Technical Decision: We implemented this through a combination of prompt design and intelligent routing.

  • Behavioral Guardrails: Agents like Harbor and Roots have strict rules in their prompts, such as "Never say 'You forgot'" and "Never say 'As I told you before.'"
  • Proactive Emotional Routing: The Aurora orchestrator analyzes the emotional content of every user utterance. If the anxiety score exceeds a certain threshold, the request is immediately routed to the Solace agent, which is specialized in emotional regulation. This allows the system to de-escalate a situation proactively rather than reacting to it after the fact.

This proactive approach to emotional safety is a key technical achievement that differentiates "With You" from standard conversational AI.
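The emotion-over-intent precedence described above can be sketched as a one-line override in agent selection. The 0.7 threshold and the intent-to-agent mapping are assumptions for illustration.

```python
# Hypothetical sketch of proactive emotional routing: de-escalation
# takes precedence over answering the factual question.

ANXIETY_THRESHOLD = 0.7  # assumed value, not the project's actual setting

def select_agent(intent: str, anxiety: float) -> str:
    """Solace wins whenever anxiety runs high, regardless of intent."""
    if anxiety >= ANXIETY_THRESHOLD:
        return "Solace"
    return {"orientation": "Harbor", "relationship": "Roots"}.get(intent, "Harbor")
```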

3. Dual-Mode Interaction for Cognitive Accessibility

We recognized that users with cognitive decline have varying abilities and needs. To address this, we designed two distinct interaction modes.

Technical Decision:

  • Structured Navigation (Button Mode): This mode bypasses complex AI reasoning. Button presses are mapped to direct API calls that trigger specific agents (e.g., "Where am I?" -> Harbor). This provides a fast, stable, and predictable experience for users who may be overwhelmed by open-ended conversation.
  • Free Speech Mode: This mode leverages the full power of the Cognitive Mesh, using Azure Speech and Azure OpenAI for users who are more comfortable speaking naturally.

This dual-mode approach makes the application more accessible and adaptable to the user's changing cognitive state, ensuring it remains a useful tool throughout their journey. It represents a thoughtful blend of simple, deterministic logic and advanced AI reasoning to create a user-centric experience.


Challenges & Learnings

Challenge 1: Designing for Emotional Safety, Not Just Accuracy

The Challenge: Our biggest challenge was moving beyond the traditional goal of chatbot accuracy to prioritize emotional safety. A standard AI might bluntly correct a user or fail to recognize subtle signs of distress, which would be harmful to someone with cognitive decline. We couldn't just build a Q&A bot; we had to build an empathetic companion. How do you program empathy and dignity?

What We Learned:

  • Specialization is Key: We learned that a single, monolithic AI cannot be an expert in everything. Our initial attempts with a single agent often led to generic or tonally inappropriate responses. This led us to develop the Cognitive Mesh, our multi-agent architecture. By creating specialized agents like Solace (for emotional regulation) and Roots (for identity), we could craft highly-focused system prompts and guardrails for each one.
  • Prompt Engineering as Empathy: Crafting the system prompts became an exercise in applied empathy. We spent a significant amount of time on the behavioral guardrails, explicitly forbidding phrases like "You forgot" or "I already told you." We learned that for this application, the constraints you put on the AI are more important than the knowledge you give it.
  • Orchestration is the Safety Net: The Aurora agent became our most critical component. We learned that having an intelligent router to first assess intent and, more importantly, emotion, was the key to preventing harmful interactions. By routing high-anxiety queries to Solace before attempting to answer a factual question, the system can prioritize de-escalation over information, which is a fundamental shift from conventional AI design.

Challenge 2: Balancing Simplicity with Intelligence

The Challenge: The user interface had to be radically simple for the patient, yet the backend system was complex. How could we create an experience that was both powerful and accessible, catering to users who might be overwhelmed by technology?

What We Learned:

  • One Size Doesn't Fit All: We quickly realized that a single mode of interaction was insufficient. This led to the creation of our dual-mode system.
    1. Structured Navigation (Button Mode) provides a safe, predictable, and low-cognitive-load path. It's deterministic and bypasses complex AI, which is crucial for users in a state of confusion.
    2. Free Speech Mode offers the full power of the AI for users who are able to communicate more naturally.
  • Accessibility is More Than UI: We learned that for AI applications, accessibility isn't just about large fonts and buttons. It's about providing multiple, adaptable pathways for interaction. The system must adapt to the user's cognitive state, not the other way around. This design decision made the application more inclusive and useful across different stages of cognitive ability.

Challenge 3: Preventing "AI Hallucinations" in a High-Stakes Context

The Challenge: Large language models can "hallucinate" or invent information. In a medical or emotional support context, providing fabricated information about a person's life or schedule would be confusing and potentially dangerous.

What We Learned:

  • Grounding the AI in a "Single Source of Truth": We learned that the AI should not be the source of truth; it should be an interpreter of the truth. Our agents are designed to be heavily grounded in the structured data from our SQLite database.
  • Separation of Roles: The Legacy (story) and Roots (identity) agents are strictly instructed to only use information from the database. They are not allowed to infer or create new "memories." The Echo agent's role is simply to log interactions and compute trends, not to generate narrative content.
  • The Power of Structured Data: This challenge reinforced the importance of a well-designed database schema. By having the caregiver input structured, factual data, we create a reliable foundation that the AI can draw from. The AI's job is to present this data in a gentle, conversational, and context-aware manner, not to invent it. This significantly reduces the risk of harmful hallucinations and ensures the continuity of the user's true personal story.
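The "interpreter of the truth" idea above amounts to prompt construction: database facts go in, and the model is told to use nothing else. The function and field names below are illustrative assumptions.

```python
# Hypothetical grounded-prompt builder: the model may only rephrase
# caregiver-entered facts, never invent new ones.

def grounded_prompt(question: str, facts: dict) -> str:
    """Build a prompt that restricts the model to database facts."""
    fact_lines = "\n".join(f"- {k}: {v}" for k, v in facts.items())
    return (
        "Answer gently, using ONLY the facts below. "
        "If the facts do not cover the question, say you are not sure.\n"
        f"Facts:\n{fact_lines}\n"
        f"Question: {question}"
    )
```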

Contact Information

https://www.linkedin.com/in/aravindhasamyb/

Country/Region

India
