An all-in-one, browser-based presentation tool combining a minimalist Markdown-based slide editor with an AI gesture-control system. Create and deliver beautiful presentations using only a web browser and a webcam.
- Markdown-based Editor - Write slides quickly with familiar Markdown syntax
- Live Preview - See your slides in real-time as you type
- AI Gesture Controls - Navigate slides hands-free using hand gestures
- Full-screen Presentation Mode - Immersive presentation experience
- Keyboard Fallback - Arrow keys always work as backup navigation
- 100% Client-side - All processing happens in your browser
- No Clickers Needed - Freedom to move and gesture naturally
- React 19 - Modern React with hooks
- React Router v7 - File-based routing with SSR
- TensorFlow.js - Machine learning in the browser
- MoveNet Pose Detection - Real-time gesture recognition
- Tailwind CSS v4 - Utility-first styling
- Shadcn/ui - Beautiful, accessible components
- TypeScript - Type-safe development
- Node.js 18+ installed
- A webcam (for gesture controls)
Install the dependencies:
```bash
npm install
```

Start the development server with HMR:

```bash
npm run dev
```

Your application will be available at http://localhost:5173.
Run TypeScript type checking:
```bash
npm run typecheck
```

To create a presentation:

- Open the editor at http://localhost:5173
- Write your presentation using Markdown syntax
- Separate slides with `---` (three hyphens on a new line)
Example:

```markdown
# Welcome to My Presentation
Introduction slide content
---
# Second Slide
- Point one
- Point two
- Point three
---
# Thank You!
Questions?
```

To present:

- Click the "Present" button in the top-right corner
- Grant camera permissions when prompted
- Wait for the AI model to load
- Use gestures or keyboard to navigate:
- Swipe Right (right hand) - Next slide
- Swipe Left (left hand) - Previous slide
- Arrow Keys - Keyboard fallback
- ESC - Exit presentation
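For reference, the keyboard fallback lives in `app/hooks/useKeyboardNavigation.ts`. A minimal sketch of what such a hook could look like; the callback names (`nextSlide`, `prevSlide`, `exit`) are illustrative assumptions, not the project's exact API:

```tsx
import { useEffect } from "react";

// Illustrative sketch of a keyboard-navigation hook.
// The callback names are assumptions for this example.
export function useKeyboardNavigation(handlers: {
  nextSlide: () => void;
  prevSlide: () => void;
  exit: () => void;
}) {
  useEffect(() => {
    const onKeyDown = (event: KeyboardEvent) => {
      if (event.key === "ArrowRight") handlers.nextSlide();
      else if (event.key === "ArrowLeft") handlers.prevSlide();
      else if (event.key === "Escape") handlers.exit();
    };
    window.addEventListener("keydown", onKeyDown);
    return () => window.removeEventListener("keydown", onKeyDown);
  }, [handlers]);
}
```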
The app uses the TensorFlow.js MoveNet model to detect hand gestures:
- Position yourself so your webcam can see your upper body
- Make clear, deliberate swipe gestures with one hand
- There's a short cooldown between gestures to prevent accidental triggers
- The "AI Active" indicator shows when gesture detection is working
```
air-deck/
├── app/
│   ├── components/               # React components
│   │   ├── ui/                   # Shadcn/ui components
│   │   ├── Editor.tsx            # Main editor with split layout
│   │   ├── PresentationView.tsx  # Presentation mode
│   │   ├── SlideRenderer.tsx     # Markdown renderer
│   │   └── GestureFeedback.tsx   # Gesture UI feedback
│   ├── contexts/                 # React contexts
│   │   └── PresentationContext.tsx  # Global state
│   ├── hooks/                    # Custom React hooks
│   │   ├── useGestureDetection.ts
│   │   └── useKeyboardNavigation.ts
│   ├── lib/                      # Utilities
│   │   ├── slideParser.ts
│   │   └── utils.ts
│   ├── routes/                   # React Router routes
│   │   ├── home.tsx              # Editor route
│   │   └── present.tsx           # Presentation route
│   ├── services/                 # Business logic
│   │   └── gestureDetection.ts
│   └── root.tsx                  # Root layout
├── slides/                       # Sample markdown files
│   └── demo.md
├── docs/                         # Documentation
│   ├── PRD.md                    # Product requirements
│   └── TRD.md                    # Technical requirements
└── public/                       # Static assets
```
- PresentationContext provides global state for slides and navigation
- Markdown text is split into slides using the `\n---\n` delimiter (as sketched below)
- The current slide index is tracked for both editor preview and presentation mode
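A minimal sketch of that split, assuming a `parseSlides` helper in `app/lib/slideParser.ts` (the function name and trimming behavior are assumptions):

```ts
// app/lib/slideParser.ts (illustrative sketch; the real parser may differ)
export function parseSlides(markdown: string): string[] {
  return markdown
    .split(/\n---\n/) // slide delimiter: three hyphens on their own line
    .map((slide) => slide.trim())
    .filter((slide) => slide.length > 0); // ignore empty slides
}

// parseSlides("# One\n---\n# Two") => ["# One", "# Two"]
```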
- Gesture detection is lazy-loaded only in the presentation route to keep the editor lightweight
- Uses MoveNet Lightning model (fastest variant)
- Tracks wrist keypoints to detect swipe gestures
- Cooldown mechanism prevents multiple triggers
- Runs in a `requestAnimationFrame` loop for real-time detection (see the sketch below)
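A rough sketch of that loop using the public `@tensorflow-models/pose-detection` API. The threshold, cooldown value, and callback shape are assumptions for illustration, not the project's exact implementation:

```ts
import * as poseDetection from "@tensorflow-models/pose-detection";
import "@tensorflow/tfjs-backend-webgl";

const SWIPE_THRESHOLD = 120; // px of horizontal wrist travel (assumed value)
const COOLDOWN_MS = 1000;    // pause between gestures (assumed value)

export async function startGestureLoop(
  video: HTMLVideoElement,
  onSwipeRight: () => void
) {
  // MoveNet Lightning is the fastest single-pose variant
  const detector = await poseDetection.createDetector(
    poseDetection.SupportedModels.MoveNet,
    { modelType: poseDetection.movenet.modelType.SINGLEPOSE_LIGHTNING }
  );

  let lastGestureAt = 0;
  let prevWristX: number | null = null;

  const loop = async () => {
    const [pose] = await detector.estimatePoses(video);
    const wrist = pose?.keypoints.find((k) => k.name === "right_wrist");

    if (wrist && (wrist.score ?? 0) > 0.4) {
      const movedFarEnough =
        prevWristX !== null && wrist.x - prevWristX > SWIPE_THRESHOLD;
      const offCooldown = Date.now() - lastGestureAt > COOLDOWN_MS;

      if (movedFarEnough && offCooldown) {
        lastGestureAt = Date.now();
        onSwipeRight(); // e.g. advance to the next slide
      }
      prevWristX = wrist.x;
    }

    requestAnimationFrame(loop); // real-time detection loop
  };

  requestAnimationFrame(loop);
}
```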
The app has two routes:

- `/` - Editor route (no AI models loaded)
- `/present` - Presentation route (loads TensorFlow.js + pose detection)
This separation ensures fast editor performance while enabling powerful gesture controls in presentation mode.
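One way to get that behaviour is to pull in the ML packages with dynamic `import()` only when the presentation route mounts; a minimal sketch, assuming the route component looks roughly like this:

```tsx
// app/routes/present.tsx (illustrative sketch)
import { useEffect } from "react";

export default function Present() {
  useEffect(() => {
    (async () => {
      // TensorFlow.js and the pose model are fetched only when this
      // route mounts, so the editor route stays lightweight.
      const poseDetection = await import("@tensorflow-models/pose-detection");
      await import("@tensorflow/tfjs-backend-webgl");
      const detector = await poseDetection.createDetector(
        poseDetection.SupportedModels.MoveNet
      );
      console.info("MoveNet detector ready", detector);
      // ...hand the detector to the gesture loop (see the sketch above)
    })();
  }, []);

  return <div>{/* presentation view */}</div>;
}
```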
Create a production build:
```bash
npm run build
```

Start the production server:

```bash
npm run start
```

Supported browsers:

- Chrome/Edge 90+ (recommended for best TensorFlow.js performance)
- Firefox 88+
- Safari 14+ (may have slower model loading)
If gesture controls aren't working:
- Check browser permissions for camera access
- Make sure no other app is using the webcam
- Try refreshing the page
- Use keyboard arrow keys as fallback
The AI model is ~10-20MB and downloads on first presentation:
- Wait for "AI Active" indicator
- Subsequent presentations will be faster (cached)
- Use keyboard navigation while loading
If gesture detection is laggy:
- Close other tabs/applications
- Ensure good lighting for better detection
- Consider using a faster browser (Chrome recommended)
See docs/PRD.md for planned features:
- Speaker notes support
- Multiple gesture types
- Theme customization
- Local storage auto-save
- PDF export
This is a demonstration project built according to the PRD and TRD specifications. Feel free to fork and extend it with additional features!
MIT
Built with ❤️ using React Router, TensorFlow.js, and Tailwind CSS.