An AI-powered note-taking app that runs entirely on your home network.
It records short audio segments and turns them into concise summaries, ideas, and reminders.
Tap record, speak naturally, and EchoMem will extract the key points for you to review later.
Core features:
- Local only – everything runs on your own machine, no cloud services or external API costs.
- Secure by default – uses HTTPS on your home network, with automatic service discovery (mDNS).
- Multiple devices – phones, tablets, or desktops can connect to the same server.
- Profiles – keep notes separated into different spaces (e.g. personal, work, family).
- Offline use – record notes when away; they will be processed once you reconnect to your home server.
- Automatic tags – notes are labeled so you can filter and browse related content easily.
This project is experimental and not production-ready.
It may contain serious bugs, incomplete features, and breaking changes.
Do not use it for critical work or important data.
- Android device, Android 12 (API 31) or later.
- Windows workstation to run the server that processes your recordings.
- Ollama for local LLM inference.
- A GPU is recommended for running larger models and for better performance.
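If Ollama is not installed yet, a typical setup looks like the following sketch. The winget package ID and the llama3.2 model are only placeholders based on common Ollama usage; which models EchoMem actually needs depends on your server configuration.

# Install Ollama (or download the installer from https://ollama.com)
winget install Ollama.Ollama
# Verify the CLI is available and pull a model to confirm inference works
ollama --version
ollama pull llama3.2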
Contained in the server directory.
Create a folder for your EchoMem installation & data storage. Note that your data will be stored in plaintext within this folder, along with generated certificates for HTTPS.
mkdir ~/echomem; cd ~/echomem
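The command above works as-is in PowerShell; if you are in a classic cmd.exe prompt instead, the equivalent is:

mkdir %USERPROFILE%\echomem && cd /d %USERPROFILE%\echomem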
Initialize the server. This step is optional, as the server can run with defaults.
echomem-server init
This will create echomem.config.yaml, which lists available configuration
values. Review and tweak the defaults as desired. You can also list
available configuration values from the command line:
echomem-server -h
Configuration is resolved from the command line, environment, config file, then defaults (in that order). You can see all currently resolved configuration values by running:
echomem-server config
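For example, if echomem.config.yaml sets the Ollama autostart behaviour one way, passing the flag on the command line overrides it for that run, since command-line values take the highest precedence. (`--server.no_ollama_autostart` is the only option referenced in this guide; your generated config file is the authoritative list of keys.)

# Command-line flag beats the config file and environment for this run
echomem-server --server.no_ollama_autostart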
To start the server:
echomem-server
By default, the server will try to start Ollama and
ensure configured models are available for processing. Disable this with --server.no_ollama_autostart.
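If you prefer to manage Ollama yourself (for example, as its own long-running service), disable the autostart and run Ollama separately. `ollama serve` and `ollama pull` are standard Ollama commands; the model name below is only a placeholder for whatever your configuration requires.

# Start Ollama in its own terminal (this keeps running)
ollama serve
# In a second terminal, make sure a model is present
ollama pull llama3.2
# Then run EchoMem without it managing Ollama
echomem-server --server.no_ollama_autostart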
Once the server is up and running, you will be able to pair your mobile device with it over the local network.
Contained in the app directory.
The app will need to be side-loaded onto your device. When you first start the app, you will be prompted to pair with a server available on the local network. Discovery uses Multicast DNS (mDNS), so the server will need to be reachable on UDP port 5353.
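If the app cannot discover the server, mDNS traffic may be blocked by the Windows firewall. As a sketch (the rule name is arbitrary), the following command in an elevated PowerShell prompt allows inbound UDP 5353 on private networks:

# Allow mDNS discovery traffic through the Windows firewall
New-NetFirewallRule -DisplayName "EchoMem mDNS" -Direction Inbound -Protocol UDP -LocalPort 5353 -Action Allow -Profile Private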
The server will automatically generate the necessary security resources (such as the HTTPS certificates) when it first starts. When the device finds the server, verify the pairing by confirming that the displayed codes match.
A notification will appear on your workstation, and a corresponding screen is shown on your device.