WhatsApp client with LLM-powered auto-replies for Mac, Windows, and Linux.
Just want to run the app? Here's how:

- Download `release.zip` - get the latest version directly
- Extract the zip file anywhere on your computer
- Run the script for your system:
  - Windows: double-click `run-windows.bat`
  - Mac: open Terminal, navigate to the folder, and run `chmod +x run-mac.sh && ./run-mac.sh`
  - Linux: open Terminal, navigate to the folder, and run `chmod +x run-linux.sh && ./run-linux.sh`

That's it! The script installs everything needed and launches the app.
- Interactive LLM Integration: Choose among OpenAI's GPT models, local models via Ollama, or other custom LLM APIs
- Selective Auto-Reply: Enable or disable auto-reply for specific chats
- Customizable System Prompts: Define how your AI assistant should behave in conversations
- Multi-Platform Support: Works on Windows, macOS, and Linux
- Message History: Maintains conversation context for more coherent responses
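The message-history feature can be pictured as a sliding window over the conversation: the system prompt plus the last N messages are sent to the LLM. The helper below is a hypothetical sketch, not this project's actual code:

```javascript
// Hypothetical sketch (names are illustrative, not taken from this project):
// build the LLM context from the system prompt plus the last N messages.
function buildContext(systemPrompt, messages, historyLength) {
  const recent = messages.slice(-historyLength); // keep only the most recent N
  return [{ role: "system", content: systemPrompt }, ...recent];
}

const history = [
  { role: "user", content: "Hi" },
  { role: "assistant", content: "Hello!" },
  { role: "user", content: "What time suits you?" },
];
const context = buildContext("You are a helpful assistant.", history, 2);
console.log(context.length); // 3: system prompt + last 2 messages
```

Capping the window like this keeps token usage bounded while still giving the model enough recent context for coherent replies.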
- Download the latest `release.zip` file from the Releases page
- Extract the contents of the ZIP file to any location on your computer
- Run the appropriate script for your operating system:
  - Windows: double-click `run-windows.bat`
    - If Node.js is installed during setup, you'll need to close the window and run the script again
  - macOS:
    - Open Terminal
    - Navigate to the extracted folder: `cd path/to/extracted/folder`
    - Make the script executable: `chmod +x run-mac.sh`
    - Run the script: `./run-mac.sh`
  - Linux:
    - Open Terminal
    - Navigate to the extracted folder: `cd path/to/extracted/folder`
    - Make the script executable: `chmod +x run-linux.sh`
    - Run the script: `./run-linux.sh`

For detailed installation instructions, see the included `install_instructions.md` file.
- Node.js
- npm or yarn
- Clone the repository: `git clone https://github.com/iongpt/LLM-for-Whatsapp.git`, then `cd LLM-for-Whatsapp`
- Install dependencies: `npm install` (or `yarn install`)
- Build the application: `npm run build` (or `yarn build`)
- Start the application: `npm run start` (or `yarn start`)
- Launch the application
- Scan the QR code with WhatsApp on your phone (Menu > WhatsApp Web > Link a Device)
- After authentication, your chats will appear in the left sidebar
- Select a chat from the sidebar
- Toggle the "Auto-reply" switch to enable AI responses for that chat
- The assistant will automatically respond to incoming messages in that chat
- You can also toggle "Auto-reply to all chats" in the Settings tab
- OpenAI API (requires API key)
- Local models via Ollama
- Custom API endpoints for other LLM providers
- Temperature: Control randomness of responses (0.0-2.0)
- System prompt: Define assistant behavior
- History length: Number of messages to include for context
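As a rough illustration, the settings above might map to a configuration object like the one below. The key names here are hypothetical, not the app's actual settings schema:

```javascript
// Hypothetical settings object; the key names are illustrative only,
// not the app's actual settings schema.
const settings = {
  provider: "openai",   // "openai", "ollama", or "custom"
  temperature: 0.7,     // 0.0 (deterministic) through 2.0 (very random)
  systemPrompt: "You are a friendly assistant replying on my behalf.",
  historyLength: 10,    // number of past messages included as context
};
```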
To build for your platform:

```bash
# For all platforms
npm run package:all

# For Windows
npm run package:win

# For macOS
npm run package:mac

# For Linux
npm run package:linux
```

During development:

```bash
# Run in development mode with auto-reload
npm run dev

# Run linting
npm run lint
```

You can also use `yarn` instead of `npm run` for all the commands above if you prefer.
For details on integrating custom LLM backends, check out our `custom_LLM.md` guide.
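Many custom backends expose an OpenAI-compatible chat-completions API. A minimal sketch of building such a request looks like this; the function name, model name, and endpoint path are assumptions for illustration, so refer to the guide for the actual integration points:

```javascript
// Hypothetical sketch: build the JSON body for an OpenAI-compatible
// /v1/chat/completions request. Function and model names are placeholders.
function buildChatRequest(model, messages, temperature) {
  return { model, temperature, messages };
}

const body = buildChatRequest(
  "local-model",
  [
    { role: "system", content: "Reply briefly." },
    { role: "user", content: "Hello" },
  ],
  0.7
);
// The body would then be JSON.stringify-ed and POSTed to
// `${baseUrl}/v1/chat/completions`, with an Authorization header if required.
```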
Several utility scripts are included to help with troubleshooting:
- `apply-settings.js`: Applies all necessary settings for reply delay features
- `fix-settings.js`: Fixes issues with default settings
- `debug-settings.js`: Creates a debug window to inspect current settings
- `troubleshoot/inspect-settings.js`: Inspects and modifies settings files

To run these scripts:

```bash
node apply-settings.js
```
This project is not affiliated with WhatsApp or Meta. It uses an unofficial WhatsApp Web client library and should be used responsibly and in accordance with WhatsApp's terms of service.
MIT License