ArthaPage is a browser extension that adds a contextual AI sidebar optimized for research and reading. It lets users interact with webpage content using a range of Large Language Models (LLMs), including OpenAI, Gemini, Claude, DeepSeek, and local Ollama instances.
- Context-Aware Chat: Ask questions directly related to the current webpage.
- Model Flexibility: Switch between cloud providers (OpenAI, Gemini, Anthropic) and local privacy-focused models (Ollama).
- Isolated UI: Built with Shadow DOM to ensure styles do not bleed into or from the host webpage.
- Persistent Settings: Syncs preferences and API configurations across browser sessions.
- Privacy-First: API keys are stored in local storage and never transmitted to intermediate servers.
- Download the latest release from the Releases section.
- Extract the archive to a local folder.
- Open your browser (Chrome/Brave/Edge) and navigate to `chrome://extensions`.
- Enable Developer Mode in the top right.
- Click Load unpacked and select the `dist` folder from the extracted archive.
- Clone the repository: `git clone https://github.com/Start-Up-code/First-Web-Extension.git`
- Install dependencies: `npm install`
- Start the development server with Hot Module Replacement (HMR): `npm run dev`
- Load the `dist` folder in `chrome://extensions` as an unpacked extension.
To use cloud-based models like OpenAI, Gemini, Claude, or DeepSeek:
- Open the ArthaPage sidebar on any webpage.
- Click the Settings (gear icon) in the sidebar.
- Select your desired provider from the list.
- Enter your valid API Key in the input field.
- The extension is now ready to generate summaries and chat using the cloud provider.
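Under the hood, a cloud request boils down to an authenticated HTTP call. The sketch below shows what a request to an OpenAI-compatible chat endpoint might look like; the function and type names are illustrative, not ArthaPage's actual internal API. In the real extension, such a call would originate from the background service worker so the API key never reaches the page context.

```typescript
// Illustrative sketch: building a request for an OpenAI-compatible
// chat completions endpoint. Names here are assumptions, not the
// extension's real API surface.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(
  apiKey: string,
  model: string,
  messages: ChatMessage[]
): { url: string; init: { method: string; headers: Record<string, string>; body: string } } {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // The key stays in the request header; it is read from local
        // storage and never sent to any intermediate server.
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ model, messages }),
    },
  };
}
```

The background worker would then pass `url` and `init` to `fetch` and relay the response text back to the sidebar.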
To use local, privacy-focused models without an internet connection, configure Ollama as follows:
- Download and install Ollama from ollama.com.
- Open your terminal or command prompt and pull a lightweight model (e.g., DeepSeek or Llama 3) to get started: `ollama pull deepseek-r1:1.5b`

By default, Ollama blocks requests from browser extensions. You must set the `OLLAMA_ORIGINS` environment variable to allow the extension to communicate with Ollama.
For Windows (PowerShell):
- Close Ollama from the taskbar (system tray) if it is running.
- Open PowerShell and run: `[Environment]::SetEnvironmentVariable("OLLAMA_ORIGINS", "*", "User")`
- Restart the Ollama application (or restart your computer) so the change takes effect.
For Mac/Linux:
Set the variable when starting the server: `OLLAMA_ORIGINS="*" ollama serve`

Once configured, ensure the Ollama server is running (`ollama serve`), then:
- Open the ArthaPage sidebar.
- Go to Settings and select Ollama as the provider.
- Ensure the URL is set to `http://localhost:11434` (the default).
- Select your pulled model (e.g., `deepseek-r1:1.5b`) from the dropdown.
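Once Ollama accepts extension origins, talking to it is a plain local HTTP call. This is a hedged sketch of a non-streaming request to Ollama's `/api/chat` endpoint; the helper name is illustrative, and the call only succeeds after `OLLAMA_ORIGINS` is configured as described above.

```typescript
// Illustrative sketch: building a non-streaming chat request for a
// local Ollama server. The helper name is an assumption for this example.
function buildOllamaChatRequest(
  baseUrl: string,
  model: string,
  prompt: string
): { url: string; body: string } {
  return {
    // Strip a trailing slash so "http://localhost:11434/" also works.
    url: `${baseUrl.replace(/\/$/, "")}/api/chat`,
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
      stream: false, // request a single JSON response instead of a stream
    }),
  };
}
```

The extension would `fetch(url, { method: "POST", body })` and read the reply from the response's `message.content` field.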
Detailed documentation is available in the docs directory:
- Developer Guide: Advanced setup and build processes.
- Architecture: System design, message passing, and component isolation.
- Providers: Supported LLMs and integration details.
- Contributing: Guidelines for code contributions.
- Framework: React 18
- Language: TypeScript
- Build Tool: Vite + CRXJS
- Styling: Tailwind CSS (Shadow DOM compatible)
- State Management: React Hooks + Chrome Storage API
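A minimal sketch of how persistent settings might be modeled under this stack. The real extension syncs preferences via the Chrome Storage API; since storage may return only the keys a user has changed, a pure defaults-merge helper like the one below (all names hypothetical) keeps the rest of the settings well-defined.

```typescript
// Illustrative sketch: merging stored partial settings with defaults.
// Type and constant names are assumptions, not the extension's real code.
interface Settings {
  provider: "openai" | "gemini" | "anthropic" | "deepseek" | "ollama";
  model: string;
  ollamaUrl: string;
}

const DEFAULT_SETTINGS: Settings = {
  provider: "openai",
  model: "",
  ollamaUrl: "http://localhost:11434",
};

// chrome.storage may only contain keys the user changed; fill in the rest.
function withDefaults(stored: Partial<Settings>): Settings {
  return { ...DEFAULT_SETTINGS, ...stored };
}
```

A React hook would then load the stored object once on mount, pass it through `withDefaults`, and write changes back through `chrome.storage.sync.set`.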
First-Web-Extension/
├── public/ # Static assets
├── src/
│ ├── background/ # Service workers (API handling)
│ ├── components/ # React UI components
│ ├── content/ # Injected scripts (Shadow DOM host)
│ ├── lib/ # Utilities and LLM clients
│ ├── options/ # Options page
│ ├── popup/ # Extension popup
│ └── manifest.json # Manifest V3 configuration
└── vite.config.ts # Build configuration
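The layout above implies message passing between the injected content script (`src/content`) and the background service worker (`src/background`). A hedged sketch of what a typed message protocol could look like, with illustrative message names; the actual protocol lives in the repository:

```typescript
// Illustrative sketch: a typed message protocol between content script
// and background worker. Message names are assumptions for this example.
type SidebarMessage =
  | { kind: "summarize"; pageText: string }
  | { kind: "chat"; question: string };

// In the real extension this dispatcher would be registered via
// chrome.runtime.onMessage.addListener; shown here as a pure function.
function describeMessage(msg: SidebarMessage): string {
  switch (msg.kind) {
    case "summarize":
      return `summarize ${msg.pageText.length} chars`;
    case "chat":
      return `chat: ${msg.question}`;
  }
}
```

Using a discriminated union means TypeScript verifies at compile time that every message kind is handled.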
MIT



