Perplexed is an Obsidian plugin that enables AI-powered content generation with source citations using Perplexity and Perplexica. This plugin brings research-grade AI capabilities directly into your Obsidian workspace, allowing you to generate well-cited content for your notes.
- Source-Cited AI Responses: Get AI-generated content with proper citations and references
- Default format:
[1]: 2024, Dec 13. What is GRC (Governance, Risk and Compliance) - Metricstream. Published: 2024-05-01 | Updated: 2024-12-13
[2]: 2025, Jun 16. Governance, risk and compliance (GRC): Definitions and resources. Published: 2025-05-27 | Updated: 2025-06-16
- Multiple AI Providers: Support for Perplexity (commercial) and Perplexica (self-hosted)
- Streaming Responses: Real-time streaming of AI responses for better UX
- Flexible Configuration: Customizable endpoints, models, and parameters
- Deep Research Mode: Comprehensive research across hundreds of sources
- Local LLM Support: Integration with LM Studio for local AI processing
1. **Download the Plugin**:
   - Download the latest release from the releases page
   - Extract the ZIP file to your Obsidian plugins folder
2. **Enable in Obsidian**:
   - Open Obsidian Settings → Community Plugins
   - Turn off Safe Mode
   - Click "Install plugin from file"
   - Select the extracted plugin folder
   - Enable the "Perplexed" plugin
Perplexity is a commercial AI service that provides high-quality, source-cited responses.
1. **Get API Key**:
   - Visit Perplexity AI
   - Sign up for an account
   - Navigate to API settings to get your API key
2. **Configure in Plugin**:
   - Open Obsidian Settings → Community Plugins → Perplexed
   - Enter your Perplexity API key
   - The default endpoint should work: `https://api.perplexity.ai/chat/completions`
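Once the key and endpoint are configured, the plugin can issue standard chat-completion requests. A minimal sketch of what such a request looks like — the helper name is hypothetical, and the field names follow Perplexity's OpenAI-compatible chat completions API:

```typescript
// Hypothetical helper: assembles the fetch arguments for a Perplexity query.
// Field names follow Perplexity's OpenAI-compatible chat completions API.
interface PerplexityRequest {
  url: string;
  init: {
    method: string;
    headers: Record<string, string>;
    body: string;
  };
}

function buildPerplexityRequest(
  query: string,
  apiKey: string,
  endpoint = "https://api.perplexity.ai/chat/completions",
  model = "sonar-pro"
): PerplexityRequest {
  return {
    url: endpoint,
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model,
        messages: [{ role: "user", content: query }],
      }),
    },
  };
}
```

The plugin would then send this with `fetch(req.url, req.init)` and render the response into the active note.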
Perplexica is a free, open-source alternative that you can host yourself.
1. **Set up Perplexica Server**:
   - Follow the Perplexica setup guide
   - Ensure your server is running and accessible
2. **Configure in Plugin**:
   - Open plugin settings
   - Set the Perplexica endpoint (e.g., `http://localhost:3030/api/search`)
   - Configure your preferred model and settings
For local AI processing without internet dependency:
1. **Install LM Studio**:
   - Download from LM Studio
   - Install and start the application
2. **Configure in Plugin**:
   - Set the LM Studio endpoint: `http://localhost:1234/v1/chat/completions`
   - Choose your preferred local model
- Open Command Palette: `Ctrl/Cmd + Shift + P`
- Run Command: Type "Ask Perplexity" and select it
- Enter Your Question: Type your research question
- Configure Options:
  - Model: Choose from available Perplexity models
  - Citations: Enable/disable source citations
  - Images: Include image results
  - Recency Filter: Filter results by time period
  - Streaming: Enable real-time response streaming
- sonar-pro: Balanced performance and quality (recommended)
- sonar-small: Fast responses, good for simple queries
- sonar-deep-research: Comprehensive research across hundreds of sources
- llama-3.1-sonar-small-128k-online: Extended context window
- llama-3.1-sonar-large-128k-online: Large model with extended context
Question: "What are the latest developments in quantum computing?"
Model: sonar-pro
Citations: Enabled
Recency: Past month
Question: "Analyze the impact of AI on healthcare in the last 5 years"
Model: sonar-deep-research
Citations: Enabled
Recency: Past 5 years
Note: Deep research mode conducts exhaustive analysis across hundreds of sources and may take 30-60 seconds.
Perplexity responses include:
- Main Answer: Comprehensive response to your question
- Citations: Numbered references with source links
- Images: Relevant images (if enabled)
- Related Questions: Additional questions for exploration (if enabled)
Enhance selected text using Perplexity AI to improve clarity, add details, and make content more comprehensive.
- Select Text: Highlight the text you want to enhance in your note
- Open Command Palette: `Ctrl/Cmd + Shift + P`
- Run Command: Type "Enhance Selected Text with Perplexity" and select it
- Configure Options:
  - Model: Choose from available Perplexity models
  - Citations: Enable/disable source citations
  - Images: Include image results
  - Streaming: Enable real-time response streaming
- Replace Original: Replace the selected text with the enhanced version
- Insert Below: Insert the enhanced text below the current cursor position
- Preview: Review the enhanced text before applying changes
Original Text:
AI is changing how we work.
Enhanced Text:
Artificial Intelligence (AI) is fundamentally transforming how we work across various industries and sectors. From automating routine tasks to enabling more sophisticated decision-making processes, AI technologies are reshaping traditional workflows and creating new opportunities for productivity and innovation.
- Open Command Palette: `Ctrl/Cmd + Shift + P`
- Run Command: Type "Ask Perplexica" and select it
- Enter Your Question: Type your research question
- Configure Options:
  - Focus Mode: Choose search specialization
  - Optimization: Balance speed vs. quality
  - Streaming: Enable real-time response streaming
- webSearch: General web search (default)
- academicSearch: Academic and research papers
- writingAssistant: Writing and content creation
- wolframAlpha: Mathematical and computational queries
- youtubeSearch: Video content search
- redditSearch: Reddit community discussions
- speed: Fastest responses
- balanced: Good balance of speed and quality
- quality: Highest quality responses
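The focus and optimization modes above travel in the JSON body of the `/api/search` request. A sketch of that body — the field names (`query`, `focusMode`, `optimizationMode`) are assumptions based on the settings described here, so check your Perplexica version's API documentation before relying on them:

```typescript
// Hypothetical helper: builds the JSON body for a Perplexica /api/search call.
// Field names mirror the options documented above and may differ between
// Perplexica versions.
type FocusMode =
  | "webSearch" | "academicSearch" | "writingAssistant"
  | "wolframAlpha" | "youtubeSearch" | "redditSearch";
type OptimizationMode = "speed" | "balanced" | "quality";

function buildPerplexicaBody(
  query: string,
  focusMode: FocusMode = "webSearch",
  optimizationMode: OptimizationMode = "balanced"
): string {
  return JSON.stringify({ query, focusMode, optimizationMode });
}
```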
Question: "What are the current theories about dark matter?"
Focus Mode: academicSearch
Optimization: quality
Question: "Help me write an introduction about climate change"
Focus Mode: writingAssistant
Optimization: balanced
- Open Command Palette: `Ctrl/Cmd + Shift + P`
- Run Command: Type "Ask LM Studio" and select it
- Enter Your Question: Type your question
- Configure Options:
  - Model: Choose your local model
  - System Prompt: Customize AI behavior
  - Temperature: Control response creativity
  - Max Tokens: Limit response length
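Because LM Studio serves an OpenAI-compatible endpoint, the options above map directly onto standard chat-completion fields. A sketch (the helper name and default values are illustrative, not the plugin's actual code):

```typescript
// Sketch: maps the modal options onto LM Studio's OpenAI-compatible
// chat-completion body. Helper name and defaults are illustrative.
function buildLMStudioBody(
  question: string,
  model: string,
  systemPrompt: string,
  temperature = 0.7,
  maxTokens = 1024
): string {
  return JSON.stringify({
    model,
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: question },
    ],
    temperature,
    max_tokens: maxTokens,
  });
}
```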
Question: "Write a short story about a robot learning to paint"
Model: ibm/granite-3.2-8b
Temperature: 0.8
System Prompt: "You are a creative storyteller who writes engaging narratives."
Question: "Explain how neural networks work"
Model: microsoft/phi-4-reasoning-plus
Temperature: 0.3
System Prompt: "You are a technical expert who explains complex concepts clearly."
| Command | Description | Usage |
|---|---|---|
| Ask Perplexity | Query Perplexity AI with full configuration | Editor command with modal interface |
| Enhance Selected Text with Perplexity | Enhance selected text using Perplexity AI | Editor command with modal interface |
| Update Perplexity URL | Change Perplexity API endpoint | Settings command |
| Show Perplexity Settings | Display current Perplexity configuration | Debug command |
| Command | Description | Usage |
|---|---|---|
| Ask Perplexica | Query Perplexica with focus and optimization modes | Editor command with modal interface |
| Update Perplexica URL | Change Perplexica API endpoint | Settings command |
| Show Perplexica Settings | Display current Perplexica configuration | Debug command |
| Command | Description | Usage |
|---|---|---|
| Ask LM Studio | Query local LM Studio with custom parameters | Editor command with modal interface |
| Update LM Studio URL | Change LM Studio API endpoint | Settings command |
| Show LM Studio Settings | Display current LM Studio configuration | Debug command |
You can set custom keyboard shortcuts for any command:
- Open Obsidian Settings → Hotkeys
- Search for "Perplexed" commands
- Assign your preferred shortcuts
perplexed-plugin/
├── main.ts # Main plugin file with all functionality
├── manifest.json # Plugin metadata and requirements
├── package.json # Dependencies and build scripts
├── esbuild.config.mjs # Build configuration
├── tsconfig.json # TypeScript configuration
├── styles.css # Plugin styles (if any)
└── README.md # This file
- Node.js (v18 or higher)
- pnpm (recommended) or npm
- Obsidian desktop application
- Git
1. **Clone the Repository**:
   ```bash
   git clone <repository-url>
   cd perplexed-plugin
   ```
2. **Install Dependencies**:
   ```bash
   pnpm install
   ```
3. **Build the Plugin**:
   ```bash
   pnpm build
   ```
4. **Development Mode**:
   ```bash
   pnpm dev
   ```
### Testing Your Plugin
1. **Create Symbolic Link** (macOS/Linux):
   ```bash
   ln -s /path/to/your/plugin /path/to/obsidian/vault/.obsidian/plugins/perplexed
   ```
   **Windows (PowerShell)**:
   ```powershell
   New-Item -ItemType SymbolicLink -Path "C:\path\to\obsidian\vault\.obsidian\plugins\perplexed" -Target "C:\path\to\your\plugin"
   ```
2. **Enable in Obsidian**:
   - Open Obsidian Settings → Community Plugins
   - Disable Safe Mode
   - Enable the "Perplexed" plugin
1. **PerplexedPlugin Class** (`main.ts`):
   - Main plugin class extending Obsidian's `Plugin`
   - Manages settings, commands, and UI components
   - Handles API interactions with all providers
2. **Settings Management**:
   - `PerplexedPluginSettings` interface defines all configurable options
   - `PerplexedSettingTab` provides the settings UI
   - Settings are persisted using Obsidian's data API
3. **Command Registration**:
   - `registerPerplexityCommands()`: Perplexity-specific commands
   - `registerPerplexicaCommands()`: Perplexica-specific commands
   - `registerLMStudioCommands()`: LM Studio-specific commands
4. **API Integration**:
   - `queryPerplexity()`: Handles Perplexity API calls
   - `queryPerplexica()`: Handles Perplexica API calls
   - `queryLMStudio()`: Handles LM Studio API calls
// Example from queryPerplexity method
if (useStreaming) {
  const reader = response.body?.getReader();
  if (!reader) return; // response.body can be null when streaming is unavailable
  const decoder = new TextDecoder(); // reuse one decoder across reads
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    const chunk = decoder.decode(value);
    // Process and display chunk in real-time
  }
}
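Each decoded chunk carries one or more server-sent events in the OpenAI-style streaming format (`data: {...}` lines, terminated by `data: [DONE]`). A sketch of extracting the text deltas from one chunk — the helper name is hypothetical, and a production version would also buffer events split across reads:

```typescript
// Sketch: pulls the text deltas out of one decoded SSE chunk.
// Assumes the OpenAI-style streaming format that Perplexity and LM Studio use.
function extractDeltas(chunk: string): string[] {
  const deltas: string[] = [];
  for (const line of chunk.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue;
    const payload = trimmed.slice(5).trim();
    if (payload === "[DONE]") continue; // end-of-stream sentinel
    try {
      const parsed = JSON.parse(payload);
      const text = parsed.choices?.[0]?.delta?.content;
      if (typeof text === "string") deltas.push(text);
    } catch {
      // A chunk can end mid-event; a real implementation would buffer
      // the partial line and prepend it to the next read.
    }
  }
  return deltas;
}
```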
Each command uses Obsidian's Modal class to create user-friendly input forms:
const modal = new (class extends Modal {
private queryInput!: HTMLTextAreaElement;
onOpen() {
// Create form elements
}
async onSubmit() {
// Handle form submission
}
})(this.app, this, editor);
Comprehensive error handling for API failures, network issues, and invalid configurations:
try {
const response = await fetch(endpoint, options);
if (!response.ok) {
throw new Error(`HTTP error! status: ${response.status}`);
}
} catch (error) {
new Notice(`Error: ${error.message}`);
console.error('API Error:', error);
}
The plugin supports extensive configuration through the settings interface:
interface PerplexedPluginSettings {
perplexityApiKey: string;
perplexityEndpoint: string;
perplexicaEndpoint: string;
lmStudioEndpoint: string;
defaultModel: string;
defaultOptimizationMode: string;
defaultFocusMode: string;
// ... additional settings
}
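On load, Obsidian plugins conventionally merge whatever was persisted over a set of defaults, so new settings keys get sensible values after an update. A sketch of that pattern for the interface above — the default values are illustrative, not necessarily the plugin's actual defaults:

```typescript
// Sketch of the standard Obsidian settings pattern: saved data is merged
// over defaults so missing keys fall back gracefully. Default values here
// are illustrative.
const DEFAULT_SETTINGS = {
  perplexityApiKey: "",
  perplexityEndpoint: "https://api.perplexity.ai/chat/completions",
  perplexicaEndpoint: "http://localhost:3030/api/search",
  lmStudioEndpoint: "http://localhost:1234/v1/chat/completions",
  defaultModel: "sonar-pro",
  defaultOptimizationMode: "balanced",
  defaultFocusMode: "webSearch",
};

// In the plugin: this.settings = mergeSettings(await this.loadData());
function mergeSettings(saved: Partial<typeof DEFAULT_SETTINGS> | null) {
  return Object.assign({}, DEFAULT_SETTINGS, saved ?? {});
}
```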
The project uses esbuild for fast compilation:
// esbuild.config.mjs
import esbuild from 'esbuild';
import process from 'process';
import builtins from 'builtin-modules';
const banner =
`/*
THIS IS A GENERATED/BUNDLED FILE BY ESBUILD
if you want to view the source, please visit the github repository of this plugin
*/
`;
const prod = (process.argv[2] === 'production');
esbuild.build({
banner: {
js: banner,
},
entryPoints: ['main.ts'],
bundle: true,
external: [
'obsidian',
'electron',
'@codemirror/autocomplete',
'@codemirror/collab',
'@codemirror/commands',
'@codemirror/language',
'@codemirror/lint',
'@codemirror/search',
'@codemirror/state',
'@codemirror/view',
'@lezer/common',
'@lezer/highlight',
'@lezer/lr',
...builtins],
format: 'cjs',
watch: !prod,
target: 'es2018',
logLevel: "info",
sourcemap: prod ? false : 'inline',
treeShaking: true,
outfile: 'main.js',
}).catch(() => process.exit(1));
1. **Create Feature Branch**:
       git checkout -b feature/your-feature-name
2. **Make Changes**:
   - Follow TypeScript best practices
   - Add proper error handling
   - Include JSDoc comments for public methods
3. **Test Your Changes**:
       pnpm build
       # Test in Obsidian
4. **Submit Pull Request**:
   - Include a clear description of changes
   - Add tests if applicable
   - Update documentation
- Use TypeScript strict mode
- Follow Obsidian plugin conventions
- Use async/await for API calls
- Implement proper error handling
- Add JSDoc comments for public APIs
Currently, testing is manual through Obsidian. To test:
- Build the plugin: `pnpm build`
- Enable in Obsidian
- Test all commands and settings
- Verify error handling with invalid configurations
1. **Add Settings**:
       interface PerplexedPluginSettings {
         newProviderEndpoint: string;
         newProviderApiKey: string;
         // ... other settings
       }
2. **Add Query Method**:
       public async queryNewProvider(query: string, options: any): Promise<void> {
         // Implementation
       }
3. **Register Commands**:
       private registerNewProviderCommands(): void {
         this.addCommand({
           id: 'ask-new-provider',
           name: 'Ask New Provider',
           editorCallback: (editor: Editor) => {
             // Modal implementation
           }
         });
       }
4. **Update Settings UI**: Add configuration options to `PerplexedSettingTab.display()`
The plugin inserts responses directly into the editor. To modify the format:
// In query methods, modify the headerText
const headerText = `\n\n***\n## Custom Header\n**Question:** ${query}\n\n### **Response**:\n\n`;
1. **API Key Not Working**:
   - Verify the API key is correct
   - Check API key permissions
   - Ensure the endpoint URL is correct
2. **Network Errors**:
   - Check internet connection
   - Verify firewall settings
   - Test endpoint accessibility
3. **Plugin Not Loading**:
   - Check the Obsidian console for errors
   - Verify the plugin is enabled
   - Check for conflicting plugins
Plugin debug output goes to the developer console:
- Open Obsidian
- Press `Ctrl/Cmd + Shift + I` (Developer Tools)
- Check the Console tab for plugin logs
- Check the Issues page
- Review the Discussions forum
- Contact the development team
The Lossless Group is a loose collection of individuals and organizations interested in creating winning formulae for using AI and Collaborative Tooling. We consult, invest in startups, run Venture Capital Funds, host Hackathons, build products, write content, and contribute to open source projects.
We are committed to playing on the frontiers of technology and staying curious and engaged.
License: MIT
Version: 0.0.0.1
Author: The Lossless Group
Support: GitHub Issues