Commit fd3d26d

feat: add support for Cerebras API provider and enhance MCQ functionality
1 parent 82239ff


7 files changed

+395
-35
lines changed


README.md

Lines changed: 39 additions & 18 deletions
@@ -1,6 +1,6 @@
 <div align="center">
 <h1>🤖 Cluelessly Coder</h1>
-<p>A powerful desktop application that helps developers solve coding problems by analyzing screenshots of code and providing AI-powered solutions.</p>
+<p>A powerful desktop application that helps developers solve MCQs and coding problems by analyzing screenshots of code and providing AI-powered solutions.</p>

 <img src="https://i.postimg.cc/cdqKxKyy/700-1x-shots-so.jpg" alt="">

@@ -32,17 +32,18 @@

 - **Multiple AI Providers**:
   - Google Gemini (2.5 Flash, 2.0 Flash, 2.5 Pro, and more)
-  - Groq (Llama 4 and Llama 3 models)
+  - Llama (Llama 4 and Llama 3 models)
   - Deepseek (R1 Distill Llama 70B)
   - Mistral (Saba 24B)
   - Qwen (Qwen3 32B and Qwen QWQ 32B)
   - OpenAI (GPT-4o, GPT-4o Mini)
 - **Smart Model Selection**:
   - Adaptive model selection based on task complexity
   - Balance between speed and accuracy
-- **Dual Interaction Modes**:
+- **Triple Interaction Modes**:
   - **Coder Mode**: Analyze code from screenshots and get structured solutions
   - **Question Mode**: Ask conversational questions with optional screenshot context
+  - **MCQ Mode**: Analyze multiple choice questions from screenshots and get correct answers with detailed explanations

 ### 🛠️ Developer Experience

@@ -101,20 +102,20 @@
 ### Development Mode

 ```bash
-npm dev # or bun run dev / yarn dev
+bun dev # or npm run dev / yarn dev
 ```

 ### Production Build

 ```bash
 # For Windows
-npm build:win
+bun build:win

 # For macOS
-npm build:mac
+bun build:mac

 # For Linux
-npm build:linux
+bun build:linux
 ```

 ## 🎯 Usage Guide
@@ -135,11 +136,20 @@ npm build:linux
 4. Submit your question with `Ctrl+Enter` / `Cmd+Enter`
 5. Get conversational AI responses with helpful explanations

+### MCQ Mode
+
+1. Switch to MCQ Mode using the "MCQ Mode" button or `Ctrl+M` / `Cmd+M`
+2. Take a screenshot of your multiple choice question using `Ctrl+H` / `Cmd+H`
+3. Click "Analyze MCQ" to process the question
+4. Get the correct answer highlighted with detailed explanations
+5. View explanations for why other options are incorrect (when available)
+
 ### Mode Switching

-- **Toggle between modes**: `Ctrl+M` / `Cmd+M`
+- **Cycle through modes**: `Ctrl+M` / `Cmd+M`
 - **Coder Mode**: Best for analyzing code problems and getting structured solutions
 - **Question Mode**: Perfect for asking general programming questions, getting explanations, or seeking advice
+- **MCQ Mode**: Ideal for analyzing multiple choice questions and understanding correct answers with explanations

 ## ⌨️ Keyboard Shortcuts

@@ -148,7 +158,7 @@ npm build:linux
 | Toggle Visibility | `Ctrl+B` / `Cmd+B` | Show/hide the application window |
 | Take Screenshot | `Ctrl+H` / `Cmd+H` | Capture a screenshot for analysis |
 | Process/Submit | `Ctrl+Enter` / `Cmd+Enter` | Process screenshots or submit questions |
-| Toggle Mode | `Ctrl+M` / `Cmd+M` | Switch between Screenshot and Question modes |
+| Toggle Mode | `Ctrl+M` / `Cmd+M` | Cycle between Coder, Question, and MCQ modes |
 | Delete Last Screenshot | `Ctrl+L` / `Cmd+L` | Remove the most recent screenshot |
 | Reset View | `Ctrl+R` / `Cmd+R` | Reset to initial state |
 | Quit Application | `Ctrl+Q` / `Cmd+Q` | Exit the application |
@@ -202,15 +212,26 @@ The new Question Mode transforms Cluelessly Coder into a conversational AI assis

 Choose different models based on your needs:

-- **Gemini 2.5 Pro**: Advanced reasoning for most complex problems
-- **Gemini 2.5 Flash**: Best for complex problems
-- **Gemini 2.0 Flash**: Balanced performance
-- **GPT-4o**: OpenAI's most capable model
-- **Llama 4**: Scout and Maverick
-- **Llama 3**: Another Open-source alternative
-- **Deepseek**: R1 Distill Llama 70B
-- **Mistral**: Saba 24B
-- **Qwen**: Qwen3 32B , Qwen QWQ 32B
+- Gemini 2.5 Pro
+- Gemini 2.5 Flash
+- Gemini 2.0 Flash
+- GPT-4o
+- Llama 4 Scout
+- Llama 4 Maverick
+- Llama 3.1 8B
+- Llama 3.1 8B Instant
+- Llama 3.3 70B
+- Llama 3.3 70B Versatile
+- Qwen 3 32B
+- Qwen 3 235B Instruct
+- Qwen 3 235B Thinking
+- Qwen 3 480B Coder
+- Gemma 2 9B IT
+- Deepseek R1 Distill Llama 70B
+- Meta Llama Prompt Guard 2 22M
+- Meta Llama Prompt Guard 2 86M
+- MoonshotAI Kimi K2 Instruct
+- Qwen/Qwen3-32B

 ## 🤝 Contributing


src/main/lib/config-manager.ts

Lines changed: 65 additions & 4 deletions
@@ -6,7 +6,7 @@ import OpenAI from 'openai'

 interface Config {
   apiKey: string
-  apiProvider: 'openai' | 'gemini' | 'groq'
+  apiProvider: 'openai' | 'gemini' | 'groq' | 'cerebras'
   extractionModel: string
   solutionModel: string
   debuggingModel: string
@@ -62,7 +62,10 @@ export class ConfigManager extends EventEmitter {
     }
   }

-  private sanitizeModelSelection(model: string, provider: 'openai' | 'gemini' | 'groq') {
+  private sanitizeModelSelection(
+    model: string,
+    provider: 'openai' | 'gemini' | 'groq' | 'cerebras'
+  ) {
     if (provider === 'openai') {
       const allowedModels = ['gpt-4o-mini', 'gpt-4o']
       if (!allowedModels.includes(model)) {
@@ -101,6 +104,24 @@ export class ConfigManager extends EventEmitter {
         return 'meta-llama/llama-4-scout-17b-16e-instruct'
       }
       return model
+    } else if (provider === 'cerebras') {
+      const allowedModels = [
+        'llama-4-scout-17b-16e-instruct',
+        'llama3.1-8b',
+        'llama-3.3-70b',
+        'qwen-3-32b',
+        'llama-4-maverick-17b-128e-instruct',
+        'qwen-3-235b-a22b-instruct-2507',
+        'qwen-3-235b-a22b-thinking-2507',
+        'qwen-3-coder-480b'
+      ]
+      if (!allowedModels.includes(model)) {
+        console.log(
+          `Invalid model: ${model} for provider: ${provider}. Defaulting to llama-4-scout-17b-16e-instruct`
+        )
+        return 'llama-4-scout-17b-16e-instruct'
+      }
+      return model
     }
     return model
   }
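The Cerebras branch above follows the same allow-list-with-fallback pattern as the other providers. As a standalone sketch (model ids copied from the diff; the free-function name is ours, not the project's):

```typescript
// Standalone sketch of the allow-list sanitization pattern from
// sanitizeModelSelection. Unknown model ids fall back to the default
// instead of failing the request.
const CEREBRAS_ALLOWED_MODELS: string[] = [
  'llama-4-scout-17b-16e-instruct',
  'llama3.1-8b',
  'llama-3.3-70b',
  'qwen-3-32b',
  'llama-4-maverick-17b-128e-instruct',
  'qwen-3-235b-a22b-instruct-2507',
  'qwen-3-235b-a22b-thinking-2507',
  'qwen-3-coder-480b'
]
const CEREBRAS_DEFAULT_MODEL = 'llama-4-scout-17b-16e-instruct'

function sanitizeCerebrasModel(model: string): string {
  if (!CEREBRAS_ALLOWED_MODELS.includes(model)) {
    // Mirrors the diff: log-and-default rather than throw
    return CEREBRAS_DEFAULT_MODEL
  }
  return model
}
```

Silently defaulting keeps a stale persisted config working after the allow-list changes, at the cost of hiding typos from the user.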
@@ -240,20 +261,25 @@ export class ConfigManager extends EventEmitter {
     return !!config.apiKey && config.apiKey.trim().length > 0
   }

-  public isValidApiKeyFormat(apiKey: string, provider?: 'openai' | 'gemini' | 'groq'): boolean {
+  public isValidApiKeyFormat(
+    apiKey: string,
+    provider?: 'openai' | 'gemini' | 'groq' | 'cerebras'
+  ): boolean {
     if (provider === 'openai') {
       return apiKey.trim().startsWith('sk-')
     } else if (provider === 'gemini') {
       return apiKey.trim().startsWith('AIzaSyB')
     } else if (provider === 'groq') {
       return apiKey.trim().startsWith('gsk_')
+    } else if (provider === 'cerebras') {
+      return apiKey.trim().startsWith('csk-')
     }
     return false
   }

   public async testApiKey(
     apiKey: string,
-    provider?: 'openai' | 'gemini' | 'groq'
+    provider?: 'openai' | 'gemini' | 'groq' | 'cerebras'
   ): Promise<{
     valid: boolean
     error?: string
@@ -263,6 +289,8 @@ export class ConfigManager extends EventEmitter {
       provider = 'openai'
     } else if (apiKey.trim().startsWith('gsk_')) {
       provider = 'groq'
+    } else if (apiKey.trim().startsWith('csk-')) {
+      provider = 'cerebras'
     } else {
       provider = 'gemini'
     }
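The prefix checks above amount to a small provider-detection routine. Sketched in isolation (the `detectProvider` name is ours; Gemini remains the catch-all default, exactly as in the diff):

```typescript
// Sketch of the key-prefix detection testApiKey uses when no provider
// is supplied. Gemini keys have no reliable unique prefix here, so
// 'gemini' is the fallback.
type ApiProvider = 'openai' | 'gemini' | 'groq' | 'cerebras'

function detectProvider(apiKey: string): ApiProvider {
  const key = apiKey.trim()
  if (key.startsWith('sk-')) return 'openai'
  if (key.startsWith('gsk_')) return 'groq'
  if (key.startsWith('csk-')) return 'cerebras'
  return 'gemini'
}
```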
@@ -274,6 +302,8 @@ export class ConfigManager extends EventEmitter {
       return this.testGeminiKey()
     } else if (provider === 'groq') {
       return this.testGroqKey(apiKey)
+    } else if (provider === 'cerebras') {
+      return this.testCerebrasKey(apiKey)
     }

     return { valid: false, error: 'Invalid provider' }
@@ -339,6 +369,37 @@ export class ConfigManager extends EventEmitter {
     }
   }

+  private async testCerebrasKey(apiKey: string): Promise<{
+    valid: boolean
+    error?: string
+  }> {
+    try {
+      const response = await fetch('https://api.cerebras.ai/v1/models', {
+        headers: {
+          Authorization: `Bearer ${apiKey}`,
+          'Content-Type': 'application/json'
+        }
+      })
+
+      if (!response.ok) {
+        throw new Error(`API request failed with status ${response.status}`)
+      }
+
+      const data = await response.json()
+      if (data && Array.isArray(data.data)) {
+        return { valid: true }
+      }
+
+      return { valid: false, error: 'Invalid API response' }
+    } catch (error) {
+      console.error('Cerebras API key test failed:', error)
+      return {
+        valid: false,
+        error: error instanceof Error ? error.message : 'Failed to validate Cerebras API key'
+      }
+    }
+  }
+
   public getOpacity(): number {
     const config = this.loadConfig()
     return config.opacity !== undefined ? config.opacity : 1.0
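`testCerebrasKey` accepts any `200` response whose JSON body carries a `data` array. That shape check can be exercised offline without a real key (the helper name is illustrative, not part of the codebase):

```typescript
// Offline sketch of the response-shape check inside testCerebrasKey:
// GET https://api.cerebras.ai/v1/models counts as a valid key when the
// JSON body has a `data` array (the OpenAI-style model-list envelope).
function isValidModelsResponse(payload: unknown): boolean {
  return (
    typeof payload === 'object' &&
    payload !== null &&
    Array.isArray((payload as { data?: unknown }).data)
  )
}
```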

src/main/lib/processing-manager.ts

Lines changed: 19 additions & 1 deletion
@@ -8,6 +8,7 @@ import { generateText, generateObject, CoreMessage, LanguageModel } from 'ai'
 import { createOpenAI } from '@ai-sdk/openai'
 import { createGoogleGenerativeAI } from '@ai-sdk/google'
 import { createGroq } from '@ai-sdk/groq'
+import { createCerebras } from '@ai-sdk/cerebras'
 import { z } from 'zod'

 export interface IProcessingManager {
@@ -68,6 +69,7 @@ export class ProcessingManager {
   private vercelOpenAI: ReturnType<typeof createOpenAI> | null = null
   private vercelGoogle: ReturnType<typeof createGoogleGenerativeAI> | null = null
   private vercelGroq: ReturnType<typeof createGroq> | null = null
+  private vercelCerebras: ReturnType<typeof createCerebras> | null = null

   private currentProcessingAbortController: AbortController | null = null
   private currentExtraProcessingAbortController: AbortController | null = null
@@ -89,6 +91,7 @@ export class ProcessingManager {
     this.vercelOpenAI = null
     this.vercelGoogle = null
     this.vercelGroq = null
+    this.vercelCerebras = null

     if (config.apiProvider === 'openai') {
       if (config.apiKey) {
@@ -112,18 +115,27 @@ export class ProcessingManager {
       if (config.apiKey) {
         this.vercelGroq = createGroq({
           apiKey: config.apiKey
-          // Add other Groq specific configurations here
         })
         console.log('Vercel Groq provider initialized successfully')
       } else {
         console.log('Vercel Groq provider not initialized: No API key provided')
       }
+    } else if (config.apiProvider === 'cerebras') {
+      if (config.apiKey) {
+        this.vercelCerebras = createCerebras({
+          apiKey: config.apiKey
+        })
+        console.log('Vercel Cerebras provider initialized successfully')
+      } else {
+        console.log('Vercel Cerebras provider not initialized: No API key provided')
+      }
     }
   } catch (error) {
     console.error('Error initializing Vercel AI provider:', error)
     this.vercelOpenAI = null
     this.vercelGoogle = null
     this.vercelGroq = null
+    this.vercelCerebras = null
   }
 }

@@ -136,6 +148,8 @@ export class ProcessingManager {
       return this.vercelGoogle(config.extractionModel || 'gemini-2.0-flash')
     } else if (config.apiProvider === 'groq' && this.vercelGroq) {
       return this.vercelGroq(config.extractionModel || 'meta-llama/llama-4-scout-17b-16e-instruct')
+    } else if (config.apiProvider === 'cerebras' && this.vercelCerebras) {
+      return this.vercelCerebras(config.extractionModel || 'llama-4-scout-17b-16e-instruct')
     }
     return null
   }
@@ -148,6 +162,8 @@ export class ProcessingManager {
       return this.vercelGoogle(config.solutionModel || 'gemini-2.0-flash')
     } else if (config.apiProvider === 'groq' && this.vercelGroq) {
       return this.vercelGroq(config.solutionModel || 'meta-llama/llama-4-scout-17b-16e-instruct')
+    } else if (config.apiProvider === 'cerebras' && this.vercelCerebras) {
+      return this.vercelCerebras(config.solutionModel || 'llama-4-scout-17b-16e-instruct')
     }
     return null
   }
@@ -160,6 +176,8 @@ export class ProcessingManager {
       return this.vercelGoogle(config.debuggingModel || 'gemini-2.0-flash')
     } else if (config.apiProvider === 'groq' && this.vercelGroq) {
       return this.vercelGroq(config.debuggingModel || 'meta-llama/llama-4-scout-17b-16e-instruct')
+    } else if (config.apiProvider === 'cerebras' && this.vercelCerebras) {
+      return this.vercelCerebras(config.debuggingModel || 'llama-4-scout-17b-16e-instruct')
     }
     return null
   }
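The extraction, solution, and debugging getters all repeat the same dispatch: pick the initialized provider, pass the configured model id, and fall back to a per-provider default. The shared shape, sketched with a stub factory (`resolveModel`, `ModelFactory`, and the stub are ours; the real code returns the SDK's `LanguageModel`):

```typescript
// Sketch of the getter pattern shared by the three model getters:
// dispatch on the configured provider and substitute a per-provider
// default model id when none is configured. A factory stands in for
// the Vercel AI SDK provider instances (createGroq/createCerebras).
type ModelFactory = (modelId: string) => { modelId: string }

function resolveModel(
  provider: 'groq' | 'cerebras',
  configuredModel: string | undefined,
  factories: Partial<Record<'groq' | 'cerebras', ModelFactory>>
): { modelId: string } | null {
  if (provider === 'groq' && factories.groq) {
    return factories.groq(configuredModel || 'meta-llama/llama-4-scout-17b-16e-instruct')
  }
  if (provider === 'cerebras' && factories.cerebras) {
    return factories.cerebras(configuredModel || 'llama-4-scout-17b-16e-instruct')
  }
  // Provider not initialized (e.g. missing API key): return null,
  // as the manager's getters do, and let callers bail out
  return null
}
```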

src/preload/index.d.ts

Lines changed: 2 additions & 2 deletions
@@ -1,15 +1,15 @@
 export interface ElectronAPI {
   getConfig: () => Promise<{
     apiKey?: string
-    apiProvider?: 'openai' | 'gemini' | 'groq'
+    apiProvider?: 'openai' | 'gemini' | 'groq' | 'cerebras'
     extractionModel?: string
     solutionModel?: string
     debuggingModel?: string
     language?: string
   }>
   updateConfig: (config: {
     apiKey?: string
-    apiProvider?: 'openai' | 'gemini' | 'groq'
+    apiProvider?: 'openai' | 'gemini' | 'groq' | 'cerebras'
     extractionModel?: string
     solutionModel?: string
     debuggingModel?: string

src/preload/index.ts

Lines changed: 1 addition & 1 deletion
@@ -24,7 +24,7 @@ const electronAPI = {
   getConfig: () => ipcRenderer.invoke('get-config'),
   updateConfig: (config: {
     apiKey?: string
-    apiProvider?: 'openai' | 'gemini'
+    apiProvider?: 'openai' | 'gemini' | 'groq' | 'cerebras'
     extractionModel?: string
     solutionModel?: string
     debuggingModel?: string
