Feat: Add uv Python environment, DSPy chatbot, Ollama, and an easy Flask frontend #359
base: main
Conversation
Co-authored-by: Kelvin <cychandt@connect.ust.hk>
…ulnerability detection and challenge generation
…port an investigator chatbot
…las-World chatbot
… modules, and add history management. Use dspy to replace whole old ollama logic
Disclaimer: same as #170. This ChatBot is for discussion and sharing only. There is no guarantee it resolves model hallucinations, that its output is correct, or that the issues it flags are on point.
Pull request overview
This pull request adds a chatbot system to the Atlas-World project, featuring dual AI personas (Investigator and Opposition) built with DSPy framework. The implementation includes a Flask-based web backend, an interactive HTML/JavaScript frontend, comprehensive Chinese documentation, and automated setup scripts for deployment with uv and Ollama.
Key Changes
- Implements DSPy-based chatbot modules with configurable LLM backends (Ollama, OpenAI, Gemini, Claude)
- Adds a Flask REST API with `/investigate`, `/oppose`, and `/reset` endpoints
- Creates a responsive web UI with Markdown rendering and MathJax support
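As a rough, framework-free sketch of how the three endpoints behave (all names here are illustrative; the PR's real handlers live in `script/chatbot.py` and use Flask with DSPy modules):

```python
# Illustrative sketch only; the actual PR uses Flask + DSPy.
chat_history = []  # shared conversation history

def investigate(user_query, ask_model):
    """Ask the Investigator persona and record the exchange (like POST /investigate)."""
    reply = ask_model(user_query, chat_history)
    chat_history.append({"role": "investigator", "query": user_query, "reply": reply})
    return {"response": reply}

def oppose(ask_model):
    """Ask the Opposition persona to challenge the history so far (like POST /oppose)."""
    reply = ask_model(None, chat_history)
    chat_history.append({"role": "opposition", "reply": reply})
    return {"response": reply}

def reset():
    """Clear the shared history (like POST /reset)."""
    chat_history.clear()
    return {"status": "history reset"}
```

The `ask_model` callable stands in for a configured DSPy module; in the real code the model backend is selected interactively via `script/llm_config.py`.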
Reviewed changes
Copilot reviewed 15 out of 18 changed files in this pull request and generated 23 comments.
| File | Description |
|---|---|
| use_chatbot_guide.md | Chinese user guide with installation and usage instructions |
| templates/index.html | Frontend interface with chat functionality and markdown rendering |
| script/chatbot.py | Flask backend with DSPy integration and endpoint handlers |
| script/llm_config.py | Interactive LLM configuration module with multi-provider support |
| script/llm_module/investigator/* | Investigator chatbot module and signature definition |
| script/llm_module/opposition/* | Opposition chatbot module with extensive prompt file |
| pyproject.toml | Project dependencies configuration for uv |
| .python-version | Python version specification (3.13) |
| .gitignore | Updated ignore patterns for Python and environment files |
| .vscode/settings.json | VS Code workspace configuration |
| .idea/.gitignore | IntelliJ IDEA ignore patterns |
Files not reviewed (1)
- .idea/.gitignore: Language not supported
```python
def forward(self, system_prompt, history):
    # history should be a dspy.History object
    result = self.predictor(system_prompt=system_prompt, history=history)
    return result.response
```
Copilot AI · Jan 7, 2026
The opposition module's signature defines response as an OutputField (line 15 of opposition/signature.py) but the module's forward method returns result.response as a string (line 15 of opposition/module.py). However, in chatbot.py line 70, the result is used directly without accessing the .response attribute, which is inconsistent with how the investigator module is used. This suggests the forward method should return the full result object, not just the response string.
```diff
-    return result.response
+    return result
```
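The mismatch can be illustrated with stand-in classes (simplified stubs, not the PR's actual DSPy modules):

```python
# Stand-ins for the two modules; names and shapes are illustrative.
class Result:
    def __init__(self, response):
        self.response = response

class InvestigatorModule:
    def forward(self, system_prompt, user_query, history):
        # Returns the full result object; callers read .response
        return Result(f"investigated: {user_query}")

class OppositionModule:
    def forward(self, system_prompt, history):
        # Returns a bare string, so callers must NOT read .response
        return Result("challenge").response
```

With the suggested fix, both modules would return the result object and every caller would uniformly access `.response`.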
```python
def forward(self, system_prompt, user_query, history):
    return self.investigate(
        system_prompt=system_prompt,
        user_query=user_query,
        history=history
    )
```
Copilot AI · Jan 7, 2026
The investigator module's forward method returns the full result object with .response attribute accessible (lines 13-16), but in chatbot.py line 50, it accesses result.response. This is correct, but inconsistent with the opposition module pattern. For consistency, either both modules should return just the response string, or both should return the full result object.
```python
else:
    print(" 選擇無效,預設用 openai/gpt-5.2")
    model_name = "openai/gpt-5.2"
    api_key = os.getenv("OPENAI_API_KEY")

lm = dspy.LM(model_name, api_key=api_key, cache=False)
```
Copilot AI · Jan 7, 2026
The default model fallback uses "openai/gpt-5.2" which does not exist. If a user provides an invalid choice, the application will fail when trying to use this non-existent model. Use a real, existing model as the fallback, such as "openai/gpt-4o" or "openai/gpt-4o-mini".
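One way to make the fallback explicit is a lookup table with a real default; the model names below are illustrative choices, not a claim about the project's final catalogue:

```python
# Illustrative model table; substitute whatever models the project supports.
OPENAI_MODELS = {
    "1": "openai/gpt-4o",
    "2": "openai/gpt-4o-mini",
    "3": "openai/o3-mini",
}
DEFAULT_MODEL = "openai/gpt-4o-mini"

def pick_model(choice: str) -> str:
    """Return the mapped model, warning and falling back on invalid input."""
    model = OPENAI_MODELS.get(choice.strip())
    if model is None:
        print(f"Invalid choice {choice!r}; defaulting to {DEFAULT_MODEL}")
        return DEFAULT_MODEL
    return model
```

Keeping the table and the printed default in one place also avoids the prompt offering options the mapping does not define.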
```markdown
## 1. 用到的工具簡介

- uv
```
Copilot AI · Jan 7, 2026
Spelling error: "enviorment" should be "environment" in the PR title. This typo should also be corrected.
```python
idx = input("選擇模型 (1-5): ").strip()
mapping = {
    "1": "openai/gpt-5.2",
    "2": "openai/gpt-4o",
```
Copilot AI · Jan 7, 2026
The mapping dictionary for OpenAI models is incomplete. It only defines mappings for options "1" and "2", but the prompt offers 5 options (1-5). Options 3, 4, and 5 will fallback to the default "openai/gpt-5.2" without warning. Either complete the mapping or update the prompt to match available options.
```diff
     "2": "openai/gpt-4o",
+    "3": "openai/gpt-4o-mini",
+    "4": "openai/o4-mini",
+    "5": "openai/o3-mini",
```
```python
with open(__file__.replace("signature.py", "prompt.md"), "r", encoding="utf-8") as f:
    return f.read()
```
Copilot AI · Jan 7, 2026
Error handling issue: If the prompt.md file cannot be found or read (e.g., incorrect path, missing file, encoding errors), the load_prompt() function will raise an unhandled exception during class definition, which will crash the application at import time. Add proper error handling with a fallback mechanism.
```python
prompt_path = __file__.replace("signature.py", "prompt.md")
try:
    with open(prompt_path, "r", encoding="utf-8") as f:
        return f.read()
except (OSError, UnicodeError):
    # Fallback: use a built-in prompt if the external file is unavailable
    return (
        "作為 Atlas-World 的反對派或監督者,根據文明憲法與原則,"
        "對目前的對話歷史提出挑戰、建設性批評或不同的觀點。"
    )
```
```html
<script src="https://cdn.jsdelivr.net/npm/marked/marked.min.js"></script>

<script>
window.MathJax = {
  tex: { inlineMath: [['$', '$'], ['\\(', '\\)']] },
  svg: { fontCache: 'global' }
};
</script>
<script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
```
Copilot AI · Jan 7, 2026
The HTML hardcodes external CDN URLs without integrity checks (SRI - Subresource Integrity). This creates a security vulnerability where if the CDN is compromised, malicious code could be injected. Add integrity attributes with hash values to the script tags for marked.js and MathJax to ensure the loaded scripts haven't been tampered with.
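If the maintainers pin exact CDN versions, the `integrity` value can be computed locally from the downloaded file. A small stdlib-only helper (not part of this PR) might look like:

```python
import base64
import hashlib

def sri_hash(data: bytes, algo: str = "sha384") -> str:
    """Return an SRI integrity value like 'sha384-<base64 digest>' for the given bytes."""
    digest = hashlib.new(algo, data).digest()
    return f"{algo}-{base64.b64encode(digest).decode('ascii')}"
```

The resulting string goes into the script tag's `integrity` attribute (together with `crossorigin="anonymous"`); note SRI only makes sense for pinned versions, not for "latest" URLs whose content changes.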
```javascript
// 1. Aggressively trim spaces just inside "bold" markers:
// turn `** text **`, `** text**`, or `**text **` into `**text**`
// by matching `**` ... `**` and stripping leading/trailing whitespace
content = content.replace(/\*\*\s*([^*]+?)\s*\*\*/g, '**$1**');
```
Copilot AI · Jan 7, 2026
The regex pattern on line 249 trims whitespace just inside bold markdown markers. It only strips leading and trailing spaces within `**...**`, so spaces between words in legitimate bold text are preserved; the real risk is that it also matches stretches of text where `**` is literal, for example prose or code containing Python exponentiation like `2 ** 3 and 4 ** 5`, which it rewrites to `2 **3 and 4** 5` and thereby corrupts.
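The pattern's actual behaviour can be checked with a Python port of the same regex (an illustrative harness, not code from the PR):

```python
import re

# Same pattern as the frontend JavaScript: ** <optional spaces> content <optional spaces> **
BOLD = re.compile(r'\*\*\s*([^*]+?)\s*\*\*')

def tidy_bold(text: str) -> str:
    """Trim whitespace just inside **...** markers, as the template's regex does."""
    return BOLD.sub(lambda m: f"**{m.group(1)}**", text)
```

Running it shows edge spaces are trimmed and internal spaces survive, but text where `**` is literal gets rewritten into bold.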
```python
opposer = OppositionModule()

# Managed with the official dspy.History
chat_history = dspy.History(messages=[])
```
Copilot AI · Jan 7, 2026
Global state management issue: The chat_history is a global variable that is shared across all users/sessions. In a multi-user environment, all users would see each other's conversation history, which is a serious privacy and functionality issue. Consider using session-based storage (e.g., Flask sessions or a session manager) to maintain separate histories for each user.
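A minimal sketch of per-session storage, kept framework-free for illustration (a real fix would key off Flask's `session` object or a signed cookie; the names here are hypothetical):

```python
import uuid

# One history list per session id, instead of a single module-level global.
_histories: dict[str, list] = {}

def new_session() -> str:
    """Issue an opaque session id for a new user."""
    return uuid.uuid4().hex

def get_history(session_id: str) -> list:
    """Return (creating if needed) the history belonging to one session."""
    return _histories.setdefault(session_id, [])
```

Each request handler would then look up its own history via the caller's session id rather than mutating shared state.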
```javascript
if (!confirm('確定要清空所有對話記憶嗎?')) return;
try {
  const resp = await fetch('/reset', { method: 'POST' });
  if (!resp.ok) throw new Error(resp.statusText);
  const data = await resp.json();
  resDiv.innerHTML = `<span style="color:#27ae60; font-weight:bold;">${data.status || '歷史已重置'}</span>`;
  resDiv.style.borderLeftColor = '#bdc3c7';
  setTimeout(() => {
    if (resDiv.textContent.includes('已重置')) resDiv.innerHTML = '準備就緒。';
  }, 1500);
} catch (err) {
  alert('重置失敗: ' + err);
}
});
```
Copilot AI · Jan 7, 2026
The confirmation guard on line 278 works, but success reporting is optimistic: if the backend reset fails while still returning a 2xx response, the UI shows success anyway. Consider verifying the response payload (e.g. an explicit status field) before displaying the success state.
KageRyo left a comment
That's hardcore.
3underscoreN left a comment
Please rebase to include new commits in #308
You guys really put your heart into this.
Done. Thanks for your suggestion.
That's hardcore 🛐
LGTM, you really put effort into this meme repository lol 😂
WuSandWitch left a comment
LGTM

A branch based on #308.
In short, this adds some chatbot code, documentation (use_chatbot_guide.md), and three auto-start scripts.
The frontend was thrown together quickly with Gemini.
Two chatbots are added: an Investigator and an Opposition. The Opposition's prompt follows #170.
dspy and uv are great to work with.
By default, the entire README.md is fed to the AI as initial input.
Usage: run the one-click install, then the one-click start.