37 commits
9fd0ddf  Implement local llm feature with ollama (Chung1045, Jan 7, 2026)
019a943  Update the local llm approach to use Flask Web Framework (Chung1045, Jan 7, 2026)
d659d5c  feat: Add flask template and update the frontend logic (Skywind5487, Jan 7, 2026)
8680dc5  feat: add uv stuff (Skywind5487, Jan 7, 2026)
cbd88dc  Update to fix overlooked issues (Chung1045, Jan 7, 2026)
23dba99  Update to fix overlooked issue (Chung1045, Jan 7, 2026)
41ca6ea  feat: update dependencies and add dotenv, Flask, and Flask-CORS (Skywind5487, Jan 7, 2026)
b356e8d  feat: add .gitignore file to exclude default ignored files and Python… (Skywind5487, Jan 7, 2026)
ca138d2  feat: update pyproject.toml and add dotenv and flask-cors dependencies (Skywind5487, Jan 7, 2026)
95ced16  feat: add OppositionModule and OppositionSignature DSPy classes for v… (Skywind5487, Jan 7, 2026)
f570e25  feat: add InvestigatorModule and InvestigatorSignature classes to sup… (Skywind5487, Jan 7, 2026)
b260eca  feat: add usage documentation and batch scripts to help launch the At… (Skywind5487, Jan 7, 2026)
986dd06  feat: add llm_config.py for configuring LLM model source selection (Skywind5487, Jan 7, 2026)
82720b9  feat: update LLM initialization to disable caching by default (Skywind5487, Jan 7, 2026)
9d52c76  feat: enhance chatbot functionality with investigation and opposition… (Skywind5487, Jan 7, 2026)
b1c205a  feat: Add an easy frontend (Skywind5487, Jan 7, 2026)
5b8eea5  feat: add settings.json (Skywind5487, Jan 7, 2026)
c3c9cc8  fix: disable reloader in app.run to prevent double prompts during mod… (Skywind5487, Jan 7, 2026)
abd2eb1  feat: disable caching for LLM initialization across all model choices (Skywind5487, Jan 7, 2026)
5428c50  Update 一鍵安裝uv+python+各種package.bat (Skywind5487, Jan 7, 2026)
dcb305d  feat: Add flask template and update the frontend logic (Skywind5487, Jan 7, 2026)
e8f0005  feat: add uv stuff (Skywind5487, Jan 7, 2026)
9af2ca1  feat: update dependencies and add dotenv, Flask, and Flask-CORS (Skywind5487, Jan 7, 2026)
361e8ef  feat: add .gitignore file to exclude default ignored files and Python… (Skywind5487, Jan 7, 2026)
51da118  feat: update pyproject.toml and add dotenv and flask-cors dependencies (Skywind5487, Jan 7, 2026)
d2bc4d1  feat: add OppositionModule and OppositionSignature DSPy classes for v… (Skywind5487, Jan 7, 2026)
c9df197  feat: add InvestigatorModule and InvestigatorSignature classes to sup… (Skywind5487, Jan 7, 2026)
4757e5a  feat: add usage documentation and batch scripts to help launch the At… (Skywind5487, Jan 7, 2026)
9c9e951  feat: add llm_config.py for configuring LLM model source selection (Skywind5487, Jan 7, 2026)
6c7e884  feat: update LLM initialization to disable caching by default (Skywind5487, Jan 7, 2026)
c513c18  feat: enhance chatbot functionality with investigation and opposition… (Skywind5487, Jan 7, 2026)
54d2e0f  feat: Add an easy frontend (Skywind5487, Jan 7, 2026)
a7ad552  feat: add settings.json (Skywind5487, Jan 7, 2026)
7116830  fix: disable reloader in app.run to prevent double prompts during mod… (Skywind5487, Jan 7, 2026)
44c813c  feat: disable caching for LLM initialization across all model choices (Skywind5487, Jan 7, 2026)
3843544  Update 一鍵安裝uv+python+各種package.bat (Skywind5487, Jan 7, 2026)
2a6188a  Merge branch 'main' of https://github.com/Skywind5487/Atlas-World (Skywind5487, Jan 7, 2026)
16 changes: 16 additions & 0 deletions .gitignore
@@ -0,0 +1,16 @@
# Default ignored files
/shelf/
/workspace.xml
# Ignored default folder with query files
/queries/
# Datasource local storage ignored files
/dataSources/
/dataSources.local.xml
# Editor-based HTTP Client requests
/httpRequests/

*.env

# Ignore Python cache files
__pycache__/
*.py[cod]
12 changes: 12 additions & 0 deletions .idea/.gitignore

Some generated files are not rendered by default.

1 change: 1 addition & 0 deletions .python-version
@@ -0,0 +1 @@
3.13
3 changes: 3 additions & 0 deletions .vscode/settings.json
@@ -0,0 +1,3 @@
{
    "python-envs.pythonProjects": []
}
13 changes: 13 additions & 0 deletions pyproject.toml
@@ -0,0 +1,13 @@
[project]
name = "atlas-world"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.13"
dependencies = [
    "dspy>=3.1.0",
    "ollama>=0.6.1",
    "flask>=3.0.0",
    "dotenv>=0.9.9",
Copilot AI (Jan 7, 2026):

The package name "dotenv" in the dependencies is incorrect. The correct package name is "python-dotenv". The current specification will fail, as the "dotenv" package (version 0.9.9) is outdated and is not the commonly used dotenv library.

Suggested change:

-     "dotenv>=0.9.9",
+     "python-dotenv>=0.9.9",
"flask-cors>=6.0.2",
]
98 changes: 98 additions & 0 deletions script/chatbot.py
@@ -0,0 +1,98 @@
from flask import Flask, request, jsonify, render_template
from flask_cors import CORS
import os
import dspy
from script.llm_config import configure_lm
from script.llm_module.investigator.module import InvestigatorModule
from script.llm_module.opposition.module import OppositionModule

# Ensure Flask can find the templates folder
base_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
template_dir = os.path.join(base_dir, 'templates')
app = Flask(__name__, template_folder=template_dir)
CORS(app)

# Initialize DSPy with user-chosen LM
print("正在啟動 LLM 配置引導...")
configure_lm()
investigator = InvestigatorModule()
opposer = OppositionModule()

# Manage the conversation with the official dspy.History
chat_history = dspy.History(messages=[])
Copilot AI (Jan 7, 2026):

Global state management issue: chat_history is a global variable shared across all users and sessions. In a multi-user environment, all users would see each other's conversation history, which is a serious privacy and functionality issue. Consider using session-based storage (e.g., Flask sessions or a session manager) to maintain a separate history for each user.
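A minimal sketch of that isolation, assuming a plain in-process dict keyed by a client-supplied session ID (the sessionId field and get_history helper are hypothetical, not part of this PR):

    # Hypothetical sketch: one dspy.History per client-supplied session ID.
    session_histories = {}

    def get_history(session_id):
        # Lazily create a fresh history for sessions not seen before.
        if session_id not in session_histories:
            session_histories[session_id] = dspy.History(messages=[])
        return session_histories[session_id]

    # Inside an endpoint:
    #   history = get_history(request.json.get('sessionId', 'default'))

A production setup would likely move this into Flask's session machinery or an external store, but the dict illustrates the per-user separation the comment asks for.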

port = int(os.getenv("PORT") or 5000)
Copilot AI (Jan 7, 2026):

Port configuration from the environment variable lacks validation. If the PORT environment variable contains an invalid value (non-numeric, negative, or outside the valid port range 1-65535), the application will crash. Add validation to ensure the port value is a valid integer within the acceptable range.

Suggested change:

- port = int(os.getenv("PORT") or 5000)
+ port_env = os.getenv("PORT")
+ try:
+     port = int(port_env)
+     if not (1 <= port <= 65535):
+         raise ValueError("Port out of valid range")
+ except (TypeError, ValueError):
+     port = 5000

@app.route('/')
def home():
    return render_template('index.html')

@app.route('/investigate', methods=['POST'])
def investigate_endpoint():
    try:
        data = request.json
        user_content = data.get('userContent')
        system_prompt = data.get('systemPrompt') or ""

        readme_path = os.path.join(base_dir, 'README.md')
        if os.path.exists(readme_path):
            with open(readme_path, 'r', encoding='utf-8') as f:
                readme_content = f.read()
            system_prompt += "\n--- 背景資訊 ---\n" + readme_content
Copilot AI (Jan 7, 2026), on lines +37 to +41:

Security concern: the README.md file is automatically included in every request without any size check or sanitization. If the README becomes very large, this could cause performance or memory problems. Consider adding a size limit or a lazy loading mechanism.
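A possible guard, sketched with an arbitrary 100 KB cap (MAX_README_BYTES is an illustrative constant, not part of this PR):

    MAX_README_BYTES = 100_000  # hypothetical cap; tune to the model's context budget

    # Skip the README entirely when it exceeds the cap instead of inflating every prompt.
    if os.path.exists(readme_path) and os.path.getsize(readme_path) <= MAX_README_BYTES:
        with open(readme_path, 'r', encoding='utf-8') as f:
            system_prompt += "\n--- 背景資訊 ---\n" + f.read()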
Copilot AI (Jan 7, 2026), on lines +39 to +41:

Missing error handling: if README.md cannot be read due to encoding issues or file system errors, the exception will not be caught with useful context. The try/except block at the endpoint level may not indicate which file operation failed.

Suggested change:

-             with open(readme_path, 'r', encoding='utf-8') as f:
-                 readme_content = f.read()
-             system_prompt += "\n--- 背景資訊 ---\n" + readme_content
+             try:
+                 with open(readme_path, 'r', encoding='utf-8') as f:
+                     readme_content = f.read()
+             except (OSError, UnicodeError) as readme_error:
+                 # Provide clearer context about failures related to README.md
+                 raise RuntimeError(f"Failed to read README.md at {readme_path}: {readme_error}") from readme_error
+             else:
+                 system_prompt += "\n--- 背景資訊 ---\n" + readme_content

        if not user_content:
            user_content = "請進行調查"
Copilot AI (Jan 7, 2026), on lines +33 to +44:

Missing input validation: the endpoints do not validate that user_content is actually provided before use. While there is a fallback for empty strings, there is no handling for missing keys or None values in the JSON payload, which could cause errors if the client sends malformed data.
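One way to harden this, sketched under the assumption that a 400 response is acceptable for malformed bodies:

    # get_json(silent=True) returns None instead of raising on a missing or invalid JSON body.
    data = request.get_json(silent=True)
    if not isinstance(data, dict):
        return jsonify({'error': 'Expected a JSON object body'}), 400
    user_content = data.get('userContent') or "請進行調查"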

        # Run the investigator module
        result = investigator(system_prompt=system_prompt, user_query=user_content, history=chat_history)

        # Append this turn to the conversation history
        chat_history.messages.append({"user_query": user_content, "response": result.response})

        return jsonify({'response': result.response})

    except Exception as e:
        return jsonify({'error': str(e)}), 500

@app.route('/oppose', methods=['POST'])
def oppose_endpoint():
    try:
        data = request.json
        system_prompt = data.get('systemPrompt') or ""

        readme_path = os.path.join(base_dir, 'README.md')
        if os.path.exists(readme_path):
            with open(readme_path, 'r', encoding='utf-8') as f:
                readme_content = f.read()
            system_prompt += "\n--- 背景資訊 ---\n" + readme_content

        # The opposition module raises challenges based mainly on the current conversation history
        result = opposer(system_prompt=system_prompt, history=chat_history)

        # The opposer's response is also appended to the history so the investigator can address the objections next turn
        chat_history.messages.append({"user_query": "[監察反對請求]", "response": result})

        return jsonify({'response': result})
Copilot AI (Jan 7, 2026), on lines +73 to +75:

There is an inconsistency in how the investigator and opposer modules are called. The investigator returns result.response (line 50), which is then stored in history, while the opposer returns result directly (line 70). This suggests the opposer's forward method returns a string rather than an object with a .response attribute. The inconsistency could lead to data structure mismatches in the chat history.

Suggested change:

-         chat_history.messages.append({"user_query": "[監察反對請求]", "response": result})
-         return jsonify({'response': result})
+         chat_history.messages.append({"user_query": "[監察反對請求]", "response": result.response})
+         return jsonify({'response': result.response})

    except Exception as e:
        return jsonify({'error': str(e)}), 500

@app.route('/reset', methods=['POST'])
def reset_history():
    try:
        global chat_history
        chat_history = dspy.History(messages=[])
        return jsonify({'status': '歷史記錄已重置'})
    except Exception as e:
        print(f"Reset error: {e}")
        return jsonify({'error': str(e)}), 500

# Keep /chat for compatibility with the old frontend; it points at the default investigator
@app.route('/chat', methods=['POST'])
def chat_endpoint():
    return investigate_endpoint()

if __name__ == '__main__':
    # Note: debug=True automatically starts the reloader, causing the prompt to run twice
    # It is disabled here so the model configuration only has to be entered once
    app.run(debug=True, use_reloader=False, host='0.0.0.0', port=port)
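For reference, a quick smoke test of the new endpoint using only the standard library (the field names come from this diff; the port assumes the default 5000):

    import json
    from urllib.request import Request, urlopen

    payload = {"userContent": "請進行調查", "systemPrompt": ""}
    req = Request(
        "http://localhost:5000/investigate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        print(json.load(resp))  # expects {'response': ...}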
91 changes: 91 additions & 0 deletions script/llm_config.py
@@ -0,0 +1,91 @@
import dspy
import os

CONFIGED_LM = None

def configure_lm():
    from dotenv import load_dotenv
    load_dotenv()
    global CONFIGED_LM
    if CONFIGED_LM is not None:
        return CONFIGED_LM
    print("\n--- 🌐 選擇要用的 LLM 模型來源 ---")
    print("1) 本地模型 (Ollama/HF)")
    print("2) OpenAI API")
    print("3) Google Gemini API")
    print("4) Anthropic Claude API")

    choice = input("請輸入數字 (1/2/3/4): ").strip()

    model_name = None
    api_key = None

    if choice == "1":
        # A few common local model names, for reference
        env_model = os.getenv("OLLAMA_MODEL")
        if env_model:
            print(f"\n偵測到預設模型: {env_model}")
            model_name = input(f"輸入本地模型名稱 (直接按 Enter 使用 {env_model}): ").strip()
            if not model_name:
                model_name = env_model
        else:
            print("\n可用本地模型例子: llama3, gemma3:1b, mistral, phi3")
            model_name = input("輸入本地模型名稱: ").strip()

        # Ensure the local model name has a provider prefix (required by LiteLLM)
        if "/" not in model_name:
            model_name = f"ollama/{model_name}"
            print(f"自動修正為 LiteLLM 格式: {model_name}")

        lm = dspy.LM(model_name, cache=False)

    elif choice == "2":
        print("\nOpenAI 模型選擇:")
        print("1) openai/gpt-5.2\n2) openai/gpt-4o\n3) openai/gpt-4o-mini\n4) openai/o4-mini\n5) openai/o3-mini")
        idx = input("選擇模型 (1-5): ").strip()
        mapping = {
            "1": "openai/gpt-5.2",
            "2": "openai/gpt-4o",
Copilot AI (Jan 7, 2026):

The mapping dictionary for OpenAI models is incomplete. It only defines mappings for options "1" and "2", but the prompt offers five options (1-5). Options 3, 4, and 5 silently fall back to the default "openai/gpt-5.2". Either complete the mapping or update the prompt to match the available options.

Suggested change:

-             "2": "openai/gpt-4o",
+             "2": "openai/gpt-4o",
+             "3": "openai/gpt-4o-mini",
+             "4": "openai/o4-mini",
+             "5": "openai/o3-mini",
        }
        model_name = mapping.get(idx, "openai/gpt-5.2")
        api_key = input("輸入 OpenAI API Key (或留空用 OPENAI_API_KEY 環境變數): ").strip() or os.getenv("OPENAI_API_KEY")
        lm = dspy.LM(model_name, api_key=api_key, cache=False)

    elif choice == "3":
        print("\nGoogle Gemini 模型選擇:\n1) gemini-2.5-flash\n2) gemini-2.5-pro\n3) gemini-3-flash-preview\n4) gemini-3-pro-preview")
        idx = input("選擇模型 (1-4): ").strip()
        mapping = {
            "1": "gemini/gemini-2.5-flash",
            "2": "gemini/gemini-2.5-pro",
            "3": "gemini/gemini-3-flash-preview",
            "4": "gemini/gemini-3-pro-preview"
        }
        model_name = mapping.get(idx, "gemini-2.5-pro")
        api_key = input("輸入 Gemini API Key (或留空用 GEMINI_API_KEY 環境變數): ").strip() or os.getenv("GEMINI_API_KEY")
        lm = dspy.LM(model_name, api_key=api_key, cache=False)

    elif choice == "4":
        print("\nAnthropic Claude 模型選擇:")
        print("1) claude-opus-4.5-20251101\n2) claude-sonnet-4.5\n3) claude-haiku-4.5")
        idx = input("選擇模型 (1-3): ").strip()
        mapping = {
            "1": "claude/claude-opus-4.5-20251101",
            "2": "claude/claude-sonnet-4.5",
            "3": "claude/claude-haiku-4.5"
        }
        model_name = mapping.get(idx, "claude-opus-4.5-20251101")
        api_key = input("輸入 Claude API Key (或留空用 ANTHROPIC_API_KEY): ").strip() or os.getenv("ANTHROPIC_API_KEY")
Copilot AI (Jan 7, 2026), on lines +52 to +78:

Security issue: API keys are typed as plain terminal input, which may be visible on screen or end up in terminal logs. Consider using the getpass module for secure password/API key input to prevent exposure to over-the-shoulder viewing.
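A sketch of the getpass approach (getpass is in the standard library; the prompt text is kept from the PR):

    from getpass import getpass

    # The key is read without being echoed to the terminal.
    api_key = getpass("輸入 OpenAI API Key (或留空用 OPENAI_API_KEY 環境變數): ") or os.getenv("OPENAI_API_KEY")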
        lm = dspy.LM(model_name, api_key=api_key, cache=False)

    else:
        print(" 選擇無效,預設用 openai/gpt-5.2")
        model_name = "openai/gpt-5.2"
        api_key = os.getenv("OPENAI_API_KEY")
        lm = dspy.LM(model_name, api_key=api_key, cache=False)
Copilot AI (Jan 7, 2026), on lines +81 to +85:

The default model fallback uses "openai/gpt-5.2", which does not exist. If a user provides an invalid choice, the application will fail when trying to use this non-existent model. Use a real, existing model as the fallback, such as "openai/gpt-4o" or "openai/gpt-4o-mini".
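For example, the fallback branch could default to one of the models the menu already offers (the replacement model is an illustrative choice, not a prescription):

    # Hypothetical replacement for the final else branch in configure_lm()
    else:
        print("選擇無效,預設用 openai/gpt-4o-mini")
        model_name = "openai/gpt-4o-mini"
        api_key = os.getenv("OPENAI_API_KEY")
        lm = dspy.LM(model_name, api_key=api_key, cache=False)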

    # Set as the global default LM
    dspy.configure(lm=lm)
    CONFIGED_LM = lm
    print(f"\n 已設定模型: {model_name}")
    return lm
17 changes: 17 additions & 0 deletions script/llm_module/investigator/module.py
@@ -0,0 +1,17 @@
import dspy
from .signature import InvestigatorSignature

class InvestigatorModule(dspy.Module):
    """
    DSPy module implementing the investigator logic.
    """
    def __init__(self):
        super().__init__()
        self.investigate = dspy.ChainOfThought(InvestigatorSignature)

    def forward(self, system_prompt, user_query, history):
        return self.investigate(
            system_prompt=system_prompt,
            user_query=user_query,
            history=history
        )
Copilot AI (Jan 7, 2026), on lines +12 to +17:

The investigator module's forward method returns the full result object, and chatbot.py accesses result.response (line 50). That works, but it is inconsistent with the opposition module's pattern. For consistency, either both modules should return just the response string, or both should return the full result object.
12 changes: 12 additions & 0 deletions script/llm_module/investigator/signature.py
@@ -0,0 +1,12 @@
import dspy

class InvestigatorSignature(dspy.Signature):
    """
    妳是 Atlas-World 的「調查者」(Investigator)。
    妳的職責是深入分析使用者的問題,結合背景資訊與對話歷史,
Copilot AI (Jan 7, 2026), on lines +5 to +6:

Documentation issue: the docstring for InvestigatorSignature uses the gendered pronoun 妳 (feminine "you" in Chinese), which is unnecessarily specific. Consider using the gender-neutral 你 to be more inclusive, or rephrase to avoid pronouns entirely.

Suggested change:

-     妳是 Atlas-World 的「調查者」(Investigator)。
-     妳的職責是深入分析使用者的問題,結合背景資訊與對話歷史,
+     你是 Atlas-World 的「調查者」(Investigator)。
+     你的職責是深入分析使用者的問題,結合背景資訊與對話歷史,
    進行邏輯推演、事實查核或情境解析,給出詳盡且具洞察力的調查報告或回應。
    """
    system_prompt = dspy.InputField(desc="目前的文件、與使用者給予的系統提示")
    user_query = dspy.InputField(desc="使用者的當前問題或調查標的")
    history: dspy.History = dspy.InputField(desc="目前的調查對話歷史記錄")
    response = dspy.OutputField(desc="生成的調查分析、洞察或回應")
15 changes: 15 additions & 0 deletions script/llm_module/opposition/module.py
@@ -0,0 +1,15 @@
import dspy
from .signature import OppositionSignature

class OppositionModule(dspy.Module):
    """
    DSPy module providing counter-argument and oversight logic.
    """
    def __init__(self):
        super().__init__()
        self.predictor = dspy.ChainOfThought(OppositionSignature)

    def forward(self, system_prompt, history):
        # history should be a dspy.History object
        result = self.predictor(system_prompt=system_prompt, history=history)
        return result.response
Copilot AI (Jan 7, 2026):

The opposition signature defines response as an OutputField (line 15 of opposition/signature.py), but the module's forward method returns result.response as a plain string (line 15 of opposition/module.py). In chatbot.py (line 70), the result is then used directly without accessing the .response attribute, which is inconsistent with how the investigator module is used. This suggests the forward method should return the full result object, not just the response string.

Suggested change:

-         return result.response
+         return result