16 changes: 16 additions & 0 deletions .gitignore
@@ -0,0 +1,16 @@
# Default ignored files
/shelf/
/workspace.xml
# Ignored default folder with query files
/queries/
# Datasource local storage ignored files
/dataSources/
/dataSources.local.xml
# Editor-based HTTP Client requests
/httpRequests/

*.env

# Ignore Python cache files
__pycache__/
*.py[cod]
12 changes: 12 additions & 0 deletions .idea/.gitignore

Some generated files are not rendered by default.

1 change: 1 addition & 0 deletions .python-version
@@ -0,0 +1 @@
3.13
3 changes: 3 additions & 0 deletions .vscode/settings.json
@@ -0,0 +1,3 @@
{
"python-envs.pythonProjects": []
}
13 changes: 13 additions & 0 deletions pyproject.toml
@@ -0,0 +1,13 @@
[project]
name = "atlas-world"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.13"
dependencies = [
"dspy>=3.1.0",
"ollama>=0.6.1",
"flask>=3.0.0",
"dotenv>=0.9.9",
"flask-cors>=6.0.2",
]
98 changes: 98 additions & 0 deletions script/chatbot.py
@@ -0,0 +1,98 @@
from flask import Flask, request, jsonify, render_template
from flask_cors import CORS
import os
import dspy
from script.llm_config import configure_lm
from script.llm_module.investigator.module import InvestigatorModule
from script.llm_module.opposition.module import OppositionModule

# Ensure Flask can find the templates folder
base_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
template_dir = os.path.join(base_dir, 'templates')
app = Flask(__name__, template_folder=template_dir)
CORS(app)
Copilot AI Jan 7, 2026

The CORS configuration uses CORS(app) without any restrictions, which allows requests from any origin. This could be a security risk in production environments. Consider restricting CORS to specific origins or at minimum documenting that this is intentional for development purposes only.

Suggested change
CORS(app)
allowed_origins = os.getenv("ALLOWED_ORIGINS", "http://localhost:3000,http://127.0.0.1:3000").split(",")
CORS(app, resources={r"/*": {"origins": allowed_origins}})

# Initialize DSPy with user-chosen LM
print("正在啟動 LLM 配置引導...")
configure_lm()
investigator = InvestigatorModule()
opposer = OppositionModule()
Copilot AI Jan 7, 2026

The variable name "opposer" on line 19 is inconsistent with the module name "OppositionModule". For consistency and clarity, consider renaming it to "opposition" to match the naming pattern used for "investigator" (which matches "InvestigatorModule").

# Manage conversation history with the official dspy.History
chat_history = dspy.History(messages=[])

port = int(os.getenv("PORT") or 5000)

@app.route('/')
def home():
    return render_template('index.html')

@app.route('/investigate', methods=['POST'])
def investigate_endpoint():
    try:
        data = request.json
        user_content = data.get('userContent')
        system_prompt = data.get('systemPrompt') or ""

        readme_path = os.path.join(base_dir, 'README.md')
        if os.path.exists(readme_path):
            with open(readme_path, 'r', encoding='utf-8') as f:
                readme_content = f.read()
            system_prompt += "\n--- Background information ---\n" + readme_content
Comment on lines +37 to +41
Copilot AI Jan 7, 2026

The README.md file content is being read and appended to every single API request (lines 37-41 and 63-67). This is inefficient and could cause performance issues. Consider reading the README.md content once during application initialization and storing it in a module-level variable, then reusing it for all requests.

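A minimal sketch of the startup-time caching this comment suggests, assuming a module-level constant and a small helper (the names README_CONTENT and build_system_prompt are illustrative, not part of this PR):

import os

base_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
readme_path = os.path.join(base_dir, 'README.md')

# Read the README once at import time instead of on every request.
README_CONTENT = ""
if os.path.exists(readme_path):
    with open(readme_path, 'r', encoding='utf-8') as f:
        README_CONTENT = f.read()

def build_system_prompt(system_prompt: str) -> str:
    # Append the cached background section only when a README was found.
    if README_CONTENT:
        return system_prompt + "\n--- Background information ---\n" + README_CONTENT
    return system_prompt
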
        if not user_content:
            user_content = "Please carry out an investigation"

        # Use the investigator module
        result = investigator(system_prompt=system_prompt, user_query=user_content, history=chat_history)

        # Append this turn to the history
        chat_history.messages.append({"user_query": user_content, "response": result.response})

        return jsonify({'response': result.response})

    except Exception as e:
        return jsonify({'error': str(e)}), 500

@app.route('/oppose', methods=['POST'])
def oppose_endpoint():
    try:
        data = request.json
        system_prompt = data.get('systemPrompt') or ""

        readme_path = os.path.join(base_dir, 'README.md')
        if os.path.exists(readme_path):
            with open(readme_path, 'r', encoding='utf-8') as f:
                readme_content = f.read()
            system_prompt += "\n--- Background information ---\n" + readme_content
Comment on lines +63 to +67
Copilot AI Jan 7, 2026

The same README.md reading logic is duplicated in both /investigate and /oppose endpoints. This code duplication violates the DRY (Don't Repeat Yourself) principle. Consider extracting this into a helper function or loading it once at application startup.

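Building on the hypothetical build_system_prompt helper sketched under the earlier comment, each endpoint could then replace the duplicated block with a single call (illustrative, not part of this PR):

        data = request.json
        system_prompt = build_system_prompt(data.get('systemPrompt') or "")
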
        # The opposition module raises challenges based mainly on the current conversation history
        result = opposer(system_prompt=system_prompt, history=chat_history)

        # The opposition's response is also appended to the history so the investigator
        # can address the objections on its next turn
        chat_history.messages.append({"user_query": "[oversight objection request]", "response": result})

        return jsonify({'response': result})
Comment on lines +73 to +75
Copilot AI Jan 7, 2026

There's an inconsistency in how history is updated between the two endpoints. In /investigate (line 50), the code appends a dictionary with both "user_query" and "response" keys, accessing result.response. In /oppose (line 73), it appends a dictionary with "user_query" and "response" keys, but uses result directly (not result.response). This inconsistency will likely cause issues since OppositionModule.forward returns result.response (a string), while the code treats result as if it were already the string.

Suggested change
        chat_history.messages.append({"user_query": "[oversight objection request]", "response": result})
        return jsonify({'response': result})
        chat_history.messages.append({"user_query": "[oversight objection request]", "response": result.response})
        return jsonify({'response': result.response})

    except Exception as e:
        return jsonify({'error': str(e)}), 500

@app.route('/reset', methods=['POST'])
def reset_history():
    try:
        global chat_history
        chat_history = dspy.History(messages=[])
        return jsonify({'status': 'History has been reset'})
    except Exception as e:
        print(f"Reset error: {e}")
        return jsonify({'error': str(e)}), 500

# Keep /chat for compatibility with the old frontend; it points to the default investigator
@app.route('/chat', methods=['POST'])
def chat_endpoint():
    return investigate_endpoint()

if __name__ == '__main__':
    # Note: debug=True automatically starts the reloader, which triggers the model prompt twice.
    # It is disabled here so the model configuration only needs to be entered once.
    app.run(debug=True, use_reloader=True, host='0.0.0.0', port=port)
Copilot AI Jan 7, 2026

Running Flask with both debug=True and use_reloader=True in production can expose sensitive information through debug tracebacks and cause the configure_lm() function to be called twice. While the comment mentions this issue, it's contradictory to set use_reloader=True. Consider setting use_reloader=False or using environment-based configuration to disable debug mode in production.

Suggested change
    app.run(debug=True, use_reloader=True, host='0.0.0.0', port=port)
    app.run(debug=True, use_reloader=False, host='0.0.0.0', port=port)

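A minimal sketch of the environment-based variant the comment mentions, assuming an illustrative FLASK_DEBUG-style variable that this PR does not define:

if __name__ == '__main__':
    # Enable debug only when explicitly requested; keep the reloader off
    # so configure_lm() runs exactly once.
    debug_mode = os.getenv("FLASK_DEBUG", "0") == "1"
    app.run(debug=debug_mode, use_reloader=False, host='0.0.0.0', port=port)
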
91 changes: 91 additions & 0 deletions script/llm_config.py
@@ -0,0 +1,91 @@
import dspy
import os

CONFIGED_LM = None

def configure_lm():
    from dotenv import load_dotenv
    load_dotenv()
    global CONFIGED_LM
    if CONFIGED_LM is not None:
        return CONFIGED_LM
    print("\n--- 🌐 Choose an LLM model source ---")
    print("1) Local model (Ollama/HF)")
    print("2) OpenAI API")
    print("3) Google Gemini API")
    print("4) Anthropic Claude API")

    choice = input("Enter a number (1/2/3/4): ").strip()

    model_name = None
    api_key = None

    if choice == "1":
        # A few common local models for reference
        env_model = os.getenv("OLLAMA_MODEL")
        if env_model:
            print(f"\nDetected default model: {env_model}")
            model_name = input(f"Enter a local model name (press Enter to use {env_model}): ").strip()
            if not model_name:
                model_name = env_model
        else:
            print("\nExamples of local models: llama3, gemma3:1b, mistral, phi3")
            model_name = input("Enter a local model name: ").strip()

        # Ensure the local model name carries a provider prefix (required by LiteLLM)
        if "/" not in model_name:
            model_name = f"ollama/{model_name}"
            print(f"Auto-corrected to LiteLLM format: {model_name}")

        lm = dspy.LM(model_name)

elif choice == "2":
print("\nOpenAI 模型選擇:")
print("1) openai/gpt-5.2\n2) openai/gpt-4o\n3) openai/gpt-4o-mini\n4) openai/o4-mini\n5) openai/o3-mini")
idx = input("選擇模型 (1-5): ").strip()
mapping = {
"1": "openai/gpt-5.2",
"2": "openai/gpt-4o",
}
model_name = mapping.get(idx, "openai/gpt-5.2")
Comment on lines +47 to +51
Copilot AI Jan 7, 2026

The mapping dictionary is incomplete and only contains 2 entries (keys "1" and "2") but the user can select options 1-5. When the user selects option 3, 4, or 5, the code will fall back to "openai/gpt-5.2" which doesn't exist. The mapping should include all 5 options that are presented to the user.

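A sketch of a mapping that covers all five menu options, falling back to the first entry on invalid input (whether each listed model ID is accepted by the provider is not verified here):

        mapping = {
            "1": "openai/gpt-5.2",
            "2": "openai/gpt-4o",
            "3": "openai/gpt-4o-mini",
            "4": "openai/o4-mini",
            "5": "openai/o3-mini",
        }
        # Fall back to the first menu option instead of a key that may be missing.
        model_name = mapping.get(idx, mapping["1"])
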
api_key = input("輸入 OpenAI API Key (或留空用 OPENAI_API_KEY 環境變數): ").strip() or os.getenv("OPENAI_API_KEY")
lm = dspy.LM(model_name, api_key=api_key)

elif choice == "3":
print("\nGoogle Gemini 模型選擇:\n1) gemini-2.5-flash\n2) gemini-2.5-pro\n3) gemini-3-flash-preview\n4) gemini-3-pro-preview")
idx = input("選擇模型 (1-4): ").strip()
mapping = {
"1": "gemini/gemini-2.5-flash",
"2": "gemini/gemini-2.5-pro",
"3": "gemini/gemini-3-flash-preview",
"4": "gemini/gemini-3-pro-preview"
}
model_name = mapping.get(idx, "gemini-2.5-pro")
api_key = input("輸入 Gemini API Key (或留空用 GEMINI_API_KEY 環境變數): ").strip() or os.getenv("GEMINI_API_KEY")
lm = dspy.LM(model_name, api_key=api_key)

elif choice == "4":
print("\nAnthropic Claude 模型選擇:")
print("1) claude-opus-4.5-20251101\n2) claude-sonnet-4.5\n3) claude-haiku-4.5")
idx = input("選擇模型 (1-3): ").strip()
mapping = {
"1": "claude/claude-opus-4.5-20251101",
"2": "claude/claude-sonnet-4.5",
"3": "claude/claude-haiku-4.5"
}
model_name = mapping.get(idx, "claude-opus-4.5-20251101")
api_key = input("輸入 Claude API Key (或留空用 ANTHROPIC_API_KEY): ").strip() or os.getenv("ANTHROPIC_API_KEY")
lm = dspy.LM(model_name, api_key=api_key)

else:
print(" 選擇無效,預設用 openai/gpt-5.2")
model_name = "openai/gpt-5.2"
api_key = os.getenv("OPENAI_API_KEY")
lm = dspy.LM(model_name, api_key=api_key, cache=False)
Copilot AI Jan 7, 2026

The cache=False parameter is only passed when creating the LM instance in the else branch (lines 82-85), but not for the other branches. This inconsistency may lead to unexpected caching behavior. Either apply cache=False consistently across all LM instantiations or remove it if caching is desired for all cases.

Suggested change
        lm = dspy.LM(model_name, api_key=api_key, cache=False)
        lm = dspy.LM(model_name, api_key=api_key)

    # Set the configured model as the global default LM
    dspy.configure(lm=lm)
    CONFIGED_LM = lm
    print(f"\nModel configured: {model_name}")
    return lm
17 changes: 17 additions & 0 deletions script/llm_module/investigator/module.py
@@ -0,0 +1,17 @@
import dspy
from .signature import InvestigatorSignature

class InvestigatorModule(dspy.Module):
    """
    DSPy module implementing the investigator logic.
    """
    def __init__(self):
        super().__init__()
        self.investigate = dspy.ChainOfThought(InvestigatorSignature)

    def forward(self, system_prompt, user_query, history):
        return self.investigate(
            system_prompt=system_prompt,
            user_query=user_query,
            history=history
        )
Comment on lines +13 to +17
Copilot AI Jan 7, 2026

The InvestigatorModule's forward method returns the full result object from dspy.ChainOfThought, but in chatbot.py line 50, the code accesses result.response. This assumes the result has a response attribute. However, looking at line 47, investigator() is called which triggers forward(), and the return value should be checked for consistency. The forward method should explicitly return result.response to match the usage pattern, or the calling code should be updated.

Suggested change
        return self.investigate(
            system_prompt=system_prompt,
            user_query=user_query,
            history=history
        )
        result = self.investigate(
            system_prompt=system_prompt,
            user_query=user_query,
            history=history
        )
        return result.response

12 changes: 12 additions & 0 deletions script/llm_module/investigator/signature.py
@@ -0,0 +1,12 @@
import dspy

class InvestigatorSignature(dspy.Signature):
    """
    You are the "Investigator" of Atlas-World.
    Your duty is to analyze the user's question in depth, combining the background
    information and the conversation history to perform logical reasoning, fact-checking,
    or scenario analysis, and to produce a thorough, insightful investigative report or response.
    """
    system_prompt = dspy.InputField(desc="The current documents and the user-supplied system prompt")
    user_query = dspy.InputField(desc="The user's current question or target of investigation")
    history: dspy.History = dspy.InputField(desc="The conversation history of the current investigation")
    response = dspy.OutputField(desc="The generated investigative analysis, insight, or response")
15 changes: 15 additions & 0 deletions script/llm_module/opposition/module.py
@@ -0,0 +1,15 @@
import dspy
from .signature import OppositionSignature

class OppositionModule(dspy.Module):
    """
    DSPy module providing counter-argument and oversight logic.
    """
    def __init__(self):
        super().__init__()
        self.predictor = dspy.ChainOfThought(OppositionSignature)

    def forward(self, system_prompt, history):
        # history should be a dspy.History object
        result = self.predictor(system_prompt=system_prompt, history=history)
        return result.response