Merge branch 'refactoring-v3-mvp' of https://github.com/chuanSir123/chatgpt-mirai-qq-bot into chuanSir123-refactoring-v3-mvp
lss233 committed Feb 2, 2025
2 parents 9d38c9c + b5e622e commit 31b4293
Showing 56 changed files with 3,750 additions and 41 deletions.
54 changes: 48 additions & 6 deletions README.md
@@ -214,8 +214,8 @@ debug = false
|:---|:---|:---|
|result| String |SUCCESS, DONE, FAILED|
|message| String[] |Text response; supports multiple segments|
|voice| String[] |Audio response; supports multiple base64-encoded audio clips, e.g. data:audio/mpeg;base64,...|
|image| String[] |Image response; supports multiple base64-encoded images, e.g. data:image/png;base64,...|
|voice| String[] |Audio response; supports multiple base64-encoded audio clips, e.g. data:audio/mpeg;base64,iVBORw0KGgoAAAANS...|
|image| String[] |Image response; supports multiple base64-encoded images, e.g. data:image/png;base64,UhEUgAAAgAAAAIACAIA...|

**Response example**
```json
@@ -245,6 +245,17 @@ debug = false
"message": "ping"
}
```

* Note: `session_id` must follow the standard format: `friend-` (friend) or `group-` (group) followed by a string.

Example
```
friend-R6sxRvblulTZqNC
group-M3jpvxv26mKVM
```

If it cannot be determined whether the session is a friend or a group, it will be treated as a group.

**Response format**
String: request_id

@@ -253,6 +264,12 @@ debug = false
1681525479905
```

* Note: the returned content may be wrapped in quotes (including `"` and `'`); strip them before use.

```
'1681525479905'
```
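
A minimal Python sketch of stripping those quotes (the variable names are illustrative):
```python
raw = "'1681525479905'"        # value as returned by the API, possibly quoted
request_id = raw.strip("\"'")  # drop any surrounding single or double quotes
```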

**GET** `/v2/chat/response`

**Request parameters**
@@ -265,13 +282,28 @@ debug = false
```
/v2/chat/response?request_id=1681525479905
```
* Note: `request_id` must not be wrapped in quotes (including `"` and `'`).
The following are incorrect examples:
```
/v2/chat/response?request_id='1681525479905'
/v2/chat/response?request_id="1681525479905"
/v2/chat/response?request_id='1681525479905"
/v2/chat/response?request_id="1681525479905'
```

**响应格式**
|参数名|类型|说明|
|:---|:---|:---|
|result| String |SUCCESS, DONE, FAILED|
|message| String[] |Text response; supports multiple segments|
|voice| String[] |Audio response; supports multiple base64-encoded audio clips, e.g. data:audio/mpeg;base64,...|
|image| String[] |Image response; supports multiple base64-encoded images, e.g. data:image/png;base64,...|
|voice| String[] |Audio response; supports multiple base64-encoded audio clips, e.g. data:audio/mpeg;base64,iVBORw0KGgoAAAANS...|
|image| String[] |Image response; supports multiple base64-encoded images, e.g. data:image/png;base64,UhEUgAAAgAAAAIACAIA...|

* Each request returns only the incremental content accumulated since the last request, then clears it. There is no further output after DONE or FAILED.

@@ -280,10 +312,20 @@ debug = false
{
"result": "DONE",
"message": ["pong!"],
"voice": ["data:audio/mpeg;base64,..."],
"image": ["data:image/png;base64,...", "data:image/png;base64,..."]
"voice": ["data:audio/mpeg;base64,iVBORw0KGgoAAAANS..."],
"image": ["data:image/png;base64,UhEUgAAAgAAAAIACAIA...", "data:image/png;base64,UhEUgAAAgAAAAIACAIA..."]
}
```
* Note: a `SUCCESS` result means the reply is still pending; keep waiting.
```json
{"result": "SUCCESS", "message": [], "voice": [], "image": []}
```
* Note: there may be multiple `DONE` responses; keep polling until `FAILED` appears. `FAILED` means the reply is complete.
```json
{"result": "FAILED", "message": ["\u6ca1\u6709\u66f4\u591a\u4e86\uff01"], "voice": [], "image": []}
```
* Note: `SUCCESS` responses may be interleaved with `DONE` before `FAILED` arrives. The full reply cycle may take longer than one minute.
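
A minimal polling sketch in Python, assuming the service is reachable at `http://127.0.0.1:8080` (host and port are illustrative) and that `request_id` was obtained from the request endpoint above:
```python
import time

import requests  # third-party HTTP client, assumed to be available

BASE_URL = "http://127.0.0.1:8080"  # illustrative host/port; point this at your own deployment


def poll_response(request_id: str) -> list:
    """Poll /v2/chat/response until FAILED, collecting the incremental message segments."""
    request_id = request_id.strip("\"'")  # the id must be sent without surrounding quotes
    messages = []
    while True:
        data = requests.get(
            f"{BASE_URL}/v2/chat/response",
            params={"request_id": request_id},
        ).json()
        if data["result"] == "FAILED":  # FAILED marks the end of the reply cycle
            return messages
        messages.extend(data.get("message", []))  # SUCCESS yields empty lists, DONE yields increments
        time.sleep(1)  # the full cycle can take more than a minute
```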

</details>

## 🦊 Load presets
37 changes: 37 additions & 0 deletions config.yaml
@@ -0,0 +1,37 @@
ims:
  configs:
    onebot-default:
      access_token: ''
      filter_file: filter.json
      heartbeat_interval: '15000'
      host: 127.0.0.1
      name: onebot
      port: '8568'
      reconnect_interval: '3000'
  enable:
    onebot:
    - onebot-default
llms:
  backends:
    openai:
      adapter: openai
      configs:
      - api_base: https://wind.chuansir.top/v1
        api_key: d3105c0f
        model: claude-3.5-sonnet
      enable: true
      models:
      - claude-3.5-sonnet
plugins:
  enable:
  - image_generator
  - image_understanding
  - music_player
  - onebot_adapter
  - openai_adapter
  - prompt_generator
  - scheduler_plugin
  - weather_query
  - workflow_plugin
  - web_search
14 changes: 13 additions & 1 deletion config.yaml.example
@@ -5,5 +5,17 @@ ims:
telegram-bot-1234:
token: 'abcd'

llms:
  backends: # this is required
    openai: # the backend name is used as the key
      enable: true
      adapter: "openai"
      configs:
        - api_key: ""
          api_base: "https://wind.chuansir.top/v1" # optional
          model: "claude-3.5-sonnet" # optional; can also be specified per call
      models:
        - "claude-3.5-sonnet"

plugins:
  enable: []
  enable: ['onebot_adapter','openai_adapter','workflow_plugin','prompt_generator','music_player','weather_query','scheduler_plugin']
8 changes: 7 additions & 1 deletion framework/config/global_config.py
@@ -15,10 +15,16 @@ class LLMBackendConfig(BaseModel):
class LLMConfig(BaseModel):
    backends: Dict[str, LLMBackendConfig] = dict()

class PluginsConfig(BaseModel):
    """Plugin configuration"""
    enable: List[str] = []  # list of enabled plugins

class DefaultConfig(BaseModel):
    llm_model: str = Field(default="gemini-1.5-flash", description="Name of the default LLM model")

class GlobalConfig(BaseModel):
    """Global configuration"""
    ims: IMConfig = IMConfig()
    llms: LLMConfig = LLMConfig()
    plugins: PluginsConfig = PluginsConfig()  # plugin configuration
    defaults: DefaultConfig = DefaultConfig()
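
A minimal sketch of how a `config.yaml` like the one above could be parsed into these models, assuming PyYAML is installed and that `GlobalConfig` is imported from `framework.config.global_config` as shown in this diff (the file path is illustrative):
```python
import yaml  # PyYAML, assumed to be installed

from framework.config.global_config import GlobalConfig

# Read config.yaml and validate it against the pydantic models above.
with open("config.yaml", "r", encoding="utf-8") as f:
    raw = yaml.safe_load(f)

config = GlobalConfig(**raw)
print(config.plugins.enable)       # e.g. ['image_generator', 'image_understanding', ...]
print(list(config.llms.backends))  # e.g. ['openai']
print(config.defaults.llm_model)   # 'gemini-1.5-flash' unless overridden
```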
12 changes: 6 additions & 6 deletions framework/im/manager.py
@@ -14,11 +14,11 @@ class IMManager:
    IM lifecycle manager, responsible for starting, running, and stopping all adapters.
    """
    container: DependencyContainer

    config: GlobalConfig

    im_registry: IMRegistry

    @Inject()
    def __init__(self, container: DependencyContainer, config: GlobalConfig, adapter_registry: IMRegistry):
        self.container = container
@@ -34,7 +34,7 @@ def start_adapters(self, loop=None):
"""
if loop is None:
loop = asyncio.get_event_loop()

enable_ims = self.config.ims.enable
credentials = self.config.ims.configs

@@ -68,7 +68,7 @@ def stop_adapters(self, loop=None):
"""
if loop is None:
loop = asyncio.get_event_loop()

for key, adapter in self.adapters.items():
loop.run_until_complete(self._stop_adapter(key, adapter, loop))

@@ -96,4 +96,4 @@ async def _start_adapter(self, key, adapter, loop):
    async def _stop_adapter(self, key, adapter, loop):
        logger.info(f"Stopping adapter: {key}")
        await adapter.stop()
        logger.info(f"Stopped adapter: {key}")
3 changes: 2 additions & 1 deletion framework/llm/format/request.py
@@ -20,7 +20,8 @@ class LLMChatRequest(BaseModel):
    stream_options: Optional[Any] = None
    temperature: Optional[int] = None
    top_p: Optional[int] = None
    top_k: Optional[int] = None
    tools: Optional[Any] = None
    tool_choice: Optional[str] = None
    logprobs: Optional[bool] = None
    top_logprobs: Optional[Any] = None
2 changes: 1 addition & 1 deletion framework/llm/format/response.py
@@ -43,4 +43,4 @@ class LLMChatResponseContent(BaseModel):
class LLMChatResponse(BaseModel):
    choices: Optional[List[LLMChatResponseContent]] = None
    model: Optional[str] = None
    usage: Optional[Usage] = None
20 changes: 10 additions & 10 deletions framework/llm/llm_manager.py
@@ -12,41 +12,41 @@ class LLMManager:
    Tracks, manages, and schedules model backends.
    """
    container: DependencyContainer

    config: GlobalConfig

    backend_registry: LLMBackendRegistry

    active_backends: Dict[str, List[LLMBackendAdapter]]

    @Inject()
    def __init__(self, container: DependencyContainer, config: GlobalConfig, backend_registry: LLMBackendRegistry):
        self.container = container
        self.config = config
        self.backend_registry = backend_registry
        self.logger = get_logger("LLMAdapter")
        self.active_backends = {}

    def load_config(self):
        for key, backend_config in self.config.llms.backends.items():
            if backend_config.enable:
                self.logger.info(f"Loading backend: {key}")
                self.load_backend(key, backend_config)

    def load_backend(self, name: str, backend_config: LLMBackendConfig):
        if name in self.active_backends:
            raise ValueError

        adapter_class = self.backend_registry.get(backend_config.adapter)
        config_class = self.backend_registry.get_config_class(backend_config.adapter)

        if not adapter_class or not config_class:
            raise ValueError

        configs = [config_class(**config_entry) for config_entry in backend_config.configs]

        adapters = []

        for config in configs:
            with self.container.scoped() as scoped_container:
                scoped_container.register(config_class, config)
6 changes: 3 additions & 3 deletions framework/llm/llm_registry.py
@@ -23,7 +23,7 @@ class LLMAbility(Enum):
    ImageGeneration = ImageInput | ImageOutput
    TextImageMultiModal = Chat | ImageGeneration
    TextImageAudioMultiModal = TextImageMultiModal | AudioInput | AudioOutput

class LLMBackendRegistry:
    """
    LLM registry, used to dynamically register and manage LLM adapters and their configurations.
@@ -81,7 +81,7 @@ def get_config_class(self, name: str) -> Type[BaseModel]:
        if name not in self._config_registry:
            raise ValueError(f"Config class for LLMAdapter '{name}' is not registered.")
        return self._config_registry[name]

    def get_ability(self, name: str) -> LLMAbility:
        """
        Get the abilities of a registered LLM adapter.
@@ -90,4 +90,4 @@ def get_ability(self, name: str) -> LLMAbility:
        """
        if name not in self._ability_registry:
            raise ValueError(f"LLMAdapter with name '{name}' is not registered.")
        return self._ability_registry[name]
43 changes: 40 additions & 3 deletions framework/plugin_manager/plugin.py
@@ -1,19 +1,29 @@
from abc import ABC, abstractmethod

from typing import Dict, Any, List, Optional, Union, Pattern
from framework.config.global_config import GlobalConfig
from framework.im.im_registry import IMRegistry
from framework.im.manager import IMManager
from framework.ioc.inject import Inject
from framework.llm.llm_registry import LLMBackendRegistry
from framework.llm.llm_manager import LLMManager
from framework.plugin_manager.plugin_event_bus import PluginEventBus
from framework.workflow_dispatcher.workflow_dispatcher import WorkflowDispatcher
from framework.ioc.container import DependencyContainer

class Plugin(ABC):
    event_bus: PluginEventBus
    workflow_dispatcher: WorkflowDispatcher
    llm_registry: LLMBackendRegistry
    im_registry: IMRegistry
    im_manager: IMManager

    llm_manager: LLMManager
    config: GlobalConfig
    container: DependencyContainer

    @Inject()
    def __init__(self, config: GlobalConfig = None):
        self.config = config

    @abstractmethod
    def on_load(self):
        pass
@@ -24,4 +34,31 @@ def on_start(self):

    @abstractmethod
    def on_stop(self):
        pass

    def get_action_params(self, action: str) -> Dict[str, Any]:
        """Describe the parameters required by an action."""
        return {}

    async def execute(self, chat_id: str, action: str, params: Dict[str, Any]) -> Dict[str, Any]:
        """Execute a plugin action."""
        return {}

    def get_actions(self) -> List[str]:
        """Return all actions supported by the plugin."""
        return []

    def get_action_trigger(self, message: str) -> Optional[Dict[str, Any]]:
        """Determine the triggered action and its parameters from the message content.
        Args:
            message: the user's message content
        Returns:
            None: no action is triggered
            Dict: {
                "action": str,  # name of the triggered action
                "params": Dict[str, Any]  # parameters for the action
            }
        """
        return None  # no action triggered by default
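
A hypothetical subclass sketch showing how these hooks fit together; the plugin name, action names, parameters, and return payload below are illustrative and not part of the framework:
```python
from typing import Any, Dict, List, Optional

from framework.plugin_manager.plugin import Plugin


class EchoPlugin(Plugin):
    """Illustrative plugin that echoes back any message starting with 'echo '."""

    def on_load(self):
        pass

    def on_start(self):
        pass

    def on_stop(self):
        pass

    def get_actions(self) -> List[str]:
        return ["echo"]

    def get_action_params(self, action: str) -> Dict[str, Any]:
        return {"text": "text to echo back"} if action == "echo" else {}

    def get_action_trigger(self, message: str) -> Optional[Dict[str, Any]]:
        # Trigger the 'echo' action when the message starts with the keyword.
        if message.startswith("echo "):
            return {"action": "echo", "params": {"text": message[len("echo "):]}}
        return None  # no action triggered

    async def execute(self, chat_id: str, action: str, params: Dict[str, Any]) -> Dict[str, Any]:
        # The shape of the returned payload is illustrative only.
        if action == "echo":
            return {"message": [params.get("text", "")]}
        return {}
```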