Patch 1 #1908 · Closed

Changes from all commits
Commits
157 commits
82b32b3
✨ image to text tool
Zhi-a Nov 20, 2025
c031632
Merge branch 'develop' of https://github.com/ModelEngine-Group/nexent…
Zhi-a Nov 20, 2025
1064d38
✨ image to text tool
Zhi-a Nov 20, 2025
e6734eb
✨Agent duplicate name handling logic improvement #1622
YehongPan Nov 20, 2025
893d2e8
Merge branch 'refs/heads/develop' into pyh/feat_runtime_isolation_dev…
YehongPan Nov 20, 2025
5dd8645
Merge remote-tracking branch 'refs/remotes/origin/develop' into pyh/f…
YehongPan Nov 21, 2025
906ca05
✨ image to text tool
Zhi-a Nov 21, 2025
e4166e1
✨Agent duplicate name handling logic improvement #1622
YehongPan Nov 21, 2025
f56b014
Merge branch 'refs/heads/develop' into pyh/feat_runtime_isolation_dev…
YehongPan Nov 21, 2025
063120a
Merge pull request #1779 from ModelEngine-Group/xyc/agent_generate_bu…
Phinease Nov 21, 2025
15fe9c5
Update docs
SimengBian Nov 21, 2025
694dc17
Update opensource-memorial-wall.md
Lin-Wang-QHU Nov 21, 2025
74ab821
Add user testimonials to memorial wall documentation
AgainsTurb Nov 22, 2025
93d5b34
Disable_generate_prompt_textbox_while_generating
Nov 22, 2025
3cfdec0
Update opensource-memorial-wall.md
fujin6399-bit Nov 23, 2025
7d74f3e
Add messages from DUTBenjamin
DUTBenjamin Nov 23, 2025
820578d
Add user testimonials to memorial wall
dean-stack Nov 23, 2025
af377ed
Add user messages to memorial wall
adasibi Nov 23, 2025
16d0afd
Update opensource-memorial-wall.md
charleswang-de Nov 23, 2025
b247825
Add user contributions to memorial wall
adasibi Nov 23, 2025
88ed77a
Update opensource-memorial-wall.md
AuroraHashcat Nov 23, 2025
69cf667
Add user contributions to memorial wall
williamllk-ai Nov 23, 2025
9f5585f
📝 Update user-guide docs to fit new features
Phinease Nov 24, 2025
be133cd
✨ image to text tool
Zhi-a Nov 24, 2025
dd1e687
✨ image to text tool
Zhi-a Nov 24, 2025
c206605
Add user messages to open source memorial wall
lostlight530 Nov 24, 2025
7fadafc
Merge branch 'develop' of https://github.com/ModelEngine-Group/nexent…
YehongPan Nov 24, 2025
702ed45
Merge branch 'develop' into patch-2
WMC001 Nov 24, 2025
856c57b
Update opensource-memorial-wall.md from Lin-Wang-QHU/patch-2
WMC001 Nov 24, 2025
5f2ac89
Add user testimonials to memorial wall documentation from AgainsTurb/…
WMC001 Nov 24, 2025
99011c0
Merge branch 'develop' into patch-2
WMC001 Nov 24, 2025
8db4fd6
Update opensource-memorial-wall.md from fujin6399-bit/patch-2
WMC001 Nov 24, 2025
7bc6198
Merge branch 'develop' into patch-1
WMC001 Nov 24, 2025
b900059
Add messages from DUTBenjamin from DUTBenjamin/patch-1
WMC001 Nov 24, 2025
23b3787
Merge branch 'develop' into patch-1
WMC001 Nov 24, 2025
f7da8b2
Add user testimonials to memorial wall from dean-stack/patch-1
WMC001 Nov 24, 2025
f2617b4
Merge branch 'develop' into patch-2
WMC001 Nov 24, 2025
8f1c131
Add user messages to memorial wall from adasibi/patch-2
WMC001 Nov 24, 2025
51b824c
Merge branch 'develop' into patch-1
WMC001 Nov 24, 2025
c65c06c
Update opensource-memorial-wall.md from charleswang-de/patch-1
WMC001 Nov 24, 2025
928fb22
Merge branch 'develop' into patch-3
WMC001 Nov 24, 2025
16d3cc2
Add user contributions to memorial wall from adasibi/patch-3
WMC001 Nov 24, 2025
ac6f54d
Update memorial wall with new entry
dantingli1 Nov 24, 2025
e2688ee
Update opensource-memorial-wall.md
wuhu-wjd Nov 24, 2025
b261cfa
Update opensource-memorial-wall.md
czh1316 Nov 24, 2025
59f3122
🐛 Bugfix: After redeployment, the contents of files in Minio were cle…
WMC001 Nov 24, 2025
16e7167
Merge branch 'develop' into patch-1
WMC001 Nov 25, 2025
21262d3
Update opensource-memorial-wall.md from AuroraHashcat/patch-1
WMC001 Nov 25, 2025
62c7955
Merge branch 'develop' into patch-1
WMC001 Nov 25, 2025
a3f71c6
Add user contributions to memorial wall from williamllk-ai/patch-1
WMC001 Nov 25, 2025
7b314fc
Add user messages to open source memorial wall from lostlight530/patch-2
WMC001 Nov 25, 2025
800fd92
Merge branch 'develop' into patch-1
WMC001 Nov 25, 2025
8a1f810
Update memorial wall with new entry from dantingli1/patch-1
WMC001 Nov 25, 2025
caf4542
Merge branch 'develop' into patch-1
WMC001 Nov 25, 2025
15a1b97
Update opensource-memorial-wall.md from wuhu-wjd/patch-1
WMC001 Nov 25, 2025
c234725
Merge branch 'develop' into patch-1
WMC001 Nov 25, 2025
94d893b
✨ Now documents and chunks support semantic search
Jasonxia007 Nov 25, 2025
c30b18d
Add user comments to open source memorial wall
WMC001 Nov 25, 2025
175e3f6
Update opensource-memorial-wall.md from czh1316/patch-1
WMC001 Nov 25, 2025
4a15ef0
Add entry for jackliu631 on AI product implementation
jackliu631 Nov 25, 2025
7b739fc
✨ file to text tool
Zhi-a Nov 25, 2025
7c21b39
✨ image to text tool
Zhi-a Nov 25, 2025
6eaba0b
Merge branch 'develop_data_process_tool' of https://github.com/ModelE…
Zhi-a Nov 25, 2025
2e88f1e
✨ image to text tool
Zhi-a Nov 25, 2025
d707514
✨ file to text tool
Zhi-a Nov 25, 2025
34de64d
Add doc to illustrate MCP server development
SimengBian Nov 25, 2025
506078b
Merge branch 'develop_data_process_tool' of https://github.com/ModelE…
Zhi-a Nov 25, 2025
1747d0c
Merge branch 'develop_image_tool' of https://github.com/ModelEngine-G…
Zhi-a Nov 25, 2025
2a1d94c
Add git ignore
SimengBian Nov 25, 2025
be54251
🐛 Bugfix about exiting login #1856 #1858 #1859
WMC001 Nov 25, 2025
54cf613
🐛 After redeployment, the contents of files in Minio were cleared
Phinease Nov 25, 2025
88ced66
✨ Now documents and chunks support semantic search
Phinease Nov 25, 2025
83583bc
✨ Multi modal agent.
Zhi-a Nov 25, 2025
f73011e
Update git ignore
SimengBian Nov 25, 2025
6477deb
Update git ignore
SimengBian Nov 25, 2025
5cd9d1e
Merge branch 'bsm-doc-mcp-development' of https://github.com/ModelEng…
SimengBian Nov 25, 2025
1c9a9e5
✨ Agent duplicate name handling logic improvement #1622
YehongPan Nov 25, 2025
0ef4c32
Merge remote-tracking branch 'origin/develop' into pyh/feat_runtime_i…
YehongPan Nov 25, 2025
daad039
✨ Agent duplicate name handling logic improvement #1622
YehongPan Nov 25, 2025
552b78f
✨ Agent duplicate name handling logic improvement #1622
YehongPan Nov 25, 2025
3662187
Add testimonial for Nexent's intelligent agent platform
gs80140 Nov 26, 2025
c61b5b1
✨ Agent duplicate name handling logic improvement #1622
YehongPan Nov 26, 2025
0ca5b84
✨ Agent duplicate name handling logic improvement #1622
YehongPan Nov 26, 2025
42ebd2e
✨ Agent duplicate name handling logic improvement #1622
YehongPan Nov 26, 2025
d0591db
🐛 Add renderings of failed image and video loading #1872
WMC001 Nov 26, 2025
733f104
Merge pull request #1857 from ModelEngine-Group/bsm-doc-mcp-development
Phinease Nov 26, 2025
1201ae5
Add new testimonials to open source memorial wall
wwaawwaaee Nov 26, 2025
eb338fe
✨ image to text tool
Zhi-a Nov 25, 2025
56f6c8d
Add entry for jackliu631 on AI product implementation from jackliu631…
WMC001 Nov 26, 2025
a764a51
Add testimonial for Nexent's intelligent agent platform from gs80140/…
WMC001 Nov 26, 2025
9d0491a
✨ Agent duplicate name handling logic improvement #1622
YehongPan Nov 26, 2025
8cc7312
Add new testimonials to open source memorial wall from wwaawwaaee/pat…
WMC001 Nov 26, 2025
43b16c0
✨ file to text tool
Zhi-a Nov 26, 2025
4490075
✨ Agent duplicate name handling logic improvement #1622
YehongPan Nov 26, 2025
cab8171
Add entry for jackliu631 on AI product success
sharkkk123-1 Nov 26, 2025
ef19880
✨ Agent duplicate name handling logic improvement #1622
YehongPan Nov 26, 2025
c3e6b9d
Add user feedback to memorial wall
sharkkk123-1 Nov 26, 2025
116db18
✨ image to text tool v2
Zhi-a Nov 26, 2025
0d155cf
Update opensource-memorial-wall.md
18June96 Nov 26, 2025
7fbfe68
✨ Now supports add/delete/edit chunks
Jasonxia007 Nov 26, 2025
c5a61eb
Add encouragement note for Nexent developers
Lillian00700 Nov 27, 2025
eca4c88
🐛 Bugfix: the link in the non-login prompt is invalid #1876
WMC001 Nov 27, 2025
3d7980a
✨ Multi modal agent.
Zhi-a Nov 27, 2025
03ea527
✨ Now supports add/delete/edit chunks
Jasonxia007 Nov 27, 2025
bb8bbdb
✨ file to text tool
Zhi-a Nov 26, 2025
7766713
Merge branch 'develop_image_tool' of https://github.com/ModelEngine-G…
Zhi-a Nov 27, 2025
13ca3a3
Merge branch 'develop_data_process_tool' of https://github.com/ModelE…
Zhi-a Nov 27, 2025
c616655
🐛 Unify Agent save icon #1883
WMC001 Nov 27, 2025
2827c47
Add entry for jackliu631 on AI product success from sharkkk123-1/patch-1
WMC001 Nov 27, 2025
662eb64
Merge branch 'develop' into patch-4
WMC001 Nov 27, 2025
910220e
Update opensource-memorial-wall.md from 18June96/patch-4
WMC001 Nov 27, 2025
c4c1d2a
Update screenshots and illustrate tool test run in docs
SimengBian Nov 27, 2025
f0e8e4c
📝 Update screenshots and illustrate tool test run in docs
Phinease Nov 27, 2025
d983012
🐛 Add renderings of failed image and video loading #1872
Phinease Nov 27, 2025
67982d2
✨ Now supports add/delete/edit chunks
Phinease Nov 27, 2025
26b40e6
✨ Agent duplicate name handling logic improvement #1622
Phinease Nov 27, 2025
2b0a3e5
Update opensource-memorial-wall.md
NOSNNN Nov 27, 2025
0f1ec53
Add new entry for Chenpi-Sakura in memorial wall
Chenpi-Sakura Nov 27, 2025
38a80bf
Merge branch 'develop' into patch-2
WMC001 Nov 27, 2025
3e7be2b
Add encouragement note for Nexent developers from Lillian00700/patch-2
WMC001 Nov 27, 2025
0200eac
Merge branch 'develop' into patch-1
WMC001 Nov 27, 2025
56c1120
Update opensource-memorial-wall.md
yy-cy Nov 27, 2025
c40d884
Update opensource-memorial-wall.md from NOSNNN/patch-1
WMC001 Nov 27, 2025
3553949
Merge branch 'develop' of https://github.com/ModelEngine-Group/nexent…
Zhi-a Nov 27, 2025
3909681
Add user comments for Nexent and AstreoX
AstreoX Nov 27, 2025
dbf5701
Add user testimonials to memorial wall documentation
jiajialuo027-droid Nov 27, 2025
88b0aca
Update opensource-memorial-wall.md
Phoebe246824 Nov 27, 2025
5cbdc90
✨ Multi modal agent.
Zhi-a Nov 27, 2025
1c8864e
Merge pull request #1787 from 992638310-glitch/Disable_generate_promp…
Phinease Nov 27, 2025
03bcef2
Update opensource-memorial-wall.md
jiajialuo027-droid Nov 27, 2025
7446729
Add user comments to memorial wall
wei-cell12 Nov 27, 2025
bef9329
✨ image to text tool from ModelEngine-Group/develop_image_tool
WMC001 Nov 27, 2025
5182770
🐛 Bugfix: When creating a new Agent, the user is not prompted to save…
Jasonxia007 Nov 27, 2025
0fc8f13
📝 Update app version and format the memorial wall
WMC001 Nov 27, 2025
3036b14
Merge branch 'develop' into wmc/bugfix_1117
WMC001 Nov 27, 2025
b1ac254
📝 Update app version and format the memorial wall
WMC001 Nov 27, 2025
b701282
📝 Update app version and format the memorial wall
WMC001 Nov 27, 2025
35dcc9f
✨ file to text tool
Zhi-a Nov 27, 2025
1e3bbfe
✨ file to text tool
Zhi-a Nov 27, 2025
4787eb3
✨ Multi modal agent.
Zhi-a Nov 27, 2025
c63cb3d
🐛 Bugfix: When creating a new Agent, the user is not prompted to save…
Jasonxia007 Nov 28, 2025
c8136a1
📝 Update app version and format the memorial wall
Phinease Nov 28, 2025
7ba3dd9
Merge branch 'develop' into patch-1
WMC001 Nov 28, 2025
4c06635
Add new entry for Chenpi-Sakura in memorial wall from Chenpi-Sakura/p…
WMC001 Nov 28, 2025
484c7fe
Merge branch 'develop' into patch-1
WMC001 Nov 28, 2025
603c074
Update opensource-memorial-wall.md from yy-cy/patch-1
WMC001 Nov 28, 2025
5587ea2
Add user comments for Nexent from AstreoX/patch-1
WMC001 Nov 28, 2025
99add10
Add user testimonials to memorial wall documentation from jiajialuo02…
WMC001 Nov 28, 2025
9b574e1
Update opensource-memorial-wall.md from Phoebe246824/patch-1
WMC001 Nov 28, 2025
3674759
✨ Multi modal agent.
Zhi-a Nov 28, 2025
1811b10
🐛 Bugfix: When creating a new Agent, the user is not prompted to save…
Jasonxia007 Nov 28, 2025
88fce6e
Update opensource-memorial-wall.md from jiajialuo027-droid/patch-3
WMC001 Nov 28, 2025
9147098
Add user comments to memorial wall from wei-cell12/patch-2
WMC001 Nov 28, 2025
762f164
✨ Multi modal agent.
Zhi-a Nov 28, 2025
6c83812
✨ Multi modal agent.
Zhi-a Nov 28, 2025
862866e
✨ Multi modal agent. Pass the URL of the multimodal file as the query…
Phinease Nov 28, 2025
f7fe905
Add a new message to the memorial wall
joket11 Nov 28, 2025
1 change: 0 additions & 1 deletion .gitignore
@@ -1,5 +1,4 @@
.idea
.gitignore
/.env
.vscode

26 changes: 20 additions & 6 deletions backend/agents/create_agent_info.py
@@ -9,6 +9,7 @@
from nexent.core.agents.agent_model import AgentRunInfo, ModelConfig, AgentConfig, ToolConfig
from nexent.memory.memory_service import search_memory_in_levels

from services.file_management_service import get_llm_model
from services.vectordatabase_service import (
ElasticSearchService,
get_vector_db_core,
@@ -17,13 +18,15 @@
from services.tenant_config_service import get_selected_knowledge_list
from services.remote_mcp_service import get_remote_mcp_server_list
from services.memory_config_service import build_memory_context
from services.image_service import get_vlm_model
from database.agent_db import search_agent_info_by_agent_id, query_sub_agents_id_list
from database.tool_db import search_tools_for_sub_agent
from database.model_management_db import get_model_records, get_model_by_model_id
from database.client import minio_client
from utils.model_name_utils import add_repo_to_name
from utils.prompt_template_utils import get_agent_prompt_template
from utils.config_utils import tenant_config_manager, get_model_name_from_config
from consts.const import LOCAL_MCP_SERVER, MODEL_CONFIG_MAPPING, LANGUAGE
from consts.const import LOCAL_MCP_SERVER, MODEL_CONFIG_MAPPING, LANGUAGE, DATA_PROCESS_SERVICE

logger = logging.getLogger("create_agent_info")
logger.setLevel(logging.DEBUG)
@@ -236,6 +239,18 @@ async def create_tool_config_list(agent_id, tenant_id, user_id):
"vdb_core": get_vector_db_core(),
"embedding_model": get_embedding_model(tenant_id=tenant_id),
}
elif tool_config.class_name == "AnalyzeTextFileTool":
tool_config.metadata = {
"llm_model": get_llm_model(tenant_id=tenant_id),
"storage_client": minio_client,
"data_process_service_url": DATA_PROCESS_SERVICE
}
elif tool_config.class_name == "AnalyzeImageTool":
tool_config.metadata = {
"vlm_model": get_vlm_model(tenant_id=tenant_id),
"storage_client": minio_client,
}

tool_config_list.append(tool_config)

return tool_config_list
@@ -299,13 +314,12 @@ async def join_minio_file_description_to_query(minio_files, query):
if minio_files and isinstance(minio_files, list):
file_descriptions = []
for file in minio_files:
if isinstance(file, dict) and "description" in file and file["description"]:
file_descriptions.append(file["description"])

if isinstance(file, dict) and "url" in file and file["url"] and "name" in file and file["name"]:
file_descriptions.append(f"File name: {file['name']}, S3 URL: s3:/{file['url']}")
if file_descriptions:
final_query = "User provided some reference files:\n"
final_query = "User uploaded files. The file information is as follows:\n"
final_query += "\n".join(file_descriptions) + "\n\n"
final_query += f"User wants to answer questions based on the above information: {query}"
final_query += f"User wants to answer questions based on the information in the above files: {query}"
return final_query


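To make the new behaviour concrete, here is a minimal, self-contained sketch of the reworked `join_minio_file_description_to_query` reconstructed from the hunk above; the final fall-through `return query` and the sample file entry are assumptions, since that part of the function is collapsed in the diff view:

```python
import asyncio

async def join_minio_file_description_to_query(minio_files, query):
    # Standalone reproduction of the updated helper shown in the diff above.
    # The trailing `return query` is an assumption (that tail is collapsed).
    if minio_files and isinstance(minio_files, list):
        file_descriptions = []
        for file in minio_files:
            if isinstance(file, dict) and file.get("url") and file.get("name"):
                file_descriptions.append(
                    f"File name: {file['name']}, S3 URL: s3:/{file['url']}")
        if file_descriptions:
            final_query = "User uploaded files. The file information is as follows:\n"
            final_query += "\n".join(file_descriptions) + "\n\n"
            final_query += (
                "User wants to answer questions based on the information "
                f"in the above files: {query}"
            )
            return final_query
    return query

# Illustrative input; real entries come from the MinIO upload flow.
print(asyncio.run(join_minio_file_description_to_query(
    [{"name": "report.pdf", "url": "/bucket/report.pdf"}],
    "Summarize the key findings",
)))
```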
6 changes: 5 additions & 1 deletion backend/apps/agent_app.py
@@ -134,7 +134,11 @@ async def import_agent_api(request: AgentImportRequest, authorization: Optional[
import an agent
"""
try:
await import_agent_impl(request.agent_info, authorization)
await import_agent_impl(
request.agent_info,
authorization,
force_import=request.force_import
)
return {}
except Exception as e:
logger.error(f"Agent import error: {str(e)}")
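A hedged sketch of how a client might exercise the new `force_import` flag: only the request-body shape follows `AgentImportRequest`; the route path, host, and auth header are assumptions, since the router decorator is not part of this hunk.

```python
import requests

BASE_URL = "http://localhost:5010"   # assumed backend address
TOKEN = "<jwt>"                      # assumed auth scheme

payload = {
    "agent_info": {},       # ExportAndImportDataFormat content, omitted here
    "force_import": True,   # new flag; assumed to push the import past duplicate-name handling
}

resp = requests.post(
    f"{BASE_URL}/agent/import",      # hypothetical route path
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
```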
65 changes: 2 additions & 63 deletions backend/apps/file_management_app.py
@@ -1,16 +1,13 @@
import logging
import os
from http import HTTPStatus
from typing import List, Optional

from fastapi import APIRouter, Body, File, Form, Header, HTTPException, Path as PathParam, Query, Request, UploadFile
from fastapi import APIRouter, Body, File, Form, Header, HTTPException, Path as PathParam, Query, UploadFile
from fastapi.responses import JSONResponse, RedirectResponse, StreamingResponse

from consts.model import ProcessParams
from services.file_management_service import upload_to_minio, upload_files_impl, \
get_file_url_impl, get_file_stream_impl, delete_file_impl, list_files_impl, \
preprocess_files_generator
from utils.auth_utils import get_current_user_info
get_file_url_impl, get_file_stream_impl, delete_file_impl, list_files_impl
from utils.file_management_utils import trigger_data_process

logger = logging.getLogger("file_management_app")
@@ -271,61 +268,3 @@ async def get_storage_file_batch_urls(
"failed_count": sum(1 for r in results if not r.get("success", False)),
"results": results
}


@file_management_runtime_router.post("/preprocess")
async def agent_preprocess_api(
request: Request, query: str = Form(...),
files: List[UploadFile] = File(...),
authorization: Optional[str] = Header(None)
):
"""
Preprocess uploaded files and return streaming response
"""
try:
# Pre-read and cache all file contents
user_id, tenant_id, language = get_current_user_info(
authorization, request)
file_cache = []
for file in files:
try:
content = await file.read()
file_cache.append({
"filename": file.filename or "",
"content": content,
"ext": os.path.splitext(file.filename or "")[1].lower()
})
except Exception as e:
file_cache.append({
"filename": file.filename or "",
"error": str(e)
})

# Generate unique task ID for this preprocess operation
import uuid
task_id = str(uuid.uuid4())
conversation_id = request.query_params.get("conversation_id")
if conversation_id:
conversation_id = int(conversation_id)
else:
conversation_id = -1 # Default for cases without conversation_id

# Call service layer to generate streaming response
return StreamingResponse(
preprocess_files_generator(
query=query,
file_cache=file_cache,
tenant_id=tenant_id,
language=language,
task_id=task_id,
conversation_id=conversation_id
),
media_type="text/event-stream",
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive"
}
)
except Exception as e:
raise HTTPException(
status_code=500, detail=f"File preprocessing error: {str(e)}")
124 changes: 123 additions & 1 deletion backend/apps/vectordatabase_app.py
@@ -5,7 +5,7 @@
from fastapi import APIRouter, Body, Depends, Header, HTTPException, Path, Query
from fastapi.responses import JSONResponse

from consts.model import IndexingResponse
from consts.model import ChunkCreateRequest, ChunkUpdateRequest, HybridSearchRequest, IndexingResponse
from nexent.vector_database.base import VectorDatabaseCore
from services.vectordatabase_service import (
ElasticSearchService,
@@ -226,3 +226,125 @@ def get_index_chunks(
f"Error getting chunks for index '{index_name}': {error_msg}")
raise HTTPException(
status_code=HTTPStatus.INTERNAL_SERVER_ERROR, detail=f"Error getting chunks: {error_msg}")


@router.post("/{index_name}/chunk")
def create_chunk(
index_name: str = Path(..., description="Name of the index"),
payload: ChunkCreateRequest = Body(..., description="Chunk data"),
vdb_core: VectorDatabaseCore = Depends(get_vector_db_core),
authorization: Optional[str] = Header(None),
):
"""Create a manual chunk."""
try:
user_id, _ = get_current_user_id(authorization)
result = ElasticSearchService.create_chunk(
index_name=index_name,
chunk_request=payload,
vdb_core=vdb_core,
user_id=user_id,
)
return JSONResponse(status_code=HTTPStatus.OK, content=result)
except Exception as exc:
logger.error(
"Error creating chunk for index %s: %s", index_name, exc, exc_info=True
)
raise HTTPException(
status_code=HTTPStatus.INTERNAL_SERVER_ERROR, detail=str(exc)
)


@router.put("/{index_name}/chunk/{chunk_id}")
def update_chunk(
index_name: str = Path(..., description="Name of the index"),
chunk_id: str = Path(..., description="Chunk identifier"),
payload: ChunkUpdateRequest = Body(...,
description="Chunk update payload"),
vdb_core: VectorDatabaseCore = Depends(get_vector_db_core),
authorization: Optional[str] = Header(None),
):
"""Update an existing chunk."""
try:
user_id, _ = get_current_user_id(authorization)
result = ElasticSearchService.update_chunk(
index_name=index_name,
chunk_id=chunk_id,
chunk_request=payload,
vdb_core=vdb_core,
user_id=user_id,
)
return JSONResponse(status_code=HTTPStatus.OK, content=result)
except ValueError as exc:
raise HTTPException(
status_code=HTTPStatus.BAD_REQUEST, detail=str(exc))
except Exception as exc:
logger.error(
"Error updating chunk %s for index %s: %s",
chunk_id,
index_name,
exc,
exc_info=True,
)
raise HTTPException(
status_code=HTTPStatus.INTERNAL_SERVER_ERROR, detail=str(exc)
)


@router.delete("/{index_name}/chunk/{chunk_id}")
def delete_chunk(
index_name: str = Path(..., description="Name of the index"),
chunk_id: str = Path(..., description="Chunk identifier"),
vdb_core: VectorDatabaseCore = Depends(get_vector_db_core),
authorization: Optional[str] = Header(None),
):
"""Delete a chunk."""
try:
get_current_user_id(authorization)
result = ElasticSearchService.delete_chunk(
index_name=index_name,
chunk_id=chunk_id,
vdb_core=vdb_core,
)
return JSONResponse(status_code=HTTPStatus.OK, content=result)
except ValueError as exc:
raise HTTPException(status_code=HTTPStatus.NOT_FOUND, detail=str(exc))
except Exception as exc:
logger.error(
"Error deleting chunk %s for index %s: %s",
chunk_id,
index_name,
exc,
exc_info=True,
)
raise HTTPException(
status_code=HTTPStatus.INTERNAL_SERVER_ERROR, detail=str(exc)
)


@router.post("/search/hybrid")
async def hybrid_search(
payload: HybridSearchRequest,
vdb_core: VectorDatabaseCore = Depends(get_vector_db_core),
authorization: Optional[str] = Header(None),
):
"""Run a hybrid (accurate + semantic) search across indices."""
try:
_, tenant_id = get_current_user_id(authorization)
result = ElasticSearchService.search_hybrid(
index_names=payload.index_names,
query=payload.query,
tenant_id=tenant_id,
top_k=payload.top_k,
weight_accurate=payload.weight_accurate,
vdb_core=vdb_core,
)
return JSONResponse(status_code=HTTPStatus.OK, content=result)
except ValueError as exc:
raise HTTPException(status_code=HTTPStatus.BAD_REQUEST,
detail=str(exc))
except Exception as exc:
logger.error(f"Hybrid search failed: {exc}", exc_info=True)
raise HTTPException(
status_code=HTTPStatus.INTERNAL_SERVER_ERROR,
detail=f"Error executing hybrid search: {str(exc)}",
)
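As a usage illustration, a hedged client-side call to the new hybrid search endpoint: the body fields mirror `HybridSearchRequest`, while the router prefix, host, and token are assumptions not shown in this diff.

```python
import requests

BASE_URL = "http://localhost:5010"   # assumed backend address
TOKEN = "<jwt>"                      # assumed auth scheme

# Body fields mirror HybridSearchRequest; values are illustrative.
body = {
    "query": "deployment architecture",
    "index_names": ["kb_docs"],      # at least one index (min_items=1)
    "top_k": 10,                     # 1..100
    "weight_accurate": 0.5,          # 0.0..1.0 blend of accurate vs. semantic scores
}

# "/search/hybrid" comes from the route decorator above; the router prefix
# ("/indices" here) is an assumption, since it is not shown in this diff.
resp = requests.post(
    f"{BASE_URL}/indices/search/hybrid",
    json=body,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
print(resp.json())
```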
2 changes: 1 addition & 1 deletion backend/consts/const.py
@@ -279,7 +279,7 @@ class VectorDatabaseType(str, Enum):
os.getenv("LLM_SLOW_TOKEN_RATE_THRESHOLD", "10.0")) # tokens per second

# APP Version
APP_VERSION = "v1.7.6"
APP_VERSION = "v1.7.7"

DEFAULT_ZH_TITLE = "新对话"
DEFAULT_EN_TITLE = "New Conversation"
38 changes: 38 additions & 0 deletions backend/consts/model.py
@@ -175,6 +175,43 @@ class IndexingResponse(BaseModel):
total_submitted: int


class ChunkCreateRequest(BaseModel):
"""Request payload for manual chunk creation."""

content: str = Field(..., min_length=1, description="Chunk content")
title: Optional[str] = Field(None, description="Optional chunk title")
filename: Optional[str] = Field(None, description="Associated file name")
path_or_url: Optional[str] = Field(None, description="Source path or URL")
chunk_id: Optional[str] = Field(
None, description="Explicit chunk identifier")
metadata: Dict[str, Any] = Field(
default_factory=dict, description="Additional chunk metadata")


class ChunkUpdateRequest(BaseModel):
"""Request payload for chunk updates."""

content: Optional[str] = Field(None, description="Updated chunk content")
title: Optional[str] = Field(None, description="Updated chunk title")
filename: Optional[str] = Field(None, description="Updated file name")
path_or_url: Optional[str] = Field(
None, description="Updated source path or URL")
metadata: Dict[str, Any] = Field(
default_factory=dict, description="Additional metadata updates")


class HybridSearchRequest(BaseModel):
"""Request payload for hybrid knowledge-base searches."""
query: str = Field(..., min_length=1,
description="Search query text")
index_names: List[str] = Field(..., min_items=1,
description="List of index names to search")
top_k: int = Field(10, ge=1, le=100,
description="Number of results to return")
weight_accurate: float = Field(0.5, ge=0.0, le=1.0,
description="Weight applied to accurate search scores")


# Request models
class ProcessParams(BaseModel):
chunking_strategy: Optional[str] = "basic"
@@ -304,6 +341,7 @@ class ExportAndImportDataFormat(BaseModel):

class AgentImportRequest(BaseModel):
agent_info: ExportAndImportDataFormat
force_import: bool = False


class ConvertStateRequest(BaseModel):
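A minimal sketch of the new request models in use; field names and constraints come from the definitions above, while the import path (the backend package on `sys.path`) and the Pydantic v2 `model_dump` call are assumptions:

```python
# Assumes the backend package is importable (e.g. run from the backend/ directory).
from consts.model import ChunkCreateRequest, HybridSearchRequest

# Manual chunk creation: only `content` is required.
chunk = ChunkCreateRequest(
    content="Nexent now supports manual chunk editing.",
    title="Manual chunks",
    metadata={"source": "manual"},
)

# Hybrid search: validation enforces at least one index name and
# weight_accurate within 0.0-1.0.
search = HybridSearchRequest(
    query="hybrid search",
    index_names=["kb_docs"],
    top_k=5,
    weight_accurate=0.7,
)

print(chunk.model_dump())    # Pydantic v2; on v1 use chunk.dict()
print(search.model_dump())
```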