Commit 4ae34ea: merge main

2 parents: 450766a + 96273fd

31 files changed: +417 / -154 lines

README.md

Lines changed: 6 additions & 4 deletions

```diff
@@ -63,7 +63,7 @@ For enterprise inquiries, please contact: **business@nextchat.dev**
 企业版咨询: **business@nextchat.dev**
 
-<img width="300" src="https://github.com/user-attachments/assets/3daeb7b6-ab63-4542-9141-2e4a12c80601">
+<img width="300" src="https://github.com/user-attachments/assets/3d4305ac-6e95-489e-884b-51d51db5f692">
 
 ## Features
@@ -100,6 +100,7 @@ For enterprise inquiries, please contact: **business@nextchat.dev**
 ## What's New
 
+- 🚀 v2.15.4 The Application supports using Tauri fetch LLM API, MORE SECURITY! [#5379](https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web/issues/5379)
 - 🚀 v2.15.0 Now supports Plugins! Read this: [NextChat-Awesome-Plugins](https://github.com/ChatGPTNextWeb/NextChat-Awesome-Plugins)
 - 🚀 v2.14.0 Now supports Artifacts & SD
 - 🚀 v2.10.1 support Google Gemini Pro model.
@@ -137,6 +138,7 @@ For enterprise inquiries, please contact: **business@nextchat.dev**
 ## 最新动态
 
+- 🚀 v2.15.4 客户端支持Tauri本地直接调用大模型API,更安全![#5379](https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web/issues/5379)
 - 🚀 v2.15.0 现在支持插件功能了!了解更多:[NextChat-Awesome-Plugins](https://github.com/ChatGPTNextWeb/NextChat-Awesome-Plugins)
 - 🚀 v2.14.0 现在支持 Artifacts & SD 了。
 - 🚀 v2.10.1 现在支持 Gemini Pro 模型。
@@ -332,9 +334,9 @@ To control custom models, use `+` to add a custom model, use `-` to hide a model
 User `-all` to disable all default models, `+all` to enable all default models.
 
-For Azure: use `modelName@azure=deploymentName` to customize model name and deployment name.
-> Example: `+gpt-3.5-turbo@azure=gpt35` will show option `gpt35(Azure)` in model list.
-> If you only can use Azure model, `-all,+gpt-3.5-turbo@azure=gpt35` will `gpt35(Azure)` the only option in model list.
+For Azure: use `modelName@Azure=deploymentName` to customize model name and deployment name.
+> Example: `+gpt-3.5-turbo@Azure=gpt35` will show option `gpt35(Azure)` in model list.
+> If you only can use Azure model, `-all,+gpt-3.5-turbo@Azure=gpt35` will `gpt35(Azure)` the only option in model list.
 
 For ByteDance: use `modelName@bytedance=deploymentName` to customize model name and deployment name.
 > Example: `+Doubao-lite-4k@bytedance=ep-xxxxx-xxx` will show option `Doubao-lite-4k(ByteDance)` in model list.
```
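The `CUSTOM_MODELS` syntax documented above (`+model` to add, `-model` to hide, `model@Provider=deploymentName` to map a deployment) can be illustrated with a small parser. This is a hedged sketch covering only the provider-mapping form; the function name `parseCustomModels` and the returned record shape are assumptions for illustration, not NextChat's actual implementation.

```typescript
// Minimal sketch of parsing a CUSTOM_MODELS string such as
// "-all,+gpt-3.5-turbo@Azure=gpt35". Names and shapes are illustrative.
interface CustomModelRule {
  action: "add" | "hide";
  name: string;          // model name
  provider?: string;     // e.g. "Azure", "bytedance"
  deployment?: string;   // provider-side deployment name
}

function parseCustomModels(spec: string): CustomModelRule[] {
  return spec
    .split(",")
    .map((s) => s.trim())
    .filter(Boolean)
    .map((entry) => {
      const action = entry.startsWith("-") ? "hide" : "add";
      const body = entry.replace(/^[+-]/, "");
      // "modelName@Provider=deploymentName": split on "=" then "@"
      const [nameAndProvider, deployment] = body.split("=");
      const [name, provider] = nameAndProvider.split("@");
      return { action, name, provider, deployment } as CustomModelRule;
    });
}

const rules = parseCustomModels("-all,+gpt-3.5-turbo@Azure=gpt35");
// rules[1] → { action: "add", name: "gpt-3.5-turbo", provider: "Azure", deployment: "gpt35" }
```

Note that the real syntax also allows `modelName=displayName`; a full parser would have to disambiguate the two uses of `=`.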

README_CN.md

Lines changed: 3 additions & 3 deletions

(Chinese README: applies the same `@azure` → `@Azure` casing change as the English README.)

```diff
@@ -216,9 +216,9 @@ ByteDance Api Url.
 用来控制模型列表,使用 `+` 增加一个模型,使用 `-` 来隐藏一个模型,使用 `模型名=展示名` 来自定义模型的展示名,用英文逗号隔开。
 
-在Azure的模式下,支持使用`modelName@azure=deploymentName`的方式配置模型名称和部署名称(deploy-name)
-> 示例:`+gpt-3.5-turbo@azure=gpt35`这个配置会在模型列表显示一个`gpt35(Azure)`的选项。
-> 如果你只能使用Azure模式,那么设置 `-all,+gpt-3.5-turbo@azure=gpt35` 则可以让对话的默认使用 `gpt35(Azure)`
+在Azure的模式下,支持使用`modelName@Azure=deploymentName`的方式配置模型名称和部署名称(deploy-name)
+> 示例:`+gpt-3.5-turbo@Azure=gpt35`这个配置会在模型列表显示一个`gpt35(Azure)`的选项。
+> 如果你只能使用Azure模式,那么设置 `-all,+gpt-3.5-turbo@Azure=gpt35` 则可以让对话的默认使用 `gpt35(Azure)`
 
 在ByteDance的模式下,支持使用`modelName@bytedance=deploymentName`的方式配置模型名称和部署名称(deploy-name)
 > 示例: `+Doubao-lite-4k@bytedance=ep-xxxxx-xxx`这个配置会在模型列表显示一个`Doubao-lite-4k(ByteDance)`的选项
```

README_JA.md

Lines changed: 2 additions & 2 deletions

(Japanese README: applies the same `@azure` → `@Azure` casing change as the English README.)

```diff
@@ -207,8 +207,8 @@ ByteDance API の URL。
 モデルリストを管理します。`+` でモデルを追加し、`-` でモデルを非表示にし、`モデル名=表示名` でモデルの表示名をカスタマイズし、カンマで区切ります。
 
-Azure モードでは、`modelName@azure=deploymentName` 形式でモデル名とデプロイ名(deploy-name)を設定できます。
-> 例:`+gpt-3.5-turbo@azure=gpt35` この設定でモデルリストに `gpt35(Azure)` のオプションが表示されます。
+Azure モードでは、`modelName@Azure=deploymentName` 形式でモデル名とデプロイ名(deploy-name)を設定できます。
+> 例:`+gpt-3.5-turbo@Azure=gpt35` この設定でモデルリストに `gpt35(Azure)` のオプションが表示されます。
 
 ByteDance モードでは、`modelName@bytedance=deploymentName` 形式でモデル名とデプロイ名(deploy-name)を設定できます。
 > 例: `+Doubao-lite-4k@bytedance=ep-xxxxx-xxx` この設定でモデルリストに `Doubao-lite-4k(ByteDance)` のオプションが表示されます。
```

app/api/openai.ts

Lines changed: 2 additions & 2 deletions

```diff
@@ -6,7 +6,7 @@ import { NextRequest, NextResponse } from "next/server";
 import { auth } from "./auth";
 import { requestOpenai } from "./common";
 
-const ALLOWD_PATH = new Set(Object.values(OpenaiPath));
+const ALLOWED_PATH = new Set(Object.values(OpenaiPath));
 
 function getModels(remoteModelRes: OpenAIListModelResponse) {
   const config = getServerSideConfig();
@@ -34,7 +34,7 @@ export async function handle(
   const subpath = params.path.join("/");
 
-  if (!ALLOWD_PATH.has(subpath)) {
+  if (!ALLOWED_PATH.has(subpath)) {
     console.log("[OpenAI Route] forbidden path ", subpath);
     return NextResponse.json(
       {
```
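This hunk only corrects the `ALLOWD_PATH` typo; the guard itself checks each requested subpath against a `Set` allowlist built from `OpenaiPath`. A standalone sketch of the same pattern follows; the path values and the rejection shape here are illustrative assumptions, not the app's real route handler.

```typescript
// Set-based path allowlist, mirroring the ALLOWED_PATH pattern above.
// Path values and the result shape are invented for illustration.
const OpenaiPath = {
  ChatPath: "v1/chat/completions",
  ListModelPath: "v1/models",
} as const;

const ALLOWED_PATH = new Set<string>(Object.values(OpenaiPath));

function guard(subpath: string): { ok: boolean; error?: string } {
  // Set.has is O(1), so the check stays cheap as paths are added.
  if (!ALLOWED_PATH.has(subpath)) {
    return { ok: false, error: `forbidden path: ${subpath}` };
  }
  return { ok: true };
}
```

Building the set from `Object.values(OpenaiPath)` means newly supported API paths are allowed automatically once added to the constant, with no second list to keep in sync.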

app/client/api.ts

Lines changed: 1 addition & 1 deletion

```diff
@@ -231,7 +231,7 @@ export function getHeaders(ignoreHeaders: boolean = false) {
 function getConfig() {
   const modelConfig = chatStore.currentSession().mask.modelConfig;
-  const isGoogle = modelConfig.providerName == ServiceProvider.Google;
+  const isGoogle = modelConfig.providerName === ServiceProvider.Google;
   const isAzure = modelConfig.providerName === ServiceProvider.Azure;
   const isAnthropic = modelConfig.providerName === ServiceProvider.Anthropic;
   const isBaidu = modelConfig.providerName == ServiceProvider.Baidu;
```
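This hunk tightens one loose equality (`==`) to strict equality (`===`) to match the neighboring checks; note that the `isBaidu` line in the surrounding context still uses `==`. For string enum comparisons the two behave identically, but `===` avoids coercion surprises when operand types differ. A small illustration with made-up values:

```typescript
// Loose vs. strict equality in TypeScript/JavaScript.
// With two strings the result is the same:
const provider: string = "Google";
const loose = provider == "Google";   // true
const strict = provider === "Google"; // true

// The difference appears when types diverge, because `==` coerces:
const n: any = 0;
const looseEmpty = n == "";   // true: "" is coerced to 0
const strictEmpty = n === ""; // false: different types never match
```

Using `===` consistently (as most lint rules recommend) makes every comparison in `getConfig` behave the same way regardless of how `providerName` is typed.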

app/client/platforms/alibaba.ts

Lines changed: 2 additions & 0 deletions

```diff
@@ -23,6 +23,7 @@
 import { prettyObject } from "@/app/utils/format";
 import { getClientConfig } from "@/app/config/client";
 import { getMessageTextContent } from "@/app/utils";
+import { fetch } from "@/app/utils/stream";
 
 export interface OpenAIListModelResponse {
   object: string;
@@ -178,6 +179,7 @@ export class QwenApi implements LLMApi {
     controller.signal.onabort = finish;
 
     fetchEventSource(chatPath, {
+      fetch: fetch as any,
       ...chatPayload,
       async onopen(res) {
         clearTimeout(requestTimeoutId);
```
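This change, repeated in the Baidu, ByteDance, iFlytek, and Tencent clients below, threads a custom `fetch` from `@/app/utils/stream` into `fetchEventSource`, so that in the Tauri desktop build SSE requests can go through a native HTTP layer rather than the browser's `fetch`. The `@microsoft/fetch-event-source` library accepts any fetch-compatible function via its `fetch` option. Below is a hedged sketch of the pluggable-fetch pattern only; the tracing wrapper and the `consume` helper are invented for illustration, and a real Tauri build would delegate to a native client instead of fabricating a `Response`.

```typescript
// A fetch-compatible wrapper, as might live in "@/app/utils/stream":
// same (input, init) => Promise<Response> signature, so any consumer
// that takes a `fetch` option (e.g. fetchEventSource) can use it.
type FetchLike = (input: string, init?: RequestInit) => Promise<Response>;

const calls: string[] = [];

const tracingFetch: FetchLike = async (input, init) => {
  calls.push(input); // record the URL, then answer
  // Illustrative only: fabricate a Response instead of doing real I/O.
  return new Response(`ok: ${init?.method ?? "GET"} ${input}`);
};

// A consumer that, like fetchEventSource, accepts a pluggable fetch:
async function consume(url: string, opts: { fetch: FetchLike }) {
  const res = await opts.fetch(url, { method: "POST" });
  return res.text();
}
```

Because the substitute keeps the standard `fetch` signature, call sites need only the one-line `fetch: fetch as any` addition shown in the diff.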

app/client/platforms/anthropic.ts

Lines changed: 2 additions & 4 deletions

```diff
@@ -8,7 +8,7 @@ import {
   ChatMessageTool,
 } from "@/app/store";
 import { getClientConfig } from "@/app/config/client";
-import { DEFAULT_API_HOST } from "@/app/constant";
+import { ANTHROPIC_BASE_URL } from "@/app/constant";
 import { getMessageTextContent, isVisionModel } from "@/app/utils";
 import { preProcessImageContent, stream } from "@/app/utils/chat";
 import { cloudflareAIGatewayUrl } from "@/app/utils/cloudflare";
@@ -388,9 +388,7 @@ export class ClaudeApi implements LLMApi {
     if (baseUrl.trim().length === 0) {
       const isApp = !!getClientConfig()?.isApp;
 
-      baseUrl = isApp
-        ? DEFAULT_API_HOST + "/api/proxy/anthropic"
-        : ApiPath.Anthropic;
+      baseUrl = isApp ? ANTHROPIC_BASE_URL : ApiPath.Anthropic;
     }
 
     if (!baseUrl.startsWith("http") && !baseUrl.startsWith("/api")) {
```
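Here, and in the Google, iFlytek, Moonshot, OpenAI, and Tencent clients below, the desktop app stops routing through the shared `DEFAULT_API_HOST` proxy and instead uses each provider's own base URL, which pairs with the Tauri-native fetch change. The resolution logic reduces to: empty configured URL means the provider default when running as an app, otherwise the in-app API route. A simplified sketch follows; `ANTHROPIC_BASE_URL` matches `app/constant.ts` in this commit, while the `ApiPath.Anthropic` value and the `resolveBaseUrl` helper are assumptions for illustration.

```typescript
// Simplified base-URL resolution, modeled on the hunk above.
const ANTHROPIC_BASE_URL = "https://api.anthropic.com"; // from app/constant.ts
const ApiPath = { Anthropic: "/api/anthropic" } as const; // value assumed

function resolveBaseUrl(configured: string, isApp: boolean): string {
  let baseUrl = configured.trim();
  if (baseUrl.length === 0) {
    // Desktop (Tauri) builds call the provider directly;
    // web builds go through the app's own API route.
    baseUrl = isApp ? ANTHROPIC_BASE_URL : ApiPath.Anthropic;
  }
  // Strip a trailing slash so later path joining stays predictable.
  if (baseUrl.endsWith("/")) baseUrl = baseUrl.slice(0, -1);
  return baseUrl;
}
```

A user-configured URL always wins; the provider default only applies when the field is left empty.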

app/client/platforms/baidu.ts

Lines changed: 2 additions & 0 deletions

```diff
@@ -24,6 +24,7 @@
 import { prettyObject } from "@/app/utils/format";
 import { getClientConfig } from "@/app/config/client";
 import { getMessageTextContent } from "@/app/utils";
+import { fetch } from "@/app/utils/stream";
 
 export interface OpenAIListModelResponse {
   object: string;
@@ -197,6 +198,7 @@ export class ErnieApi implements LLMApi {
     controller.signal.onabort = finish;
 
     fetchEventSource(chatPath, {
+      fetch: fetch as any,
       ...chatPayload,
       async onopen(res) {
         clearTimeout(requestTimeoutId);
```

app/client/platforms/bytedance.ts

Lines changed: 2 additions & 0 deletions

```diff
@@ -23,6 +23,7 @@
 import { prettyObject } from "@/app/utils/format";
 import { getClientConfig } from "@/app/config/client";
 import { getMessageTextContent } from "@/app/utils";
+import { fetch } from "@/app/utils/stream";
 
 export interface OpenAIListModelResponse {
   object: string;
@@ -165,6 +166,7 @@ export class DoubaoApi implements LLMApi {
     controller.signal.onabort = finish;
 
     fetchEventSource(chatPath, {
+      fetch: fetch as any,
       ...chatPayload,
       async onopen(res) {
         clearTimeout(requestTimeoutId);
```

app/client/platforms/google.ts

Lines changed: 3 additions & 2 deletions

```diff
@@ -16,7 +16,7 @@ import {
 } from "@/app/store";
 import { stream } from "@/app/utils/chat";
 import { getClientConfig } from "@/app/config/client";
-import { DEFAULT_API_HOST } from "@/app/constant";
+import { GEMINI_BASE_URL } from "@/app/constant";
 
 import {
   getMessageTextContent,
@@ -26,6 +26,7 @@ import {
 import { preProcessImageContent } from "@/app/utils/chat";
 import { nanoid } from "nanoid";
 import { RequestPayload } from "./openai";
+import { fetch } from "@/app/utils/stream";
 
 export class GeminiProApi implements LLMApi {
   path(path: string): string {
@@ -38,7 +39,7 @@ export class GeminiProApi implements LLMApi {
 
     const isApp = !!getClientConfig()?.isApp;
     if (baseUrl.length === 0) {
-      baseUrl = isApp ? DEFAULT_API_HOST + `/api/proxy/google` : ApiPath.Google;
+      baseUrl = isApp ? GEMINI_BASE_URL : ApiPath.Google;
     }
     if (baseUrl.endsWith("/")) {
       baseUrl = baseUrl.slice(0, baseUrl.length - 1);
```

app/client/platforms/iflytek.ts

Lines changed: 4 additions & 2 deletions

```diff
@@ -1,7 +1,7 @@
 "use client";
 import {
   ApiPath,
-  DEFAULT_API_HOST,
+  IFLYTEK_BASE_URL,
   Iflytek,
   REQUEST_TIMEOUT_MS,
 } from "@/app/constant";
@@ -22,6 +22,7 @@ import {
 import { prettyObject } from "@/app/utils/format";
 import { getClientConfig } from "@/app/config/client";
 import { getMessageTextContent } from "@/app/utils";
+import { fetch } from "@/app/utils/stream";
 
 import { RequestPayload } from "./openai";
 
@@ -40,7 +41,7 @@ export class SparkApi implements LLMApi {
     if (baseUrl.length === 0) {
       const isApp = !!getClientConfig()?.isApp;
       const apiPath = ApiPath.Iflytek;
-      baseUrl = isApp ? DEFAULT_API_HOST + "/proxy" + apiPath : apiPath;
+      baseUrl = isApp ? IFLYTEK_BASE_URL : apiPath;
     }
 
     if (baseUrl.endsWith("/")) {
@@ -149,6 +150,7 @@ export class SparkApi implements LLMApi {
     controller.signal.onabort = finish;
 
     fetchEventSource(chatPath, {
+      fetch: fetch as any,
       ...chatPayload,
       async onopen(res) {
         clearTimeout(requestTimeoutId);
```

app/client/platforms/moonshot.ts

Lines changed: 2 additions & 2 deletions

```diff
@@ -2,7 +2,7 @@
 // azure and openai, using same models. so using same LLMApi.
 import {
   ApiPath,
-  DEFAULT_API_HOST,
+  MOONSHOT_BASE_URL,
   Moonshot,
   REQUEST_TIMEOUT_MS,
 } from "@/app/constant";
@@ -40,7 +40,7 @@ export class MoonshotApi implements LLMApi {
     if (baseUrl.length === 0) {
       const isApp = !!getClientConfig()?.isApp;
       const apiPath = ApiPath.Moonshot;
-      baseUrl = isApp ? DEFAULT_API_HOST + "/proxy" + apiPath : apiPath;
+      baseUrl = isApp ? MOONSHOT_BASE_URL : apiPath;
     }
 
     if (baseUrl.endsWith("/")) {
```

app/client/platforms/openai.ts

Lines changed: 2 additions & 2 deletions

```diff
@@ -2,7 +2,7 @@
 // azure and openai, using same models. so using same LLMApi.
 import {
   ApiPath,
-  DEFAULT_API_HOST,
+  OPENAI_BASE_URL,
   DEFAULT_MODELS,
   OpenaiPath,
   Azure,
@@ -98,7 +98,7 @@ export class ChatGPTApi implements LLMApi {
     if (baseUrl.length === 0) {
       const isApp = !!getClientConfig()?.isApp;
       const apiPath = isAzure ? ApiPath.Azure : ApiPath.OpenAI;
-      baseUrl = isApp ? DEFAULT_API_HOST + "/proxy" + apiPath : apiPath;
+      baseUrl = isApp ? OPENAI_BASE_URL : apiPath;
     }
 
     if (baseUrl.endsWith("/")) {
```

app/client/platforms/tencent.ts

Lines changed: 4 additions & 4 deletions

```diff
@@ -1,5 +1,5 @@
 "use client";
-import { ApiPath, DEFAULT_API_HOST, REQUEST_TIMEOUT_MS } from "@/app/constant";
+import { ApiPath, TENCENT_BASE_URL, REQUEST_TIMEOUT_MS } from "@/app/constant";
 import { useAccessStore, useAppConfig, useChatStore } from "@/app/store";
 
 import {
@@ -22,6 +22,7 @@ import mapKeys from "lodash-es/mapKeys";
 import mapValues from "lodash-es/mapValues";
 import isArray from "lodash-es/isArray";
 import isObject from "lodash-es/isObject";
+import { fetch } from "@/app/utils/stream";
 
 export interface OpenAIListModelResponse {
   object: string;
@@ -70,9 +71,7 @@ export class HunyuanApi implements LLMApi {
 
     if (baseUrl.length === 0) {
       const isApp = !!getClientConfig()?.isApp;
-      baseUrl = isApp
-        ? DEFAULT_API_HOST + "/api/proxy/tencent"
-        : ApiPath.Tencent;
+      baseUrl = isApp ? TENCENT_BASE_URL : ApiPath.Tencent;
     }
 
     if (baseUrl.endsWith("/")) {
@@ -179,6 +178,7 @@ export class HunyuanApi implements LLMApi {
     controller.signal.onabort = finish;
 
     fetchEventSource(chatPath, {
+      fetch: fetch as any,
       ...chatPayload,
       async onopen(res) {
         clearTimeout(requestTimeoutId);
```

app/components/chat.tsx

Lines changed: 1 addition & 0 deletions

```diff
@@ -1815,6 +1815,7 @@ function _Chat() {
   {message?.tools?.map((tool) => (
     <div
       key={tool.id}
+      title={tool?.errorMsg}
       className={styles["chat-message-tool"]}
     >
       {tool.isError === false ? (
```

app/components/markdown.tsx

Lines changed: 1 addition & 18 deletions

```diff
@@ -207,23 +207,6 @@ function CustomCode(props: { children: any; className?: string }) {
   );
 }
 
-function escapeDollarNumber(text: string) {
-  let escapedText = "";
-
-  for (let i = 0; i < text.length; i += 1) {
-    let char = text[i];
-    const nextChar = text[i + 1] || " ";
-
-    if (char === "$" && nextChar >= "0" && nextChar <= "9") {
-      char = "\\$";
-    }
-
-    escapedText += char;
-  }
-
-  return escapedText;
-}
-
 function escapeBrackets(text: string) {
   const pattern =
     /(```[\s\S]*?```|`.*?`)|\\\[([\s\S]*?[^\\])\\\]|\\\((.*?)\\\)/g;
@@ -261,7 +244,7 @@ function tryWrapHtmlCode(text: string) {
 
 function _MarkDownContent(props: { content: string }) {
   const escapedContent = useMemo(() => {
-    return tryWrapHtmlCode(escapeBrackets(escapeDollarNumber(props.content)));
+    return tryWrapHtmlCode(escapeBrackets(props.content));
   }, [props.content]);
 
   return (
```
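For context on what the deleted helper did: `escapeDollarNumber` inserted a backslash before any `$` immediately followed by a digit, so plain dollar amounts like `$5` were not mistaken for opening inline-math delimiters. With its removal, such text now reaches `escapeBrackets` and the math renderer unescaped. A standalone reproduction of the deleted function, taken directly from the hunk above:

```typescript
// Reproduction of the escapeDollarNumber helper removed in this commit.
function escapeDollarNumber(text: string) {
  let escapedText = "";
  for (let i = 0; i < text.length; i += 1) {
    let char = text[i];
    const nextChar = text[i + 1] || " ";
    // A "$" directly followed by a digit is escaped so a markdown/math
    // pipeline does not treat it as an opening math delimiter.
    if (char === "$" && nextChar >= "0" && nextChar <= "9") {
      char = "\\$";
    }
    escapedText += char;
  }
  return escapedText;
}
```

Note the asymmetry the helper had: `$5` was escaped but `$x` was not, which is presumably why intentional inline math like `$x$` still rendered while prices did not.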

app/constant.ts

Lines changed: 0 additions & 1 deletion

```diff
@@ -11,7 +11,6 @@ export const RUNTIME_CONFIG_DOM = "danger-runtime-config";
 
 export const STABILITY_BASE_URL = "https://api.stability.ai";
 
-export const DEFAULT_API_HOST = "https://api.nextchat.dev";
 export const OPENAI_BASE_URL = "https://api.openai.com";
 export const ANTHROPIC_BASE_URL = "https://api.anthropic.com";
```