Compare commits


39 Commits

Author SHA1 Message Date
LaskyJ
42e09ad350 Merge ac7b720b5b into 48469bd8ca 2025-04-15 13:37:29 -05:00
Davidlasky
ac7b720b5b send longer chat history 2025-04-15 13:37:26 -05:00
Davidlasky
d02f9b0dd4 added gpt-4.1 and removed gpt-4.5-preview 2025-04-15 12:52:30 -05:00
Davidlasky
f2a5af7556 ignore local Dockerfile 2025-04-01 17:53:56 -05:00
Davidlasky
ff196f22c2 local debug 2025-04-01 17:48:31 -05:00
Davidlasky
106db97f8c enable o3-mini-high, optimize timeout for thinking models 2025-04-01 17:47:16 -05:00
Davidlasky
e30d90714b remove legacy models and support new models 2025-04-01 17:45:33 -05:00
Davidlasky
2329d59c83 defaults o3-mini to high 2025-04-01 13:33:38 -05:00
Davidlasky
b5ee4c1fcf make timeout longer 2025-03-29 02:28:00 -05:00
LaskyJ
6d69494e08 Update utils.ts 2025-03-27 12:52:25 -05:00
Davidlasky
2509495cdc try to add o1 as a vision model 2025-03-25 18:01:23 -05:00
LaskyJ
d65aca6d13 Update constant.ts 2025-03-25 15:40:36 -05:00
LaskyJ
2f5184c5b4 Update constant.ts 2025-03-25 15:27:26 -05:00
GH Action - Upstream Sync
20df2eed07 Merge branch 'main' of https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web 2025-03-21 01:00:47 +00:00
RiverRay
48469bd8ca Merge pull request #6392 from ChatGPTNextWeb/Leizhenpeng-patch-6
Update README.md
2025-03-20 17:52:02 +08:00
RiverRay
5a5e887f2b Update README.md 2025-03-20 17:51:47 +08:00
LaskyJ
fd998de148 Merge branch 'ChatGPTNextWeb:main' into main 2025-03-14 18:47:04 -05:00
RiverRay
b6f5d75656 Merge pull request #6344 from vangie/fix/jest-setup-esm
test: fix unit test failures
2025-03-14 20:04:56 +08:00
Vangie Du
0d41a17ef6 test: fix unit test failures 2025-03-07 14:49:17 +08:00
GH Action - Upstream Sync
fd2e69d1c7 Merge branch 'main' of https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web 2025-03-02 01:01:58 +00:00
RiverRay
f7cde17919 Merge pull request #6292 from Little-LittleProgrammer/feature/alibaba-omni-support
feat(alibaba): Added alibaba vision model and omni model support
2025-03-01 10:25:16 +08:00
RiverRay
570cbb34b6 Merge pull request #6310 from agi-dude/patch-1
Remove duplicate links
2025-03-01 10:24:38 +08:00
GH Action - Upstream Sync
e8dcede878 Merge branch 'main' of https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web 2025-03-01 01:02:26 +00:00
RiverRay
7aa9ae0a3e Merge pull request #6311 from ChatGPTNextWeb/6305-bugthe-first-message-except-the-system-message-of-deepseek-reasoner-must-be-a-user-message-but-an-assistant-message-detected
fix: enforce that the first message (excluding system messages) is a …
2025-02-28 19:48:09 +08:00
Kadxy
2d4180f5be fix: update request payload to use filtered messages in Deepseek API 2025-02-28 13:59:30 +08:00
Kadxy
9f0182b55e fix: enforce that the first message (excluding system messages) is a user message in the Deepseek API 2025-02-28 13:54:58 +08:00
Mr. AGI
ad6666eeaf Update README.md 2025-02-28 10:47:52 +05:00
GH Action - Upstream Sync
3b23f5f8ab Merge branch 'main' of https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web 2025-02-27 00:58:12 +00:00
EvanWu
a2c4e468a0 fix(app/utils/chat.ts): fix type error 2025-02-26 19:58:32 +08:00
RiverRay
2167076652 Merge pull request #6293 from hyiip/main
claude 3.7 support
2025-02-26 18:41:28 +08:00
RiverRay
e123076250 Merge pull request #6295 from rexkyng/patch-1
Fix: Improve Mistral icon detection and remove redundant code.
2025-02-26 18:39:59 +08:00
LaskyJ
75cdd15bc2 Update constant.ts 2025-02-25 21:04:07 -06:00
LaskyJ
af1dfd2a6c Update constant.ts 2025-02-25 20:52:01 -06:00
LaskyJ
6aecdd80e9 Update constant.ts 2025-02-25 20:50:10 -06:00
Rex Ng
ebcb4db245 Fix: Improve Mistral icon detection and remove redundant code.
- Added "codestral" to the list of acceptable names for the Mistral icon, ensuring proper detection.
- Removed duplicate `toLowerCase()` calls.
2025-02-25 14:30:18 +08:00
EvanWu
0a25a1a8cb refactor(app/utils/chat.ts): optimize function preProcessImageContentBase 2025-02-25 09:22:47 +08:00
hyiip
f3154b20a5 claude 3.7 support 2025-02-25 03:55:24 +08:00
EvanWu
b709ee3983 feat(alibaba): Added alibaba vision model and omni model support 2025-02-24 20:18:07 +08:00
RiverRay
f5f3ce94f6 Update README.md
2025-02-21 08:56:43 +08:00
20 changed files with 18892 additions and 1991 deletions

.gitignore

@@ -49,3 +49,4 @@ masks.json
 # mcp config
 app/mcp/mcp_config.json
+Dockerfile.local

README.md

@@ -7,7 +7,7 @@
-<h1 align="center">NextChat (ChatGPT Next Web)</h1>
+<h1 align="center">NextChat</h1>
 English / [简体中文](./README_CN.md)
@@ -22,7 +22,6 @@ English / [简体中文](./README_CN.md)
 [![MacOS][MacOS-image]][download-url]
 [![Linux][Linux-image]][download-url]
-[NextChatAI](https://nextchat.dev/chat?utm_source=readme) / [Web App Demo](https://app.nextchat.dev) / [Desktop App](https://github.com/Yidadaa/ChatGPT-Next-Web/releases)
+[NextChatAI](https://nextchat.club?utm_source=readme) / [Web App Demo](https://app.nextchat.dev) / [Desktop App](https://github.com/Yidadaa/ChatGPT-Next-Web/releases) / [Discord](https://discord.gg/YCkeafCafC) / [Enterprise Edition](#enterprise-edition) / [Twitter](https://twitter.com/NextChatDev)
@@ -41,24 +40,6 @@ English / [简体中文](./README_CN.md)
 </div>
-## 👋 Hey, NextChat is going to develop a native app!
-> This week we are going to start working on iOS and Android APP, and we want to find some reliable friends to do it together!
-✨ Several key points:
-- Starting from 0, you are a veteran
-- Completely open source, not hidden
-- Native development, pursuing the ultimate experience
-Will you come and do something together? 😎
-https://github.com/ChatGPTNextWeb/NextChat/issues/6269
-#Seeking for talents is thirsty #lack of people"
 ## 🥳 Cheer for DeepSeek, China's AI star!
 > Purpose-Built UI for DeepSeek Reasoner Model

app/client/api.ts

@@ -40,6 +40,11 @@ export interface MultimodalContent {
   };
 }

+export interface MultimodalContentForAlibaba {
+  text?: string;
+  image?: string;
+}
+
 export interface RequestMessage {
   role: MessageRole;
   content: string | MultimodalContent[];

app/client/platforms/alibaba.ts

@@ -7,7 +7,10 @@ import {
   ChatMessageTool,
   usePluginStore,
 } from "@/app/store";
-import { streamWithThink } from "@/app/utils/chat";
+import {
+  preProcessImageContentForAlibabaDashScope,
+  streamWithThink,
+} from "@/app/utils/chat";
 import {
   ChatOptions,
   getHeaders,
@@ -15,12 +18,14 @@ import {
   LLMModel,
   SpeechOptions,
   MultimodalContent,
+  MultimodalContentForAlibaba,
 } from "../api";
 import { getClientConfig } from "@/app/config/client";
 import {
   getMessageTextContent,
   getMessageTextContentWithoutThinking,
   getTimeoutMSByModel,
+  isVisionModel,
 } from "@/app/utils";
 import { fetch } from "@/app/utils/stream";
@@ -89,14 +94,6 @@ export class QwenApi implements LLMApi {
   }

   async chat(options: ChatOptions) {
-    const messages = options.messages.map((v) => ({
-      role: v.role,
-      content:
-        v.role === "assistant"
-          ? getMessageTextContentWithoutThinking(v)
-          : getMessageTextContent(v),
-    }));
     const modelConfig = {
       ...useAppConfig.getState().modelConfig,
       ...useChatStore.getState().currentSession().mask.modelConfig,
@@ -105,6 +102,21 @@ export class QwenApi implements LLMApi {
       },
     };

+    const visionModel = isVisionModel(options.config.model);
+    const messages: ChatOptions["messages"] = [];
+    for (const v of options.messages) {
+      const content = (
+        visionModel
+          ? await preProcessImageContentForAlibabaDashScope(v.content)
+          : v.role === "assistant"
+            ? getMessageTextContentWithoutThinking(v)
+            : getMessageTextContent(v)
+      ) as any;
+      messages.push({ role: v.role, content });
+    }
+
     const shouldStream = !!options.config.stream;
     const requestPayload: RequestPayload = {
       model: modelConfig.model,
@@ -129,7 +141,7 @@
       "X-DashScope-SSE": shouldStream ? "enable" : "disable",
     };

-    const chatPath = this.path(Alibaba.ChatPath);
+    const chatPath = this.path(Alibaba.ChatPath(modelConfig.model));
     const chatPayload = {
       method: "POST",
       body: JSON.stringify(requestPayload),
@@ -162,7 +174,7 @@
         const json = JSON.parse(text);
         const choices = json.output.choices as Array<{
           message: {
-            content: string | null;
+            content: string | null | MultimodalContentForAlibaba[];
             tool_calls: ChatMessageTool[];
             reasoning_content: string | null;
           };
@@ -212,7 +224,9 @@
         } else if (content && content.length > 0) {
           return {
             isThinking: false,
-            content: content,
+            content: Array.isArray(content)
+              ? content.map((item) => item.text).join(",")
+              : content,
           };
         }
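
Taken together, these changes mean an outgoing Qwen message can now carry DashScope-style multimodal parts. A rough sketch of the shapes involved (sample data and the QwenMessage name are illustrative; only MultimodalContentForAlibaba comes from the diff):

// Shapes produced by the new message-building loop.
type QwenMessage = {
  role: "system" | "user" | "assistant";
  content: string | MultimodalContentForAlibaba[];
};

// Vision model: content becomes DashScope-style parts after
// preProcessImageContentForAlibabaDashScope.
const visionMessage: QwenMessage = {
  role: "user",
  content: [
    { text: "What is in this picture?" },
    { image: "data:image/png;base64,..." }, // cached image converted to a data URI
  ],
};

// Text model: assistant turns are reduced to plain text, with thinking
// sections stripped by getMessageTextContentWithoutThinking.
const textMessage: QwenMessage = {
  role: "assistant",
  content: "A cat sitting on a keyboard.",
};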

app/client/platforms/anthropic.ts

@@ -71,8 +71,6 @@ const ClaudeMapper = {
   system: "user",
 } as const;

-const keys = ["claude-2, claude-instant-1"];
-
 export class ClaudeApi implements LLMApi {
   speech(options: SpeechOptions): Promise<ArrayBuffer> {
     throw new Error("Method not implemented.");

app/client/platforms/deepseek.ts

@@ -75,6 +75,25 @@ export class DeepSeekApi implements LLMApi {
       }
     }

+    // Detect and fix the message order: ensure the first non-system message is a user message
+    const filteredMessages: ChatOptions["messages"] = [];
+    let hasFoundFirstUser = false;
+
+    for (const msg of messages) {
+      if (msg.role === "system") {
+        // Keep all system messages
+        filteredMessages.push(msg);
+      } else if (msg.role === "user") {
+        // User message directly added
+        filteredMessages.push(msg);
+        hasFoundFirstUser = true;
+      } else if (hasFoundFirstUser) {
+        // After finding the first user message, all subsequent non-system messages are retained
+        filteredMessages.push(msg);
+      }
+      // If hasFoundFirstUser is false and it is not a system message, it is skipped
+    }
+
     const modelConfig = {
       ...useAppConfig.getState().modelConfig,
       ...useChatStore.getState().currentSession().mask.modelConfig,
@@ -85,7 +104,7 @@ export class DeepSeekApi implements LLMApi {
     };

     const requestPayload: RequestPayload = {
-      messages,
+      messages: filteredMessages,
       stream: options.config.stream,
       model: modelConfig.model,
       temperature: modelConfig.temperature,
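
The reordering rule reads well as a standalone helper; a minimal sketch under assumed types (fixMessageOrder and Message are hypothetical names, not part of the diff):

type Role = "system" | "user" | "assistant";
interface Message { role: Role; content: string }

// Drop leading assistant turns so the first non-system message is from the
// user, as the DeepSeek API requires; everything after the first user turn
// is kept as-is.
function fixMessageOrder(messages: Message[]): Message[] {
  const filtered: Message[] = [];
  let hasFoundFirstUser = false;
  for (const msg of messages) {
    if (msg.role === "system") {
      filtered.push(msg);
    } else if (msg.role === "user") {
      filtered.push(msg);
      hasFoundFirstUser = true;
    } else if (hasFoundFirstUser) {
      filtered.push(msg);
    }
    // non-system messages before the first user turn are skipped
  }
  return filtered;
}

// Example: the leading assistant greeting is dropped.
console.log(
  fixMessageOrder([
    { role: "system", content: "You are helpful." },
    { role: "assistant", content: "Hi, how can I help?" }, // skipped
    { role: "user", content: "Summarize this article." },
    { role: "assistant", content: "Sure." }, // kept
  ]),
);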

app/client/platforms/google.ts

@@ -197,8 +197,6 @@ export class GeminiProApi implements LLMApi {
       signal: controller.signal,
       headers: getHeaders(),
     };

-    const isThinking = options.config.model.includes("-thinking");
-
     // make a fetch request
     const requestTimeoutId = setTimeout(
       () => controller.abort(),

app/client/platforms/openai.ts

@@ -67,6 +67,8 @@ export interface RequestPayload {
   top_p: number;
   max_tokens?: number;
   max_completion_tokens?: number;
+  reasoning_effort?: string;
+  // O3 only
 }

 export interface DalleRequestPayload {
@@ -196,9 +198,9 @@ export class ChatGPTApi implements LLMApi {
     let requestPayload: RequestPayload | DalleRequestPayload;

     const isDalle3 = _isDalle3(options.config.model);
-    const isO1OrO3 =
-      options.config.model.startsWith("o1") ||
-      options.config.model.startsWith("o3");
+    const isO1 = options.config.model.startsWith("o1");
+    const isO3 = options.config.model.startsWith("o3");
+    const isO1OrO3 = isO1 || isO3;
     if (isDalle3) {
       const prompt = getMessageTextContent(
         options.messages.slice(-1)?.pop() as any,
@@ -242,9 +244,18 @@
         requestPayload["max_completion_tokens"] = modelConfig.max_tokens;
       }

+      if (isO3) {
+        requestPayload["reasoning_effort"] = "high";
+        // make o3-mini defaults to high reasoning effort
+      }
+
       // add max_tokens to vision model
       if (visionModel) {
-        requestPayload["max_tokens"] = Math.max(modelConfig.max_tokens, 4000);
+        if (isO1) {
+          requestPayload["max_completion_tokens"] = modelConfig.max_tokens;
+        } else {
+          requestPayload["max_tokens"] = Math.max(modelConfig.max_tokens, 4000);
+        }
       }
     }
@@ -286,6 +297,11 @@
         isDalle3 ? OpenaiPath.ImagePath : OpenaiPath.ChatPath,
       );
     }

+    // make a fetch request
+    const requestTimeoutId = setTimeout(
+      () => controller.abort(),
+      getTimeoutMSByModel(options.config.model),
+    );

     if (shouldStream) {
       let index = -1;
       const [tools, funcs] = usePluginStore
@@ -393,12 +409,6 @@
       headers: getHeaders(),
     };

-    // make a fetch request
-    const requestTimeoutId = setTimeout(
-      () => controller.abort(),
-      getTimeoutMSByModel(options.config.model),
-    );
-
     const res = await fetch(chatPath, chatPayload);
     clearTimeout(requestTimeoutId);
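
A condensed sketch of the payload rules these hunks introduce (shapePayload and Payload are hypothetical names; the real logic lives inside ChatGPTApi.chat):

interface Payload {
  model: string;
  max_tokens?: number;
  max_completion_tokens?: number;
  reasoning_effort?: string; // O3 only
}

function shapePayload(
  model: string,
  maxTokens: number,
  visionModel: boolean,
): Payload {
  const payload: Payload = { model };
  const isO1 = model.startsWith("o1");
  const isO3 = model.startsWith("o3");
  if (isO1 || isO3) {
    // reasoning models take max_completion_tokens instead of max_tokens
    payload.max_completion_tokens = maxTokens;
  }
  if (isO3) {
    // o3-mini defaults to high reasoning effort
    payload.reasoning_effort = "high";
  }
  if (visionModel) {
    if (isO1) {
      payload.max_completion_tokens = maxTokens;
    } else {
      payload.max_tokens = Math.max(maxTokens, 4000);
    }
  }
  return payload;
}

console.log(shapePayload("o3-mini", 4000, false));
// { model: "o3-mini", max_completion_tokens: 4000, reasoning_effort: "high" }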

app/components/emoji.tsx

@@ -66,11 +66,11 @@ export function Avatar(props: { model?: ModelType; avatar?: string }) {
       LlmIcon = BotIconGemma;
     } else if (modelName.startsWith("claude")) {
       LlmIcon = BotIconClaude;
-    } else if (modelName.toLowerCase().includes("llama")) {
+    } else if (modelName.includes("llama")) {
       LlmIcon = BotIconMeta;
-    } else if (modelName.startsWith("mixtral")) {
+    } else if (modelName.startsWith("mixtral") || modelName.startsWith("codestral")) {
       LlmIcon = BotIconMistral;
-    } else if (modelName.toLowerCase().includes("deepseek")) {
+    } else if (modelName.includes("deepseek")) {
       LlmIcon = BotIconDeepseek;
     } else if (modelName.startsWith("moonshot")) {
       LlmIcon = BotIconMoonshot;
@@ -85,7 +85,7 @@ export function Avatar(props: { model?: ModelType; avatar?: string }) {
     } else if (modelName.startsWith("doubao") || modelName.startsWith("ep-")) {
       LlmIcon = BotIconDoubao;
     } else if (
-      modelName.toLowerCase().includes("glm") ||
+      modelName.includes("glm") ||
       modelName.startsWith("cogview-") ||
       modelName.startsWith("cogvideox-")
     ) {

app/constant.ts

@@ -221,7 +221,12 @@ export const ByteDance = {
 export const Alibaba = {
   ExampleEndpoint: ALIBABA_BASE_URL,
-  ChatPath: "v1/services/aigc/text-generation/generation",
+  ChatPath: (modelName: string) => {
+    if (modelName.includes("vl") || modelName.includes("omni")) {
+      return "v1/services/aigc/multimodal-generation/generation";
+    }
+    return `v1/services/aigc/text-generation/generation`;
+  },
 };

 export const Tencent = {
@@ -403,35 +408,20 @@ You are an AI assistant with access to system tools. Your role is to help users
 `;

-export const SUMMARIZE_MODEL = "gpt-4o-mini";
-export const GEMINI_SUMMARIZE_MODEL = "gemini-pro";
+export const SUMMARIZE_MODEL = "gpt-4.1-mini";
+export const GEMINI_SUMMARIZE_MODEL = "gemini-2.0-flash";
 export const DEEPSEEK_SUMMARIZE_MODEL = "deepseek-chat";

 export const KnowledgeCutOffDate: Record<string, string> = {
-  default: "2021-09",
-  "gpt-4-turbo": "2023-12",
-  "gpt-4-turbo-2024-04-09": "2023-12",
-  "gpt-4-turbo-preview": "2023-12",
-  "gpt-4o": "2023-10",
-  "gpt-4o-2024-05-13": "2023-10",
-  "gpt-4o-2024-08-06": "2023-10",
-  "gpt-4o-2024-11-20": "2023-10",
-  "chatgpt-4o-latest": "2023-10",
-  "gpt-4o-mini": "2023-10",
-  "gpt-4o-mini-2024-07-18": "2023-10",
-  "gpt-4-vision-preview": "2023-04",
-  "o1-mini-2024-09-12": "2023-10",
-  "o1-mini": "2023-10",
-  "o1-preview-2024-09-12": "2023-10",
-  "o1-preview": "2023-10",
-  "o1-2024-12-17": "2023-10",
-  o1: "2023-10",
-  "o3-mini-2025-01-31": "2023-10",
-  "o3-mini": "2023-10",
+  default: "2023-10",
+  // After improvements,
+  // it's now easier to add "KnowledgeCutOffDate" instead of stupid hardcoding it, as was done previously.
   "gemini-pro": "2023-12",
   "gemini-pro-vision": "2023-12",
+  "gemini-2.5-pro-exp-03-25": "2025-01",
+  "gemini-2.0-flash": "2024-08",
+  "claude-3-7-sonnet-latest": "2024-10",
+  "claude-3-5-haiku-latest": "2024-10",
+  "gpt-4.1": "2024-06",
+  "gpt-4.1-mini": "2024-06",
   "deepseek-chat": "2024-07",
   "deepseek-coder": "2024-07",
 };
@@ -452,11 +442,12 @@ export const DEFAULT_TTS_VOICES = [
 export const VISION_MODEL_REGEXES = [
   /vision/,
   /gpt-4o/,
+  /gpt-4\.1/,
   /claude-3/,
   /gemini-1\.5/,
   /gemini-exp/,
   /gemini-2\.0/,
+  /gemini-2\.5-pro/,
   /learnlm/,
   /qwen-vl/,
   /qwen2-vl/,
@@ -464,77 +455,24 @@ export const VISION_MODEL_REGEXES = [
   /^dall-e-3$/, // Matches exactly "dall-e-3"
   /glm-4v/,
   /vl/i,
+  /o1/,
 ];

 export const EXCLUDE_VISION_MODEL_REGEXES = [/claude-3-5-haiku-20241022/];

-const openaiModels = [
-  // As of July 2024, gpt-4o-mini should be used in place of gpt-3.5-turbo,
-  // as it is cheaper, more capable, multimodal, and just as fast. gpt-3.5-turbo is still available for use in the API.
-  "gpt-3.5-turbo",
-  "gpt-3.5-turbo-1106",
-  "gpt-3.5-turbo-0125",
-  "gpt-4",
-  "gpt-4-0613",
-  "gpt-4-32k",
-  "gpt-4-32k-0613",
-  "gpt-4-turbo",
-  "gpt-4-turbo-preview",
-  "gpt-4o",
-  "gpt-4o-2024-05-13",
-  "gpt-4o-2024-08-06",
-  "gpt-4o-2024-11-20",
-  "chatgpt-4o-latest",
-  "gpt-4o-mini",
-  "gpt-4o-mini-2024-07-18",
-  "gpt-4-vision-preview",
-  "gpt-4-turbo-2024-04-09",
-  "gpt-4-1106-preview",
-  "dall-e-3",
-  "o1-mini",
-  "o1-preview",
-  "o3-mini",
-];
+const openaiModels = ["dall-e-3", "o1", "o3-mini", "gpt-4.1", "gpt-4.1-mini"];

 const googleModels = [
-  "gemini-1.0-pro", // Deprecated on 2/15/2025
-  "gemini-1.5-pro-latest",
-  "gemini-1.5-pro",
-  "gemini-1.5-pro-002",
-  "gemini-1.5-pro-exp-0827",
-  "gemini-1.5-flash-latest",
-  "gemini-1.5-flash-8b-latest",
-  "gemini-1.5-flash",
-  "gemini-1.5-flash-8b",
-  "gemini-1.5-flash-002",
-  "gemini-1.5-flash-exp-0827",
-  "learnlm-1.5-pro-experimental",
-  "gemini-exp-1114",
-  "gemini-exp-1121",
-  "gemini-exp-1206",
   "gemini-2.0-flash",
-  "gemini-2.0-flash-exp",
-  "gemini-2.0-flash-lite-preview-02-05",
-  "gemini-2.0-flash-thinking-exp",
-  "gemini-2.0-flash-thinking-exp-1219",
-  "gemini-2.0-flash-thinking-exp-01-21",
-  "gemini-2.0-pro-exp",
-  "gemini-2.0-pro-exp-02-05",
+  "gemini-2.0-flash-lite",
+  "gemini-2.5-pro-exp-03-25",
 ];

 const anthropicModels = [
-  "claude-instant-1.2",
-  "claude-2.0",
-  "claude-2.1",
-  "claude-3-sonnet-20240229",
-  "claude-3-opus-20240229",
-  "claude-3-opus-latest",
-  "claude-3-haiku-20240307",
-  "claude-3-5-haiku-20241022",
-  "claude-3-5-haiku-latest",
-  "claude-3-5-sonnet-20240620",
-  "claude-3-5-sonnet-20241022",
-  "claude-3-5-sonnet-latest",
   "claude-3-7-sonnet-latest",
 ];

 const baiduModels = [
@@ -568,6 +506,9 @@ const alibabaModes = [
   "qwen-max-0403",
   "qwen-max-0107",
   "qwen-max-longcontext",
+  "qwen-omni-turbo",
+  "qwen-vl-plus",
+  "qwen-vl-max",
 ];

 const tencentModels = [
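
The new Alibaba.ChatPath is easy to sanity-check in isolation. Expected routing, using model names from the updated alibabaModes list (the standalone chatPath constant below mirrors the diff; it is not itself in the repo):

const chatPath = (modelName: string) =>
  modelName.includes("vl") || modelName.includes("omni")
    ? "v1/services/aigc/multimodal-generation/generation"
    : "v1/services/aigc/text-generation/generation";

console.log(chatPath("qwen-vl-max"));     // multimodal-generation
console.log(chatPath("qwen-omni-turbo")); // multimodal-generation
console.log(chatPath("qwen-max"));        // text-generation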

app/store/config.ts

@@ -66,14 +66,14 @@ export const DEFAULT_CONFIG = {
   modelConfig: {
     model: "gpt-4o-mini" as ModelType,
     providerName: "OpenAI" as ServiceProvider,
-    temperature: 0.5,
+    temperature: 0.2,
     top_p: 1,
     max_tokens: 4000,
     presence_penalty: 0,
     frequency_penalty: 0,
     sendMemory: true,
-    historyMessageCount: 4,
-    compressMessageLengthThreshold: 1000,
+    historyMessageCount: 20,
+    compressMessageLengthThreshold: 5000,
     compressModel: "",
     compressProviderName: "",
     enableInjectSystemPrompts: true,

app/utils.ts

@@ -304,9 +304,18 @@ export function getTimeoutMSByModel(model: string) {
     model.startsWith("o1") ||
     model.startsWith("o3") ||
     model.includes("deepseek-r") ||
-    model.includes("-thinking")
-  )
+    model.includes("-thinking") ||
+    model.includes("pro")
+  ) {
+    console.log(
+      "thinking model is " +
+        model +
+        " timeout is " +
+        REQUEST_TIMEOUT_MS_FOR_THINKING,
+    );
     return REQUEST_TIMEOUT_MS_FOR_THINKING;
+  }
+
+  console.log("normal model is " + model + " timeout is " + REQUEST_TIMEOUT_MS);
   return REQUEST_TIMEOUT_MS;
 }
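
One side effect worth noting: includes("pro") is broad, so any model name containing "pro" now gets the long thinking-model timeout. Illustrative calls against the function above (model names are examples):

getTimeoutMSByModel("o3-mini");                  // REQUEST_TIMEOUT_MS_FOR_THINKING
getTimeoutMSByModel("deepseek-reasoner");        // matches "deepseek-r", long timeout
getTimeoutMSByModel("gemini-2.5-pro-exp-03-25"); // matches "pro", long timeout
getTimeoutMSByModel("gpt-4.1-mini");             // normal REQUEST_TIMEOUT_MS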

app/utils/chat.ts

@@ -3,7 +3,7 @@ import {
   UPLOAD_URL,
   REQUEST_TIMEOUT_MS,
 } from "@/app/constant";
-import { RequestMessage } from "@/app/client/api";
+import { MultimodalContent, RequestMessage } from "@/app/client/api";
 import Locale from "@/app/locales";
 import {
   EventStreamContentType,
@@ -70,8 +70,9 @@ export function compressImage(file: Blob, maxSize: number): Promise<string> {
   });
 }

-export async function preProcessImageContent(
+export async function preProcessImageContentBase(
   content: RequestMessage["content"],
+  transformImageUrl: (url: string) => Promise<{ [key: string]: any }>,
 ) {
   if (typeof content === "string") {
     return content;
@@ -81,7 +82,7 @@
     if (part?.type == "image_url" && part?.image_url?.url) {
       try {
         const url = await cacheImageToBase64Image(part?.image_url?.url);
-        result.push({ type: part.type, image_url: { url } });
+        result.push(await transformImageUrl(url));
       } catch (error) {
         console.error("Error processing image URL:", error);
       }
@@ -92,6 +93,23 @@
   return result;
 }

+export async function preProcessImageContent(
+  content: RequestMessage["content"],
+) {
+  return preProcessImageContentBase(content, async (url) => ({
+    type: "image_url",
+    image_url: { url },
+  })) as Promise<MultimodalContent[] | string>;
+}
+
+export async function preProcessImageContentForAlibabaDashScope(
+  content: RequestMessage["content"],
+) {
+  return preProcessImageContentBase(content, async (url) => ({
+    image: url,
+  }));
+}
+
 const imageCaches: Record<string, string> = {};

 export function cacheImageToBase64Image(imageUrl: string) {
   if (imageUrl.includes(CACHE_URL_PREFIX)) {
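
The two wrappers differ only in the image-part shape they hand to preProcessImageContentBase. A hypothetical before/after (URLs and data URIs are placeholders; non-image parts pass through the base function unchanged):

const content = [
  { type: "text", text: "Describe this image" },
  { type: "image_url", image_url: { url: "https://example.com/cat.png" } },
];

// OpenAI-compatible providers keep the { type: "image_url", image_url: { url } } shape:
await preProcessImageContent(content);
// -> [{ type: "text", ... }, { type: "image_url", image_url: { url: "data:image/png;base64,..." } }]

// Alibaba DashScope gets bare { image } parts instead:
await preProcessImageContentForAlibabaDashScope(content);
// -> [{ type: "text", ... }, { image: "data:image/png;base64,..." }]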

jest.config.ts

@@ -15,6 +15,8 @@ const config: Config = {
   moduleNameMapper: {
     "^@/(.*)$": "<rootDir>/$1",
   },
+  extensionsToTreatAsEsm: [".ts", ".tsx"],
+  injectGlobals: true,
 };

 // createJestConfig is exported this way to ensure that next/jest can load the Next.js config which is async

jest.setup.ts

@@ -1,24 +1,22 @@
 // Learn more: https://github.com/testing-library/jest-dom
 import "@testing-library/jest-dom";
+import { jest } from "@jest/globals";

 global.fetch = jest.fn(() =>
   Promise.resolve({
     ok: true,
     status: 200,
-    json: () => Promise.resolve({}),
+    json: () => Promise.resolve([]),
     headers: new Headers(),
     redirected: false,
     statusText: "OK",
     type: "basic",
     url: "",
-    clone: function () {
-      return this;
-    },
     body: null,
     bodyUsed: false,
     arrayBuffer: () => Promise.resolve(new ArrayBuffer(0)),
     blob: () => Promise.resolve(new Blob()),
     formData: () => Promise.resolve(new FormData()),
     text: () => Promise.resolve(""),
-  }),
+  } as Response),
 );

package-lock.json (generated, 17486 lines changed): file diff suppressed because it is too large

package.json

@@ -17,8 +17,8 @@
     "prompts": "node ./scripts/fetch-prompts.mjs",
     "prepare": "husky install",
     "proxy-dev": "sh ./scripts/init-proxy.sh && proxychains -f ./scripts/proxychains.conf yarn dev",
-    "test": "jest --watch",
-    "test:ci": "jest --ci"
+    "test": "node --no-warnings --experimental-vm-modules $(yarn bin jest) --watch",
+    "test:ci": "node --no-warnings --experimental-vm-modules $(yarn bin jest) --ci"
   },
   "dependencies": {
     "@fortaine/fetch-event-source": "^3.0.6",

test/model-available.test.ts

@@ -3,7 +3,7 @@ import { isModelNotavailableInServer } from "../app/utils/model";
 describe("isModelNotavailableInServer", () => {
   test("test model will return false, which means the model is available", () => {
     const customModels = "";
-    const modelName = "gpt-4";
+    const modelName = "gpt-4.1";
     const providerNames = "OpenAI";
     const result = isModelNotavailableInServer(
       customModels,

test/vision-model-checker.test.ts

@@ -1,3 +1,4 @@
+import { jest } from "@jest/globals";
 import { isVisionModel } from "../app/utils";

 describe("isVisionModel", () => {
@@ -14,10 +15,11 @@ describe("isVisionModel", () => {
   test("should identify vision models using regex patterns", () => {
     const visionModels = [
       "gpt-4-vision",
+      "gpt-4.1",
       "claude-3-opus",
       "gemini-1.5-pro",
       "gemini-2.0",
+      "gemini-2.5-pro",
       "gemini-exp-vision",
       "learnlm-vision",
       "qwen-vl-max",
@@ -50,7 +52,7 @@ describe("isVisionModel", () => {
   test("should identify models from VISION_MODELS env var", () => {
     process.env.VISION_MODELS = "custom-vision-model,another-vision-model";
     expect(isVisionModel("custom-vision-model")).toBe(true);
     expect(isVisionModel("another-vision-model")).toBe(true);
     expect(isVisionModel("unrelated-model")).toBe(false);
@@ -64,4 +66,4 @@ describe("isVisionModel", () => {
     expect(isVisionModel("unrelated-model")).toBe(false);
     expect(isVisionModel("gpt-4-vision")).toBe(true);
   });
-});
+});

yarn.lock (3093 lines changed): file diff suppressed because it is too large