Compare commits

...

16 Commits

Author SHA1 Message Date
Mike Klimin
61f879d717 Merge 956ec96d51 into 80d7fd9b98 2025-07-14 17:56:59 +08:00
RiverRay
80d7fd9b98 Merge pull request #6562 from LePao1/main
fix: Update the regular expressions to support image upload functionality for multimodal Claude 4 and Gemini 2.5 series.
2025-07-14 13:37:52 +08:00
LePao1
e8a18d0b38 fix: Update the regular expressions to support image upload functionality for multimodal Claude 4 and Gemini 2.5 series. 2025-07-13 21:20:20 +08:00
RiverRay
0031544e14 Merge pull request #6552 from hyiip/main
Migrate to claude 4
2025-07-08 23:35:22 +08:00
RiverRay
1f33ceee8f Merge pull request #6557 from JI4JUN/feat/support-302ai-provider
Feat/support 302ai provider
2025-07-08 23:34:38 +08:00
JI4JUN
666ca734ec docs: update README 2025-07-07 18:20:04 +08:00
JI4JUN
fda2eb1fb5 docs: update README 2025-07-07 18:18:46 +08:00
JI4JUN
93f8340744 Merge branch 'main' into feat/support-302ai-provider 2025-07-07 18:17:57 +08:00
hyiip
21d39b8dd6 Migrate to claude 4 2025-07-02 22:14:32 +08:00
RiverRay
814fd2786e Merge pull request #6542 from JI4JUN/feat/support-302ai-provider
Feat/support 302ai provider
2025-06-30 20:29:47 +08:00
river
29dbffac3e fix: update section title for Sponsor AI API in README 2025-06-30 14:58:36 +08:00
river
92532b2c74 fix: update 302.AI banners in README files and standardize formatting 2025-06-30 14:57:37 +08:00
river
4f16ca1320 fix: update 302.AI banners in README files and remove old images 2025-06-30 14:54:47 +08:00
RiverRay
673f907ea4 Update README.md
2025-06-19 20:18:28 +08:00
Mihail Klimin
956ec96d51 fix(types): improve typings in stream.ts
- Replace generic Function type with specific () => void for unlisten
- Enhance typing for setRequestId from Function to ((value: number) => void)
- Address code review feedback from Biome linter
2025-03-16 12:03:49 +03:00
Mihail Klimin
8fa7c14f18 feat(tauri): Migrate from Tauri v1 to v2
# Summary
This commit completes the migration from Tauri v1 to v2, resolves configuration issues, upgrades Next.js, and adds test coverage for critical components to ensure stability during the transition.

# Details
## Tauri v2 Migration
- Updated Tauri dependencies to v2.3.0 series in package.json
- Restructured build configuration in `/app/config/build.ts` to align with Tauri v2 requirements
- Fixed imports and API usage patterns across the codebase
- Added compatibility layer for window.__TAURI__ references to maintain backward compatibility

## Next.js Issues
- Upgraded Next.js from 14.1.1 to 14.2.24
- Resolved caching problems with Server Actions
- Updated eslint-config-next to match the new version
- Cleared Next.js cache and temporary files to address build issues

## Testing & Stability
- Added comprehensive tests for `stream.ts` to verify streaming functionality
- Created mocks for Tauri API to support test environment
- Verified that critical functionality continues to work correctly
- Translated all comments to English for consistency

## Infrastructure
- Fixed peer dependency warnings during installation
- Ensured proper integration with Tauri v2 plugins (clipboard-manager, dialog, fs, http, notification, shell, updater, window-state)

# Approach
Prioritized stability by:
1. Making minimal necessary changes to configuration files
2. Preserving most `window.__TAURI__` calls as they still function in v2
3. Planning gradual migration to new APIs with test coverage for critical components
4. Documenting areas that will require future attention

# Testing
- Created unit tests for critical streaming functionality
- Performed manual testing of key application features
- Verified successful build and launch with Tauri v2

# Future Work
- Future PRs will gradually replace deprecated Tauri v1 API calls with v2 equivalents
- Additional test coverage will be added for other critical components
2025-03-16 02:14:47 +03:00
28 changed files with 16072 additions and 1964 deletions

View File

@@ -4,21 +4,13 @@
<img src="https://github.com/user-attachments/assets/83bdcc07-ae5e-4954-a53a-ac151ba6ccf3" width="1000" alt="icon"/>
</a>
<a href='https://302.ai/'>
<img src="./docs/images/302AI-banner-en.jpg" width=400 alt="icon"/>
</a>
[302.AI](https://302.ai/) is a pay-as-you-go AI application platform that offers the most comprehensive AI APIs and online applications available.
<h1 align="center">NextChat</h1>
English / [简体中文](./README_CN.md)
<a href="https://trendshift.io/repositories/5973" target="_blank"><img src="https://trendshift.io/api/badge/repositories/5973" alt="ChatGPTNextWeb%2FChatGPT-Next-Web | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
✨ Light and Fast AI Assistant, with Claude, DeepSeek, GPT4 & Gemini Pro support.
[![Saas][Saas-image]][saas-url]
[![Web][Web-image]][web-url]
@@ -26,8 +18,7 @@ English / [简体中文](./README_CN.md)
[![MacOS][MacOS-image]][download-url]
[![Linux][Linux-image]][download-url]
[NextChatAI](https://nextchat.club?utm_source=readme) / [iOS APP](https://apps.apple.com/us/app/nextchat-ai/id6743085599) / [Web App Demo](https://app.nextchat.club) / [Desktop App](https://github.com/Yidadaa/ChatGPT-Next-Web/releases) / [Enterprise Edition](#enterprise-edition)
[saas-url]: https://nextchat.club?utm_source=readme
[saas-image]: https://img.shields.io/badge/NextChat-Saas-green?logo=microsoftedge
@@ -38,29 +29,38 @@ English / [简体中文](./README_CN.md)
[MacOS-image]: https://img.shields.io/badge/-MacOS-black?logo=apple
[Linux-image]: https://img.shields.io/badge/-Linux-333?logo=ubuntu
[<img src="https://zeabur.com/button.svg" alt="Deploy on Zeabur" height="30">](https://zeabur.com/templates/ZBUEFA) [<img src="https://vercel.com/button" alt="Deploy on Vercel" height="30">](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FChatGPTNextWeb%2FChatGPT-Next-Web&env=OPENAI_API_KEY&env=CODE&project-name=nextchat&repository-name=NextChat) [<img src="https://gitpod.io/button/open-in-gitpod.svg" alt="Open in Gitpod" height="30">](https://gitpod.io/#https://github.com/ChatGPTNextWeb/NextChat)
[<img src="https://github.com/user-attachments/assets/903482d4-3e87-4134-9af1-f2588fa90659" height="50" width="" >](https://monica.im/?utm=nxcrp)
</div>
## ❤️ Sponsor AI API
<a href='https://302.ai/'>
<img src="https://github.com/user-attachments/assets/a03edf82-2031-4f23-bdb8-bfc0bfd168a4" width="100%" alt="icon"/>
</a>
[302.AI](https://302.ai/) is a pay-as-you-go AI application platform that offers the most comprehensive AI APIs and online applications available.
## 🥳 Cheer for NextChat iOS Version Online!
> [👉 Click Here to Install Now](https://apps.apple.com/us/app/nextchat-ai/id6743085599)
> [❤️ Source Code Coming Soon](https://github.com/ChatGPTNextWeb/NextChat-iOS)
![Github iOS Image](https://github.com/user-attachments/assets/e0aa334f-4c13-4dc9-8310-e3b09fa4b9f3)
## 🫣 NextChat Support MCP !
> Before build, please set env ENABLE_MCP=true
<img src="https://github.com/user-attachments/assets/d8851f40-4e36-4335-b1a4-ec1e11488c7e"/>
## Enterprise Edition
Meeting Your Company's Privatization and Customization Deployment Requirements:
- **Brand Customization**: Tailored VI/UI to seamlessly align with your corporate brand image.
- **Resource Integration**: Unified configuration and management of dozens of AI resources by company administrators, ready for use by team members.
- **Permission Control**: Clearly defined member permissions, resource permissions, and knowledge base permissions, all controlled via a corporate-grade Admin Panel.
@@ -77,7 +77,6 @@ For enterprise inquiries, please contact: **business@nextchat.dev**
![More](./docs/images/more.png)
## Features
- **Deploy for free with one-click** on Vercel in under 1 minute
@@ -113,10 +112,11 @@ For enterprise inquiries, please contact: **business@nextchat.dev**
- [ ] local knowledge base
## What's New
- 🚀 v2.15.8 Now supports Realtime Chat [#5672](https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web/issues/5672)
- 🚀 v2.15.4 The Application supports using Tauri fetch LLM API, MORE SECURITY! [#5379](https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web/issues/5379)
- 🚀 v2.15.0 Now supports Plugins! Read this: [NextChat-Awesome-Plugins](https://github.com/ChatGPTNextWeb/NextChat-Awesome-Plugins)
- 🚀 v2.14.0 Now supports Artifacts & SD
- 🚀 v2.10.1 support Google Gemini Pro model.
- 🚀 v2.9.11 you can use azure endpoint now.
- 🚀 v2.8 now we have a client that runs across all platforms!
@@ -316,10 +316,12 @@ To control custom models, use `+` to add a custom model, use `-` to hide a model
Use `-all` to disable all default models, `+all` to enable all default models.
For Azure: use `modelName@Azure=deploymentName` to customize model name and deployment name.
> Example: `+gpt-3.5-turbo@Azure=gpt35` will show option `gpt35(Azure)` in model list.
> If you can only use Azure models, `-all,+gpt-3.5-turbo@Azure=gpt35` will make `gpt35(Azure)` the only option in the model list.
For ByteDance: use `modelName@bytedance=deploymentName` to customize model name and deployment name.
> Example: `+Doubao-lite-4k@bytedance=ep-xxxxx-xxx` will show option `Doubao-lite-4k(ByteDance)` in model list.
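As an illustration, the `+`/`-`/`@provider=deployment` grammar described above can be parsed with a small helper. This is a sketch only: `parseCustomModels` and its return shape are invented for this example, and the value after `=` is treated uniformly as a deployment name (the README also allows it as a display name), which simplifies the real behavior.

```typescript
// Sketch of how a CUSTOM_MODELS value such as
// "-all,+gpt-3.5-turbo@Azure=gpt35" can be interpreted.
// parseCustomModels is a hypothetical helper, not NextChat's actual parser.
type ModelRule = {
  action: "add" | "hide";
  name: string;          // model name, or "all"
  provider?: string;     // e.g. "Azure", "bytedance"
  deployment?: string;   // value after "=" (deployment name, simplified)
};

function parseCustomModels(value: string): ModelRule[] {
  return value
    .split(",")
    .map((s) => s.trim())
    .filter(Boolean)
    .map((entry) => {
      // a leading "-" hides a model; anything else (incl. leading "+") adds one
      const action: "add" | "hide" = entry.startsWith("-") ? "hide" : "add";
      const body = entry.replace(/^[+-]/, "");
      const [nameAndProvider, deployment] = body.split("=");
      const [name, provider] = nameAndProvider.split("@");
      return { action, name, provider, deployment };
    });
}

console.log(parseCustomModels("-all,+gpt-3.5-turbo@Azure=gpt35"));
// hides all defaults, then adds gpt-3.5-turbo as Azure deployment "gpt35"
```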
### `DEFAULT_MODEL` optional
@@ -336,8 +338,9 @@ Add additional models to have vision capabilities, beyond the default pattern ma
### `WHITE_WEBDAV_ENDPOINTS` (optional)
You can use this option if you want to increase the number of webdav service addresses you are allowed to access, as required by the format
- Each address must be a complete endpoint
> `https://xxxx/yyy`
- Multiple addresses are connected by ', '
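The whitelist format above (complete endpoints, comma-separated) can be checked with a few lines. This is a hypothetical helper for illustration, not the project's actual WebDAV proxy code:

```typescript
// Illustrative check for a WHITE_WEBDAV_ENDPOINTS-style whitelist:
// each entry must be a complete endpoint, entries are comma-separated,
// and a request URL is allowed only if it starts with a listed endpoint.
function isAllowedWebDavEndpoint(
  url: string,
  whiteList: string | undefined,
): boolean {
  const allowed = (whiteList ?? "")
    .split(",")
    .map((s) => s.trim())
    .filter(Boolean);
  return allowed.some((endpoint) => url.startsWith(endpoint));
}

console.log(
  isAllowedWebDavEndpoint(
    "https://dav.example.com/sync/file.json",
    "https://dav.example.com/sync",
  ),
); // → true
```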
### `DEFAULT_INPUT_TEMPLATE` (optional)
@@ -352,7 +355,6 @@ Stability API key.
Customize Stability API url.
### `ENABLE_MCP` (optional)
Enable MCP (Model Context Protocol) feature
@@ -365,13 +367,20 @@ SiliconFlow API Key.
SiliconFlow API URL.
### `AI302_API_KEY` (optional)
302.AI API Key.
### `AI302_URL` (optional)
302.AI API URL.
## Requirements
NodeJS >= 18, Docker >= 20
## Development
[![Open in Gitpod](https://gitpod.io/button/open-in-gitpod.svg)](https://gitpod.io/#https://github.com/Yidadaa/ChatGPT-Next-Web)
Before starting development, you must create a new `.env.local` file at project root, and place your api key into it:
@@ -395,7 +404,6 @@ yarn dev
## Deployment
### Docker (Recommended)
```shell
@@ -453,8 +461,6 @@ bash <(curl -s https://raw.githubusercontent.com/Yidadaa/ChatGPT-Next-Web/main/s
- [How to use Vercel (No English)](./docs/vercel-cn.md)
- [User Manual (Only Chinese, WIP)](./docs/user-manual-cn.md)
## Translation
If you want to add a new translation, read this [document](./docs/translation.md).
@@ -465,8 +471,6 @@ If you want to add a new translation, read this [document](./docs/translation.md
## Special Thanks
### Contributors
<a href="https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web/graphs/contributors">

View File

@@ -4,25 +4,28 @@
<img src="./docs/images/ent.svg" alt="icon"/>
</a>
<a href='https://302.ai/'>
<img src="./docs/images/302AI-banner-zh.jpg" width=400 alt="icon"/>
</a>
[302.AI](https://302.ai/) 是一个按需付费的AI应用平台提供市面上最全的AI API和AI在线应用。
<h1 align="center">NextChat</h1>
一键免费部署你的私人 ChatGPT 网页应用,支持 Claude, GPT4 & Gemini Pro 模型。
[NextChatAI](https://nextchat.club?utm_source=readme) / [企业版](#%E4%BC%81%E4%B8%9A%E7%89%88) / [演示 Demo](https://chat-gpt-next-web.vercel.app/) / [反馈 Issues](https://github.com/Yidadaa/ChatGPT-Next-Web/issues) / [加入 Discord](https://discord.gg/zrhvHCr79N)
[<img src="https://vercel.com/button" alt="Deploy on Zeabur" height="30">](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FChatGPTNextWeb%2FChatGPT-Next-Web&env=OPENAI_API_KEY&env=CODE&project-name=nextchat&repository-name=NextChat) [<img src="https://zeabur.com/button.svg" alt="Deploy on Zeabur" height="30">](https://zeabur.com/templates/ZBUEFA) [<img src="https://gitpod.io/button/open-in-gitpod.svg" alt="Open in Gitpod" height="30">](https://gitpod.io/#https://github.com/Yidadaa/ChatGPT-Next-Web)
</div>
## Sponsor AI API
<a href='https://302.ai/'>
<img src="https://github.com/user-attachments/assets/d8c0c513-1e18-4d3b-a2a9-ff3696aec0d4" width="100%" alt="icon"/>
</a>
[302.AI](https://302.ai/) 是一个按需付费的AI应用平台提供市面上最全的AI API和AI在线应用。
## 企业版
满足您公司私有化部署和定制需求
- **品牌定制**:企业量身定制 VI/UI与企业品牌形象无缝契合
- **资源集成**:由企业管理人员统一配置和管理数十种 AI 资源,团队成员开箱即用
- **权限管理**:成员权限、资源权限、知识库权限层级分明,企业级 Admin Panel 统一控制
@@ -35,7 +38,6 @@
<img width="300" src="https://github.com/user-attachments/assets/bb29a11d-ff75-48a8-b1f8-d2d7238cf987">
## 开始使用
1. 准备好你的 [OpenAI API Key](https://platform.openai.com/account/api-keys);
@@ -207,7 +209,6 @@ DeepSeek Api Key.
DeepSeek Api Url.
### `HIDE_USER_API_KEY` (可选)
如果你不想让用户自行填入 API Key将此环境变量设置为 1 即可。
@@ -227,8 +228,9 @@ DeepSeek Api Url.
### `WHITE_WEBDAV_ENDPOINTS` (可选)
如果你想增加允许访问的webdav服务地址可以使用该选项格式要求
- 每一个地址必须是一个完整的 endpoint
> `https://xxxx/xxx`
- 多个地址以`,`相连
### `CUSTOM_MODELS` (可选)
@@ -239,12 +241,13 @@ DeepSeek Api Url.
用来控制模型列表,使用 `+` 增加一个模型,使用 `-` 来隐藏一个模型,使用 `模型名=展示名` 来自定义模型的展示名,用英文逗号隔开。
在Azure的模式下支持使用`modelName@Azure=deploymentName`的方式配置模型名称和部署名称(deploy-name)
> 示例:`+gpt-3.5-turbo@Azure=gpt35`这个配置会在模型列表显示一个`gpt35(Azure)`的选项。
> 如果你只能使用Azure模式那么设置 `-all,+gpt-3.5-turbo@Azure=gpt35` 则可以让对话的默认使用 `gpt35(Azure)`
在ByteDance的模式下支持使用`modelName@bytedance=deploymentName`的方式配置模型名称和部署名称(deploy-name)
> 示例: `+Doubao-lite-4k@bytedance=ep-xxxxx-xxx`这个配置会在模型列表显示一个`Doubao-lite-4k(ByteDance)`的选项
### `DEFAULT_MODEL` (可选)
@@ -281,6 +284,14 @@ SiliconFlow API Key.
SiliconFlow API URL.
### `AI302_API_KEY` (optional)
302.AI API Key.
### `AI302_URL` (optional)
302.AI API URL.
## 开发
点击下方按钮,开始二次开发:
@@ -305,6 +316,7 @@ BASE_URL=https://b.nextweb.fun/api/proxy
## 部署
### 宝塔面板部署
> [简体中文 > 如何通过宝塔一键部署](./docs/bt-cn.md)
### 容器部署 (推荐)

View File

@@ -1,26 +1,28 @@
<div align="center">
<img src="./docs/images/ent.svg" alt="プレビュー"/>
<a href='https://302.ai/'>
<img src="./docs/images/302AI-banner-jp.jpg" width=400 alt="icon"/>
</a>
[302.AI](https://302.ai/) は、オンデマンドで支払うAIアプリケーションプラットフォームで、最も安全なAI APIとAIオンラインアプリケーションを提供します。
<h1 align="center">NextChat</h1>
ワンクリックで無料であなた専用の ChatGPT ウェブアプリをデプロイ。GPT3、GPT4 & Gemini Pro モデルをサポート。
[NextChatAI](https://nextchat.club?utm_source=readme) / [企業版](#企業版) / [デモ](https://chat-gpt-next-web.vercel.app/) / [フィードバック](https://github.com/Yidadaa/ChatGPT-Next-Web/issues) / [Discordに参加](https://discord.gg/zrhvHCr79N)
[<img src="https://vercel.com/button" alt="Zeaburでデプロイ" height="30">](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FChatGPTNextWeb%2FChatGPT-Next-Web&env=OPENAI_API_KEY&env=CODE&project-name=nextchat&repository-name=NextChat) [<img src="https://zeabur.com/button.svg" alt="Zeaburでデプロイ" height="30">](https://zeabur.com/templates/ZBUEFA) [<img src="https://gitpod.io/button/open-in-gitpod.svg" alt="Gitpodで開く" height="30">](https://gitpod.io/#https://github.com/Yidadaa/ChatGPT-Next-Web)
</div>
## Sponsor AI API
<a href='https://302.ai/'>
<img src="https://github.com/user-attachments/assets/6cf24233-1010-43e0-9a83-a11159866175" width="100%" alt="icon"/>
</a>
[302.AI](https://302.ai/) は、オンデマンドで支払うAIアプリケーションプラットフォームで、最も安全なAI APIとAIオンラインアプリケーションを提供します。
## 企業版
あなたの会社のプライベートデプロイとカスタマイズのニーズに応える
- **ブランドカスタマイズ**:企業向けに特別に設計された VI/UI、企業ブランドイメージとシームレスにマッチ
- **リソース統合**企業管理者が数十種類のAIリソースを統一管理、チームメンバーはすぐに使用可能
- **権限管理**メンバーの権限、リソースの権限、ナレッジベースの権限を明確にし、企業レベルのAdmin Panelで統一管理
@@ -31,7 +33,6 @@
企業版のお問い合わせ: **business@nextchat.dev**
## 始めに
1. [OpenAI API Key](https://platform.openai.com/account/api-keys)を準備する;
@@ -46,7 +47,6 @@
</div>
## 更新を維持する
もし上記の手順に従ってワンクリックでプロジェクトをデプロイした場合、「更新があります」というメッセージが常に表示されることがあります。これは、Vercel がデフォルトで新しいプロジェクトを作成するためで、本プロジェクトを fork していないことが原因です。そのため、正しく更新を検出できません。
@@ -57,7 +57,6 @@
- ページ右上の fork ボタンを使って、本プロジェクトを fork する
- Vercel で再度選択してデプロイする、[詳細な手順はこちらを参照してください](./docs/vercel-ja.md)。
### 自動更新を開く
> Upstream Sync の実行エラーが発生した場合は、[手動で Sync Fork](./README_JA.md#手動でコードを更新する) してください!
@@ -68,15 +67,12 @@
![自動更新を有効にする](./docs/images/enable-actions-sync.jpg)
### 手動でコードを更新する
手動で即座に更新したい場合は、[GitHub のドキュメント](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork)を参照して、fork したプロジェクトを上流のコードと同期する方法を確認してください。
このプロジェクトをスターまたはウォッチしたり、作者をフォローすることで、新機能の更新通知をすぐに受け取ることができます。
## ページアクセスパスワードを設定する
> パスワードを設定すると、ユーザーは設定ページでアクセスコードを手動で入力しない限り、通常のチャットができず、未承認の状態であることを示すメッセージが表示されます。
@@ -91,7 +87,6 @@ code1,code2,code3
この環境変数を追加または変更した後、**プロジェクトを再デプロイ**して変更を有効にしてください。
## 環境変数
> 本プロジェクトのほとんどの設定は環境変数で行います。チュートリアル:[Vercel の環境変数を変更する方法](./docs/vercel-ja.md)。
@@ -202,8 +197,9 @@ ByteDance API の URL。
### `WHITE_WEBDAV_ENDPOINTS` (オプション)
アクセス許可を与える WebDAV サービスのアドレスを追加したい場合、このオプションを使用します。フォーマット要件:
- 各アドレスは完全なエンドポイントでなければなりません。
> `https://xxxx/xxx`
- 複数のアドレスは `,` で接続します。
### `CUSTOM_MODELS` (オプション)
@@ -214,9 +210,11 @@ ByteDance API の URL。
モデルリストを管理します。`+` でモデルを追加し、`-` でモデルを非表示にし、`モデル名=表示名` でモデルの表示名をカスタマイズし、カンマで区切ります。
Azure モードでは、`modelName@Azure=deploymentName` 形式でモデル名とデプロイ名deploy-nameを設定できます。
> 例:`+gpt-3.5-turbo@Azure=gpt35` この設定でモデルリストに `gpt35(Azure)` のオプションが表示されます。
ByteDance モードでは、`modelName@bytedance=deploymentName` 形式でモデル名とデプロイ名deploy-nameを設定できます。
> 例: `+Doubao-lite-4k@bytedance=ep-xxxxx-xxx` この設定でモデルリストに `Doubao-lite-4k(ByteDance)` のオプションが表示されます。
### `DEFAULT_MODEL` (オプション)
@@ -234,6 +232,13 @@ ByteDance モードでは、`modelName@bytedance=deploymentName` 形式でモデ
『設定』の『ユーザー入力前処理』の初期設定に使用するテンプレートをカスタマイズします。
### `AI302_API_KEY` (オプション)
302.AI API キー.
### `AI302_URL` (オプション)
302.AI API の URL.
## 開発
@@ -247,14 +252,12 @@ ByteDance モードでは、`modelName@bytedance=deploymentName` 形式でモデ
OPENAI_API_KEY=<your api key here>
```
### ローカル開発
1. Node.js 18 と Yarn をインストールします。具体的な方法は ChatGPT にお尋ねください。
2. `yarn install && yarn dev` を実行します。⚠️ 注意:このコマンドはローカル開発用であり、デプロイには使用しないでください。
3. ローカルでデプロイしたい場合は、`yarn install && yarn build && yarn start` コマンドを使用してください。プロセスを守るために pm2 を使用することもできます。詳細は ChatGPT にお尋ねください。
## デプロイ
### コンテナデプロイ(推奨)
@@ -291,7 +294,6 @@ docker run -d -p 3000:3000 \
他の環境変数を指定する必要がある場合は、上記のコマンドに `-e 環境変数=環境変数値` を追加して指定してください。
### ローカルデプロイ
コンソールで以下のコマンドを実行します:
@@ -302,7 +304,6 @@ bash <(curl -s https://raw.githubusercontent.com/Yidadaa/ChatGPT-Next-Web/main/s
⚠️ 注意インストール中に問題が発生した場合は、Docker を使用してデプロイしてください。
## 謝辞
### 寄付者
@@ -317,7 +318,6 @@ bash <(curl -s https://raw.githubusercontent.com/Yidadaa/ChatGPT-Next-Web/main/s
- [one-api](https://github.com/songquanpeng/one-api): 一つのプラットフォームで大規模モデルのクォータ管理を提供し、市場に出回っているすべての主要な大規模言語モデルをサポートします。
## オープンソースライセンス
[MIT](https://opensource.org/license/mit/)

View File

@@ -224,7 +224,7 @@ export class ClaudeApi implements LLMApi {
let chunkJson:
| undefined
| {
-type: "content_block_delta" | "content_block_stop";
+type: "content_block_delta" | "content_block_stop" | "message_delta" | "message_stop";
content_block?: {
type: "tool_use";
id: string;
@@ -234,11 +234,20 @@ export class ClaudeApi implements LLMApi {
type: "text_delta" | "input_json_delta";
text?: string;
partial_json?: string;
stop_reason?: string;
};
index: number;
};
chunkJson = JSON.parse(text);
// Handle refusal stop reason in message_delta
if (chunkJson?.delta?.stop_reason === "refusal") {
// Return a message to display to the user
const refusalMessage = "\n\n[Assistant refused to respond. Please modify your request and try again.]";
options.onError?.(new Error("Content policy violation: " + refusalMessage));
return refusalMessage;
}
if (chunkJson?.content_block?.type == "tool_use") {
index += 1;
const id = chunkJson?.content_block.id;
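The refusal check added in this hunk can be exercised in isolation with a hand-crafted chunk. The sketch below mirrors the typing from the diff, but `handleChunk` and the standalone setup are invented for illustration; the real code runs inside NextChat's SSE streaming loop and reports the refusal via `options.onError`.

```typescript
// Minimal sketch of the "refusal" stop_reason handling shown in the diff,
// run against hand-written chunk objects instead of a live SSE stream.
type ClaudeChunk = {
  type: "content_block_delta" | "content_block_stop" | "message_delta" | "message_stop";
  delta?: {
    type?: "text_delta" | "input_json_delta";
    text?: string;
    partial_json?: string;
    stop_reason?: string;
  };
};

function handleChunk(chunk: ClaudeChunk): string | undefined {
  // message_delta carrying stop_reason "refusal": surface a notice to the user
  if (chunk.type === "message_delta" && chunk.delta?.stop_reason === "refusal") {
    return "\n\n[Assistant refused to respond. Please modify your request and try again.]";
  }
  // ordinary streamed text
  if (chunk.type === "content_block_delta" && chunk.delta?.type === "text_delta") {
    return chunk.delta.text;
  }
  return undefined; // other event types are ignored in this sketch
}

console.log(handleChunk({ type: "message_delta", delta: { stop_reason: "refusal" } }));
// prints the refusal notice
```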

View File

@@ -40,6 +40,8 @@ import { type ClientApi, getClientApi } from "../client/api";
import { getMessageTextContent } from "../utils";
import { MaskAvatar } from "./mask";
import clsx from "clsx";
import { save } from "@tauri-apps/plugin-dialog";
import { writeFile } from "@tauri-apps/plugin-fs";
const Markdown = dynamic(async () => (await import("./markdown")).Markdown, {
loading: () => <LoadingIcon />,
@@ -456,7 +458,7 @@ export function ImagePreviewer(props: {
if (isMobile || (isApp && window.__TAURI__)) {
if (isApp && window.__TAURI__) {
-const result = await window.__TAURI__.dialog.save({
+const result = await save({
defaultPath: `${props.topic}.png`,
filters: [
{
@@ -474,7 +476,7 @@ export function ImagePreviewer(props: {
const response = await fetch(blob);
const buffer = await response.arrayBuffer();
const uint8Array = new Uint8Array(buffer);
-await window.__TAURI__.fs.writeBinaryFile(result, uint8Array);
+await writeFile(result, uint8Array);
showToast(Locale.Download.Success);
} else {
showToast(Locale.Download.Failed);


@@ -10,7 +10,7 @@ export const getBuildConfig = () => {
   const buildMode = process.env.BUILD_MODE ?? "standalone";
   const isApp = !!process.env.BUILD_APP;
-  const version = "v" + tauriConfig.package.version;
+  const version = "v" + tauriConfig.version;

   const commitInfo = (() => {
     try {


@@ -479,19 +479,20 @@ export const VISION_MODEL_REGEXES = [
   /vision/,
   /gpt-4o/,
   /gpt-4\.1/,
-  /claude-3/,
+  /claude.*[34]/,
   /gemini-1\.5/,
   /gemini-exp/,
-  /gemini-2\.0/,
+  /gemini-2\.[05]/,
   /learnlm/,
   /qwen-vl/,
   /qwen2-vl/,
-  /gpt-4-turbo(?!.*preview)/, // Matches "gpt-4-turbo" but not "gpt-4-turbo-preview"
-  /^dall-e-3$/, // Matches exactly "dall-e-3"
+  /gpt-4-turbo(?!.*preview)/,
+  /^dall-e-3$/,
   /glm-4v/,
   /vl/i,
   /o3/,
   /o4-mini/,
+  /grok-4/i,
 ];

 export const EXCLUDE_VISION_MODEL_REGEXES = [/claude-3-5-haiku-20241022/];

@@ -571,6 +572,8 @@ const anthropicModels = [
   "claude-3-5-sonnet-latest",
   "claude-3-7-sonnet-20250219",
   "claude-3-7-sonnet-latest",
+  "claude-sonnet-4-20250514",
+  "claude-opus-4-20250514",
 ];

 const baiduModels = [
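Taken together, the include and exclude lists decide image-upload support roughly as in this sketch. The `isVisionModel` helper name is an assumption for illustration; only a few of the updated patterns are reproduced here.

```typescript
// Sketch of how the regex lists classify models (subset of patterns from the diff).
const VISION_MODEL_REGEXES = [/claude.*[34]/, /gemini-2\.[05]/, /grok-4/i];
const EXCLUDE_VISION_MODEL_REGEXES = [/claude-3-5-haiku-20241022/];

// Assumed helper name: a model is "vision" if any include pattern matches
// and no exclude pattern does.
function isVisionModel(model: string): boolean {
  return (
    VISION_MODEL_REGEXES.some((re) => re.test(model)) &&
    !EXCLUDE_VISION_MODEL_REGEXES.some((re) => re.test(model))
  );
}
```

Note how `/claude.*[34]/` now covers both `claude-3-*` and the new `claude-*-4-*` names, while the exclude list still filters out `claude-3-5-haiku-20241022`.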


@@ -10,6 +10,11 @@ import { clientUpdate } from "../utils";
 import ChatGptIcon from "../icons/chatgpt.png";
 import Locale from "../locales";
 import { ClientApi } from "../client/api";
+import {
+  isPermissionGranted,
+  requestPermission,
+  sendNotification,
+} from "@tauri-apps/plugin-notification";

 const ONE_MINUTE = 60 * 1000;
 const isApp = !!getClientConfig()?.isApp;
@@ -90,42 +95,39 @@ export const useUpdateStore = createPersistStore(
           remoteVersion: remoteId,
         }));
         if (window.__TAURI__?.notification && isApp) {
-          // Check if notification permission is granted
-          await window.__TAURI__?.notification
-            .isPermissionGranted()
-            .then((granted) => {
-              if (!granted) {
-                return;
-              } else {
-                // Request permission to show notifications
-                window.__TAURI__?.notification
-                  .requestPermission()
-                  .then((permission) => {
-                    if (permission === "granted") {
-                      if (version === remoteId) {
-                        // Show a notification using Tauri
-                        window.__TAURI__?.notification.sendNotification({
-                          title: "NextChat",
-                          body: `${Locale.Settings.Update.IsLatest}`,
-                          icon: `${ChatGptIcon.src}`,
-                          sound: "Default",
-                        });
-                      } else {
-                        const updateMessage =
-                          Locale.Settings.Update.FoundUpdate(`${remoteId}`);
-                        // Show a notification for the new version using Tauri
-                        window.__TAURI__?.notification.sendNotification({
-                          title: "NextChat",
-                          body: updateMessage,
-                          icon: `${ChatGptIcon.src}`,
-                          sound: "Default",
-                        });
-                        clientUpdate();
-                      }
-                    }
-                  });
-              }
-            });
+          try {
+            // Check if notification permission is granted
+            const granted = await isPermissionGranted();
+            if (!granted) {
+              // Request permission to show notifications
+              const permission = await requestPermission();
+              if (permission !== "granted") return;
+            }
+
+            if (version === remoteId) {
+              // Show a notification using Tauri
+              await sendNotification({
+                title: "NextChat",
+                body: `${Locale.Settings.Update.IsLatest}`,
+                icon: `${ChatGptIcon.src}`,
+                sound: "Default",
+              });
+            } else {
+              const updateMessage = Locale.Settings.Update.FoundUpdate(
+                `${remoteId}`,
+              );
+              // Show a notification for the new version using Tauri
+              await sendNotification({
+                title: "NextChat",
+                body: updateMessage,
+                icon: `${ChatGptIcon.src}`,
+                sound: "Default",
+              });
+              clientUpdate();
+            }
+          } catch (error) {
+            console.error("[Notification Error]", error);
+          }
         }
         console.log("[Got Upstream] ", remoteId);
       } catch (error) {
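The permission-then-send control flow of the rewritten notification code can be exercised without Tauri by stubbing the plugin calls. This is a sketch under stated assumptions: `notifyIfAllowed` and its parameters are hypothetical names introduced here, not part of the diff.

```typescript
// Hedged sketch of the check-permission / request-permission / send flow,
// with the @tauri-apps/plugin-notification calls passed in as stubs so the
// control flow itself is testable.
async function notifyIfAllowed(
  isPermissionGranted: () => Promise<boolean>,
  requestPermission: () => Promise<string>,
  send: (body: string) => void,
  body: string,
): Promise<boolean> {
  if (!(await isPermissionGranted())) {
    // Mirror the diff: request permission, bail out unless it is granted
    if ((await requestPermission()) !== "granted") return false;
  }
  send(body);
  return true;
}
```

The `try`/`catch` around the real call site additionally guards against the plugin throwing on platforms without notification support.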


@@ -12,6 +12,9 @@ import { fetch as tauriStreamFetch } from "./utils/stream";
 import { VISION_MODEL_REGEXES, EXCLUDE_VISION_MODEL_REGEXES } from "./constant";
 import { useAccessStore } from "./store";
 import { ModelSize } from "./typing";
+import { writeText } from "@tauri-apps/plugin-clipboard-manager";
+import { save } from "@tauri-apps/plugin-dialog";
+import { writeTextFile } from "@tauri-apps/plugin-fs";

 export function trimTopic(topic: string) {
   // Fix an issue where double quotes still show in the Indonesian language
@@ -20,15 +23,15 @@ export function trimTopic(topic: string) {
   return (
     topic
       // fix for gemini
       .replace(/^["“”*]+|["“”*]+$/g, "")
       .replace(/[,。!?”“"、,.!?*]*$/, "")
   );
 }

 export async function copyToClipboard(text: string) {
   try {
     if (window.__TAURI__) {
-      window.__TAURI__.writeText(text);
+      await writeText(text);
     } else {
       await navigator.clipboard.writeText(text);
     }
@@ -52,7 +55,7 @@ export async function copyToClipboard(text: string) {
 export async function downloadAs(text: string, filename: string) {
   if (window.__TAURI__) {
-    const result = await window.__TAURI__.dialog.save({
+    const result = await save({
       defaultPath: `$(unknown)`,
       filters: [
         {
@@ -68,7 +71,7 @@ export async function downloadAs(text: string, filename: string) {
   if (result !== null) {
     try {
-      await window.__TAURI__.fs.writeTextFile(result, text);
+      await writeTextFile(result, text);
       showToast(Locale.Download.Success);
     } catch (error) {
       showToast(Locale.Download.Failed);
@@ -448,24 +451,29 @@ export function getOperationId(operation: {
 export function clientUpdate() {
   // this a wild for updating client app
-  return window.__TAURI__?.updater
+  const tauriApp = window.__TAURI__;
+  if (!tauriApp || !tauriApp.updater) return Promise.resolve();
+  return tauriApp.updater
     .checkUpdate()
-    .then((updateResult) => {
+    .then((updateResult: any) => {
       if (updateResult.shouldUpdate) {
-        window.__TAURI__?.updater
+        return tauriApp.updater
           .installUpdate()
-          .then((result) => {
+          .then((result: any) => {
             showToast(Locale.Settings.Update.Success);
           })
-          .catch((e) => {
+          .catch((e: any) => {
             console.error("[Install Update Error]", e);
             showToast(Locale.Settings.Update.Failed);
           });
       }
+      return updateResult;
     })
-    .catch((e) => {
+    .catch((e: any) => {
       console.error("[Check Update Error]", e);
       showToast(Locale.Settings.Update.Failed);
+      return e;
     });
 }
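The `trimTopic` regexes above are unchanged by this diff but easy to misread; a minimal standalone sketch of their behavior (function body copied from the diff's context lines):

```typescript
// Strips leading/trailing quote characters and asterisks, then any trailing
// CJK or ASCII punctuation, from a model-generated topic title.
function trimTopic(topic: string) {
  return (
    topic
      // fix for gemini
      .replace(/^["“”*]+|["“”*]+$/g, "")
      .replace(/[,。!?”“"、,.!?*]*$/, "")
  );
}
```

For example, a topic the model returns as `"Hello!"` (quoted, with trailing punctuation) ends up as plain `Hello`.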


@@ -3,6 +3,9 @@
 // 1. invoke('stream_fetch', {url, method, headers, body}), get response with headers.
 // 2. listen event: `stream-response` multi times to get body
+import { invoke } from "@tauri-apps/api/core";
+import { listen } from "@tauri-apps/api/event";

 type ResponseEvent = {
   id: number;
   payload: {
@@ -27,8 +30,8 @@ export function fetch(url: string, options?: RequestInit): Promise<Response> {
     headers: _headers = {},
     body = [],
   } = options || {};
-  let unlisten: Function | undefined;
-  let setRequestId: Function | undefined;
+  let unlisten: (() => void) | undefined;
+  let setRequestId: ((value: number) => void) | undefined;
   const requestIdPromise = new Promise((resolve) => (setRequestId = resolve));
   const ts = new TransformStream();
   const writer = ts.writable.getWriter();
@@ -46,25 +49,24 @@ export function fetch(url: string, options?: RequestInit): Promise<Response> {
   if (signal) {
     signal.addEventListener("abort", () => close());
   }
-  // @ts-ignore 2. listen response multi times, and write to Response.body
-  window.__TAURI__.event
-    .listen("stream-response", (e: ResponseEvent) =>
+  // Listen for stream response events
+  listen("stream-response", (e: ResponseEvent) =>
     requestIdPromise.then((request_id) => {
       const { request_id: rid, chunk, status } = e?.payload || {};
      if (request_id != rid) {
        return;
      }
      if (chunk) {
        writer.ready.then(() => {
          writer.write(new Uint8Array(chunk));
        });
      } else if (status === 0) {
        // end of body
        close();
      }
    }),
-  ).then((u: Function) => (unlisten = u));
+  ).then((u: () => void) => (unlisten = u));

   const headers: Record<string, string> = {
     Accept: "application/json, text/plain, */*",
@@ -74,17 +76,16 @@ export function fetch(url: string, options?: RequestInit): Promise<Response> {
   for (const item of new Headers(_headers || {})) {
     headers[item[0]] = item[1];
   }
-  return window.__TAURI__
-    .invoke("stream_fetch", {
-      method: method.toUpperCase(),
-      url,
-      headers,
-      // TODO FormData
-      body:
-        typeof body === "string"
-          ? Array.from(new TextEncoder().encode(body))
-          : [],
-    })
+  return invoke<StreamResponse>("stream_fetch", {
+    method: method.toUpperCase(),
+    url,
+    headers,
+    // TODO FormData
+    body:
+      typeof body === "string"
+        ? Array.from(new TextEncoder().encode(body))
+        : [],
+  })
     .then((res: StreamResponse) => {
       const { request_id, status, status_text: statusText, headers } = res;
       setRequestId?.(request_id);
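The typing fix above relies on a "deferred" pattern: the `Promise` resolver is captured into an outer variable with a precise function type instead of the loose `Function` type. A minimal sketch of the pattern, with the Tauri `invoke()` result that would normally supply the id replaced by a hardcoded value:

```typescript
// Capture a Promise resolver with a concrete signature. The Promise executor
// runs synchronously, so setRequestId is assigned before the first use below.
let setRequestId: ((value: number) => void) | undefined;
const requestIdPromise = new Promise<number>(
  (resolve) => (setRequestId = resolve),
);

// In the real code, invoke("stream_fetch", ...) resolves and calls this with
// the backend-assigned request id; 42 here stands in for that value.
setRequestId?.(42);
```

Typing the resolver as `(value: number) => void` lets the compiler check both the call site and every consumer of `requestIdPromise`, which the bare `Function` type could not.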

Binary files not shown (three images removed: 82 KiB, 84 KiB, 76 KiB).

@@ -26,6 +26,14 @@
     "@modelcontextprotocol/sdk": "^1.0.4",
     "@next/third-parties": "^14.1.0",
     "@svgr/webpack": "^6.5.1",
+    "@tauri-apps/plugin-clipboard-manager": "~2",
+    "@tauri-apps/plugin-dialog": "~2",
+    "@tauri-apps/plugin-fs": "~2",
+    "@tauri-apps/plugin-http": "~2",
+    "@tauri-apps/plugin-notification": "~2",
+    "@tauri-apps/plugin-shell": "~2",
+    "@tauri-apps/plugin-updater": "~2",
+    "@tauri-apps/plugin-window-state": "^2.2.1",
     "@vercel/analytics": "^0.1.11",
     "@vercel/speed-insights": "^1.0.2",
     "axios": "^1.7.5",
@@ -39,7 +47,7 @@
     "markdown-to-txt": "^2.0.1",
     "mermaid": "^10.6.1",
     "nanoid": "^5.0.3",
-    "next": "^14.1.1",
+    "next": "14.2.24",
     "node-fetch": "^3.3.1",
     "openapi-client-axios": "^7.5.5",
     "react": "^18.2.0",
@@ -59,8 +67,8 @@
     "zustand": "^4.3.8"
   },
   "devDependencies": {
-    "@tauri-apps/api": "^2.1.1",
-    "@tauri-apps/cli": "1.5.11",
+    "@tauri-apps/api": "^2.3.0",
+    "@tauri-apps/cli": "^2.3.1",
     "@testing-library/dom": "^10.4.0",
     "@testing-library/jest-dom": "^6.6.3",
     "@testing-library/react": "^16.1.0",
@@ -75,7 +83,7 @@
     "concurrently": "^8.2.2",
     "cross-env": "^7.0.3",
     "eslint": "^8.49.0",
-    "eslint-config-next": "13.4.19",
+    "eslint-config-next": "14.2.24",
     "eslint-config-prettier": "^8.8.0",
     "eslint-plugin-prettier": "^5.1.3",
     "eslint-plugin-unused-imports": "^3.2.0",

src-tauri/Cargo.lock (generated, 3,887 lines changed): diff suppressed because it is too large.


@@ -12,38 +12,31 @@ rust-version = "1.60"
 # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

 [build-dependencies]
-tauri-build = { version = "1.5.1", features = [] }
+tauri-build = { version = "2", features = [] }

 [dependencies]
 serde_json = "1.0"
 serde = { version = "1.0", features = ["derive"] }
-tauri = { version = "1.5.4", features = [ "http-all",
-  "notification-all",
-  "fs-all",
-  "clipboard-all",
-  "dialog-all",
-  "shell-open",
-  "updater",
-  "window-close",
-  "window-hide",
-  "window-maximize",
-  "window-minimize",
-  "window-set-icon",
-  "window-set-ignore-cursor-events",
-  "window-set-resizable",
-  "window-show",
-  "window-start-dragging",
-  "window-unmaximize",
-  "window-unminimize",
-] }
-tauri-plugin-window-state = { git = "https://github.com/tauri-apps/plugins-workspace", branch = "v1" }
+tauri = { version = "2", features = [] }
+tauri-plugin-window-state = { version = "2" }
 percent-encoding = "2.3.1"
 reqwest = "0.11.18"
 futures-util = "0.3.30"
 bytes = "1.7.2"
+log = "0.4.21"
+tauri-plugin-dialog = "2"
+tauri-plugin-shell = "2"
+tauri-plugin-notification = "2"
+tauri-plugin-fs = "2"
+tauri-plugin-http = "2"
+tauri-plugin-clipboard-manager = "2"

 [features]
 # this feature is used for production builds or when `devPath` points to the filesystem and the built-in dev server is disabled.
 # If you use cargo directly instead of tauri's cli you can use this feature flag to switch between tauri's `dev` and `build` modes.
 # DO NOT REMOVE!!
 custom-protocol = ["tauri/custom-protocol"]

+[target.'cfg(not(any(target_os = "android", target_os = "ios")))'.dependencies]
+tauri-plugin-updater = "2"


@@ -1,3 +1,3 @@
 fn main() {
     tauri_build::build()
 }


@@ -0,0 +1,14 @@
{
"identifier": "desktop-capability",
"platforms": [
"macOS",
"windows",
"linux"
],
"windows": [
"main"
],
"permissions": [
"updater:default"
]
}


@@ -0,0 +1,57 @@
{
"identifier": "migrated",
"description": "permissions that were migrated from v1",
"local": true,
"windows": [
"main"
],
"permissions": [
"core:default",
"fs:allow-read-file",
"fs:allow-write-file",
"fs:allow-read-dir",
"fs:allow-copy-file",
"fs:allow-mkdir",
"fs:allow-remove",
"fs:allow-remove",
"fs:allow-rename",
"fs:allow-exists",
"core:window:allow-set-resizable",
"core:window:allow-maximize",
"core:window:allow-unmaximize",
"core:window:allow-minimize",
"core:window:allow-unminimize",
"core:window:allow-show",
"core:window:allow-hide",
"core:window:allow-close",
"core:window:allow-set-icon",
"core:window:allow-set-ignore-cursor-events",
"core:window:allow-start-dragging",
"shell:allow-open",
"dialog:allow-open",
"dialog:allow-save",
"dialog:allow-message",
"dialog:allow-ask",
"dialog:allow-confirm",
{
"identifier": "http:default",
"allow": [
{
"url": "https://*/"
},
{
"url": "http://*/"
}
]
},
"notification:default",
"clipboard-manager:allow-read-text",
"clipboard-manager:allow-write-text",
"dialog:default",
"shell:default",
"notification:default",
"fs:default",
"http:default",
"clipboard-manager:default"
]
}



@@ -0,0 +1 @@
{"desktop-capability":{"identifier":"desktop-capability","description":"","local":true,"windows":["main"],"permissions":["updater:default"],"platforms":["macOS","windows","linux"]},"migrated":{"identifier":"migrated","description":"permissions that were migrated from v1","local":true,"windows":["main"],"permissions":["core:default","fs:allow-read-file","fs:allow-write-file","fs:allow-read-dir","fs:allow-copy-file","fs:allow-mkdir","fs:allow-remove","fs:allow-remove","fs:allow-rename","fs:allow-exists","core:window:allow-set-resizable","core:window:allow-maximize","core:window:allow-unmaximize","core:window:allow-minimize","core:window:allow-unminimize","core:window:allow-show","core:window:allow-hide","core:window:allow-close","core:window:allow-set-icon","core:window:allow-set-ignore-cursor-events","core:window:allow-start-dragging","shell:allow-open","dialog:allow-open","dialog:allow-save","dialog:allow-message","dialog:allow-ask","dialog:allow-confirm",{"identifier":"http:default","allow":[{"url":"https://*/"},{"url":"http://*/"}]},"notification:default","clipboard-manager:allow-read-text","clipboard-manager:allow-write-text","dialog:default","shell:default","notification:default","fs:default","http:default","clipboard-manager:default"]}}



@@ -4,9 +4,16 @@
 mod stream;

 fn main() {
     tauri::Builder::default()
+        .plugin(tauri_plugin_updater::Builder::new().build())
+        .plugin(tauri_plugin_clipboard_manager::init())
+        .plugin(tauri_plugin_http::init())
+        .plugin(tauri_plugin_fs::init())
+        .plugin(tauri_plugin_notification::init())
+        .plugin(tauri_plugin_shell::init())
+        .plugin(tauri_plugin_dialog::init())
         .invoke_handler(tauri::generate_handler![stream::stream_fetch])
         .plugin(tauri_plugin_window_state::Builder::default().build())
         .run(tauri::generate_context!())
         .expect("error while running tauri application");
 }


@@ -1,145 +1,176 @@ (file rewritten for Tauri v2; new version below)
//
//

use futures_util::StreamExt;
use std::collections::HashMap;
use std::error::Error;
use std::sync::atomic::{AtomicU32, Ordering};
use std::time::Duration;
use tauri::Emitter;
use tauri_plugin_http::reqwest;
use tauri_plugin_http::reqwest::header::{HeaderMap, HeaderName};
use tauri_plugin_http::reqwest::Client;

static REQUEST_COUNTER: AtomicU32 = AtomicU32::new(0);

#[derive(Debug, Clone, serde::Serialize)]
pub struct StreamResponse {
    request_id: u32,
    status: u16,
    status_text: String,
    headers: HashMap<String, String>,
}

#[derive(Clone, serde::Serialize)]
pub struct EndPayload {
    request_id: u32,
    status: u16,
}

#[derive(Clone, serde::Serialize)]
pub struct ChunkPayload {
    request_id: u32,
    chunk: Vec<u8>,
}

#[tauri::command]
pub async fn stream_fetch(
    window: tauri::WebviewWindow,
    method: String,
    url: String,
    headers: HashMap<String, String>,
    body: Vec<u8>,
) -> Result<StreamResponse, String> {
    let event_name = "stream-response";
    let request_id = REQUEST_COUNTER.fetch_add(1, Ordering::SeqCst);

    let mut _headers = HeaderMap::new();
    for (key, value) in &headers {
        _headers.insert(key.parse::<HeaderName>().unwrap(), value.parse().unwrap());
    }

    // println!("method: {:?}", method);
    // println!("url: {:?}", url);
    // println!("headers: {:?}", headers);
    // println!("headers: {:?}", _headers);

    let method = method
        .parse::<reqwest::Method>()
        .map_err(|err| format!("failed to parse method: {}", err))?;
    let client = Client::builder()
        .default_headers(_headers)
        .redirect(reqwest::redirect::Policy::limited(3))
        .connect_timeout(Duration::new(3, 0))
        .build()
        .map_err(|err| format!("failed to generate client: {}", err))?;

    let mut request = client.request(
        method.clone(),
        url.parse::<reqwest::Url>()
            .map_err(|err| format!("failed to parse url: {}", err))?,
    );

    if method == reqwest::Method::POST
        || method == reqwest::Method::PUT
        || method == reqwest::Method::PATCH
    {
        let body = bytes::Bytes::from(body);
        // println!("body: {:?}", body);
        request = request.body(body);
    }

    // println!("client: {:?}", client);
    // println!("request: {:?}", request);

    let response_future = request.send();

    let res = response_future.await;
    let response = match res {
        Ok(res) => {
            // get response and emit to client
            let mut headers = HashMap::new();
            for (name, value) in res.headers() {
                headers.insert(
                    name.as_str().to_string(),
                    std::str::from_utf8(value.as_bytes()).unwrap().to_string(),
                );
            }
            let status = res.status().as_u16();
            tauri::async_runtime::spawn(async move {
                let mut stream = res.bytes_stream();
                while let Some(chunk) = stream.next().await {
                    match chunk {
                        Ok(bytes) => {
                            // println!("chunk: {:?}", bytes);
                            if let Err(e) = window.emit(
                                event_name,
                                ChunkPayload {
                                    request_id,
                                    chunk: bytes.to_vec(),
                                },
                            ) {
                                println!("Failed to emit chunk payload: {:?}", e);
                            }
                        }
                        Err(err) => {
                            println!("Error chunk: {:?}", err);
                        }
                    }
                }
                if let Err(e) = window.emit(event_name, EndPayload { request_id, status: 0 }) {
                    println!("Failed to emit end payload: {:?}", e);
                }
            });
            StreamResponse {
                request_id,
                status,
                status_text: "OK".to_string(),
                headers,
            }
        }
        Err(err) => {
            let error: String = err
                .source()
                .map(|e| e.to_string())
                .unwrap_or_else(|| "Unknown error occurred".to_string());
            println!("Error response: {:?}", error);
            tauri::async_runtime::spawn(async move {
                if let Err(e) = window.emit(
                    event_name,
                    ChunkPayload {
                        request_id,
                        chunk: error.into_bytes(),
                    },
                ) {
                    println!("Failed to emit chunk payload: {:?}", e);
                }
                if let Err(e) = window.emit(event_name, EndPayload { request_id, status: 0 }) {
                    println!("Failed to emit end payload: {:?}", e);
                }
            });
            StreamResponse {
                request_id,
                status: 599,
                status_text: "Error".to_string(),
                headers: HashMap::new(),
            }
        }
    };
    // println!("Response: {:?}", response);
    Ok(response)
}


@@ -3,108 +3,61 @@ (config restructured to the Tauri v2 schema; new version below)
  "build": {
    "beforeBuildCommand": "yarn export",
    "beforeDevCommand": "yarn export:dev",
    "frontendDist": "../out",
    "devUrl": "http://localhost:3000"
  },
  "bundle": {
    "active": true,
    "category": "DeveloperTool",
    "copyright": "2023, Zhang Yifei All Rights Reserved.",
    "targets": "all",
    "externalBin": [],
    "icon": [
      "icons/32x32.png",
      "icons/128x128.png",
      "icons/128x128@2x.png",
      "icons/icon.icns",
      "icons/icon.ico"
    ],
    "windows": {
      "certificateThumbprint": null,
      "digestAlgorithm": "sha256",
      "timestampUrl": ""
    },
    "longDescription": "NextChat is a cross-platform ChatGPT client, including Web/Win/Linux/OSX/PWA.",
    "macOS": {
      "entitlements": null,
      "exceptionDomain": "",
      "frameworks": [],
      "providerShortName": null,
      "signingIdentity": null
    },
    "resources": [],
    "shortDescription": "NextChat App",
    "linux": {
      "deb": {
        "depends": []
      }
    },
    "createUpdaterArtifacts": "v1Compatible"
  },
  "productName": "NextChat",
  "mainBinaryName": "NextChat",
  "version": "2.15.8",
  "identifier": "com.yida.chatgpt.next.web",
  "plugins": {
    "updater": {
      "windows": {
        "installMode": "passive"
      },
      "endpoints": [
        "https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web/releases/latest/download/latest.json"
      ],
      "pubkey": "dW50cnVzdGVkIGNvbW1lbnQ6IG1pbmlzaWduIHB1YmxpYyBrZXk6IERFNDE4MENFM0Y1RTZBOTQKUldTVWFsNC96b0JCM3RqM2NmMnlFTmxIaStRaEJrTHNOU2VqRVlIV1hwVURoWUdVdEc1eDcxVEYK"
    }
  },
  "app": {
    "withGlobalTauri": true,
    "windows": [
      {
        "fullscreen": false,
@@ -113,8 +66,12 @@
         "title": "NextChat",
         "width": 960,
         "hiddenTitle": true,
-        "titleBarStyle": "Overlay"
+        "titleBarStyle": "Overlay",
+        "useHttpsScheme": false
       }
-    ]
+    ],
+    "security": {
+      "csp": null
+    }
   }
 }

test/stream-fetch.test.ts (new file, 167 lines)

@@ -0,0 +1,167 @@
import { jest } from '@jest/globals';
// Create mocks
const mockInvoke = jest.fn();
const mockListen = jest.fn();
// Mock Tauri modules before import
jest.mock('@tauri-apps/api/core', () => ({
invoke: (...args: any[]) => mockInvoke(...args)
}));
jest.mock('@tauri-apps/api/event', () => ({
listen: (...args: any[]) => mockListen(...args)
}));
// Import function after mocking
import { fetch as streamFetch } from '../app/utils/stream';
import { invoke } from '@tauri-apps/api/core';
import { listen } from '@tauri-apps/api/event';
// Mock global objects
// @ts-ignore
global.TransformStream = class TransformStream {
writable = {
getWriter: () => ({
ready: Promise.resolve(),
// @ts-ignore
write: jest.fn().mockResolvedValue(undefined as any),
// @ts-ignore
close: jest.fn().mockResolvedValue(undefined as any)
})
};
readable = {} as any;
} as any;
// Add Response to global context
global.Response = class Response {
constructor(public body: any, public init: any) {
Object.assign(this, init);
}
status: number = 200;
statusText: string = 'OK';
headers: any = {};
} as any;
describe('stream-fetch', () => {
let originalFetch: any;
let originalWindow: any;
beforeAll(() => {
originalFetch = global.fetch;
originalWindow = global.window;
// Mock window object
Object.defineProperty(global, 'window', {
value: {
__TAURI__: true,
__TAURI_INTERNALS__: {
transformCallback: (callback: Function) => callback,
invoke: mockInvoke
},
fetch: jest.fn(),
Headers: class Headers {
constructor() {}
entries() { return []; }
},
TextEncoder: class TextEncoder {
encode(text: string) {
return new Uint8Array(Array.from(text).map(c => c.charCodeAt(0)));
}
},
navigator: {
userAgent: 'test-agent'
},
Response: class Response {
constructor(public body: any, public init: any) {}
status: number = 200;
statusText: string = 'OK';
headers: any = {};
}
},
writable: true,
});
});
afterAll(() => {
Object.defineProperty(global, 'window', {
value: originalWindow,
writable: true,
});
global.fetch = originalFetch;
});
beforeEach(() => {
jest.clearAllMocks();
// Set up default behavior for listen
mockListen.mockImplementation(() => Promise.resolve(() => {}));
});
test('should use native fetch when Tauri is unavailable', async () => {
// Temporarily remove __TAURI__
const tempWindow = { ...window };
delete (tempWindow as any).__TAURI__;
Object.defineProperty(global, 'window', {
value: tempWindow,
writable: true,
});
await streamFetch('https://example.com');
// Check that native fetch was called
expect(window.fetch).toHaveBeenCalledWith('https://example.com', undefined);
// Restore __TAURI__
Object.defineProperty(global, 'window', {
value: { ...tempWindow, __TAURI__: true },
writable: true,
});
});
test('should use Tauri API when Tauri is available', async () => {
// Mock successful response from Tauri
// @ts-ignore
mockInvoke.mockResolvedValue({
request_id: 123,
status: 200,
status_text: 'OK',
headers: {}
} as any);
// Call fetch function
await streamFetch('https://example.com');
// Check that Tauri invoke was called with correct parameters
expect(mockInvoke).toHaveBeenCalledWith(
'stream_fetch',
expect.objectContaining({
url: 'https://example.com'
}),
undefined
);
});
test('should add abort signal to request', async () => {
// Mock successful response from Tauri
// @ts-ignore
mockInvoke.mockResolvedValue({
request_id: 123,
status: 200,
status_text: 'OK',
headers: {}
} as any);
// Create AbortController
const controller = new AbortController();
const addEventListenerSpy = jest.spyOn(controller.signal, 'addEventListener');
// Call fetch with signal
await streamFetch('https://example.com', {
signal: controller.signal
});
// Check that signal was added
expect(addEventListenerSpy).toHaveBeenCalledWith('abort', expect.any(Function));
});
});
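
The tests above exercise one branch point: whether `window.__TAURI__` is present decides between the Tauri bridge and native `fetch`. A minimal sketch of that selection logic, with illustrative names and shapes assumed here rather than taken from the actual `app/utils/stream.ts`:

```typescript
// Illustrative sketch only: `MaybeTauriWindow` and `pickFetch` are assumed
// names, not the real exports of app/utils/stream.ts.
type MaybeTauriWindow = {
  __TAURI__?: unknown;
  fetch: (url: string) => Promise<unknown>;
};

function pickFetch(
  win: MaybeTauriWindow,
  tauriFetch: (url: string) => Promise<unknown>,
): (url: string) => Promise<unknown> {
  // Route through the Tauri stream_fetch bridge only when running inside
  // Tauri; otherwise fall back to the environment's native fetch.
  return win.__TAURI__ ? tauriFetch : win.fetch.bind(win);
}
```

This is why the first test deletes `__TAURI__` before calling `streamFetch` and the second leaves it set: each forces one side of the branch.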

yarn.lock

File diff suppressed because it is too large