Mirror of https://github.com/songquanpeng/one-api.git, synced 2025-11-04 15:53:42 +08:00

Compare commits: v0.4.7-alp ... v0.4.9-alp (33 commits)
Commits:

- 0cea9e6a6f
- b1b3651e84
- 8f6bd51f58
- bddbf57104
- 9a16b0f9e5
- 3530309a31
- 733ebc067b
- 6a8567ac14
- aabc546691
- 1c82b06f35
- 9e4109672a
- 64c35334e6
- 0ce572b405
- a326ac4b28
- 05b0e77839
- 51f19470bc
- 737672fb0b
- 0941e294bf
- 431d505f79
- f0dc7f3f06
- 99fed1f850
- 4dc5388a80
- f81f4c60b2
- c613d8b6b2
- 7adac1c09c
- 6f05128368
- 9b178a28a3
- 4a6a7f4635
- 6b1a24d650
- 94ba3dd024
- f6eb4e5628
- 57bd907f83
- dd8e8d5ee8
.github/ISSUE_TEMPLATE/bug_report.md (4 changed lines, vendored)

@@ -8,11 +8,13 @@ assignees: ''
 ---
 
 **例行检查**
 
 [//]: # (方框内删除已有的空格,填 x 号)
 + [ ] 我已确认目前没有类似 issue
 + [ ] 我已确认我已升级到最新版本
++ [ ] 我已完整查看过项目 README,尤其是常见问题部分
 + [ ] 我理解并愿意跟进此 issue,协助测试和提供反馈
-+ [ ] 我理解并认可上述内容,并理解项目维护者精力有限,不遵循规则的 issue 可能会被无视或直接关闭
++ [ ] 我理解并认可上述内容,并理解项目维护者精力有限,**不遵循规则的 issue 可能会被无视或直接关闭**
 
 **问题描述**
.github/ISSUE_TEMPLATE/config.yml (3 changed lines, vendored)

@@ -6,6 +6,3 @@ contact_links:
  - name: 赞赏支持
    url: https://iamazing.cn/page/reward
    about: 请作者喝杯咖啡,以激励作者持续开发
- - name: 付费部署或定制功能
-   url: https://openai.justsong.cn/
-   about: 加群后联系群主
.github/ISSUE_TEMPLATE/feature_request.md (5 changed lines, vendored)

@@ -8,10 +8,13 @@ assignees: ''
 ---
 
 **例行检查**
 
 [//]: # (方框内删除已有的空格,填 x 号)
 + [ ] 我已确认目前没有类似 issue
++ [ ] 我已确认我已升级到最新版本
++ [ ] 我已完整查看过项目 README,已确定现有版本无法满足需求
 + [ ] 我理解并愿意跟进此 issue,协助测试和提供反馈
-+ [ ] 我理解并认可上述内容,并理解项目维护者精力有限,不遵循规则的 issue 可能会被无视或直接关闭
++ [ ] 我理解并认可上述内容,并理解项目维护者精力有限,**不遵循规则的 issue 可能会被无视或直接关闭**
 
 **功能描述**
README.en.md (20 changed lines)

@@ -10,7 +10,7 @@
 
 # One API
 
-_✨ The all-in-one OpenAI interface, integrates various API access methods, ready to use ✨_
+_✨ An OpenAI key management & redistribution system, easy to deploy & use ✨_
 
 </div>

@@ -57,17 +57,14 @@ _✨ The all-in-one OpenAI interface, integrates various API access methods, rea
 > **Note**: The latest image pulled from Docker may be an `alpha` release. Specify the version manually if you require stability.
 
 ## Features
-1. Supports multiple API access channels. Welcome PRs or issue submissions for additional channels:
+1. Supports multiple API access channels:
     + [x] Official OpenAI channel (support proxy configuration)
     + [x] **Azure OpenAI API**
    + [x] [API Distribute](https://api.gptjk.top/register?aff=QGxj)
    + [x] [OpenAI-SB](https://openai-sb.com)
    + [x] [API2D](https://api2d.com/r/197971)
    + [x] [OhMyGPT](https://aigptx.top?aff=uFpUl2Kf)
    + [x] [AI Proxy](https://aiproxy.io/?i=OneAPI) (invitation code: `OneAPI`)
    + [x] [API2GPT](http://console.api2gpt.com/m/00002S)
    + [x] [CloseAI](https://console.closeai-asia.com/r/2412)
    + [x] [AI.LS](https://ai.ls)
    + [x] [OpenAI Max](https://openaimax.com)
     + [x] Custom channel: Various third-party proxy services not included in the list
 2. Supports access to multiple channels through **load balancing**.
 3. Supports **stream mode** that enables typewriter-like effect through stream transmission.

@@ -174,6 +171,15 @@ Refer to [#175](https://github.com/songquanpeng/one-api/issues/175) for detailed
 If you encounter a blank page after deployment, refer to [#97](https://github.com/songquanpeng/one-api/issues/97) for possible solutions.
 
 ### Deployment on Third-Party Platforms
+<details>
+<summary><strong>Deploy on Sealos</strong></summary>
+<div>
+
+Please refer to [this tutorial](https://github.com/c121914yu/FastGPT/blob/main/docs/deploy/one-api/sealos.md).
+
+</div>
+</details>
+
 <details>
 <summary><strong>Deployment on Zeabur</strong></summary>
 <div>

@@ -240,7 +246,7 @@ If the channel ID is not provided, load balancing will be used to distribute the
     + Example: `CHANNEL_UPDATE_FREQUENCY=1440`
 8. `CHANNEL_TEST_FREQUENCY`: When set, it periodically tests the channels, with the unit in minutes. If not set, no test will happen.
     + Example: `CHANNEL_TEST_FREQUENCY=1440`
-9. `REQUEST_INTERVAL`: The time interval (in seconds) between requests when updating channel balances and testing channel availability. Default is no interval.
+9. `POLLING_INTERVAL`: The time interval (in seconds) between requests when updating channel balances and testing channel availability. Default is no interval.
     + Example: `POLLING_INTERVAL=5`
 
 ### Command Line Parameters
README.md (40 changed lines)

@@ -56,22 +56,19 @@ _✨ All in one 的 OpenAI 接口,整合各种 API 访问方式,开箱即用
 > **Warning**:从 `v0.3` 版本升级到 `v0.4` 版本需要手动迁移数据库,请手动执行[数据库迁移脚本](./bin/migration_v0.3-v0.4.sql)。
 
 ## 功能
-1. 支持多种 API 访问渠道,欢迎 PR 或提 issue 添加更多渠道:
-   + [x] OpenAI 官方通道(支持配置代理)
+1. 支持多种 API 访问渠道:
+   + [x] OpenAI 官方通道(支持配置镜像)
    + [x] **Azure OpenAI API**
    + [x] [API Distribute](https://api.gptjk.top/register?aff=QGxj)
    + [x] [OpenAI-SB](https://openai-sb.com)
    + [x] [API2D](https://api2d.com/r/197971)
    + [x] [OhMyGPT](https://aigptx.top?aff=uFpUl2Kf)
    + [x] [AI Proxy](https://aiproxy.io/?i=OneAPI) (邀请码:`OneAPI`)
    + [x] [API2GPT](http://console.api2gpt.com/m/00002S)
    + [x] [CloseAI](https://console.closeai-asia.com/r/2412)
    + [x] [AI.LS](https://ai.ls)
    + [x] [OpenAI Max](https://openaimax.com)
    + [x] 自定义渠道:例如各种未收录的第三方代理服务
 2. 支持通过**负载均衡**的方式访问多个渠道。
 3. 支持 **stream 模式**,可以通过流式传输实现打字机效果。
 4. 支持**多机部署**,[详见此处](#多机部署)。
-5. 支持**令牌管理**,设置令牌的过期时间和使用次数。
+5. 支持**令牌管理**,设置令牌的过期时间和额度。
 6. 支持**兑换码管理**,支持批量生成和导出兑换码,可使用兑换码为账户进行充值。
 7. 支持**通道管理**,批量创建通道。
 8. 支持**用户分组**以及**渠道分组**,支持为不同分组设置不同的倍率。

@@ -80,16 +77,17 @@ _✨ All in one 的 OpenAI 接口,整合各种 API 访问方式,开箱即用
 11. 支持**用户邀请奖励**。
 12. 支持以美元为单位显示额度。
 13. 支持发布公告,设置充值链接,设置新用户初始额度。
-14. 支持丰富的**自定义**设置,
+14. 支持模型映射,重定向用户的请求模型。
+15. 支持丰富的**自定义**设置,
     1. 支持自定义系统名称,logo 以及页脚。
     2. 支持自定义首页和关于页面,可以选择使用 HTML & Markdown 代码进行自定义,或者使用一个单独的网页通过 iframe 嵌入。
-15. 支持通过系统访问令牌访问管理 API。
-16. 支持 Cloudflare Turnstile 用户校验。
-17. 支持用户管理,支持**多种用户登录注册方式**:
+16. 支持通过系统访问令牌访问管理 API。
+17. 支持 Cloudflare Turnstile 用户校验。
+18. 支持用户管理,支持**多种用户登录注册方式**:
     + 邮箱登录注册以及通过邮箱进行密码重置。
     + [GitHub 开放授权](https://github.com/settings/applications/new)。
     + 微信公众号授权(需要额外部署 [WeChat Server](https://github.com/songquanpeng/wechat-server))。
-18. 未来其他大模型开放 API 后,将第一时间支持,并将其封装成同样的 API 访问方式。
+19. 未来其他大模型开放 API 后,将第一时间支持,并将其封装成同样的 API 访问方式。
 
 ## 部署
 ### 基于 Docker 进行部署

@@ -114,6 +112,7 @@ server{
           proxy_set_header X-Forwarded-For $remote_addr;
           proxy_cache_bypass $http_upgrade;
           proxy_set_header Accept-Encoding gzip;
+          proxy_read_timeout 300s;  # GPT-4 需要较长的超时时间,请自行调整
    }
 }
 ```

@@ -195,6 +194,17 @@ docker run --name chatgpt-web -d -p 3002:3002 -e OPENAI_API_BASE_URL=https://ope
 注意修改端口号、`OPENAI_API_BASE_URL` 和 `OPENAI_API_KEY`。
 
 ### 部署到第三方平台
+<details>
+<summary><strong>部署到 Sealos </strong></summary>
+<div>
+
+> Sealos 可视化部署,仅需 1 分钟。
+
+参考这个[教程](https://github.com/c121914yu/FastGPT/blob/main/docs/deploy/one-api/sealos.md)中 1~5 步。
+
+</div>
+</details>
+
 <details>
 <summary><strong>部署到 Zeabur</strong></summary>
 <div>

@@ -251,7 +261,7 @@ graph LR
    + 例子:`SESSION_SECRET=random_string`
 3. `SQL_DSN`:设置之后将使用指定数据库而非 SQLite,请使用 MySQL 8.0 版本。
    + 例子:`SQL_DSN=root:123456@tcp(localhost:3306)/oneapi`
-4. `FRONTEND_BASE_URL`:设置之后将使用指定的前端地址,而非后端地址。
+4. `FRONTEND_BASE_URL`:设置之后将使用指定的前端地址,而非后端地址,仅限从服务器设置。
    + 例子:`FRONTEND_BASE_URL=https://openai.justsong.cn`
 5. `SYNC_FREQUENCY`:设置之后将定期与数据库同步配置,单位为秒,未设置则不进行同步。
    + 例子:`SYNC_FREQUENCY=60`

@@ -261,7 +271,7 @@ graph LR
    + 例子:`CHANNEL_UPDATE_FREQUENCY=1440`
 8. `CHANNEL_TEST_FREQUENCY`:设置之后将定期检查渠道,单位为分钟,未设置则不进行检查。
    + 例子:`CHANNEL_TEST_FREQUENCY=1440`
-9. `REQUEST_INTERVAL`:批量更新渠道余额以及测试可用性时的请求间隔,单位为秒,默认无间隔。
+9. `POLLING_INTERVAL`:批量更新渠道余额以及测试可用性时的请求间隔,单位为秒,默认无间隔。
    + 例子:`POLLING_INTERVAL=5`
 
 ### 命令行参数

@@ -298,6 +308,8 @@ https://openai.justsong.cn
 5. ChatGPT Next Web 报错:`Failed to fetch`
    + 部署的时候不要设置 `BASE_URL`。
    + 检查你的接口地址和 API Key 有没有填对。
+6. 报错:`当前分组负载已饱和,请稍后再试`
+   + 上游通道 429 了。
 
 ## 相关项目
 [FastGPT](https://github.com/c121914yu/FastGPT): 三分钟搭建 AI 知识库
@@ -12,14 +12,16 @@ total_time=0
 times=()
 
 for ((i=1; i<=count; i++)); do
-  result=$(curl -o /dev/null -s -w %{time_total}\\n \
+  result=$(curl -o /dev/null -s -w "%{http_code} %{time_total}\\n" \
            https://"$domain"/v1/chat/completions \
            -H "Content-Type: application/json" \
            -H "Authorization: Bearer $key" \
            -d '{"messages": [{"content": "echo hi", "role": "user"}], "model": "gpt-3.5-turbo", "stream": false, "max_tokens": 1}')
-  echo "$result"
-  total_time=$(bc <<< "$total_time + $result")
-  times+=("$result")
+  http_code=$(echo "$result" | awk '{print $1}')
+  time=$(echo "$result" | awk '{print $2}')
+  echo "HTTP status code: $http_code, Time taken: $time"
+  total_time=$(bc <<< "$total_time + $time")
+  times+=("$time")
 done
 
 average_time=$(echo "scale=4; $total_time / $count" | bc)
@@ -72,7 +72,7 @@ var RootUserEmail = ""
 
 var IsMasterNode = os.Getenv("NODE_TYPE") != "slave"
 
-var requestInterval, _ = strconv.Atoi(os.Getenv("REQUEST_INTERVAL"))
+var requestInterval, _ = strconv.Atoi(os.Getenv("POLLING_INTERVAL"))
 var RequestInterval = time.Duration(requestInterval) * time.Second
 
 const (

@@ -148,6 +148,7 @@ const (
 	ChannelTypeAIProxy   = 10
 	ChannelTypePaLM      = 11
 	ChannelTypeAPI2GPT   = 12
+	ChannelTypeAIGC2D    = 13
 )
 
 var ChannelBaseURLs = []string{

@@ -164,4 +165,5 @@ var ChannelBaseURLs = []string{
 	"https://api.aiproxy.io",       // 10
 	"",                             // 11
 	"https://api.api2gpt.com",      // 12
+	"https://api.aigc2d.com",       // 13
 }
@@ -31,7 +31,7 @@ var ModelRatio = map[string]float64{
 	"curie":                   10,
 	"babbage":                 10,
 	"ada":                     10,
-	"text-embedding-ada-002":  0.2,
+	"text-embedding-ada-002":  0.05,
 	"text-search-ada-doc-001": 10,
 	"text-moderation-stable":  0.1,
 	"text-moderation-latest":  0.1,
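Lowering the `text-embedding-ada-002` ratio from 0.2 to 0.05 makes the same embedding request roughly four times cheaper, assuming (as the table suggests) that charged quota scales linearly with the model's ratio. A sketch under that assumption (`relativeCost` is an illustrative helper, not the project's actual billing function):

```go
package main

import "fmt"

// relativeCost assumes charged quota is proportional to token count times the
// model's ratio; it ignores group ratios and completion-token weighting.
func relativeCost(tokens int, ratio float64) float64 {
	return float64(tokens) * ratio
}

func main() {
	before := relativeCost(1000, 0.2)  // old embedding ratio
	after := relativeCost(1000, 0.05)  // new embedding ratio
	fmt.Printf("cost drop factor: %.1f\n", before/after) // about 4x cheaper
}
```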
@@ -32,6 +32,9 @@ func GetSubscription(c *gin.Context) {
 	if common.DisplayInCurrencyEnabled {
 		amount /= common.QuotaPerUnit
 	}
+	if token != nil && token.UnlimitedQuota {
+		amount = 100000000
+	}
 	subscription := OpenAISubscriptionResponse{
 		Object:             "billing_subscription",
 		HasPaymentMethod:   true,
@@ -61,6 +61,14 @@ type API2GPTUsageResponse struct {
 	TotalRemaining float64 `json:"total_remaining"`
 }
 
+type APGC2DGPTUsageResponse struct {
+	//Grants         interface{} `json:"grants"`
+	Object         string  `json:"object"`
+	TotalAvailable float64 `json:"total_available"`
+	TotalGranted   float64 `json:"total_granted"`
+	TotalUsed      float64 `json:"total_used"`
+}
+
 // GetAuthHeader get auth header
 func GetAuthHeader(token string) http.Header {
 	h := http.Header{}

@@ -150,6 +158,21 @@ func updateChannelAPI2GPTBalance(channel *model.Channel) (float64, error) {
 	return response.TotalRemaining, nil
 }
 
+func updateChannelAIGC2DBalance(channel *model.Channel) (float64, error) {
+	url := "https://api.aigc2d.com/dashboard/billing/credit_grants"
+	body, err := GetResponseBody("GET", url, channel, GetAuthHeader(channel.Key))
+	if err != nil {
+		return 0, err
+	}
+	response := APGC2DGPTUsageResponse{}
+	err = json.Unmarshal(body, &response)
+	if err != nil {
+		return 0, err
+	}
+	channel.UpdateBalance(response.TotalAvailable)
+	return response.TotalAvailable, nil
+}
+
 func updateChannelBalance(channel *model.Channel) (float64, error) {
 	baseURL := common.ChannelBaseURLs[channel.Type]
 	switch channel.Type {

@@ -167,6 +190,8 @@ func updateChannelBalance(channel *model.Channel) (float64, error) {
 		return updateChannelAIProxyBalance(channel)
 	case common.ChannelTypeAPI2GPT:
 		return updateChannelAPI2GPTBalance(channel)
+	case common.ChannelTypeAIGC2D:
+		return updateChannelAIGC2DBalance(channel)
 	default:
 		return 0, errors.New("尚未实现")
 	}
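The new balance updater decodes the `credit_grants` payload into `APGC2DGPTUsageResponse` and reports `TotalAvailable` as the channel balance. A standalone sketch of that decoding step (the sample JSON body is invented for illustration, not captured from the real API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Same shape as the APGC2DGPTUsageResponse struct added in the hunk above.
type APGC2DGPTUsageResponse struct {
	Object         string  `json:"object"`
	TotalAvailable float64 `json:"total_available"`
	TotalGranted   float64 `json:"total_granted"`
	TotalUsed      float64 `json:"total_used"`
}

// decodeUsage stands in for the json.Unmarshal step of
// updateChannelAIGC2DBalance.
func decodeUsage(body []byte) (APGC2DGPTUsageResponse, error) {
	var response APGC2DGPTUsageResponse
	err := json.Unmarshal(body, &response)
	return response, err
}

func main() {
	body := []byte(`{"object":"credit_summary","total_available":7.5,"total_granted":10,"total_used":2.5}`)
	response, err := decodeUsage(body)
	if err != nil {
		panic(err)
	}
	// The value passed to channel.UpdateBalance is TotalAvailable.
	fmt.Println(response.TotalAvailable) // 7.5
}
```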
@@ -224,6 +224,24 @@ func init() {
 			Root:       "text-moderation-stable",
 			Parent:     nil,
 		},
+		{
+			Id:         "text-davinci-edit-001",
+			Object:     "model",
+			Created:    1677649963,
+			OwnedBy:    "openai",
+			Permission: permission,
+			Root:       "text-davinci-edit-001",
+			Parent:     nil,
+		},
+		{
+			Id:         "code-davinci-edit-001",
+			Object:     "model",
+			Created:    1677649963,
+			OwnedBy:    "openai",
+			Permission: permission,
+			Root:       "code-davinci-edit-001",
+			Parent:     nil,
+		},
 	}
 	openAIModelsMap = make(map[string]OpenAIModels)
 	for _, model := range openAIModels {
@@ -4,6 +4,7 @@ import (
 	"bufio"
 	"bytes"
 	"encoding/json"
+	"errors"
 	"fmt"
 	"github.com/gin-gonic/gin"
 	"io"

@@ -26,9 +27,46 @@ func relayTextHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode {
 			return errorWrapper(err, "bind_request_body_failed", http.StatusBadRequest)
 		}
 	}
-	if relayMode == RelayModeModeration && textRequest.Model == "" {
+	if relayMode == RelayModeModerations && textRequest.Model == "" {
 		textRequest.Model = "text-moderation-latest"
 	}
+	// request validation
+	if textRequest.Model == "" {
+		return errorWrapper(errors.New("model is required"), "required_field_missing", http.StatusBadRequest)
+	}
+	switch relayMode {
+	case RelayModeCompletions:
+		if textRequest.Prompt == "" {
+			return errorWrapper(errors.New("field prompt is required"), "required_field_missing", http.StatusBadRequest)
+		}
+	case RelayModeChatCompletions:
+		if textRequest.Messages == nil || len(textRequest.Messages) == 0 {
+			return errorWrapper(errors.New("field messages is required"), "required_field_missing", http.StatusBadRequest)
+		}
+	case RelayModeEmbeddings:
+	case RelayModeModerations:
+		if textRequest.Input == "" {
+			return errorWrapper(errors.New("field input is required"), "required_field_missing", http.StatusBadRequest)
+		}
+	case RelayModeEdits:
+		if textRequest.Instruction == "" {
+			return errorWrapper(errors.New("field instruction is required"), "required_field_missing", http.StatusBadRequest)
+		}
+	}
+	// map model name
+	modelMapping := c.GetString("model_mapping")
+	isModelMapped := false
+	if modelMapping != "" {
+		modelMap := make(map[string]string)
+		err := json.Unmarshal([]byte(modelMapping), &modelMap)
+		if err != nil {
+			return errorWrapper(err, "unmarshal_model_mapping_failed", http.StatusInternalServerError)
+		}
+		if modelMap[textRequest.Model] != "" {
+			textRequest.Model = modelMap[textRequest.Model]
+			isModelMapped = true
+		}
+	}
 	baseURL := common.ChannelBaseURLs[channelType]
 	requestURL := c.Request.URL.String()
 	if c.GetString("base_url") != "" {

@@ -64,7 +102,7 @@ func relayTextHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode {
 		promptTokens = countTokenMessages(textRequest.Messages, textRequest.Model)
 	case RelayModeCompletions:
 		promptTokens = countTokenInput(textRequest.Prompt, textRequest.Model)
-	case RelayModeModeration:
+	case RelayModeModerations:
 		promptTokens = countTokenInput(textRequest.Input, textRequest.Model)
 	}
 	preConsumedTokens := common.PreConsumedQuota

@@ -90,7 +128,17 @@ func relayTextHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode {
 			return errorWrapper(err, "pre_consume_token_quota_failed", http.StatusForbidden)
 		}
 	}
-	req, err := http.NewRequest(c.Request.Method, fullRequestURL, c.Request.Body)
+	var requestBody io.Reader
+	if isModelMapped {
+		jsonStr, err := json.Marshal(textRequest)
+		if err != nil {
+			return errorWrapper(err, "marshal_text_request_failed", http.StatusInternalServerError)
+		}
+		requestBody = bytes.NewBuffer(jsonStr)
+	} else {
+		requestBody = c.Request.Body
+	}
+	req, err := http.NewRequest(c.Request.Method, fullRequestURL, requestBody)
 	if err != nil {
 		return errorWrapper(err, "new_request_failed", http.StatusInternalServerError)
 	}

@@ -124,7 +172,10 @@ func relayTextHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode {
 	defer func() {
 		if consumeQuota {
 			quota := 0
-			completionRatio := 1.333333 // default for gpt-3
+			completionRatio := 1.0
+			if strings.HasPrefix(textRequest.Model, "gpt-3.5") {
+				completionRatio = 1.333333
+			}
 			if strings.HasPrefix(textRequest.Model, "gpt-4") {
 				completionRatio = 2
 			}

@@ -139,17 +190,29 @@ func relayTextHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode {
 			if ratio != 0 && quota <= 0 {
 				quota = 1
 			}
+			totalTokens := promptTokens + completionTokens
+			if totalTokens == 0 {
+				// in this case, must be some error happened
+				// we cannot just return, because we may have to return the pre-consumed quota
+				quota = 0
+			}
 			quotaDelta := quota - preConsumedQuota
 			err := model.PostConsumeTokenQuota(tokenId, quotaDelta)
 			if err != nil {
 				common.SysError("error consuming token remain quota: " + err.Error())
 			}
-			tokenName := c.GetString("token_name")
-			logContent := fmt.Sprintf("模型倍率 %.2f,分组倍率 %.2f", modelRatio, groupRatio)
-			model.RecordConsumeLog(userId, promptTokens, completionTokens, textRequest.Model, tokenName, quota, logContent)
-			model.UpdateUserUsedQuotaAndRequestCount(userId, quota)
-			channelId := c.GetInt("channel_id")
-			model.UpdateChannelUsedQuota(channelId, quota)
+			err = model.CacheUpdateUserQuota(userId)
+			if err != nil {
+				common.SysError("error update user quota cache: " + err.Error())
+			}
+			if quota != 0 {
+				tokenName := c.GetString("token_name")
+				logContent := fmt.Sprintf("模型倍率 %.2f,分组倍率 %.2f", modelRatio, groupRatio)
+				model.RecordConsumeLog(userId, promptTokens, completionTokens, textRequest.Model, tokenName, quota, logContent)
+				model.UpdateUserUsedQuotaAndRequestCount(userId, quota)
+				channelId := c.GetInt("channel_id")
+				model.UpdateChannelUsedQuota(channelId, quota)
+			}
 		}
 	}()
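The model-mapping step introduced above parses the channel's `model_mapping` field (a JSON object from requested model name to upstream model name) and substitutes the model before relaying; when a substitution happens, the request body is re-marshalled so the upstream sees the new name. A standalone sketch of that lookup (`mapModel` is an illustrative helper, and the mapping content is an invented example, not a recommended configuration):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// mapModel reproduces the remapping logic from the hunk above: an empty
// mapping is a no-op, otherwise the JSON object is consulted and the second
// return value reports whether a substitution happened.
func mapModel(modelMapping string, model string) (string, bool, error) {
	if modelMapping == "" {
		return model, false, nil
	}
	modelMap := make(map[string]string)
	if err := json.Unmarshal([]byte(modelMapping), &modelMap); err != nil {
		return model, false, err
	}
	if modelMap[model] != "" {
		return modelMap[model], true, nil
	}
	return model, false, nil
}

func main() {
	mapping := `{"gpt-4": "gpt-3.5-turbo"}` // invented example mapping
	model, mapped, err := mapModel(mapping, "gpt-4")
	if err != nil {
		panic(err)
	}
	fmt.Println(model, mapped) // gpt-3.5-turbo true
}
```

When `mapped` is true, the relay cannot forward the original request body unchanged, which is why the later hunk marshals `textRequest` into a fresh buffer instead of streaming `c.Request.Body`.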
@@ -19,22 +19,24 @@ const (
 	RelayModeChatCompletions
 	RelayModeCompletions
 	RelayModeEmbeddings
-	RelayModeModeration
+	RelayModeModerations
 	RelayModeImagesGenerations
+	RelayModeEdits
 )
 
 // https://platform.openai.com/docs/api-reference/chat
 
 type GeneralOpenAIRequest struct {
-	Model       string    `json:"model"`
-	Messages    []Message `json:"messages"`
-	Prompt      any       `json:"prompt"`
-	Stream      bool      `json:"stream"`
-	MaxTokens   int       `json:"max_tokens"`
-	Temperature float64   `json:"temperature"`
-	TopP        float64   `json:"top_p"`
-	N           int       `json:"n"`
-	Input       any       `json:"input"`
+	Model       string    `json:"model,omitempty"`
+	Messages    []Message `json:"messages,omitempty"`
+	Prompt      any       `json:"prompt,omitempty"`
+	Stream      bool      `json:"stream,omitempty"`
+	MaxTokens   int       `json:"max_tokens,omitempty"`
+	Temperature float64   `json:"temperature,omitempty"`
+	TopP        float64   `json:"top_p,omitempty"`
+	N           int       `json:"n,omitempty"`
+	Input       any       `json:"input,omitempty"`
+	Instruction string    `json:"instruction,omitempty"`
 }
 
 type ChatRequest struct {

@@ -99,9 +101,11 @@ func Relay(c *gin.Context) {
 	} else if strings.HasPrefix(c.Request.URL.Path, "/v1/embeddings") {
 		relayMode = RelayModeEmbeddings
 	} else if strings.HasPrefix(c.Request.URL.Path, "/v1/moderations") {
-		relayMode = RelayModeModeration
+		relayMode = RelayModeModerations
 	} else if strings.HasPrefix(c.Request.URL.Path, "/v1/images/generations") {
 		relayMode = RelayModeImagesGenerations
+	} else if strings.HasPrefix(c.Request.URL.Path, "/v1/edits") {
+		relayMode = RelayModeEdits
 	}
 	var err *OpenAIErrorWithStatusCode
 	switch relayMode {
@@ -456,5 +456,7 @@
   "提示": "Prompt",
   "补全": "Completion",
   "消耗额度": "Used Quota",
-  "可选值": "Optional Values"
+  "可选值": "Optional Values",
+  "渠道不存在:%d": "Channel does not exist: %d",
+  "数据库一致性已被破坏,请联系管理员": "Database consistency has been broken, please contact the administrator"
 }
main.go (11 changed lines)

@@ -4,7 +4,6 @@ import (
 	"embed"
 	"github.com/gin-contrib/sessions"
 	"github.com/gin-contrib/sessions/cookie"
-	"github.com/gin-contrib/sessions/redis"
 	"github.com/gin-gonic/gin"
 	"one-api/common"
 	"one-api/controller"

@@ -82,14 +81,8 @@ func main() {
 	server.Use(middleware.CORS())
 
 	// Initialize session store
-	if common.RedisEnabled {
-		opt := common.ParseRedisOption()
-		store, _ := redis.NewStore(opt.MinIdleConns, opt.Network, opt.Addr, opt.Password, []byte(common.SessionSecret))
-		server.Use(sessions.Sessions("session", store))
-	} else {
-		store := cookie.NewStore([]byte(common.SessionSecret))
-		server.Use(sessions.Sessions("session", store))
-	}
+	store := cookie.NewStore([]byte(common.SessionSecret))
+	server.Use(sessions.Sessions("session", store))
 
 	router.SetRouter(server, buildFS, indexPage)
 	var port = os.Getenv("PORT")
@@ -75,9 +75,14 @@ func Distribute() func(c *gin.Context) {
 			}
 			channel, err = model.CacheGetRandomSatisfiedChannel(userGroup, modelRequest.Model)
 			if err != nil {
+				message := "无可用渠道"
+				if channel != nil {
+					common.SysError(fmt.Sprintf("渠道不存在:%d", channel.Id))
+					message = "数据库一致性已被破坏,请联系管理员"
+				}
 				c.JSON(http.StatusServiceUnavailable, gin.H{
 					"error": gin.H{
-						"message": "无可用渠道",
+						"message": message,
 						"type":    "one_api_error",
 					},
 				})

@@ -88,6 +93,7 @@ func Distribute() func(c *gin.Context) {
 		c.Set("channel", channel.Type)
 		c.Set("channel_id", channel.Id)
 		c.Set("channel_name", channel.Name)
+		c.Set("model_mapping", channel.ModelMapping)
 		c.Request.Header.Set("Authorization", fmt.Sprintf("Bearer %s", channel.Key))
 		c.Set("base_url", channel.BaseURL)
 		if channel.Type == common.ChannelTypeAzure {
@@ -24,6 +24,7 @@ func GetRandomSatisfiedChannel(group string, model string) (*Channel, error) {
 		return nil, err
 	}
 	channel := Channel{}
+	channel.Id = ability.ChannelId
 	err = DB.First(&channel, "id = ?", ability.ChannelId).Error
 	return &channel, err
 }

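Presetting `channel.Id` before the `DB.First` call means the caller receives a non-nil `*Channel` carrying the ability's channel id even when the fetch fails, which is exactly what the middleware's new consistency log line relies on. A sketch of that behavior with the database replaced by a boolean (`lookup` is a hypothetical stand-in for GetRandomSatisfiedChannel):

```go
package main

import (
	"errors"
	"fmt"
)

type Channel struct{ Id int }

var errNotFound = errors.New("record not found")

// lookup mimics GetRandomSatisfiedChannel: the id from the ability row is
// copied onto the struct first, so even a failed fetch returns it alongside
// the error instead of a zero-valued struct.
func lookup(abilityChannelId int, exists bool) (*Channel, error) {
	channel := Channel{}
	channel.Id = abilityChannelId
	if !exists {
		return &channel, errNotFound
	}
	return &channel, nil
}

func main() {
	ch, err := lookup(42, false)
	fmt.Println(ch.Id, err) // 42 record not found
}
```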
@@ -83,6 +83,18 @@ func CacheGetUserQuota(id int) (quota int, err error) {
 	return quota, err
 }
 
+func CacheUpdateUserQuota(id int) error {
+	if !common.RedisEnabled {
+		return nil
+	}
+	quota, err := GetUserQuota(id)
+	if err != nil {
+		return err
+	}
+	err = common.RedisSet(fmt.Sprintf("user_quota:%d", id), fmt.Sprintf("%d", quota), UserId2QuotaCacheSeconds*time.Second)
+	return err
+}
+
 func CacheIsUserEnabled(userId int) bool {
 	if !common.RedisEnabled {
 		return IsUserEnabled(userId)
@@ -108,7 +120,7 @@ var channelSyncLock sync.RWMutex
 func InitChannelCache() {
 	newChannelId2channel := make(map[int]*Channel)
 	var channels []*Channel
-	DB.Find(&channels)
+	DB.Where("status = ?", common.ChannelStatusEnabled).Find(&channels)
 	for _, channel := range channels {
 		newChannelId2channel[channel.Id] = channel
 	}

@@ -22,6 +22,7 @@ type Channel struct {
 	Models             string  `json:"models"`
 	Group              string  `json:"group" gorm:"type:varchar(32);default:'default'"`
 	UsedQuota          int64   `json:"used_quota" gorm:"bigint;default:0"`
+	ModelMapping       string  `json:"model_mapping" gorm:"type:varchar(1024);default:''"`
 }
 
 func GetAllChannels(startIdx int, num int, selectAll bool) ([]*Channel, error) {
@@ -36,7 +37,7 @@ func GetAllChannels(startIdx int, num int, selectAll bool) ([]*Channel, error) {
 }
 
 func SearchChannels(keyword string) (channels []*Channel, err error) {
-	err = DB.Omit("key").Where("id = ? or name LIKE ? or key = ?", keyword, keyword+"%", keyword).Find(&channels).Error
+	err = DB.Omit("key").Where("id = ? or name LIKE ? or `key` = ?", keyword, keyword+"%", keyword).Find(&channels).Error
 	return channels, err
 }
 

@@ -19,7 +19,7 @@ func SetRelayRouter(router *gin.Engine) {
 	{
 		relayV1Router.POST("/completions", controller.Relay)
 		relayV1Router.POST("/chat/completions", controller.Relay)
-		relayV1Router.POST("/edits", controller.RelayNotImplemented)
+		relayV1Router.POST("/edits", controller.Relay)
 		relayV1Router.POST("/images/generations", controller.RelayNotImplemented)
 		relayV1Router.POST("/images/edits", controller.RelayNotImplemented)
 		relayV1Router.POST("/images/variations", controller.RelayNotImplemented)

@@ -38,6 +38,8 @@ function renderBalance(type, balance) {
       return <span>{renderNumber(balance)}</span>;
     case 12: // API2GPT
       return <span>¥{balance.toFixed(2)}</span>;
+    case 13: // AIGC2D
+      return <span>{renderNumber(balance)}</span>;
     default:
       return <span>不支持</span>;
   }

@@ -74,9 +74,6 @@ const OperationSetting = () => {
   const submitConfig = async (group) => {
     switch (group) {
       case 'monitor':
-        if (originInputs['AutomaticDisableChannelEnabled'] !== inputs.AutomaticDisableChannelEnabled) {
-          await updateOption('AutomaticDisableChannelEnabled', inputs.AutomaticDisableChannelEnabled);
-        }
         if (originInputs['ChannelDisableThreshold'] !== inputs.ChannelDisableThreshold) {
           await updateOption('ChannelDisableThreshold', inputs.ChannelDisableThreshold);
         }

@@ -9,5 +9,6 @@ export const CHANNEL_OPTIONS = [
   { key: 7, text: 'OhMyGPT', value: 7, color: 'purple' },
   { key: 9, text: 'AI.LS', value: 9, color: 'yellow' },
   { key: 10, text: 'AI Proxy', value: 10, color: 'purple' },
-  { key: 12, text: 'API2GPT', value: 12, color: 'blue' }
+  { key: 12, text: 'API2GPT', value: 12, color: 'blue' },
+  { key: 13, text: 'AIGC2D', value: 13, color: 'purple' }
 ];
@@ -1,9 +1,15 @@
 import React, { useEffect, useState } from 'react';
 import { Button, Form, Header, Message, Segment } from 'semantic-ui-react';
 import { useParams } from 'react-router-dom';
-import { API, showError, showInfo, showSuccess } from '../../helpers';
+import { API, showError, showInfo, showSuccess, verifyJSON } from '../../helpers';
 import { CHANNEL_OPTIONS } from '../../constants';
 
+const MODEL_MAPPING_EXAMPLE = {
+  'gpt-3.5-turbo-0301': 'gpt-3.5-turbo',
+  'gpt-4-0314': 'gpt-4',
+  'gpt-4-32k-0314': 'gpt-4-32k'
+};
+
 const EditChannel = () => {
   const params = useParams();
   const channelId = params.id;
@@ -15,6 +21,7 @@ const EditChannel = () => {
     key: '',
     base_url: '',
     other: '',
+    model_mapping: '',
     models: [],
     groups: ['default']
   };
@@ -42,6 +49,9 @@ const EditChannel = () => {
       } else {
         data.groups = data.group.split(',');
       }
+      if (data.model_mapping !== '') {
+        data.model_mapping = JSON.stringify(JSON.parse(data.model_mapping), null, 2);
+      }
       setInputs(data);
     } else {
       showError(message);
@@ -94,6 +104,10 @@ const EditChannel = () => {
       showInfo('请至少选择一个模型!');
       return;
     }
+    if (inputs.model_mapping !== '' && !verifyJSON(inputs.model_mapping)) {
+      showInfo('模型映射必须是合法的 JSON 格式!');
+      return;
+    }
     let localInputs = inputs;
     if (localInputs.base_url.endsWith('/')) {
       localInputs.base_url = localInputs.base_url.slice(0, localInputs.base_url.length - 1);
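The model_mapping field holds a JSON object mapping requested model names to replacement names (MODEL_MAPPING_EXAMPLE in this same file shows the shape), and the form rejects anything that is not valid JSON before saving. A hedged sketch of how a relay could apply such a mapping server-side, using only the standard library (`remapModel` is a hypothetical helper, not the project's actual relay code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// remapModel applies a channel's model_mapping JSON to a requested model
// name; unmapped names pass through unchanged. Invalid JSON is rejected,
// mirroring the verifyJSON check the form performs before saving.
func remapModel(mapping string, model string) (string, error) {
	if mapping == "" {
		return model, nil
	}
	m := map[string]string{}
	if err := json.Unmarshal([]byte(mapping), &m); err != nil {
		return "", fmt.Errorf("model mapping must be valid JSON: %w", err)
	}
	if mapped, ok := m[model]; ok {
		return mapped, nil
	}
	return model, nil
}

func main() {
	mapping := `{"gpt-3.5-turbo-0301": "gpt-3.5-turbo"}`
	out, _ := remapModel(mapping, "gpt-3.5-turbo-0301")
	fmt.Println(out) // gpt-3.5-turbo
}
```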
@@ -246,6 +260,17 @@ const EditChannel = () => {
               handleInputChange(null, { name: 'models', value: [] });
             }}>清除所有模型</Button>
           </div>
+          <Form.Field>
+            <Form.TextArea
+              label='模型映射'
+              placeholder={`为一个 JSON 文本,键为用户请求的模型名称,值为要替换的模型名称,例如:\n${JSON.stringify(MODEL_MAPPING_EXAMPLE, null, 2)}`}
+              name='model_mapping'
+              onChange={handleInputChange}
+              value={inputs.model_mapping}
+              style={{ minHeight: 150, fontFamily: 'JetBrains Mono, Consolas' }}
+              autoComplete='new-password'
+            />
+          </Form.Field>
           {
             batch ? <Form.Field>
               <Form.TextArea

@@ -2,6 +2,7 @@ import React, { useEffect, useState } from 'react';
 import { Button, Form, Header, Segment } from 'semantic-ui-react';
 import { useParams } from 'react-router-dom';
 import { API, showError, showSuccess } from '../../helpers';
+import { renderQuota, renderQuotaWithPrompt } from '../../helpers/render';
 
 const EditUser = () => {
   const params = useParams();
@@ -134,7 +135,7 @@ const EditUser = () => {
               </Form.Field>
               <Form.Field>
                 <Form.Input
-                  label='剩余额度'
+                  label={`剩余额度${renderQuotaWithPrompt(quota)}`}
                   name='quota'
                   placeholder={'请输入新的剩余额度'}
                   onChange={handleInputChange}