Compare commits

...

25 Commits

Author SHA1 Message Date
JustSong
bddbf57104 fix: fix AutomaticDisableChannelEnabled option not ignored (close #217) 2023-06-29 15:54:12 +08:00
JustSong
9a16b0f9e5 docs: update README 2023-06-29 11:39:43 +08:00
JustSong
3530309a31 docs: update issue templates 2023-06-29 11:33:51 +08:00
JustSong
733ebc067b docs: update issue templates 2023-06-29 11:33:02 +08:00
JustSong
6a8567ac14 docs: update issue templates 2023-06-29 11:32:36 +08:00
JustSong
aabc546691 fix: fix the wrong message when channel is deleted 2023-06-29 11:27:34 +08:00
JustSong
1c82b06f35 docs: update README 2023-06-29 10:59:41 +08:00
JustSong
9e4109672a docs: update README 2023-06-28 13:38:09 +08:00
JustSong
64c35334e6 docs: update README 2023-06-28 13:19:01 +08:00
Archer
0ce572b405 docs: update README (#210)
* deploy docs

* docs: update README

---------

Co-authored-by: JustSong <songquanpeng@foxmail.com>
2023-06-28 13:01:29 +08:00
mrhaoji
a326ac4b28 chore: more hints in model mapping textarea (#205)
* chore: more hints in model mapping textarea

* fix: fix variable not defined

---------

Co-authored-by: JustSong <songquanpeng@foxmail.com>
2023-06-28 12:56:01 +08:00
JustSong
05b0e77839 docs: update README 2023-06-27 23:36:12 +08:00
JustSong
51f19470bc fix: fix wrong env var name 2023-06-27 23:34:23 +08:00
JustSong
737672fb0b fix: update cached user quota after post-consuming (close #204) 2023-06-27 19:22:58 +08:00
JustSong
0941e294bf feat: support model remap now 2023-06-27 13:42:45 +08:00
JustSong
431d505f79 refactor: do not use redis to store session 2023-06-26 16:10:59 +08:00
mrhaoji
f0dc7f3f06 fix: InitChannelCache does not filter disabled channels (#201)
* chore: Show the HTTP status code in the test_time script to determine the success or failure of the request.

* fix: InitChannelCache does not filter disabled channels

* chore: do not hardcode

---------

Co-authored-by: JustSong <songquanpeng@foxmail.com>
2023-06-25 23:14:15 +08:00
mrhaoji
99fed1f850 chore: show the HTTP status code in the test_time script to determine the success or failure of the request (#200) 2023-06-25 22:58:16 +08:00
JustSong
4dc5388a80 chore: do not show completion ratio anymore 2023-06-25 20:29:42 +08:00
JustSong
f81f4c60b2 docs: update README 2023-06-25 15:14:52 +08:00
JustSong
c613d8b6b2 docs: update README 2023-06-25 15:14:09 +08:00
JustSong
7adac1c09c chore: update default ratio for text-embedding-ada-002 2023-06-25 12:07:42 +08:00
mrhaoji
6f05128368 chore: show equivalent amount next to remaining quota in the user editing page (#198) 2023-06-25 11:54:05 +08:00
JustSong
9b178a28a3 feat: support /v1/edits now (close #196) 2023-06-25 11:46:23 +08:00
JustSong
4a6a7f4635 chore: update the number that representing the unlimited quota 2023-06-25 10:52:46 +08:00
23 changed files with 187 additions and 72 deletions

View File

@@ -8,11 +8,13 @@ assignees: ''
---
**例行检查**
[//]: # (方框内删除已有的空格,填 x 号)
+ [ ] 我已确认目前没有类似 issue
+ [ ] 我已确认我已升级到最新版本
+ [ ] 我已完整查看过项目 README,尤其是常见问题部分
+ [ ] 我理解并愿意跟进此 issue,协助测试和提供反馈
+ [ ] 我理解并认可上述内容,并理解项目维护者精力有限,不遵循规则的 issue 可能会被无视或直接关闭
+ [ ] 我理解并认可上述内容,并理解项目维护者精力有限,**不遵循规则的 issue 可能会被无视或直接关闭**
**问题描述**

View File

@@ -6,6 +6,3 @@ contact_links:
- name: 赞赏支持
url: https://iamazing.cn/page/reward
about: 请作者喝杯咖啡,以激励作者持续开发
- name: 付费部署或定制功能
url: https://openai.justsong.cn/
about: 加群后联系群主

View File

@@ -8,10 +8,13 @@ assignees: ''
---
**例行检查**
[//]: # (方框内删除已有的空格,填 x 号)
+ [ ] 我已确认目前没有类似 issue
+ [ ] 我已确认我已升级到最新版本
+ [ ] 我已完整查看过项目 README,已确定现有版本无法满足需求
+ [ ] 我理解并愿意跟进此 issue,协助测试和提供反馈
+ [ ] 我理解并认可上述内容,并理解项目维护者精力有限,不遵循规则的 issue 可能会被无视或直接关闭
+ [ ] 我理解并认可上述内容,并理解项目维护者精力有限,**不遵循规则的 issue 可能会被无视或直接关闭**
**功能描述**

View File

@@ -10,7 +10,7 @@
# One API
_✨ The all-in-one OpenAI interface, integrates various API access methods, ready to use ✨_
_✨ An OpenAI key management & redistribution system, easy to deploy & use ✨_
</div>
@@ -57,17 +57,14 @@ _✨ The all-in-one OpenAI interface, integrates various API access methods, rea
> **Note**: The latest image pulled from Docker may be an `alpha` release. Specify the version manually if you require stability.
## Features
1. Supports multiple API access channels. Welcome PRs or issue submissions for additional channels:
1. Supports multiple API access channels:
+ [x] Official OpenAI channel (support proxy configuration)
+ [x] **Azure OpenAI API**
+ [x] [API Distribute](https://api.gptjk.top/register?aff=QGxj)
+ [x] [OpenAI-SB](https://openai-sb.com)
+ [x] [API2D](https://api2d.com/r/197971)
+ [x] [OhMyGPT](https://aigptx.top?aff=uFpUl2Kf)
+ [x] [AI Proxy](https://aiproxy.io/?i=OneAPI) (invitation code: `OneAPI`)
+ [x] [API2GPT](http://console.api2gpt.com/m/00002S)
+ [x] [CloseAI](https://console.closeai-asia.com/r/2412)
+ [x] [AI.LS](https://ai.ls)
+ [x] [OpenAI Max](https://openaimax.com)
+ [x] Custom channel: Various third-party proxy services not included in the list
2. Supports access to multiple channels through **load balancing**.
3. Supports **stream mode** that enables typewriter-like effect through stream transmission.
@@ -174,6 +171,15 @@ Refer to [#175](https://github.com/songquanpeng/one-api/issues/175) for detailed
If you encounter a blank page after deployment, refer to [#97](https://github.com/songquanpeng/one-api/issues/97) for possible solutions.
### Deployment on Third-Party Platforms
<details>
<summary><strong>Deploy on Sealos</strong></summary>
<div>
Please refer to [this tutorial](https://github.com/c121914yu/FastGPT/blob/main/docs/deploy/one-api/sealos.md).
</div>
</details>
<details>
<summary><strong>Deployment on Zeabur</strong></summary>
<div>
@@ -240,7 +246,7 @@ If the channel ID is not provided, load balancing will be used to distribute the
+ Example: `CHANNEL_UPDATE_FREQUENCY=1440`
8. `CHANNEL_TEST_FREQUENCY`: When set, it periodically tests the channels, with the unit in minutes. If not set, no test will happen.
+ Example: `CHANNEL_TEST_FREQUENCY=1440`
9. `REQUEST_INTERVAL`: The time interval (in seconds) between requests when updating channel balances and testing channel availability. Default is no interval.
9. `POLLING_INTERVAL`: The time interval (in seconds) between requests when updating channel balances and testing channel availability. Default is no interval.
+ Example: `POLLING_INTERVAL=5`
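A minimal sketch of supplying the renamed variable to a Docker-based deployment; only `POLLING_INTERVAL` and `CHANNEL_TEST_FREQUENCY` (with the example values above) come from this list, while the image name, port mapping, and volume path are assumptions drawn from the project's usual setup rather than from this diff:

```
# Assumed invocation; adjust image, port, and volume to your deployment.
docker run --name one-api -d --restart always -p 3000:3000 \
  -e POLLING_INTERVAL=5 \
  -e CHANNEL_TEST_FREQUENCY=1440 \
  -v /home/ubuntu/data/one-api:/data \
  justsong/one-api
```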
### Command Line Parameters

View File

@@ -56,22 +56,19 @@ _✨ All in one 的 OpenAI 接口,整合各种 API 访问方式,开箱即用
> **Warning**:从 `v0.3` 版本升级到 `v0.4` 版本需要手动迁移数据库,请手动执行[数据库迁移脚本](./bin/migration_v0.3-v0.4.sql)。
## 功能
1. 支持多种 API 访问渠道,欢迎 PR 或提 issue 添加更多渠道
+ [x] OpenAI 官方通道(支持配置代理)
1. 支持多种 API 访问渠道:
+ [x] OpenAI 官方通道(支持配置镜像)
+ [x] **Azure OpenAI API**
+ [x] [API Distribute](https://api.gptjk.top/register?aff=QGxj)
+ [x] [OpenAI-SB](https://openai-sb.com)
+ [x] [API2D](https://api2d.com/r/197971)
+ [x] [OhMyGPT](https://aigptx.top?aff=uFpUl2Kf)
+ [x] [AI Proxy](https://aiproxy.io/?i=OneAPI) (邀请码:`OneAPI`)
+ [x] [API2GPT](http://console.api2gpt.com/m/00002S)
+ [x] [CloseAI](https://console.closeai-asia.com/r/2412)
+ [x] [AI.LS](https://ai.ls)
+ [x] [OpenAI Max](https://openaimax.com)
+ [x] 自定义渠道:例如各种未收录的第三方代理服务
2. 支持通过**负载均衡**的方式访问多个渠道。
3. 支持 **stream 模式**,可以通过流式传输实现打字机效果。
4. 支持**多机部署**,[详见此处](#多机部署)。
5. 支持**令牌管理**,设置令牌的过期时间和使用次数
5. 支持**令牌管理**,设置令牌的过期时间和额度
6. 支持**兑换码管理**,支持批量生成和导出兑换码,可使用兑换码为账户进行充值。
7. 支持**通道管理**,批量创建通道。
8. 支持**用户分组**以及**渠道分组**,支持为不同分组设置不同的倍率。
@@ -80,16 +77,17 @@ _✨ All in one 的 OpenAI 接口,整合各种 API 访问方式,开箱即用
11. 支持**用户邀请奖励**。
12. 支持以美元为单位显示额度。
13. 支持发布公告,设置充值链接,设置新用户初始额度。
14. 支持丰富的**自定义**设置,
14. 支持模型映射,重定向用户的请求模型。
15. 支持丰富的**自定义**设置,
1. 支持自定义系统名称、logo 以及页脚。
2. 支持自定义首页和关于页面,可以选择使用 HTML & Markdown 代码进行自定义,或者使用一个单独的网页通过 iframe 嵌入。
15. 支持通过系统访问令牌访问管理 API。
16. 支持 Cloudflare Turnstile 用户校验。
17. 支持用户管理,支持**多种用户登录注册方式**
16. 支持通过系统访问令牌访问管理 API。
17. 支持 Cloudflare Turnstile 用户校验。
18. 支持用户管理,支持**多种用户登录注册方式**
+ 邮箱登录注册以及通过邮箱进行密码重置。
+ [GitHub 开放授权](https://github.com/settings/applications/new)。
+ 微信公众号授权(需要额外部署 [WeChat Server](https://github.com/songquanpeng/wechat-server))。
18. 未来其他大模型开放 API 后,将第一时间支持,并将其封装成同样的 API 访问方式。
19. 未来其他大模型开放 API 后,将第一时间支持,并将其封装成同样的 API 访问方式。
## 部署
### 基于 Docker 进行部署
@@ -114,6 +112,7 @@ server{
proxy_set_header X-Forwarded-For $remote_addr;
proxy_cache_bypass $http_upgrade;
proxy_set_header Accept-Encoding gzip;
proxy_read_timeout 300s; # GPT-4 需要较长的超时时间,请自行调整
}
}
```
@@ -195,6 +194,17 @@ docker run --name chatgpt-web -d -p 3002:3002 -e OPENAI_API_BASE_URL=https://ope
注意修改端口号、`OPENAI_API_BASE_URL` 和 `OPENAI_API_KEY`。
### 部署到第三方平台
<details>
<summary><strong>部署到 Sealos </strong></summary>
<div>
> Sealos 可视化部署,仅需 1 分钟。
参考这个[教程](https://github.com/c121914yu/FastGPT/blob/main/docs/deploy/one-api/sealos.md)中 1~5 步。
</div>
</details>
<details>
<summary><strong>部署到 Zeabur</strong></summary>
<div>
@@ -251,7 +261,7 @@ graph LR
+ 例子:`SESSION_SECRET=random_string`
3. `SQL_DSN`:设置之后将使用指定数据库而非 SQLite,请使用 MySQL 8.0 版本。
+ 例子:`SQL_DSN=root:123456@tcp(localhost:3306)/oneapi`
4. `FRONTEND_BASE_URL`:设置之后将使用指定的前端地址,而非后端地址。
4. `FRONTEND_BASE_URL`:设置之后将使用指定的前端地址,而非后端地址,仅限从服务器设置
+ 例子:`FRONTEND_BASE_URL=https://openai.justsong.cn`
5. `SYNC_FREQUENCY`:设置之后将定期与数据库同步配置,单位为秒,未设置则不进行同步。
+ 例子:`SYNC_FREQUENCY=60`
@@ -261,7 +271,7 @@ graph LR
+ 例子:`CHANNEL_UPDATE_FREQUENCY=1440`
8. `CHANNEL_TEST_FREQUENCY`:设置之后将定期检查渠道,单位为分钟,未设置则不进行检查。
+ 例子:`CHANNEL_TEST_FREQUENCY=1440`
9. `REQUEST_INTERVAL`:批量更新渠道余额以及测试可用性时的请求间隔,单位为秒,默认无间隔。
9. `POLLING_INTERVAL`:批量更新渠道余额以及测试可用性时的请求间隔,单位为秒,默认无间隔。
+ 例子:`POLLING_INTERVAL=5`
### 命令行参数

View File

@@ -12,14 +12,16 @@ total_time=0
times=()
for ((i=1; i<=count; i++)); do
result=$(curl -o /dev/null -s -w %{time_total}\\n \
result=$(curl -o /dev/null -s -w "%{http_code} %{time_total}\\n" \
https://"$domain"/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $key" \
-d '{"messages": [{"content": "echo hi", "role": "user"}], "model": "gpt-3.5-turbo", "stream": false, "max_tokens": 1}')
echo "$result"
total_time=$(bc <<< "$total_time + $result")
times+=("$result")
http_code=$(echo "$result" | awk '{print $1}')
time=$(echo "$result" | awk '{print $2}')
echo "HTTP status code: $http_code, Time taken: $time"
total_time=$(bc <<< "$total_time + $time")
times+=("$time")
done
average_time=$(echo "scale=4; $total_time / $count" | bc)
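With the change above, each iteration reports the HTTP status code next to the timing, so failed requests are visible at a glance; the lines below are illustrative output only (values made up):

```
HTTP status code: 200, Time taken: 0.812
HTTP status code: 200, Time taken: 0.794
HTTP status code: 429, Time taken: 0.103
```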

View File

@@ -72,7 +72,7 @@ var RootUserEmail = ""
var IsMasterNode = os.Getenv("NODE_TYPE") != "slave"
var requestInterval, _ = strconv.Atoi(os.Getenv("REQUEST_INTERVAL"))
var requestInterval, _ = strconv.Atoi(os.Getenv("POLLING_INTERVAL"))
var RequestInterval = time.Duration(requestInterval) * time.Second
const (

View File

@@ -31,7 +31,7 @@ var ModelRatio = map[string]float64{
"curie": 10,
"babbage": 10,
"ada": 10,
"text-embedding-ada-002": 0.2,
"text-embedding-ada-002": 0.05,
"text-search-ada-doc-001": 10,
"text-moderation-stable": 0.1,
"text-moderation-latest": 0.1,

View File

@@ -33,7 +33,7 @@ func GetSubscription(c *gin.Context) {
amount /= common.QuotaPerUnit
}
if token != nil && token.UnlimitedQuota {
amount = 99999999.9999
amount = 100000000
}
subscription := OpenAISubscriptionResponse{
Object: "billing_subscription",

View File

@@ -224,6 +224,24 @@ func init() {
Root: "text-moderation-stable",
Parent: nil,
},
{
Id: "text-davinci-edit-001",
Object: "model",
Created: 1677649963,
OwnedBy: "openai",
Permission: permission,
Root: "text-davinci-edit-001",
Parent: nil,
},
{
Id: "code-davinci-edit-001",
Object: "model",
Created: 1677649963,
OwnedBy: "openai",
Permission: permission,
Root: "code-davinci-edit-001",
Parent: nil,
},
}
openAIModelsMap = make(map[string]OpenAIModels)
for _, model := range openAIModels {

View File

@@ -27,7 +27,7 @@ func relayTextHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode {
return errorWrapper(err, "bind_request_body_failed", http.StatusBadRequest)
}
}
if relayMode == RelayModeModeration && textRequest.Model == "" {
if relayMode == RelayModeModerations && textRequest.Model == "" {
textRequest.Model = "text-moderation-latest"
}
// request validation
@@ -37,16 +37,34 @@ func relayTextHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode {
switch relayMode {
case RelayModeCompletions:
if textRequest.Prompt == "" {
return errorWrapper(errors.New("prompt is required"), "required_field_missing", http.StatusBadRequest)
return errorWrapper(errors.New("field prompt is required"), "required_field_missing", http.StatusBadRequest)
}
case RelayModeChatCompletions:
if len(textRequest.Messages) == 0 {
return errorWrapper(errors.New("messages is required"), "required_field_missing", http.StatusBadRequest)
if textRequest.Messages == nil || len(textRequest.Messages) == 0 {
return errorWrapper(errors.New("field messages is required"), "required_field_missing", http.StatusBadRequest)
}
case RelayModeEmbeddings:
case RelayModeModeration:
case RelayModeModerations:
if textRequest.Input == "" {
return errorWrapper(errors.New("input is required"), "required_field_missing", http.StatusBadRequest)
return errorWrapper(errors.New("field input is required"), "required_field_missing", http.StatusBadRequest)
}
case RelayModeEdits:
if textRequest.Instruction == "" {
return errorWrapper(errors.New("field instruction is required"), "required_field_missing", http.StatusBadRequest)
}
}
// map model name
modelMapping := c.GetString("model_mapping")
isModelMapped := false
if modelMapping != "" {
modelMap := make(map[string]string)
err := json.Unmarshal([]byte(modelMapping), &modelMap)
if err != nil {
return errorWrapper(err, "unmarshal_model_mapping_failed", http.StatusInternalServerError)
}
if modelMap[textRequest.Model] != "" {
textRequest.Model = modelMap[textRequest.Model]
isModelMapped = true
}
}
baseURL := common.ChannelBaseURLs[channelType]
@@ -84,7 +102,7 @@ func relayTextHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode {
promptTokens = countTokenMessages(textRequest.Messages, textRequest.Model)
case RelayModeCompletions:
promptTokens = countTokenInput(textRequest.Prompt, textRequest.Model)
case RelayModeModeration:
case RelayModeModerations:
promptTokens = countTokenInput(textRequest.Input, textRequest.Model)
}
preConsumedTokens := common.PreConsumedQuota
@@ -110,7 +128,17 @@ func relayTextHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode {
return errorWrapper(err, "pre_consume_token_quota_failed", http.StatusForbidden)
}
}
req, err := http.NewRequest(c.Request.Method, fullRequestURL, c.Request.Body)
var requestBody io.Reader
if isModelMapped {
jsonStr, err := json.Marshal(textRequest)
if err != nil {
return errorWrapper(err, "marshal_text_request_failed", http.StatusInternalServerError)
}
requestBody = bytes.NewBuffer(jsonStr)
} else {
requestBody = c.Request.Body
}
req, err := http.NewRequest(c.Request.Method, fullRequestURL, requestBody)
if err != nil {
return errorWrapper(err, "new_request_failed", http.StatusInternalServerError)
}
@@ -144,7 +172,10 @@ func relayTextHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode {
defer func() {
if consumeQuota {
quota := 0
completionRatio := 1.333333 // default for gpt-3
completionRatio := 1.0
if strings.HasPrefix(textRequest.Model, "gpt-3.5") {
completionRatio = 1.333333
}
if strings.HasPrefix(textRequest.Model, "gpt-4") {
completionRatio = 2
}
@@ -170,6 +201,10 @@ func relayTextHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode {
if err != nil {
common.SysError("error consuming token remain quota: " + err.Error())
}
err = model.CacheUpdateUserQuota(userId)
if err != nil {
common.SysError("error update user quota cache: " + err.Error())
}
if quota != 0 {
tokenName := c.GetString("token_name")
logContent := fmt.Sprintf("模型倍率 %.2f,分组倍率 %.2f", modelRatio, groupRatio)
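To make the new model-mapping path concrete, a hedged walk-through of how a channel's mapping affects an incoming request; the host, port, and token are placeholders, and the mapping value mirrors the MODEL_MAPPING_EXAMPLE added to the web UI later in this compare:

```
# Channel's model_mapping field (JSON stored on the channel):
#   {"gpt-3.5-turbo-0301": "gpt-3.5-turbo"}
# The client keeps requesting the mapped-from name:
curl http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-your-one-api-token" \
  -d '{"model": "gpt-3.5-turbo-0301", "messages": [{"role": "user", "content": "hi"}]}'
# modelMap["gpt-3.5-turbo-0301"] is non-empty, so textRequest.Model becomes "gpt-3.5-turbo",
# isModelMapped is set, the body is re-marshalled, and the upstream channel receives the
# substituted model name.
```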

View File

@@ -19,22 +19,24 @@ const (
RelayModeChatCompletions
RelayModeCompletions
RelayModeEmbeddings
RelayModeModeration
RelayModeModerations
RelayModeImagesGenerations
RelayModeEdits
)
// https://platform.openai.com/docs/api-reference/chat
type GeneralOpenAIRequest struct {
Model string `json:"model"`
Messages []Message `json:"messages"`
Prompt any `json:"prompt"`
Stream bool `json:"stream"`
MaxTokens int `json:"max_tokens"`
Temperature float64 `json:"temperature"`
TopP float64 `json:"top_p"`
N int `json:"n"`
Input any `json:"input"`
Model string `json:"model,omitempty"`
Messages []Message `json:"messages,omitempty"`
Prompt any `json:"prompt,omitempty"`
Stream bool `json:"stream,omitempty"`
MaxTokens int `json:"max_tokens,omitempty"`
Temperature float64 `json:"temperature,omitempty"`
TopP float64 `json:"top_p,omitempty"`
N int `json:"n,omitempty"`
Input any `json:"input,omitempty"`
Instruction string `json:"instruction,omitempty"`
}
type ChatRequest struct {
@@ -99,9 +101,11 @@ func Relay(c *gin.Context) {
} else if strings.HasPrefix(c.Request.URL.Path, "/v1/embeddings") {
relayMode = RelayModeEmbeddings
} else if strings.HasPrefix(c.Request.URL.Path, "/v1/moderations") {
relayMode = RelayModeModeration
relayMode = RelayModeModerations
} else if strings.HasPrefix(c.Request.URL.Path, "/v1/images/generations") {
relayMode = RelayModeImagesGenerations
} else if strings.HasPrefix(c.Request.URL.Path, "/v1/edits") {
relayMode = RelayModeEdits
}
var err *OpenAIErrorWithStatusCode
switch relayMode {
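Since /v1/edits is now routed to the relay (see the relay-router change below), a hedged example of exercising the new RelayModeEdits path; the host and token are placeholders, the body uses the fields validated above (`instruction` is required), and `text-davinci-edit-001` is one of the models registered in this compare:

```
curl http://localhost:3000/v1/edits \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-your-one-api-token" \
  -d '{
        "model": "text-davinci-edit-001",
        "input": "What day of the wek is it?",
        "instruction": "Fix the spelling mistakes"
      }'
# An empty instruction is rejected with "field instruction is required".
```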

View File

@@ -456,5 +456,7 @@
"提示": "Prompt",
"补全": "Completion",
"消耗额度": "Used Quota",
"可选值": "Optional Values"
"可选值": "Optional Values",
"渠道不存在:%d": "Channel does not exist: %d",
"数据库一致性已被破坏,请联系管理员": "Database consistency has been broken, please contact the administrator"
}

main.go
View File

@@ -4,7 +4,6 @@ import (
"embed"
"github.com/gin-contrib/sessions"
"github.com/gin-contrib/sessions/cookie"
"github.com/gin-contrib/sessions/redis"
"github.com/gin-gonic/gin"
"one-api/common"
"one-api/controller"
@@ -82,14 +81,8 @@ func main() {
server.Use(middleware.CORS())
// Initialize session store
if common.RedisEnabled {
opt := common.ParseRedisOption()
store, _ := redis.NewStore(opt.MinIdleConns, opt.Network, opt.Addr, opt.Password, []byte(common.SessionSecret))
server.Use(sessions.Sessions("session", store))
} else {
store := cookie.NewStore([]byte(common.SessionSecret))
server.Use(sessions.Sessions("session", store))
}
store := cookie.NewStore([]byte(common.SessionSecret))
server.Use(sessions.Sessions("session", store))
router.SetRouter(server, buildFS, indexPage)
var port = os.Getenv("PORT")

View File

@@ -75,9 +75,14 @@ func Distribute() func(c *gin.Context) {
}
channel, err = model.CacheGetRandomSatisfiedChannel(userGroup, modelRequest.Model)
if err != nil {
message := "无可用渠道"
if channel != nil {
common.SysError(fmt.Sprintf("渠道不存在:%d", channel.Id))
message = "数据库一致性已被破坏,请联系管理员"
}
c.JSON(http.StatusServiceUnavailable, gin.H{
"error": gin.H{
"message": "无可用渠道",
"message": message,
"type": "one_api_error",
},
})
@@ -88,6 +93,7 @@ func Distribute() func(c *gin.Context) {
c.Set("channel", channel.Type)
c.Set("channel_id", channel.Id)
c.Set("channel_name", channel.Name)
c.Set("model_mapping", channel.ModelMapping)
c.Request.Header.Set("Authorization", fmt.Sprintf("Bearer %s", channel.Key))
c.Set("base_url", channel.BaseURL)
if channel.Type == common.ChannelTypeAzure {

View File

@@ -24,6 +24,7 @@ func GetRandomSatisfiedChannel(group string, model string) (*Channel, error) {
return nil, err
}
channel := Channel{}
channel.Id = ability.ChannelId
err = DB.First(&channel, "id = ?", ability.ChannelId).Error
return &channel, err
}

View File

@@ -83,6 +83,18 @@ func CacheGetUserQuota(id int) (quota int, err error) {
return quota, err
}
func CacheUpdateUserQuota(id int) error {
if !common.RedisEnabled {
return nil
}
quota, err := GetUserQuota(id)
if err != nil {
return err
}
err = common.RedisSet(fmt.Sprintf("user_quota:%d", id), fmt.Sprintf("%d", quota), UserId2QuotaCacheSeconds*time.Second)
return err
}
func CacheIsUserEnabled(userId int) bool {
if !common.RedisEnabled {
return IsUserEnabled(userId)
@@ -108,7 +120,7 @@ var channelSyncLock sync.RWMutex
func InitChannelCache() {
newChannelId2channel := make(map[int]*Channel)
var channels []*Channel
DB.Find(&channels)
DB.Where("status = ?", common.ChannelStatusEnabled).Find(&channels)
for _, channel := range channels {
newChannelId2channel[channel.Id] = channel
}

View File

@@ -22,6 +22,7 @@ type Channel struct {
Models string `json:"models"`
Group string `json:"group" gorm:"type:varchar(32);default:'default'"`
UsedQuota int64 `json:"used_quota" gorm:"bigint;default:0"`
ModelMapping string `json:"model_mapping" gorm:"type:varchar(1024);default:''"`
}
func GetAllChannels(startIdx int, num int, selectAll bool) ([]*Channel, error) {

View File

@@ -19,7 +19,7 @@ func SetRelayRouter(router *gin.Engine) {
{
relayV1Router.POST("/completions", controller.Relay)
relayV1Router.POST("/chat/completions", controller.Relay)
relayV1Router.POST("/edits", controller.RelayNotImplemented)
relayV1Router.POST("/edits", controller.Relay)
relayV1Router.POST("/images/generations", controller.RelayNotImplemented)
relayV1Router.POST("/images/edits", controller.RelayNotImplemented)
relayV1Router.POST("/images/variations", controller.RelayNotImplemented)

View File

@@ -74,9 +74,6 @@ const OperationSetting = () => {
const submitConfig = async (group) => {
switch (group) {
case 'monitor':
if (originInputs['AutomaticDisableChannelEnabled'] !== inputs.AutomaticDisableChannelEnabled) {
await updateOption('AutomaticDisableChannelEnabled', inputs.AutomaticDisableChannelEnabled);
}
if (originInputs['ChannelDisableThreshold'] !== inputs.ChannelDisableThreshold) {
await updateOption('ChannelDisableThreshold', inputs.ChannelDisableThreshold);
}

View File

@@ -10,4 +10,4 @@ export const CHANNEL_OPTIONS = [
{ key: 9, text: 'AI.LS', value: 9, color: 'yellow' },
{ key: 10, text: 'AI Proxy', value: 10, color: 'purple' },
{ key: 12, text: 'API2GPT', value: 12, color: 'blue' }
];
];

View File

@@ -1,9 +1,15 @@
import React, { useEffect, useState } from 'react';
import { Button, Form, Header, Message, Segment } from 'semantic-ui-react';
import { useParams } from 'react-router-dom';
import { API, showError, showInfo, showSuccess } from '../../helpers';
import { API, showError, showInfo, showSuccess, verifyJSON } from '../../helpers';
import { CHANNEL_OPTIONS } from '../../constants';
const MODEL_MAPPING_EXAMPLE = {
'gpt-3.5-turbo-0301': 'gpt-3.5-turbo',
'gpt-4-0314': 'gpt-4',
'gpt-4-32k-0314': 'gpt-4-32k'
};
const EditChannel = () => {
const params = useParams();
const channelId = params.id;
@@ -15,6 +21,7 @@ const EditChannel = () => {
key: '',
base_url: '',
other: '',
model_mapping: '',
models: [],
groups: ['default']
};
@@ -42,6 +49,9 @@ const EditChannel = () => {
} else {
data.groups = data.group.split(',');
}
if (data.model_mapping !== '') {
data.model_mapping = JSON.stringify(JSON.parse(data.model_mapping), null, 2);
}
setInputs(data);
} else {
showError(message);
@@ -94,6 +104,10 @@ const EditChannel = () => {
showInfo('请至少选择一个模型!');
return;
}
if (inputs.model_mapping !== '' && !verifyJSON(inputs.model_mapping)) {
showInfo('模型映射必须是合法的 JSON 格式!');
return;
}
let localInputs = inputs;
if (localInputs.base_url.endsWith('/')) {
localInputs.base_url = localInputs.base_url.slice(0, localInputs.base_url.length - 1);
@@ -246,6 +260,17 @@ const EditChannel = () => {
handleInputChange(null, { name: 'models', value: [] });
}}>清除所有模型</Button>
</div>
<Form.Field>
<Form.TextArea
label='模型映射'
placeholder={`为一个 JSON 文本,键为用户请求的模型名称,值为要替换的模型名称,例如:\n${JSON.stringify(MODEL_MAPPING_EXAMPLE, null, 2)}`}
name='model_mapping'
onChange={handleInputChange}
value={inputs.model_mapping}
style={{ minHeight: 150, fontFamily: 'JetBrains Mono, Consolas' }}
autoComplete='new-password'
/>
</Form.Field>
{
batch ? <Form.Field>
<Form.TextArea

View File

@@ -2,6 +2,7 @@ import React, { useEffect, useState } from 'react';
import { Button, Form, Header, Segment } from 'semantic-ui-react';
import { useParams } from 'react-router-dom';
import { API, showError, showSuccess } from '../../helpers';
import { renderQuota, renderQuotaWithPrompt } from '../../helpers/render';
const EditUser = () => {
const params = useParams();
@@ -134,7 +135,7 @@ const EditUser = () => {
</Form.Field>
<Form.Field>
<Form.Input
label='剩余额度'
label={`剩余额度${renderQuotaWithPrompt(quota)}`}
name='quota'
placeholder={'请输入新的剩余额度'}
onChange={handleInputChange}