Compare commits


19 Commits

Author    SHA1        Message  Date
JustSong  8d34b7a77e  feat: able to delete all manually disabled channels (close #539)  2023-10-02 13:06:27 +08:00
JustSong  cbd62011b8  chore: add database migration prompt  2023-10-02 12:13:30 +08:00
JustSong  4701897e2e  chore: sync database indices with pro version  2023-10-02 12:11:02 +08:00
JustSong  0f6c132a80  docs: update readme  2023-10-01 13:26:42 +08:00
JustSong  3cac45dc85  docs: update readme  2023-10-01 13:16:25 +08:00
JustSong  47c08c72ce  fix: check user quota when pre-consume quota  2023-10-01 12:49:40 +08:00
JustSong  53b2cace0b  chore: sync model schema  2023-09-29 18:13:57 +08:00
JustSong  f0fc991b44  chore: remind user to change default password (close #516)  2023-09-29 18:07:20 +08:00
JustSong  594f06e7b0  perf: lazy initialization for token encoders (close #566)  2023-09-29 17:56:11 +08:00
JustSong  197d1d7a9d  docs: update readme  2023-09-29 17:49:47 +08:00
JustSong  f9b748c2ca  chore: add MEMORY_CACHE_ENABLED env variable  2023-09-29 11:38:27 +08:00
JustSong  fd98463611  chore: update ali's model name  2023-09-23 22:57:59 +08:00
JustSong  f5a1cd3463  feat: add support for gpt-3.5-turbo-instruct (close #545)  2023-09-23 22:37:11 +08:00
igophper  8651451e53  fix: sum null to 0 (#541)  2023-09-19 22:39:54 +08:00
          Co-authored-by: igophper <admin@jialilgu.cn>
JustSong  1c5bb97a42  fix: fix gorm expression  2023-09-18 23:11:37 +08:00
          Co-authored-by: 初音控灬 <xyfacai@gmail.com>
JustSong  de868e4e4e  fix: fix gorm expression  2023-09-18 23:07:59 +08:00
          Co-authored-by: 初音控灬 <xyfacai@gmail.com>
JustSong  1d258cc898  fix: add default value for base url  2023-09-18 22:49:05 +08:00
JustSong  37e09d764c  fix: fix random selection is not working when directly using database  2023-09-18 22:39:10 +08:00
JustSong  159b9e3369  fix: fix unable to set zero value for base url & model mapping  2023-09-18 22:07:17 +08:00
24 changed files with 260 additions and 118 deletions


@@ -59,6 +59,9 @@ _✨ 通过标准的 OpenAI API 格式访问所有的大模型,开箱即用 ✨_
 > **Warning**
 > 使用 Docker 拉取的最新镜像可能是 `alpha` 版本,如果追求稳定性请手动指定版本。
+> **Warning**
+> 使用 root 用户初次登录系统后,务必修改默认密码 `123456`!
 ## 功能
 1. 支持多种大模型:
    + [x] [OpenAI ChatGPT 系列模型](https://platform.openai.com/docs/guides/gpt/chat-completions-api)(支持 [Azure OpenAI API](https://learn.microsoft.com/en-us/azure/ai-services/openai/reference))
@@ -103,11 +106,17 @@ _✨ 通过标准的 OpenAI API 格式访问所有的大模型,开箱即用 ✨_
 ## 部署
 ### 基于 Docker 进行部署
-部署命令:`docker run --name one-api -d --restart always -p 3000:3000 -e TZ=Asia/Shanghai -v /home/ubuntu/data/one-api:/data justsong/one-api`
+```shell
+# 使用 SQLite 的部署命令:
+docker run --name one-api -d --restart always -p 3000:3000 -e TZ=Asia/Shanghai -v /home/ubuntu/data/one-api:/data justsong/one-api
+# 使用 MySQL 的部署命令,在上面的基础上添加 `-e SQL_DSN="root:123456@tcp(localhost:3306)/oneapi"`,请自行修改数据库连接参数,不清楚如何修改请参见下面环境变量一节。
+# 例如:
+docker run --name one-api -d --restart always -p 3000:3000 -e SQL_DSN="root:123456@tcp(localhost:3306)/oneapi" -e TZ=Asia/Shanghai -v /home/ubuntu/data/one-api:/data justsong/one-api
+```
 其中,`-p 3000:3000` 中的第一个 `3000` 是宿主机的端口,可以根据需要进行修改。
-数据将会保存在宿主机的 `/home/ubuntu/data/one-api` 目录,请确保该目录存在且具有写入权限,或者更改为合适的目录。
+数据和日志将会保存在宿主机的 `/home/ubuntu/data/one-api` 目录,请确保该目录存在且具有写入权限,或者更改为合适的目录。
 如果启动失败,请添加 `--privileged=true`,具体参考 https://github.com/songquanpeng/one-api/issues/482 。
@@ -236,7 +245,7 @@ docker run --name chatgpt-web -d -p 3002:3002 -e OPENAI_API_BASE_URL=https://ope
 <summary><strong>部署到 Zeabur</strong></summary>
 <div>
 > Zeabur 的服务器在国外,自动解决了网络的问题,同时免费的额度也足够个人使用
 1. 首先 fork 一份代码。
 2. 进入 [Zeabur](https://zeabur.com?referralCode=songquanpeng),登录,进入控制台。
@@ -251,6 +260,17 @@ docker run --name chatgpt-web -d -p 3002:3002 -e OPENAI_API_BASE_URL=https://ope
 </div>
 </details>
+<details>
+<summary><strong>部署到 Render</strong></summary>
+<div>
+> Render 提供免费额度,绑卡后可以进一步提升额度
+Render 可以直接部署 docker 镜像,不需要 fork 仓库:https://dashboard.render.com
+</div>
+</details>
 ## 配置
 系统本身开箱即用。
@@ -278,10 +298,11 @@ OPENAI_API_BASE="https://<HOST>:<PORT>/v1"
 ```mermaid
 graph LR
     A(用户)
-    A --->|请求| B(One API)
+    A --->|使用 One API 分发的 key 进行请求| B(One API)
     B -->|中继请求| C(OpenAI)
     B -->|中继请求| D(Azure)
-    B -->|中继请求| E(其他下游渠道)
+    B -->|中继请求| E(其他 OpenAI API 格式下游渠道)
+    B -->|中继并修改请求体和返回体| F(非 OpenAI API 格式下游渠道)
 ```
 可以通过在令牌后面添加渠道 ID 的方式指定使用哪一个渠道处理本次请求,例如:`Authorization: Bearer ONE_API_KEY-CHANNEL_ID`。
@@ -309,22 +330,24 @@ graph LR
    + `SQL_CONN_MAX_LIFETIME`:连接的最大生命周期,默认为 `60`,单位分钟。
 4. `FRONTEND_BASE_URL`:设置之后将重定向页面请求到指定的地址,仅限从服务器设置。
    + 例子:`FRONTEND_BASE_URL=https://openai.justsong.cn`
-5. `SYNC_FREQUENCY`:设置之后将定期与数据库同步配置,单位为秒,未设置则不进行同步。
+5. `MEMORY_CACHE_ENABLED`:启用内存缓存,会导致用户额度的更新存在一定的延迟,可选值为 `true` 和 `false`,未设置则默认为 `false`。
+   + 例子:`MEMORY_CACHE_ENABLED=true`
+6. `SYNC_FREQUENCY`:在启用缓存的情况下与数据库同步配置的频率,单位为秒,默认为 `600` 秒。
    + 例子:`SYNC_FREQUENCY=60`
-6. `NODE_TYPE`:设置之后将指定节点类型,可选值为 `master` 和 `slave`,未设置则默认为 `master`。
+7. `NODE_TYPE`:设置之后将指定节点类型,可选值为 `master` 和 `slave`,未设置则默认为 `master`。
    + 例子:`NODE_TYPE=slave`
-7. `CHANNEL_UPDATE_FREQUENCY`:设置之后将定期更新渠道余额,单位为分钟,未设置则不进行更新。
+8. `CHANNEL_UPDATE_FREQUENCY`:设置之后将定期更新渠道余额,单位为分钟,未设置则不进行更新。
    + 例子:`CHANNEL_UPDATE_FREQUENCY=1440`
-8. `CHANNEL_TEST_FREQUENCY`:设置之后将定期检查渠道,单位为分钟,未设置则不进行检查。
+9. `CHANNEL_TEST_FREQUENCY`:设置之后将定期检查渠道,单位为分钟,未设置则不进行检查。
    + 例子:`CHANNEL_TEST_FREQUENCY=1440`
-9. `POLLING_INTERVAL`:批量更新渠道余额以及测试可用性时的请求间隔,单位为秒,默认无间隔。
+10. `POLLING_INTERVAL`:批量更新渠道余额以及测试可用性时的请求间隔,单位为秒,默认无间隔。
    + 例子:`POLLING_INTERVAL=5`
-10. `BATCH_UPDATE_ENABLED`:启用数据库批量更新聚合,会导致用户额度的更新存在一定的延迟,可选值为 `true` 和 `false`,未设置则默认为 `false`。
+11. `BATCH_UPDATE_ENABLED`:启用数据库批量更新聚合,会导致用户额度的更新存在一定的延迟,可选值为 `true` 和 `false`,未设置则默认为 `false`。
    + 例子:`BATCH_UPDATE_ENABLED=true`
    + 如果你遇到了数据库连接数过多的问题,可以尝试启用该选项。
-11. `BATCH_UPDATE_INTERVAL=5`:批量更新聚合的时间间隔,单位为秒,默认为 `5`。
+12. `BATCH_UPDATE_INTERVAL=5`:批量更新聚合的时间间隔,单位为秒,默认为 `5`。
    + 例子:`BATCH_UPDATE_INTERVAL=5`
-12. 请求频率限制:
+13. 请求频率限制:
    + `GLOBAL_API_RATE_LIMIT`:全局 API 速率限制(除中继请求外),单 ip 三分钟内的最大请求数,默认为 `180`。
    + `GLOBAL_WEB_RATE_LIMIT`:全局 Web 速率限制,单 ip 三分钟内的最大请求数,默认为 `60`。


@@ -56,6 +56,7 @@ var EmailDomainWhitelist = []string{
 }
 var DebugEnabled = os.Getenv("DEBUG") == "true"
+var MemoryCacheEnabled = os.Getenv("MEMORY_CACHE_ENABLED") == "true"
 var LogConsumeEnabled = true
@@ -92,7 +93,7 @@ var IsMasterNode = os.Getenv("NODE_TYPE") != "slave"
 var requestInterval, _ = strconv.Atoi(os.Getenv("POLLING_INTERVAL"))
 var RequestInterval = time.Duration(requestInterval) * time.Second
-var SyncFrequency = 10 * 60 // unit is second, will be overwritten by SYNC_FREQUENCY
+var SyncFrequency = GetOrDefault("SYNC_FREQUENCY", 10*60) // unit is second
 var BatchUpdateEnabled = false
 var BatchUpdateInterval = GetOrDefault("BATCH_UPDATE_INTERVAL", 5)
@@ -155,9 +156,10 @@ const (
 )
 const (
-    ChannelStatusUnknown  = 0
-    ChannelStatusEnabled  = 1 // don't use 0, 0 is the default value!
-    ChannelStatusDisabled = 2 // also don't use 0
+    ChannelStatusUnknown          = 0
+    ChannelStatusEnabled          = 1 // don't use 0, 0 is the default value!
+    ChannelStatusManuallyDisabled = 2 // also don't use 0
+    ChannelStatusAutoDisabled     = 3
 )
 const (
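The new `SyncFrequency` default reuses the repo's `GetOrDefault` helper, already used for `BATCH_UPDATE_INTERVAL` in the same hunk. The helper's body is not part of this diff; a minimal sketch of what such an env-with-default reader looks like, with plain `fmt` logging standing in for the project's `SysError` so the example is self-contained:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// Sketch of an env-with-default reader in the spirit of GetOrDefault;
// the real helper lives elsewhere in the repo and may differ in detail.
func getOrDefault(env string, defaultValue int) int {
	if os.Getenv(env) == "" {
		return defaultValue
	}
	num, err := strconv.Atoi(os.Getenv(env))
	if err != nil {
		fmt.Printf("failed to parse %s: %v, using default %d\n", env, err, defaultValue)
		return defaultValue
	}
	return num
}

func main() {
	fmt.Println(getOrDefault("SYNC_FREQUENCY", 10*60)) // 600 when the variable is unset
}
```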


@@ -24,6 +24,7 @@ var ModelRatio = map[string]float64{
     "gpt-3.5-turbo-0613":     0.75,
     "gpt-3.5-turbo-16k":      1.5, // $0.003 / 1K tokens
     "gpt-3.5-turbo-16k-0613": 1.5,
+    "gpt-3.5-turbo-instruct": 0.75, // $0.0015 / 1K tokens
     "text-ada-001":           0.2,
     "text-babbage-001":       0.25,
     "text-curie-001":         1,
@@ -50,8 +51,8 @@ var ModelRatio = map[string]float64{
     "chatglm_pro":       0.7143, // ¥0.01 / 1k tokens
     "chatglm_std":       0.3572, // ¥0.005 / 1k tokens
     "chatglm_lite":      0.1429, // ¥0.002 / 1k tokens
-    "qwen-v1":           0.8572, // ¥0.012 / 1k tokens
-    "qwen-plus-v1":      1,      // ¥0.014 / 1k tokens
+    "qwen-turbo":        0.8572, // ¥0.012 / 1k tokens
+    "qwen-plus":         10,     // ¥0.14 / 1k tokens
     "text-embedding-v1": 0.05,   // ¥0.0007 / 1k tokens
     "SparkDesk":         1.2858, // ¥0.018 / 1k tokens
     "360GPT_S2_V9":      0.8572, // ¥0.012 / 1k tokens


@@ -111,7 +111,7 @@ func GetResponseBody(method, url string, channel *model.Channel, headers http.Header
 }
 func updateChannelCloseAIBalance(channel *model.Channel) (float64, error) {
-    url := fmt.Sprintf("%s/dashboard/billing/credit_grants", channel.BaseURL)
+    url := fmt.Sprintf("%s/dashboard/billing/credit_grants", channel.GetBaseURL())
     body, err := GetResponseBody("GET", url, channel, GetAuthHeader(channel.Key))
     if err != nil {
@@ -201,18 +201,18 @@ func updateChannelAIGC2DBalance(channel *model.Channel) (float64, error) {
 func updateChannelBalance(channel *model.Channel) (float64, error) {
     baseURL := common.ChannelBaseURLs[channel.Type]
-    if channel.BaseURL == "" {
-        channel.BaseURL = baseURL
+    if channel.GetBaseURL() == "" {
+        channel.BaseURL = &baseURL
     }
     switch channel.Type {
     case common.ChannelTypeOpenAI:
-        if channel.BaseURL != "" {
-            baseURL = channel.BaseURL
+        if channel.GetBaseURL() != "" {
+            baseURL = channel.GetBaseURL()
         }
     case common.ChannelTypeAzure:
         return 0, errors.New("尚未实现")
     case common.ChannelTypeCustom:
-        baseURL = channel.BaseURL
+        baseURL = channel.GetBaseURL()
     case common.ChannelTypeCloseAI:
         return updateChannelCloseAIBalance(channel)
     case common.ChannelTypeOpenAISB:


@@ -42,10 +42,10 @@ func testChannel(channel *model.Channel, request ChatRequest) (err error, openai
     }
     requestURL := common.ChannelBaseURLs[channel.Type]
     if channel.Type == common.ChannelTypeAzure {
-        requestURL = fmt.Sprintf("%s/openai/deployments/%s/chat/completions?api-version=2023-03-15-preview", channel.BaseURL, request.Model)
+        requestURL = fmt.Sprintf("%s/openai/deployments/%s/chat/completions?api-version=2023-03-15-preview", channel.GetBaseURL(), request.Model)
     } else {
-        if channel.BaseURL != "" {
-            requestURL = channel.BaseURL
+        if channel.GetBaseURL() != "" {
+            requestURL = channel.GetBaseURL()
         }
         requestURL += "/v1/chat/completions"
     }
@@ -141,7 +141,7 @@ func disableChannel(channelId int, channelName string, reason string) {
     if common.RootUserEmail == "" {
         common.RootUserEmail = model.GetRootUserEmail()
     }
-    model.UpdateChannelStatusById(channelId, common.ChannelStatusDisabled)
+    model.UpdateChannelStatusById(channelId, common.ChannelStatusAutoDisabled)
     subject := fmt.Sprintf("通道「%s」#%d已被禁用", channelName, channelId)
     content := fmt.Sprintf("通道「%s」#%d已被禁用,原因:%s", channelName, channelId, reason)
     err := common.SendEmail(subject, common.RootUserEmail, content)


@@ -127,6 +127,23 @@ func DeleteChannel(c *gin.Context) {
     return
 }
+func DeleteManuallyDisabledChannel(c *gin.Context) {
+    rows, err := model.DeleteChannelByStatus(common.ChannelStatusManuallyDisabled)
+    if err != nil {
+        c.JSON(http.StatusOK, gin.H{
+            "success": false,
+            "message": err.Error(),
+        })
+        return
+    }
+    c.JSON(http.StatusOK, gin.H{
+        "success": true,
+        "message": "",
+        "data":    rows,
+    })
+    return
+}
 func UpdateChannel(c *gin.Context) {
     channel := model.Channel{}
     err := c.ShouldBindJSON(&channel)
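For reference, this handler is wired up as `DELETE /api/channel/manually_disabled` (see the router diff below) and returns the number of deleted rows in `data`. A hypothetical client call against a local deployment; the host and the auth header value are placeholders, since they depend on your setup:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Hypothetical invocation; the token value here is a placeholder.
	req, err := http.NewRequest(http.MethodDelete,
		"http://localhost:3000/api/channel/manually_disabled", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer <admin access token>")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // e.g. {"success":true,"message":"","data":3}
}
```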


@@ -117,6 +117,15 @@ func init() {
         Root:       "gpt-3.5-turbo-16k-0613",
         Parent:     nil,
     },
+    {
+        Id:         "gpt-3.5-turbo-instruct",
+        Object:     "model",
+        Created:    1677649963,
+        OwnedBy:    "openai",
+        Permission: permission,
+        Root:       "gpt-3.5-turbo-instruct",
+        Parent:     nil,
+    },
     {
         Id:         "gpt-4",
         Object:     "model",
@@ -343,21 +352,21 @@ func init() {
         Parent:     nil,
     },
     {
-        Id:         "qwen-v1",
+        Id:         "qwen-turbo",
         Object:     "model",
         Created:    1677649963,
         OwnedBy:    "ali",
         Permission: permission,
-        Root:       "qwen-v1",
+        Root:       "qwen-turbo",
         Parent:     nil,
     },
     {
-        Id:         "qwen-plus-v1",
+        Id:         "qwen-plus",
         Object:     "model",
         Created:    1677649963,
         OwnedBy:    "ali",
         Permission: permission,
-        Root:       "qwen-plus-v1",
+        Root:       "qwen-plus",
         Parent:     nil,
     },
     {


@@ -4,6 +4,7 @@ import (
     "bytes"
     "context"
     "encoding/json"
+    "errors"
     "fmt"
     "io"
     "net/http"
@@ -31,6 +32,9 @@ func relayAudioHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode
     if err != nil {
         return errorWrapper(err, "get_user_quota_failed", http.StatusInternalServerError)
     }
+    if userQuota-preConsumedQuota < 0 {
+        return errorWrapper(errors.New("user quota is not enough"), "insufficient_user_quota", http.StatusForbidden)
+    }
     err = model.CacheDecreaseUserQuota(userId, preConsumedQuota)
     if err != nil {
         return errorWrapper(err, "decrease_user_quota_failed", http.StatusInternalServerError)


@@ -99,7 +99,7 @@ func relayImageHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode
     quota := int(ratio*sizeRatio*1000) * imageRequest.N
     if consumeQuota && userQuota-quota < 0 {
-        return errorWrapper(err, "insufficient_user_quota", http.StatusForbidden)
+        return errorWrapper(errors.New("user quota is not enough"), "insufficient_user_quota", http.StatusForbidden)
     }
     req, err := http.NewRequest(c.Request.Method, fullRequestURL, requestBody)


@@ -204,6 +204,9 @@ func relayTextHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode {
     if err != nil {
         return errorWrapper(err, "get_user_quota_failed", http.StatusInternalServerError)
     }
+    if userQuota-preConsumedQuota < 0 {
+        return errorWrapper(errors.New("user quota is not enough"), "insufficient_user_quota", http.StatusForbidden)
+    }
     err = model.CacheDecreaseUserQuota(userId, preConsumedQuota)
     if err != nil {
         return errorWrapper(err, "decrease_user_quota_failed", http.StatusInternalServerError)


@@ -9,44 +9,53 @@ import (
     "net/http"
     "one-api/common"
     "strconv"
+    "strings"
 )
 var stopFinishReason = "stop"
+// tokenEncoderMap won't grow after initialization
 var tokenEncoderMap = map[string]*tiktoken.Tiktoken{}
+var defaultTokenEncoder *tiktoken.Tiktoken
 func InitTokenEncoders() {
     common.SysLog("initializing token encoders")
-    fallbackTokenEncoder, err := tiktoken.EncodingForModel("gpt-3.5-turbo")
+    gpt35TokenEncoder, err := tiktoken.EncodingForModel("gpt-3.5-turbo")
     if err != nil {
-        common.FatalLog(fmt.Sprintf("failed to get fallback token encoder: %s", err.Error()))
+        common.FatalLog(fmt.Sprintf("failed to get gpt-3.5-turbo token encoder: %s", err.Error()))
+    }
+    defaultTokenEncoder = gpt35TokenEncoder
+    gpt4TokenEncoder, err := tiktoken.EncodingForModel("gpt-4")
+    if err != nil {
+        common.FatalLog(fmt.Sprintf("failed to get gpt-4 token encoder: %s", err.Error()))
     }
     for model, _ := range common.ModelRatio {
-        tokenEncoder, err := tiktoken.EncodingForModel(model)
-        if err != nil {
-            common.SysError(fmt.Sprintf("using fallback encoder for model %s", model))
-            tokenEncoderMap[model] = fallbackTokenEncoder
-            continue
+        if strings.HasPrefix(model, "gpt-3.5") {
+            tokenEncoderMap[model] = gpt35TokenEncoder
+        } else if strings.HasPrefix(model, "gpt-4") {
+            tokenEncoderMap[model] = gpt4TokenEncoder
+        } else {
+            tokenEncoderMap[model] = nil
         }
-        tokenEncoderMap[model] = tokenEncoder
     }
     common.SysLog("token encoders initialized")
 }
 func getTokenEncoder(model string) *tiktoken.Tiktoken {
-    if tokenEncoder, ok := tokenEncoderMap[model]; ok {
+    tokenEncoder, ok := tokenEncoderMap[model]
+    if ok && tokenEncoder != nil {
         return tokenEncoder
     }
-    tokenEncoder, err := tiktoken.EncodingForModel(model)
-    if err != nil {
-        common.SysError(fmt.Sprintf("failed to get token encoder for model %s: %s, using encoder for gpt-3.5-turbo", model, err.Error()))
-        tokenEncoder, err = tiktoken.EncodingForModel("gpt-3.5-turbo")
+    if ok {
+        tokenEncoder, err := tiktoken.EncodingForModel(model)
         if err != nil {
-            common.FatalLog(fmt.Sprintf("failed to get token encoder for model gpt-3.5-turbo: %s", err.Error()))
+            common.SysError(fmt.Sprintf("failed to get token encoder for model %s: %s, using encoder for gpt-3.5-turbo", model, err.Error()))
+            tokenEncoder = defaultTokenEncoder
         }
+        tokenEncoderMap[model] = tokenEncoder
+        return tokenEncoder
     }
-    tokenEncoderMap[model] = tokenEncoder
-    return tokenEncoder
+    return defaultTokenEncoder
 }
 func getTokenNum(tokenEncoder *tiktoken.Tiktoken, text string) int {
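This is the lazy-initialization change from commit 594f06e7b0: instead of eagerly constructing an encoder for every entry in `ModelRatio` at startup, the initializer pre-registers each model key (nil for non-GPT models) and resolves encoders on first use; keys that were never registered fall back to the gpt-3.5-turbo encoder without being added, so the map's key set stays fixed after init, which is what the new "won't grow after initialization" comment records. A minimal sketch of the same three-state pattern, with strings standing in for `*tiktoken.Tiktoken`:

```go
package main

import "fmt"

var (
	// Pre-registered keys start as nil and are resolved on first use;
	// unknown keys fall back without growing the map.
	cache    = map[string]*string{"registered-model": nil}
	fallback = "default-encoder"
)

func get(key string) *string {
	v, ok := cache[key]
	if ok && v != nil {
		return v // resolved earlier: cheap path
	}
	if ok {
		resolved := "encoder-for-" + key // stands in for tiktoken.EncodingForModel
		cache[key] = &resolved           // memoize for the next call
		return &resolved
	}
	return &fallback // never registered: default encoder, map unchanged
}

func main() {
	fmt.Println(*get("registered-model")) // encoder-for-registered-model
	fmt.Println(*get("something-else"))   // default-encoder
}
```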

main.go

@@ -2,6 +2,7 @@ package main
 import (
     "embed"
+    "fmt"
     "github.com/gin-contrib/sessions"
     "github.com/gin-contrib/sessions/cookie"
     "github.com/gin-gonic/gin"
@@ -50,18 +51,17 @@ func main() {
     // Initialize options
     model.InitOptionMap()
     if common.RedisEnabled {
+        // for compatibility with old versions
+        common.MemoryCacheEnabled = true
+    }
+    if common.MemoryCacheEnabled {
+        common.SysLog("memory cache enabled")
+        common.SysError(fmt.Sprintf("sync frequency: %d seconds", common.SyncFrequency))
         model.InitChannelCache()
     }
-    if os.Getenv("SYNC_FREQUENCY") != "" {
-        frequency, err := strconv.Atoi(os.Getenv("SYNC_FREQUENCY"))
-        if err != nil {
-            common.FatalLog("failed to parse SYNC_FREQUENCY: " + err.Error())
-        }
-        common.SyncFrequency = frequency
-        go model.SyncOptions(frequency)
-        if common.RedisEnabled {
-            go model.SyncChannelCache(frequency)
-        }
+    if common.MemoryCacheEnabled {
+        go model.SyncOptions(common.SyncFrequency)
+        go model.SyncChannelCache(common.SyncFrequency)
     }
     if os.Getenv("CHANNEL_UPDATE_FREQUENCY") != "" {
         frequency, err := strconv.Atoi(os.Getenv("CHANNEL_UPDATE_FREQUENCY"))


@@ -94,7 +94,7 @@ func TokenAuth() func(c *gin.Context) {
             abortWithMessage(c, http.StatusUnauthorized, err.Error())
             return
         }
-        userEnabled, err := model.IsUserEnabled(token.UserId)
+        userEnabled, err := model.CacheIsUserEnabled(token.UserId)
         if err != nil {
             abortWithMessage(c, http.StatusInternalServerError, err.Error())
             return


@@ -82,9 +82,9 @@ func Distribute() func(c *gin.Context) {
         c.Set("channel", channel.Type)
         c.Set("channel_id", channel.Id)
         c.Set("channel_name", channel.Name)
-        c.Set("model_mapping", channel.ModelMapping)
+        c.Set("model_mapping", channel.GetModelMapping())
         c.Request.Header.Set("Authorization", fmt.Sprintf("Bearer %s", channel.Key))
-        c.Set("base_url", channel.BaseURL)
+        c.Set("base_url", channel.GetBaseURL())
         switch channel.Type {
         case common.ChannelTypeAzure:
             c.Set("api_version", channel.Other)


@@ -10,16 +10,18 @@ type Ability struct {
     Model     string `json:"model" gorm:"primaryKey;autoIncrement:false"`
     ChannelId int    `json:"channel_id" gorm:"primaryKey;autoIncrement:false;index"`
     Enabled   bool   `json:"enabled"`
-    Priority  *int64 `json:"priority" gorm:"bigint;default:0"`
+    Priority  *int64 `json:"priority" gorm:"bigint;default:0;index"`
 }
 func GetRandomSatisfiedChannel(group string, model string) (*Channel, error) {
     ability := Ability{}
     var err error = nil
+    maxPrioritySubQuery := DB.Model(&Ability{}).Select("MAX(priority)").Where("`group` = ? and model = ? and enabled = 1", group, model)
+    channelQuery := DB.Where("`group` = ? and model = ? and enabled = 1 and priority = (?)", group, model, maxPrioritySubQuery)
     if common.UsingSQLite {
-        err = DB.Where("`group` = ? and model = ? and enabled = 1", group, model).Order("CASE WHEN priority <> 0 THEN priority ELSE RANDOM() END DESC ").Limit(1).First(&ability).Error
+        err = channelQuery.Order("RANDOM()").First(&ability).Error
     } else {
-        err = DB.Where("`group` = ? and model = ? and enabled = 1", group, model).Order("CASE WHEN priority <> 0 THEN priority ELSE RAND() END DESC").Limit(1).First(&ability).Error
+        err = channelQuery.Order("RAND()").First(&ability).Error
     }
     if err != nil {
         return nil, err
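The rewritten query (together with the gorm-expression fixes from 1c5bb97a42/de868e4e4e) changes the selection rule: rather than ordering by a CASE expression, it restricts candidates to the rows tied at the maximum priority for the group/model pair and then orders randomly, so higher-priority channels always win and ties are broken uniformly. An in-memory sketch of the same rule:

```go
package main

import (
	"fmt"
	"math/rand"
)

type ability struct {
	ChannelId int
	Priority  int64
}

// pick keeps only the channels tied at the highest priority, then chooses
// one of them uniformly at random — the behavior the SQL subquery encodes.
func pick(abilities []ability) (int, bool) {
	var max int64
	var candidates []int
	for _, a := range abilities {
		if len(candidates) == 0 || a.Priority > max {
			max, candidates = a.Priority, []int{a.ChannelId}
		} else if a.Priority == max {
			candidates = append(candidates, a.ChannelId)
		}
	}
	if len(candidates) == 0 {
		return 0, false
	}
	return candidates[rand.Intn(len(candidates))], true
}

func main() {
	fmt.Println(pick([]ability{{1, 0}, {2, 10}, {3, 10}})) // 2 or 3, never 1
}
```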


@@ -186,7 +186,7 @@ func SyncChannelCache(frequency int) {
 }
 func CacheGetRandomSatisfiedChannel(group string, model string) (*Channel, error) {
-    if !common.RedisEnabled {
+    if !common.MemoryCacheEnabled {
         return GetRandomSatisfiedChannel(group, model)
     }
     channelSyncLock.RLock()


@@ -11,18 +11,18 @@ type Channel struct {
     Key                string  `json:"key" gorm:"not null;index"`
     Status             int     `json:"status" gorm:"default:1"`
     Name               string  `json:"name" gorm:"index"`
-    Weight             int     `json:"weight"`
+    Weight             *uint   `json:"weight" gorm:"default:0"`
     CreatedTime        int64   `json:"created_time" gorm:"bigint"`
     TestTime           int64   `json:"test_time" gorm:"bigint"`
     ResponseTime       int     `json:"response_time"` // in milliseconds
-    BaseURL            string  `json:"base_url" gorm:"column:base_url"`
+    BaseURL            *string `json:"base_url" gorm:"column:base_url;default:''"`
     Other              string  `json:"other"`
     Balance            float64 `json:"balance"` // in USD
     BalanceUpdatedTime int64   `json:"balance_updated_time" gorm:"bigint"`
     Models             string  `json:"models"`
     Group              string  `json:"group" gorm:"type:varchar(32);default:'default'"`
     UsedQuota          int64   `json:"used_quota" gorm:"bigint;default:0"`
-    ModelMapping       string  `json:"model_mapping" gorm:"type:varchar(1024);default:''"`
+    ModelMapping       *string `json:"model_mapping" gorm:"type:varchar(1024);default:''"`
     Priority           *int64  `json:"priority" gorm:"bigint;default:0"`
 }
@@ -80,12 +80,26 @@ func BatchInsertChannels(channels []Channel) error {
 }
 func (channel *Channel) GetPriority() int64 {
-    if channel == nil {
+    if channel.Priority == nil {
         return 0
     }
     return *channel.Priority
 }
+func (channel *Channel) GetBaseURL() string {
+    if channel.BaseURL == nil {
+        return ""
+    }
+    return *channel.BaseURL
+}
+func (channel *Channel) GetModelMapping() string {
+    if channel.ModelMapping == nil {
+        return ""
+    }
+    return *channel.ModelMapping
+}
 func (channel *Channel) Insert() error {
     var err error
     err = DB.Create(channel).Error
@@ -162,3 +176,8 @@ func updateChannelUsedQuota(id int, quota int) {
         common.SysError("failed to update channel used quota: " + err.Error())
     }
 }
+func DeleteChannelByStatus(status int64) (int64, error) {
+    result := DB.Where("status = ?", status).Delete(&Channel{})
+    return result.RowsAffected, result.Error
+}
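Turning `Weight`, `BaseURL`, and `ModelMapping` into pointers is the fix from commit 159b9e3369: with plain value fields, GORM's struct-based updates treat zero values (empty string, 0) as "not provided" and skip them, so a channel's base URL or model mapping could never be cleared once set. Pointers separate "absent" (nil) from "explicitly zero" (&""). A small sketch of the distinction, independent of GORM:

```go
package main

import "fmt"

// With a plain string, "" is indistinguishable from "field not provided";
// a *string makes the two cases distinct, which is what lets an update
// explicitly clear the column.
type channelUpdate struct {
	BaseURL *string
}

func describe(u channelUpdate) string {
	if u.BaseURL == nil {
		return "leave base_url untouched"
	}
	return fmt.Sprintf("set base_url to %q (empty string allowed)", *u.BaseURL)
}

func main() {
	empty := ""
	fmt.Println(describe(channelUpdate{}))                // leave base_url untouched
	fmt.Println(describe(channelUpdate{BaseURL: &empty})) // set base_url to ""
}
```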


@@ -8,18 +8,18 @@ import (
 )
 type Log struct {
-    Id               int    `json:"id"`
-    UserId           int    `json:"user_id"`
-    CreatedAt        int64  `json:"created_at" gorm:"bigint;index"`
-    Type             int    `json:"type" gorm:"index"`
+    Id               int    `json:"id;index:idx_created_at_id,priority:1"`
+    UserId           int    `json:"user_id" gorm:"index"`
+    CreatedAt        int64  `json:"created_at" gorm:"bigint;index:idx_created_at_id,priority:2;index:idx_created_at_type"`
+    Type             int    `json:"type" gorm:"index:idx_created_at_type"`
     Content          string `json:"content"`
-    Username         string `json:"username" gorm:"index;default:''"`
+    Username         string `json:"username" gorm:"index:index_username_model_name,priority:2;default:''"`
     TokenName        string `json:"token_name" gorm:"index;default:''"`
-    ModelName        string `json:"model_name" gorm:"index;default:''"`
+    ModelName        string `json:"model_name" gorm:"index;index:index_username_model_name,priority:1;default:''"`
     Quota            int    `json:"quota" gorm:"default:0"`
     PromptTokens     int    `json:"prompt_tokens" gorm:"default:0"`
     CompletionTokens int    `json:"completion_tokens" gorm:"default:0"`
-    Channel          int    `json:"channel" gorm:"default:0"`
+    ChannelId        int    `json:"channel" gorm:"index"`
 }
 const (
@@ -47,7 +47,6 @@ func RecordLog(userId int, logType int, content string) {
     }
 }
-
 func RecordConsumeLog(ctx context.Context, userId int, channelId int, promptTokens int, completionTokens int, modelName string, tokenName string, quota int, content string) {
     common.LogInfo(ctx, fmt.Sprintf("record consume log: userId=%d, channelId=%d, promptTokens=%d, completionTokens=%d, modelName=%s, tokenName=%s, quota=%d, content=%s", userId, channelId, promptTokens, completionTokens, modelName, tokenName, quota, content))
     if !common.LogConsumeEnabled {
@@ -64,7 +63,7 @@ func RecordConsumeLog(ctx context.Context, userId int, channelId int, promptTokens
         TokenName:        tokenName,
         ModelName:        modelName,
         Quota:            quota,
-        Channel:          channelId,
+        ChannelId:        channelId,
     }
     err := DB.Create(log).Error
     if err != nil {
@@ -135,7 +134,7 @@ func SearchUserLogs(userId int, keyword string) (logs []*Log, err error) {
 }
 func SumUsedQuota(logType int, startTimestamp int64, endTimestamp int64, modelName string, username string, tokenName string, channel int) (quota int) {
-    tx := DB.Table("logs").Select("sum(quota)")
+    tx := DB.Table("logs").Select("ifnull(sum(quota),0)")
     if username != "" {
         tx = tx.Where("username = ?", username)
     }
@@ -159,7 +158,7 @@ func SumUsedQuota(logType int, startTimestamp int64, endTimestamp int64, modelName
 }
 func SumUsedToken(logType int, startTimestamp int64, endTimestamp int64, modelName string, username string, tokenName string) (token int) {
-    tx := DB.Table("logs").Select("sum(prompt_tokens) + sum(completion_tokens)")
+    tx := DB.Table("logs").Select("ifnull(sum(prompt_tokens),0) + ifnull(sum(completion_tokens),0)")
     if username != "" {
         tx = tx.Where("username = ?", username)
     }
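The `ifnull(...)` changes are the fix for #541 ("sum null to 0"): `SUM()` over zero matching rows yields SQL NULL, and scanning NULL into a plain int fails, so the query now coalesces to 0 on the SQL side. The Go-side equivalent would be scanning into `sql.NullInt64`, sketched here:

```go
package main

import (
	"database/sql"
	"fmt"
)

// quotaOrZero treats a NULL aggregate (no rows matched) as 0 — the same
// normalization ifnull(sum(quota),0) performs inside the database.
func quotaOrZero(scanned sql.NullInt64) int64 {
	if !scanned.Valid { // NULL: no rows matched the filter
		return 0
	}
	return scanned.Int64
}

func main() {
	fmt.Println(quotaOrZero(sql.NullInt64{}))                       // 0
	fmt.Println(quotaOrZero(sql.NullInt64{Int64: 42, Valid: true})) // 42
}
```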


@@ -81,6 +81,7 @@ func InitDB() (err error) {
     if !common.IsMasterNode {
         return nil
     }
+    common.SysLog("database migration started")
     err = db.AutoMigrate(&Channel{})
     if err != nil {
         return err


@@ -74,6 +74,7 @@ func SetApiRouter(router *gin.Engine) {
             channelRoute.GET("/update_balance/:id", controller.UpdateChannelBalance)
             channelRoute.POST("/", controller.AddChannel)
             channelRoute.PUT("/", controller.UpdateChannel)
+            channelRoute.DELETE("/manually_disabled", controller.DeleteManuallyDisabledChannel)
             channelRoute.DELETE("/:id", controller.DeleteChannel)
         }
         tokenRoute := apiRouter.Group("/token")


@@ -1,5 +1,5 @@
 import React, { useEffect, useState } from 'react';
-import {Button, Form, Input, Label, Pagination, Popup, Table} from 'semantic-ui-react';
+import { Button, Form, Input, Label, Pagination, Popup, Table } from 'semantic-ui-react';
 import { Link } from 'react-router-dom';
 import { API, showError, showInfo, showNotice, showSuccess, timestamp2string } from '../helpers';
@@ -96,7 +96,7 @@ const ChannelsTable = () => {
     });
   }, []);
-  const manageChannel = async (id, action, idx, priority) => {
+  const manageChannel = async (id, action, idx, value) => {
     let data = { id };
     let res;
     switch (action) {
@@ -112,10 +112,20 @@ const ChannelsTable = () => {
         res = await API.put('/api/channel/', data);
         break;
       case 'priority':
-        if (priority === '') {
+        if (value === '') {
           return;
         }
-        data.priority = parseInt(priority);
+        data.priority = parseInt(value);
+        res = await API.put('/api/channel/', data);
+        break;
+      case 'weight':
+        if (value === '') {
+          return;
+        }
+        data.weight = parseInt(value);
+        if (data.weight < 0) {
+          data.weight = 0;
+        }
         res = await API.put('/api/channel/', data);
         break;
     }
@@ -142,9 +152,23 @@ const ChannelsTable = () => {
       return <Label basic color='green'>已启用</Label>;
     case 2:
       return (
-        <Label basic color='red'>
-          已禁用
-        </Label>
+        <Popup
+          trigger={<Label basic color='red'>
+            已禁用
+          </Label>}
+          content='本渠道被手动禁用'
+          basic
+        />
+      );
+    case 3:
+      return (
+        <Popup
+          trigger={<Label basic color='yellow'>
+            已禁用
+          </Label>}
+          content='本渠道被程序自动禁用'
+          basic
+        />
       );
     default:
       return (
@@ -202,7 +226,7 @@ const ChannelsTable = () => {
       showInfo(`通道 ${name} 测试成功,耗时 ${time.toFixed(2)} 秒。`);
     } else {
       showError(message);
-      showNotice("当前版本测试是通过按照 OpenAI API 格式使用 gpt-3.5-turbo 模型进行非流式请求实现的,因此测试报错并不一定代表通道不可用,该功能后续会修复。")
+      showNotice('当前版本测试是通过按照 OpenAI API 格式使用 gpt-3.5-turbo 模型进行非流式请求实现的,因此测试报错并不一定代表通道不可用,该功能后续会修复。');
     }
   };
@@ -216,6 +240,17 @@ const ChannelsTable = () => {
     }
   };
+  const deleteAllManuallyDisabledChannels = async () => {
+    const res = await API.delete(`/api/channel/manually_disabled`);
+    const { success, message, data } = res.data;
+    if (success) {
+      showSuccess(`已删除所有手动禁用渠道,共计 ${data} 个`);
+      await refresh();
+    } else {
+      showError(message);
+    }
+  };
   const updateChannelBalance = async (id, name, idx) => {
     const res = await API.get(`/api/channel/update_balance/${id}/`);
     const { success, message, balance } = res.data;
@@ -343,10 +378,10 @@ const ChannelsTable = () => {
             余额
           </Table.HeaderCell>
           <Table.HeaderCell
             style={{ cursor: 'pointer' }}
             onClick={() => {
               sortChannel('priority');
             }}
           >
             优先级
           </Table.HeaderCell>
@@ -390,18 +425,18 @@ const ChannelsTable = () => {
           </Table.Cell>
           <Table.Cell>
             <Popup
-              trigger={<Input type="number" defaultValue={channel.priority} onBlur={(event) => {
+              trigger={<Input type='number' defaultValue={channel.priority} onBlur={(event) => {
                 manageChannel(
                   channel.id,
                   'priority',
                   idx,
-                  event.target.value,
+                  event.target.value
                 );
               }}>
-                <input style={{maxWidth:'60px'}} />
+                <input style={{ maxWidth: '60px' }} />
               </Input>}
               content='渠道选择优先级,越高越优先'
               basic
             />
           </Table.Cell>
           <Table.Cell>
@@ -481,6 +516,20 @@ const ChannelsTable = () => {
           </Button>
           <Button size='small' onClick={updateAllChannelsBalance}
                   loading={loading || updatingBalance}>更新所有已启用通道余额</Button>
+          <Popup
+            trigger={
+              <Button size='small' loading={loading}>
+                删除所有手动禁用渠道
+              </Button>
+            }
+            on='click'
+            flowing
+            hoverable
+          >
+            <Button size='small' loading={loading} negative onClick={deleteAllManuallyDisabledChannels}>
+              确认删除
+            </Button>
+          </Popup>
           <Pagination
             floated='right'
             activePage={activePage}


@@ -2,8 +2,8 @@ import React, { useContext, useEffect, useState } from 'react';
 import { Button, Divider, Form, Grid, Header, Image, Message, Modal, Segment } from 'semantic-ui-react';
 import { Link, useNavigate, useSearchParams } from 'react-router-dom';
 import { UserContext } from '../context/User';
-import { API, getLogo, showError, showSuccess } from '../helpers';
-import { getOAuthState, onGitHubOAuthClicked } from './utils';
+import { API, getLogo, showError, showSuccess, showWarning } from '../helpers';
+import { onGitHubOAuthClicked } from './utils';
 const LoginForm = () => {
   const [inputs, setInputs] = useState({
@@ -68,8 +68,14 @@ const LoginForm = () => {
     if (success) {
       userDispatch({ type: 'login', payload: data });
       localStorage.setItem('user', JSON.stringify(data));
-      navigate('/');
-      showSuccess('登录成功!');
+      if (username === 'root' && password === '123456') {
+        navigate('/user/edit');
+        showSuccess('登录成功!');
+        showWarning('请立刻修改默认密码!');
+      } else {
+        navigate('/token');
+        showSuccess('登录成功!');
+      }
     } else {
       showError(message);
     }
@@ -126,7 +132,7 @@ const LoginForm = () => {
           circular
           color='black'
           icon='github'
-          onClick={()=>onGitHubOAuthClicked(status.github_client_id)}
+          onClick={() => onGitHubOAuthClicked(status.github_client_id)}
         />
       ) : (
         <></>


@@ -67,7 +67,7 @@ const EditChannel = () => {
         localModels = ['ERNIE-Bot', 'ERNIE-Bot-turbo', 'Embedding-V1'];
         break;
       case 17:
-        localModels = ['qwen-v1', 'qwen-plus-v1', 'text-embedding-v1'];
+        localModels = ['qwen-turbo', 'qwen-plus', 'text-embedding-v1'];
         break;
       case 16:
         localModels = ['chatglm_pro', 'chatglm_std', 'chatglm_lite'];
@@ -174,7 +174,7 @@ const EditChannel = () => {
       return;
     }
     let localInputs = inputs;
-    if (localInputs.base_url.endsWith('/')) {
+    if (localInputs.base_url && localInputs.base_url.endsWith('/')) {
       localInputs.base_url = localInputs.base_url.slice(0, localInputs.base_url.length - 1);
     }
     if (localInputs.type === 3 && localInputs.other === '') {
@@ -183,9 +183,6 @@ const EditChannel = () => {
     if (localInputs.type === 18 && localInputs.other === '') {
       localInputs.other = 'v2.1';
     }
-    if (localInputs.model_mapping === '') {
-      localInputs.model_mapping = '{}';
-    }
     let res;
     localInputs.models = localInputs.models.join(',');
     localInputs.group = localInputs.groups.join(',');


@@ -102,7 +102,7 @@ const EditUser = () => {
           label='密码'
           name='password'
           type={'password'}
-          placeholder={'请输入新的密码'}
+          placeholder={'请输入新的密码,最短 8 位'}
           onChange={handleInputChange}
           value={password}
           autoComplete='new-password'