Compare commits

11 Commits

Author SHA1 Message Date
JustSong
594f06e7b0 perf: lazy initialization for token encoders (close #566) 2023-09-29 17:56:11 +08:00
JustSong
197d1d7a9d docs: update readme 2023-09-29 17:49:47 +08:00
JustSong
f9b748c2ca chore: add MEMORY_CACHE_ENABLED env variable 2023-09-29 11:38:27 +08:00
JustSong
fd98463611 chore: update ali's model name 2023-09-23 22:57:59 +08:00
JustSong
f5a1cd3463 feat: add support for gpt-3.5-turbo-instruct (close #545) 2023-09-23 22:37:11 +08:00
igophper
8651451e53 fix: sum null to 0 (#541)
Co-authored-by: igophper <admin@jialilgu.cn>
2023-09-19 22:39:54 +08:00
JustSong
1c5bb97a42 fix: fix gorm expression
Co-authored-by: 初音控灬 <xyfacai@gmail.com>
2023-09-18 23:11:37 +08:00
JustSong
de868e4e4e fix: fix gorm expression
Co-authored-by: 初音控灬 <xyfacai@gmail.com>
2023-09-18 23:07:59 +08:00
JustSong
1d258cc898 fix: add default value for base url 2023-09-18 22:49:05 +08:00
JustSong
37e09d764c fix: fix random selection is not working when directly using database 2023-09-18 22:39:10 +08:00
JustSong
159b9e3369 fix: fix unable to set zero value for base url & model mapping 2023-09-18 22:07:17 +08:00
15 changed files with 106 additions and 68 deletions

View File

@@ -59,6 +59,9 @@ _✨ Access all large models through the standard OpenAI API format, ready to use out of the box ✨_
 > **Warning**
 > The latest image pulled via Docker may be an `alpha` release; pin a specific version manually if you need stability.
 
+> **Warning**
+> After logging in as the root user for the first time, be sure to change the default password `123456`
+
 ## Features
 1. Supports multiple large models:
    + [x] [OpenAI ChatGPT series models](https://platform.openai.com/docs/guides/gpt/chat-completions-api) (with [Azure OpenAI API](https://learn.microsoft.com/en-us/azure/ai-services/openai/reference) support)
@@ -309,22 +312,24 @@ graph LR
    + `SQL_CONN_MAX_LIFETIME`: maximum lifetime of a connection, in minutes; defaults to `60`.
 4. `FRONTEND_BASE_URL`: when set, page requests are redirected to the specified address; can only be set on the server.
    + Example: `FRONTEND_BASE_URL=https://openai.justsong.cn`
-5. `SYNC_FREQUENCY`: when set, configuration is periodically synced with the database, in seconds; no syncing occurs if unset
+5. `MEMORY_CACHE_ENABLED`: enables the in-memory cache, which delays updates to user quota; allowed values are `true` and `false`; defaults to `false` if unset
+   + Example: `MEMORY_CACHE_ENABLED=true`
+6. `SYNC_FREQUENCY`: how often configuration is synced with the database when the cache is enabled, in seconds; defaults to `600` seconds.
    + Example: `SYNC_FREQUENCY=60`
-6. `NODE_TYPE`: when set, specifies the node type; allowed values are `master` and `slave`; defaults to `master` if unset.
+7. `NODE_TYPE`: when set, specifies the node type; allowed values are `master` and `slave`; defaults to `master` if unset.
    + Example: `NODE_TYPE=slave`
-7. `CHANNEL_UPDATE_FREQUENCY`: when set, channel balances are refreshed periodically, in minutes; no refresh occurs if unset.
+8. `CHANNEL_UPDATE_FREQUENCY`: when set, channel balances are refreshed periodically, in minutes; no refresh occurs if unset.
    + Example: `CHANNEL_UPDATE_FREQUENCY=1440`
-8. `CHANNEL_TEST_FREQUENCY`: when set, channels are tested periodically, in minutes; no testing occurs if unset.
+9. `CHANNEL_TEST_FREQUENCY`: when set, channels are tested periodically, in minutes; no testing occurs if unset.
    + Example: `CHANNEL_TEST_FREQUENCY=1440`
-9. `POLLING_INTERVAL`: interval between requests when batch-updating channel balances or testing availability, in seconds; defaults to no interval.
+10. `POLLING_INTERVAL`: interval between requests when batch-updating channel balances or testing availability, in seconds; defaults to no interval.
    + Example: `POLLING_INTERVAL=5`
-10. `BATCH_UPDATE_ENABLED`: enables aggregated batch database updates, which delays updates to user quota; allowed values are `true` and `false`; defaults to `false` if unset.
+11. `BATCH_UPDATE_ENABLED`: enables aggregated batch database updates, which delays updates to user quota; allowed values are `true` and `false`; defaults to `false` if unset.
    + Example: `BATCH_UPDATE_ENABLED=true`
    + If you run into too many database connections, try enabling this option.
-11. `BATCH_UPDATE_INTERVAL=5`: interval between aggregated batch updates, in seconds; defaults to `5`.
+12. `BATCH_UPDATE_INTERVAL=5`: interval between aggregated batch updates, in seconds; defaults to `5`.
    + Example: `BATCH_UPDATE_INTERVAL=5`
-12. Request rate limits:
+13. Request rate limits:
    + `GLOBAL_API_RATE_LIMIT`: global API rate limit (excluding relay requests), maximum requests per IP within three minutes; defaults to `180`.
    + `GLOBAL_WEB_RATE_LIMIT`: global web rate limit, maximum requests per IP within three minutes; defaults to `60`.

View File

@@ -56,6 +56,7 @@ var EmailDomainWhitelist = []string{
} }
var DebugEnabled = os.Getenv("DEBUG") == "true" var DebugEnabled = os.Getenv("DEBUG") == "true"
var MemoryCacheEnabled = os.Getenv("MEMORY_CACHE_ENABLED") == "true"
var LogConsumeEnabled = true var LogConsumeEnabled = true
@@ -92,7 +93,7 @@ var IsMasterNode = os.Getenv("NODE_TYPE") != "slave"
var requestInterval, _ = strconv.Atoi(os.Getenv("POLLING_INTERVAL")) var requestInterval, _ = strconv.Atoi(os.Getenv("POLLING_INTERVAL"))
var RequestInterval = time.Duration(requestInterval) * time.Second var RequestInterval = time.Duration(requestInterval) * time.Second
var SyncFrequency = 10 * 60 // unit is second, will be overwritten by SYNC_FREQUENCY var SyncFrequency = GetOrDefault("SYNC_FREQUENCY", 10*60) // unit is second
var BatchUpdateEnabled = false var BatchUpdateEnabled = false
var BatchUpdateInterval = GetOrDefault("BATCH_UPDATE_INTERVAL", 5) var BatchUpdateInterval = GetOrDefault("BATCH_UPDATE_INTERVAL", 5)
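
`SyncFrequency` now takes its default from `GetOrDefault`, the same helper already used for `BATCH_UPDATE_INTERVAL`. The helper itself is not part of this diff; a minimal sketch of what such an integer-env helper might look like (assumed behavior — the real one may also log parse failures):

package common

import (
	"os"
	"strconv"
)

// GetOrDefault reads an integer environment variable, falling back to
// defaultValue when the variable is unset or cannot be parsed.
func GetOrDefault(env string, defaultValue int) int {
	raw := os.Getenv(env)
	if raw == "" {
		return defaultValue
	}
	num, err := strconv.Atoi(raw)
	if err != nil {
		return defaultValue
	}
	return num
}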

View File

@@ -24,6 +24,7 @@ var ModelRatio = map[string]float64{
 	"gpt-3.5-turbo-0613":     0.75,
 	"gpt-3.5-turbo-16k":      1.5, // $0.003 / 1K tokens
 	"gpt-3.5-turbo-16k-0613": 1.5,
+	"gpt-3.5-turbo-instruct": 0.75, // $0.0015 / 1K tokens
 	"text-ada-001":           0.2,
 	"text-babbage-001":       0.25,
 	"text-curie-001":         1,
@@ -50,8 +51,8 @@ var ModelRatio = map[string]float64{
 	"chatglm_pro":       0.7143, // ¥0.01 / 1k tokens
 	"chatglm_std":       0.3572, // ¥0.005 / 1k tokens
 	"chatglm_lite":      0.1429, // ¥0.002 / 1k tokens
-	"qwen-v1":           0.8572, // ¥0.012 / 1k tokens
-	"qwen-plus-v1":      1,      // ¥0.014 / 1k tokens
+	"qwen-turbo":        0.8572, // ¥0.012 / 1k tokens
+	"qwen-plus":         10,     // ¥0.14 / 1k tokens
 	"text-embedding-v1": 0.05,   // ¥0.0007 / 1k tokens
 	"SparkDesk":         1.2858, // ¥0.018 / 1k tokens
 	"360GPT_S2_V9":      0.8572, // ¥0.012 / 1k tokens

View File

@@ -111,7 +111,7 @@ func GetResponseBody(method, url string, channel *model.Channel, headers http.Header) ([]byte, error) {
 }
 func updateChannelCloseAIBalance(channel *model.Channel) (float64, error) {
-	url := fmt.Sprintf("%s/dashboard/billing/credit_grants", channel.BaseURL)
+	url := fmt.Sprintf("%s/dashboard/billing/credit_grants", channel.GetBaseURL())
 	body, err := GetResponseBody("GET", url, channel, GetAuthHeader(channel.Key))
 	if err != nil {
@@ -201,18 +201,18 @@ func updateChannelAIGC2DBalance(channel *model.Channel) (float64, error) {
 func updateChannelBalance(channel *model.Channel) (float64, error) {
 	baseURL := common.ChannelBaseURLs[channel.Type]
-	if channel.BaseURL == "" {
-		channel.BaseURL = baseURL
+	if channel.GetBaseURL() == "" {
+		channel.BaseURL = &baseURL
 	}
 	switch channel.Type {
 	case common.ChannelTypeOpenAI:
-		if channel.BaseURL != "" {
-			baseURL = channel.BaseURL
+		if channel.GetBaseURL() != "" {
+			baseURL = channel.GetBaseURL()
 		}
 	case common.ChannelTypeAzure:
 		return 0, errors.New("尚未实现")
 	case common.ChannelTypeCustom:
-		baseURL = channel.BaseURL
+		baseURL = channel.GetBaseURL()
 	case common.ChannelTypeCloseAI:
 		return updateChannelCloseAIBalance(channel)
 	case common.ChannelTypeOpenAISB:

View File

@@ -42,10 +42,10 @@ func testChannel(channel *model.Channel, request ChatRequest) (err error, openaiErr *OpenAIError) {
 	}
 	requestURL := common.ChannelBaseURLs[channel.Type]
 	if channel.Type == common.ChannelTypeAzure {
-		requestURL = fmt.Sprintf("%s/openai/deployments/%s/chat/completions?api-version=2023-03-15-preview", channel.BaseURL, request.Model)
+		requestURL = fmt.Sprintf("%s/openai/deployments/%s/chat/completions?api-version=2023-03-15-preview", channel.GetBaseURL(), request.Model)
 	} else {
-		if channel.BaseURL != "" {
-			requestURL = channel.BaseURL
+		if channel.GetBaseURL() != "" {
+			requestURL = channel.GetBaseURL()
 		}
 		requestURL += "/v1/chat/completions"
 	}

View File

@@ -117,6 +117,15 @@ func init() {
 			Root:       "gpt-3.5-turbo-16k-0613",
 			Parent:     nil,
 		},
+		{
+			Id:         "gpt-3.5-turbo-instruct",
+			Object:     "model",
+			Created:    1677649963,
+			OwnedBy:    "openai",
+			Permission: permission,
+			Root:       "gpt-3.5-turbo-instruct",
+			Parent:     nil,
+		},
 		{
 			Id:         "gpt-4",
 			Object:     "model",
@@ -343,21 +352,21 @@ func init() {
 			Parent:     nil,
 		},
 		{
-			Id:         "qwen-v1",
+			Id:         "qwen-turbo",
 			Object:     "model",
 			Created:    1677649963,
 			OwnedBy:    "ali",
 			Permission: permission,
-			Root:       "qwen-v1",
+			Root:       "qwen-turbo",
 			Parent:     nil,
 		},
 		{
-			Id:         "qwen-plus-v1",
+			Id:         "qwen-plus",
 			Object:     "model",
 			Created:    1677649963,
 			OwnedBy:    "ali",
 			Permission: permission,
-			Root:       "qwen-plus-v1",
+			Root:       "qwen-plus",
 			Parent:     nil,
 		},
 		{
{ {

View File

@@ -9,44 +9,53 @@ import (
 	"net/http"
 	"one-api/common"
 	"strconv"
+	"strings"
 )
 var stopFinishReason = "stop"
+// tokenEncoderMap won't grow after initialization
 var tokenEncoderMap = map[string]*tiktoken.Tiktoken{}
+var defaultTokenEncoder *tiktoken.Tiktoken
 func InitTokenEncoders() {
 	common.SysLog("initializing token encoders")
-	fallbackTokenEncoder, err := tiktoken.EncodingForModel("gpt-3.5-turbo")
+	gpt35TokenEncoder, err := tiktoken.EncodingForModel("gpt-3.5-turbo")
 	if err != nil {
-		common.FatalLog(fmt.Sprintf("failed to get fallback token encoder: %s", err.Error()))
+		common.FatalLog(fmt.Sprintf("failed to get gpt-3.5-turbo token encoder: %s", err.Error()))
+	}
+	defaultTokenEncoder = gpt35TokenEncoder
+	gpt4TokenEncoder, err := tiktoken.EncodingForModel("gpt-4")
+	if err != nil {
+		common.FatalLog(fmt.Sprintf("failed to get gpt-4 token encoder: %s", err.Error()))
 	}
 	for model, _ := range common.ModelRatio {
-		tokenEncoder, err := tiktoken.EncodingForModel(model)
-		if err != nil {
-			common.SysError(fmt.Sprintf("using fallback encoder for model %s", model))
-			tokenEncoderMap[model] = fallbackTokenEncoder
-			continue
+		if strings.HasPrefix(model, "gpt-3.5") {
+			tokenEncoderMap[model] = gpt35TokenEncoder
+		} else if strings.HasPrefix(model, "gpt-4") {
+			tokenEncoderMap[model] = gpt4TokenEncoder
+		} else {
+			tokenEncoderMap[model] = nil
 		}
-		tokenEncoderMap[model] = tokenEncoder
 	}
 	common.SysLog("token encoders initialized")
 }
 func getTokenEncoder(model string) *tiktoken.Tiktoken {
-	if tokenEncoder, ok := tokenEncoderMap[model]; ok {
+	tokenEncoder, ok := tokenEncoderMap[model]
+	if ok && tokenEncoder != nil {
 		return tokenEncoder
 	}
-	tokenEncoder, err := tiktoken.EncodingForModel(model)
-	if err != nil {
-		common.SysError(fmt.Sprintf("failed to get token encoder for model %s: %s, using encoder for gpt-3.5-turbo", model, err.Error()))
-		tokenEncoder, err = tiktoken.EncodingForModel("gpt-3.5-turbo")
+	if ok {
+		tokenEncoder, err := tiktoken.EncodingForModel(model)
 		if err != nil {
-			common.FatalLog(fmt.Sprintf("failed to get token encoder for model gpt-3.5-turbo: %s", err.Error()))
+			common.SysError(fmt.Sprintf("failed to get token encoder for model %s: %s, using encoder for gpt-3.5-turbo", model, err.Error()))
+			tokenEncoder = defaultTokenEncoder
 		}
+		tokenEncoderMap[model] = tokenEncoder
+		return tokenEncoder
 	}
-	tokenEncoderMap[model] = tokenEncoder
-	return tokenEncoder
+	return defaultTokenEncoder
 }
 func getTokenNum(tokenEncoder *tiktoken.Tiktoken, text string) int {
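
The rewrite pre-seeds `tokenEncoderMap` with every model known to `common.ModelRatio`: GPT models share two eagerly built encoders, and everything else maps to an explicit nil. A lookup can then tell apart three cases — already loaded, known but not yet loaded (build lazily and cache), and unknown (fall back without touching the map) — which is why, as the new comment says, the key set never grows after initialization. The same nil-sentinel pattern in isolation (a sketch with a stand-in loader, not tiktoken):

package main

import "fmt"

type encoder struct{ name string }

// Seeded once at startup: keys are the full model set, values may be nil.
var cache = map[string]*encoder{
	"gpt-3.5-turbo": {name: "cl100k_base"}, // eagerly initialized
	"qwen-turbo":    nil,                   // known model, loaded on first use
}
var fallback = &encoder{name: "cl100k_base"}

// load stands in for tiktoken.EncodingForModel.
func load(model string) (*encoder, error) { return &encoder{name: model}, nil }

func get(model string) *encoder {
	enc, known := cache[model]
	if known && enc != nil {
		return enc // fast path: already loaded
	}
	if known {
		enc, err := load(model) // lazy initialization on first use
		if err != nil {
			enc = fallback
		}
		cache[model] = enc // replaces a value; the key set never grows
		return enc
	}
	return fallback // unknown model: don't grow the map
}

func main() {
	fmt.Println(get("qwen-turbo").name)    // qwen-turbo (lazily loaded)
	fmt.Println(get("no-such-model").name) // cl100k_base (fallback)
}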

main.go
View File

@@ -2,6 +2,7 @@ package main
 import (
 	"embed"
+	"fmt"
 	"github.com/gin-contrib/sessions"
 	"github.com/gin-contrib/sessions/cookie"
 	"github.com/gin-gonic/gin"
@@ -50,18 +51,17 @@ func main() {
 	// Initialize options
 	model.InitOptionMap()
 	if common.RedisEnabled {
+		// for compatibility with old versions
+		common.MemoryCacheEnabled = true
+	}
+	if common.MemoryCacheEnabled {
+		common.SysLog("memory cache enabled")
+		common.SysError(fmt.Sprintf("sync frequency: %d seconds", common.SyncFrequency))
 		model.InitChannelCache()
 	}
-	if os.Getenv("SYNC_FREQUENCY") != "" {
-		frequency, err := strconv.Atoi(os.Getenv("SYNC_FREQUENCY"))
-		if err != nil {
-			common.FatalLog("failed to parse SYNC_FREQUENCY: " + err.Error())
-		}
-		common.SyncFrequency = frequency
-		go model.SyncOptions(frequency)
-		if common.RedisEnabled {
-			go model.SyncChannelCache(frequency)
-		}
+	if common.MemoryCacheEnabled {
+		go model.SyncOptions(common.SyncFrequency)
+		go model.SyncChannelCache(common.SyncFrequency)
 	}
 	if os.Getenv("CHANNEL_UPDATE_FREQUENCY") != "" {
 		frequency, err := strconv.Atoi(os.Getenv("CHANNEL_UPDATE_FREQUENCY"))

View File

@@ -94,7 +94,7 @@ func TokenAuth() func(c *gin.Context) {
 			abortWithMessage(c, http.StatusUnauthorized, err.Error())
 			return
 		}
-		userEnabled, err := model.IsUserEnabled(token.UserId)
+		userEnabled, err := model.CacheIsUserEnabled(token.UserId)
 		if err != nil {
 			abortWithMessage(c, http.StatusInternalServerError, err.Error())
 			return
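
`TokenAuth` now checks the user's status through a cached lookup instead of querying the database on every request. `CacheIsUserEnabled` itself isn't shown in this diff; the shape of such a cache-aside lookup, sketched with an in-memory map and a made-up one-minute TTL (the real function lives in the model package and uses the project's own cache machinery):

package model

import (
	"sync"
	"time"
)

type statusEntry struct {
	enabled   bool
	expiresAt time.Time
}

var (
	statusCache = map[int]statusEntry{}
	statusLock  sync.RWMutex
)

// cacheIsUserEnabled is a hypothetical sketch: queryDB stands in for the
// real database lookup (e.g. IsUserEnabled).
func cacheIsUserEnabled(userId int, queryDB func(int) (bool, error)) (bool, error) {
	statusLock.RLock()
	entry, ok := statusCache[userId]
	statusLock.RUnlock()
	if ok && time.Now().Before(entry.expiresAt) {
		return entry.enabled, nil // fresh hit: no database round trip
	}
	enabled, err := queryDB(userId) // miss or stale: fall back to the database
	if err != nil {
		return false, err
	}
	statusLock.Lock()
	statusCache[userId] = statusEntry{enabled, time.Now().Add(time.Minute)}
	statusLock.Unlock()
	return enabled, nil
}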

View File

@@ -82,9 +82,9 @@ func Distribute() func(c *gin.Context) {
 		c.Set("channel", channel.Type)
 		c.Set("channel_id", channel.Id)
 		c.Set("channel_name", channel.Name)
-		c.Set("model_mapping", channel.ModelMapping)
+		c.Set("model_mapping", channel.GetModelMapping())
 		c.Request.Header.Set("Authorization", fmt.Sprintf("Bearer %s", channel.Key))
-		c.Set("base_url", channel.BaseURL)
+		c.Set("base_url", channel.GetBaseURL())
 		switch channel.Type {
 		case common.ChannelTypeAzure:
 			c.Set("api_version", channel.Other)

View File

@@ -10,16 +10,18 @@ type Ability struct {
 	Model     string `json:"model" gorm:"primaryKey;autoIncrement:false"`
 	ChannelId int    `json:"channel_id" gorm:"primaryKey;autoIncrement:false;index"`
 	Enabled   bool   `json:"enabled"`
-	Priority  *int64 `json:"priority" gorm:"bigint;default:0"`
+	Priority  *int64 `json:"priority" gorm:"bigint;default:0;index"`
 }
 func GetRandomSatisfiedChannel(group string, model string) (*Channel, error) {
 	ability := Ability{}
 	var err error = nil
+	maxPrioritySubQuery := DB.Model(&Ability{}).Select("MAX(priority)").Where("`group` = ? and model = ? and enabled = 1", group, model)
+	channelQuery := DB.Where("`group` = ? and model = ? and enabled = 1 and priority = (?)", group, model, maxPrioritySubQuery)
 	if common.UsingSQLite {
-		err = DB.Where("`group` = ? and model = ? and enabled = 1", group, model).Order("CASE WHEN priority <> 0 THEN priority ELSE RANDOM() END DESC ").Limit(1).First(&ability).Error
+		err = channelQuery.Order("RANDOM()").First(&ability).Error
 	} else {
-		err = DB.Where("`group` = ? and model = ? and enabled = 1", group, model).Order("CASE WHEN priority <> 0 THEN priority ELSE RAND() END DESC").Limit(1).First(&ability).Error
+		err = channelQuery.Order("RAND()").First(&ability).Error
 	}
 	if err != nil {
 		return nil, err
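
The old ordering folded real priorities and a random number into a single sort key, which broke both deterministic priority selection and uniform randomness on some databases. The fixed gorm expression first pins the candidate set to the highest priority via a subquery, then randomizes only within that tier. Roughly the SQL the chained calls should produce (a reconstruction for illustration; gorm's First() also appends primary-key ordering):

package model

// A reconstruction of the generated query, not taken from the diff.
// SQLite uses RANDOM() instead of RAND().
const satisfiedChannelSQL = "SELECT * FROM abilities " +
	"WHERE `group` = ? AND model = ? AND enabled = 1 " +
	"AND priority = (SELECT MAX(priority) FROM abilities " +
	"WHERE `group` = ? AND model = ? AND enabled = 1) " +
	"ORDER BY RAND() LIMIT 1"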

View File

@@ -186,7 +186,7 @@ func SyncChannelCache(frequency int) {
 }
 func CacheGetRandomSatisfiedChannel(group string, model string) (*Channel, error) {
-	if !common.RedisEnabled {
+	if !common.MemoryCacheEnabled {
 		return GetRandomSatisfiedChannel(group, model)
 	}
 	channelSyncLock.RLock()

View File

@@ -15,14 +15,14 @@ type Channel struct {
 	CreatedTime        int64   `json:"created_time" gorm:"bigint"`
 	TestTime           int64   `json:"test_time" gorm:"bigint"`
 	ResponseTime       int     `json:"response_time"` // in milliseconds
-	BaseURL            string  `json:"base_url" gorm:"column:base_url"`
+	BaseURL            *string `json:"base_url" gorm:"column:base_url;default:''"`
 	Other              string  `json:"other"`
 	Balance            float64 `json:"balance"` // in USD
 	BalanceUpdatedTime int64   `json:"balance_updated_time" gorm:"bigint"`
 	Models             string  `json:"models"`
 	Group              string  `json:"group" gorm:"type:varchar(32);default:'default'"`
 	UsedQuota          int64   `json:"used_quota" gorm:"bigint;default:0"`
-	ModelMapping       string  `json:"model_mapping" gorm:"type:varchar(1024);default:''"`
+	ModelMapping       *string `json:"model_mapping" gorm:"type:varchar(1024);default:''"`
 	Priority           *int64  `json:"priority" gorm:"bigint;default:0"`
 }
@@ -80,12 +80,26 @@ }
 }
 func (channel *Channel) GetPriority() int64 {
-	if channel == nil {
+	if channel.Priority == nil {
 		return 0
 	}
 	return *channel.Priority
 }
+func (channel *Channel) GetBaseURL() string {
+	if channel.BaseURL == nil {
+		return ""
+	}
+	return *channel.BaseURL
+}
+func (channel *Channel) GetModelMapping() string {
+	if channel.ModelMapping == nil {
+		return ""
+	}
+	return *channel.ModelMapping
+}
 func (channel *Channel) Insert() error {
 	var err error
 	err = DB.Create(channel).Error
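
Switching `BaseURL` and `ModelMapping` to `*string` is what makes the "unable to set zero value" fix work: GORM's struct-based `Updates` skips fields holding their type's zero value, so a plain empty string could never overwrite a stored URL, while a pointer distinguishes nil (leave unchanged) from a pointer to "" (write the empty string). A small sketch of the difference, as a hypothetical helper in the same model package (assuming GORM v2 struct-update semantics):

// clearBaseURL is not part of the diff; it uses this file's DB and Channel.
func clearBaseURL(channelId int) error {
	empty := ""
	// With a plain string field, Updates(Channel{BaseURL: ""}) would be
	// dropped as a zero value and the stored URL kept; a non-nil pointer
	// to "" survives the zero-value filter and actually clears the column.
	return DB.Model(&Channel{}).Where("id = ?", channelId).
		Updates(Channel{BaseURL: &empty}).Error
}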

View File

@@ -135,7 +135,7 @@ func SearchUserLogs(userId int, keyword string) (logs []*Log, err error) {
 }
 func SumUsedQuota(logType int, startTimestamp int64, endTimestamp int64, modelName string, username string, tokenName string, channel int) (quota int) {
-	tx := DB.Table("logs").Select("sum(quota)")
+	tx := DB.Table("logs").Select("ifnull(sum(quota),0)")
 	if username != "" {
 		tx = tx.Where("username = ?", username)
 	}
@@ -159,7 +159,7 @@ func SumUsedQuota(logType int, startTimestamp int64, endTimestamp int64, modelName string, username string, tokenName string, channel int) (quota int) {
 }
 func SumUsedToken(logType int, startTimestamp int64, endTimestamp int64, modelName string, username string, tokenName string) (token int) {
-	tx := DB.Table("logs").Select("sum(prompt_tokens) + sum(completion_tokens)")
+	tx := DB.Table("logs").Select("ifnull(sum(prompt_tokens),0) + ifnull(sum(completion_tokens),0)")
 	if username != "" {
 		tx = tx.Where("username = ?", username)
 	}
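
This is the "sum null to 0" fix (#541): SQL `SUM` over an empty result set yields `NULL` rather than `0`, and scanning `NULL` into a plain Go `int` destination is an error, so the aggregates are wrapped in `ifnull(..., 0)`. The equivalent guard written on the Go side instead (a sketch using database/sql and `sql.NullInt64`, not the project's gorm code):

package model

import "database/sql"

// sumQuota shows the Go-side alternative to IFNULL(SUM(quota), 0).
func sumQuota(db *sql.DB, username string) (int64, error) {
	var total sql.NullInt64
	row := db.QueryRow("SELECT SUM(quota) FROM logs WHERE username = ?", username)
	if err := row.Scan(&total); err != nil {
		return 0, err
	}
	if !total.Valid { // no matching rows: SUM returned NULL
		return 0, nil
	}
	return total.Int64, nil
}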

View File

@@ -67,7 +67,7 @@ const EditChannel = () => {
         localModels = ['ERNIE-Bot', 'ERNIE-Bot-turbo', 'Embedding-V1'];
         break;
       case 17:
-        localModels = ['qwen-v1', 'qwen-plus-v1', 'text-embedding-v1'];
+        localModels = ['qwen-turbo', 'qwen-plus', 'text-embedding-v1'];
         break;
       case 16:
         localModels = ['chatglm_pro', 'chatglm_std', 'chatglm_lite'];
@@ -174,7 +174,7 @@ const EditChannel = () => {
       return;
     }
     let localInputs = inputs;
-    if (localInputs.base_url.endsWith('/')) {
+    if (localInputs.base_url && localInputs.base_url.endsWith('/')) {
       localInputs.base_url = localInputs.base_url.slice(0, localInputs.base_url.length - 1);
     }
     if (localInputs.type === 3 && localInputs.other === '') {
@@ -183,9 +183,6 @@ const EditChannel = () => {
     if (localInputs.type === 18 && localInputs.other === '') {
       localInputs.other = 'v2.1';
     }
-    if (localInputs.model_mapping === '') {
-      localInputs.model_mapping = '{}';
-    }
     let res;
     localInputs.models = localInputs.models.join(',');
     localInputs.group = localInputs.groups.join(',');