Mirror of https://github.com/songquanpeng/one-api.git (synced 2025-11-04 07:43:41 +08:00)
Compare commits: v0.5.5...v0.5.6-alp (16 commits)
	
| SHA1 |
|---|
| 594f06e7b0 |
| 197d1d7a9d |
| f9b748c2ca |
| fd98463611 |
| f5a1cd3463 |
| 8651451e53 |
| 1c5bb97a42 |
| de868e4e4e |
| 1d258cc898 |
| 37e09d764c |
| 159b9e3369 |
| 92001986db |
| a5647b1ea7 |
| 215e54fc96 |
| ecf8a6d875 |
| 24df3e5f62 |

README.md (29)
@@ -59,6 +59,9 @@ _✨ Access all large models through the standard OpenAI API format, ready to use out of the box
 > **Warning**
 > The latest image pulled via Docker may be an `alpha` version; if you want stability, specify a version manually.
 
+> **Warning**
+> After logging in with the root user for the first time, be sure to change the default password `123456`!
+
 ## Features
 1. Supports a variety of large models:
    + [x] [OpenAI ChatGPT series models](https://platform.openai.com/docs/guides/gpt/chat-completions-api) (supports the [Azure OpenAI API](https://learn.microsoft.com/en-us/azure/ai-services/openai/reference))
@@ -269,6 +272,12 @@ docker run --name chatgpt-web -d -p 3002:3002 -e OPENAI_API_BASE_URL=https://ope
 
 Note that the exact API Base format depends on the client you are using.
 
+For example, with OpenAI's official library:
+```bash
+OPENAI_API_KEY="sk-xxxxxx"
+OPENAI_API_BASE="https://<HOST>:<PORT>/v1"
+```
+
 ```mermaid
 graph LR
     A(用户)
@@ -303,22 +312,24 @@ graph LR
      + `SQL_CONN_MAX_LIFETIME`: maximum lifetime of a connection, in minutes; defaults to `60`.
 4. `FRONTEND_BASE_URL`: when set, page requests are redirected to the specified address; can only be set on the server side.
    + Example: `FRONTEND_BASE_URL=https://openai.justsong.cn`
-5. `SYNC_FREQUENCY`: when set, the configuration is periodically synced with the database, in seconds; no syncing is done if unset.
+5. `MEMORY_CACHE_ENABLED`: enables the in-memory cache, which makes user quota updates take effect with some delay; allowed values are `true` and `false`; defaults to `false` if unset.
+   + Example: `MEMORY_CACHE_ENABLED=true`
+6. `SYNC_FREQUENCY`: how often the configuration is synced with the database when caching is enabled, in seconds; defaults to `600` seconds.
    + Example: `SYNC_FREQUENCY=60`
-6. `NODE_TYPE`: when set, specifies the node type; allowed values are `master` and `slave`; defaults to `master` if unset.
+7. `NODE_TYPE`: when set, specifies the node type; allowed values are `master` and `slave`; defaults to `master` if unset.
    + Example: `NODE_TYPE=slave`
-7. `CHANNEL_UPDATE_FREQUENCY`: when set, channel balances are refreshed periodically, in minutes; no refresh is done if unset.
+8. `CHANNEL_UPDATE_FREQUENCY`: when set, channel balances are refreshed periodically, in minutes; no refresh is done if unset.
    + Example: `CHANNEL_UPDATE_FREQUENCY=1440`
-8. `CHANNEL_TEST_FREQUENCY`: when set, channels are tested periodically, in minutes; no testing is done if unset.
+9. `CHANNEL_TEST_FREQUENCY`: when set, channels are tested periodically, in minutes; no testing is done if unset.
    + Example: `CHANNEL_TEST_FREQUENCY=1440`
-9. `POLLING_INTERVAL`: interval between requests when batch-updating channel balances or testing availability, in seconds; defaults to no interval.
-   + Example: `POLLING_INTERVAL=5`
+10. `POLLING_INTERVAL`: interval between requests when batch-updating channel balances or testing availability, in seconds; defaults to no interval.
+    + Example: `POLLING_INTERVAL=5`
-10. `BATCH_UPDATE_ENABLED`: enables aggregated batch database updates, which makes user quota updates take effect with some delay; allowed values are `true` and `false`; defaults to `false` if unset.
+11. `BATCH_UPDATE_ENABLED`: enables aggregated batch database updates, which makes user quota updates take effect with some delay; allowed values are `true` and `false`; defaults to `false` if unset.
     + Example: `BATCH_UPDATE_ENABLED=true`
     + If you run into too many database connections, try enabling this option.
-11. `BATCH_UPDATE_INTERVAL=5`: interval between aggregated batch updates, in seconds; defaults to `5`.
+12. `BATCH_UPDATE_INTERVAL=5`: interval between aggregated batch updates, in seconds; defaults to `5`.
     + Example: `BATCH_UPDATE_INTERVAL=5`
-12. Request rate limiting:
+13. Request rate limiting:
     + `GLOBAL_API_RATE_LIMIT`: global API rate limit (excluding relay requests), maximum requests per IP within three minutes; defaults to `180`.
     + `GLOBAL_WEB_RATE_LIMIT`: global web rate limit, maximum requests per IP within three minutes; defaults to `60`.
 
@@ -56,6 +56,7 @@ var EmailDomainWhitelist = []string{
 }
 
 var DebugEnabled = os.Getenv("DEBUG") == "true"
+var MemoryCacheEnabled = os.Getenv("MEMORY_CACHE_ENABLED") == "true"
 
 var LogConsumeEnabled = true
 
@@ -92,7 +93,7 @@ var IsMasterNode = os.Getenv("NODE_TYPE") != "slave"
 var requestInterval, _ = strconv.Atoi(os.Getenv("POLLING_INTERVAL"))
 var RequestInterval = time.Duration(requestInterval) * time.Second
 
-var SyncFrequency = 10 * 60 // unit is second, will be overwritten by SYNC_FREQUENCY
+var SyncFrequency = GetOrDefault("SYNC_FREQUENCY", 10*60) // unit is second
 
 var BatchUpdateEnabled = false
 var BatchUpdateInterval = GetOrDefault("BATCH_UPDATE_INTERVAL", 5)
@@ -24,6 +24,7 @@ var ModelRatio = map[string]float64{
 	"gpt-3.5-turbo-0613":        0.75,
 	"gpt-3.5-turbo-16k":         1.5, // $0.003 / 1K tokens
 	"gpt-3.5-turbo-16k-0613":    1.5,
+	"gpt-3.5-turbo-instruct":    0.75, // $0.0015 / 1K tokens
 	"text-ada-001":              0.2,
 	"text-babbage-001":          0.25,
 	"text-curie-001":            1,
@@ -50,8 +51,8 @@ var ModelRatio = map[string]float64{
 	"chatglm_pro":               0.7143, // ¥0.01 / 1k tokens
 	"chatglm_std":               0.3572, // ¥0.005 / 1k tokens
 	"chatglm_lite":              0.1429, // ¥0.002 / 1k tokens
-	"qwen-v1":                   0.8572, // ¥0.012 / 1k tokens
-	"qwen-plus-v1":              1,      // ¥0.014 / 1k tokens
+	"qwen-turbo":                0.8572, // ¥0.012 / 1k tokens
+	"qwen-plus":                 10,     // ¥0.14 / 1k tokens
 	"text-embedding-v1":         0.05,   // ¥0.0007 / 1k tokens
 	"SparkDesk":                 1.2858, // ¥0.018 / 1k tokens
 	"360GPT_S2_V9":              0.8572, // ¥0.012 / 1k tokens
@@ -111,7 +111,7 @@ func GetResponseBody(method, url string, channel *model.Channel, headers http.He
 }
 
 func updateChannelCloseAIBalance(channel *model.Channel) (float64, error) {
-	url := fmt.Sprintf("%s/dashboard/billing/credit_grants", channel.BaseURL)
+	url := fmt.Sprintf("%s/dashboard/billing/credit_grants", channel.GetBaseURL())
 	body, err := GetResponseBody("GET", url, channel, GetAuthHeader(channel.Key))
 
 	if err != nil {
@@ -201,18 +201,18 @@ func updateChannelAIGC2DBalance(channel *model.Channel) (float64, error) {
 
 func updateChannelBalance(channel *model.Channel) (float64, error) {
 	baseURL := common.ChannelBaseURLs[channel.Type]
-	if channel.BaseURL == "" {
-		channel.BaseURL = baseURL
+	if channel.GetBaseURL() == "" {
+		channel.BaseURL = &baseURL
 	}
 	switch channel.Type {
 	case common.ChannelTypeOpenAI:
-		if channel.BaseURL != "" {
-			baseURL = channel.BaseURL
+		if channel.GetBaseURL() != "" {
+			baseURL = channel.GetBaseURL()
 		}
 	case common.ChannelTypeAzure:
 		return 0, errors.New("尚未实现")
 	case common.ChannelTypeCustom:
-		baseURL = channel.BaseURL
+		baseURL = channel.GetBaseURL()
 	case common.ChannelTypeCloseAI:
 		return updateChannelCloseAIBalance(channel)
 	case common.ChannelTypeOpenAISB:
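Throughout these hunks, direct reads of `channel.BaseURL` become `channel.GetBaseURL()`, and the assignment becomes `channel.BaseURL = &baseURL`, which indicates the field changed from `string` to `*string` so that "not set" (nil) can be distinguished from "set to empty". The accessor itself is not shown in this compare view; a sketch of such a nil-safe getter, with the struct reduced to the one relevant field (an assumption for illustration), might be:

```go
package main

import "fmt"

// Channel is reduced to the one field relevant here; the real struct in
// one-api has many more fields. BaseURL is a pointer so that a nil value
// ("never configured") is distinct from an empty string.
type Channel struct {
	BaseURL *string
}

// GetBaseURL returns the configured base URL, or "" when the field is nil.
// (Assumed shape; the actual method in one-api may differ.)
func (c *Channel) GetBaseURL() string {
	if c.BaseURL == nil {
		return ""
	}
	return *c.BaseURL
}

func main() {
	ch := &Channel{}
	fmt.Printf("%q\n", ch.GetBaseURL()) // prints "": nil pointer, no panic

	base := "https://api.example.com"
	ch.BaseURL = &base
	fmt.Printf("%q\n", ch.GetBaseURL()) // prints "https://api.example.com"
}
```

Funneling every read through the getter is what lets call sites like `updateChannelBalance` stay free of nil checks.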
@@ -42,10 +42,10 @@ func testChannel(channel *model.Channel, request ChatRequest) (err error, openai
 	}
 	requestURL := common.ChannelBaseURLs[channel.Type]
 	if channel.Type == common.ChannelTypeAzure {
-		requestURL = fmt.Sprintf("%s/openai/deployments/%s/chat/completions?api-version=2023-03-15-preview", channel.BaseURL, request.Model)
+		requestURL = fmt.Sprintf("%s/openai/deployments/%s/chat/completions?api-version=2023-03-15-preview", channel.GetBaseURL(), request.Model)
 	} else {
-		if channel.BaseURL != "" {
-			requestURL = channel.BaseURL
+		if channel.GetBaseURL() != "" {
+			requestURL = channel.GetBaseURL()
 		}
 		requestURL += "/v1/chat/completions"
 	}
@@ -19,7 +19,8 @@ func GetAllLogs(c *gin.Context) {
 	username := c.Query("username")
 	tokenName := c.Query("token_name")
 	modelName := c.Query("model_name")
-	logs, err := model.GetAllLogs(logType, startTimestamp, endTimestamp, modelName, username, tokenName, p*common.ItemsPerPage, common.ItemsPerPage)
+	channel, _ := strconv.Atoi(c.Query("channel"))
+	logs, err := model.GetAllLogs(logType, startTimestamp, endTimestamp, modelName, username, tokenName, p*common.ItemsPerPage, common.ItemsPerPage, channel)
 	if err != nil {
 		c.JSON(http.StatusOK, gin.H{
 			"success": false,
@@ -106,7 +107,8 @@ func GetLogsStat(c *gin.Context) {
 	tokenName := c.Query("token_name")
 	username := c.Query("username")
 	modelName := c.Query("model_name")
-	quotaNum := model.SumUsedQuota(logType, startTimestamp, endTimestamp, modelName, username, tokenName)
+	channel, _ := strconv.Atoi(c.Query("channel"))
+	quotaNum := model.SumUsedQuota(logType, startTimestamp, endTimestamp, modelName, username, tokenName, channel)
 	//tokenNum := model.SumUsedToken(logType, startTimestamp, endTimestamp, modelName, username, "")
 	c.JSON(http.StatusOK, gin.H{
 		"success": true,
@@ -126,7 +128,8 @@ func GetLogsSelfStat(c *gin.Context) {
 	endTimestamp, _ := strconv.ParseInt(c.Query("end_timestamp"), 10, 64)
 	tokenName := c.Query("token_name")
 	modelName := c.Query("model_name")
-	quotaNum := model.SumUsedQuota(logType, startTimestamp, endTimestamp, modelName, username, tokenName)
+	channel, _ := strconv.Atoi(c.Query("channel"))
+	quotaNum := model.SumUsedQuota(logType, startTimestamp, endTimestamp, modelName, username, tokenName, channel)
 	//tokenNum := model.SumUsedToken(logType, startTimestamp, endTimestamp, modelName, username, tokenName)
 	c.JSON(http.StatusOK, gin.H{
 		"success": true,
@@ -117,6 +117,15 @@ func init() {
 			Root:       "gpt-3.5-turbo-16k-0613",
 			Parent:     nil,
 		},
+		{
+			Id:         "gpt-3.5-turbo-instruct",
+			Object:     "model",
+			Created:    1677649963,
+			OwnedBy:    "openai",
+			Permission: permission,
+			Root:       "gpt-3.5-turbo-instruct",
+			Parent:     nil,
+		},
 		{
 			Id:         "gpt-4",
 			Object:     "model",
@@ -343,21 +352,21 @@ func init() {
 			Parent:     nil,
 		},
 		{
-			Id:         "qwen-v1",
+			Id:         "qwen-turbo",
 			Object:     "model",
 			Created:    1677649963,
 			OwnedBy:    "ali",
 			Permission: permission,
-			Root:       "qwen-v1",
+			Root:       "qwen-turbo",
 			Parent:     nil,
 		},
 		{
-			Id:         "qwen-plus-v1",
+			Id:         "qwen-plus",
 			Object:     "model",
 			Created:    1677649963,
 			OwnedBy:    "ali",
 			Permission: permission,
-			Root:       "qwen-plus-v1",
+			Root:       "qwen-plus",
 			Parent:     nil,
 		},
 		{
@@ -18,6 +18,7 @@ func relayAudioHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode
 
 	tokenId := c.GetInt("token_id")
 	channelType := c.GetInt("channel")
+	channelId := c.GetInt("channel_id")
 	userId := c.GetInt("id")
 	group := c.GetString("group")
 
@@ -107,7 +108,7 @@ func relayAudioHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode
 			if quota != 0 {
 				tokenName := c.GetString("token_name")
 				logContent := fmt.Sprintf("模型倍率 %.2f,分组倍率 %.2f", modelRatio, groupRatio)
-				model.RecordConsumeLog(ctx, userId, 0, 0, audioModel, tokenName, quota, logContent)
+				model.RecordConsumeLog(ctx, userId, channelId, 0, 0, audioModel, tokenName, quota, logContent)
 				model.UpdateUserUsedQuotaAndRequestCount(userId, quota)
 				channelId := c.GetInt("channel_id")
 				model.UpdateChannelUsedQuota(channelId, quota)
@@ -19,6 +19,7 @@ func relayImageHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode
 
 	tokenId := c.GetInt("token_id")
 	channelType := c.GetInt("channel")
+	channelId := c.GetInt("channel_id")
 	userId := c.GetInt("id")
 	consumeQuota := c.GetBool("consume_quota")
 	group := c.GetString("group")
@@ -138,7 +139,7 @@ func relayImageHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode
 			if quota != 0 {
 				tokenName := c.GetString("token_name")
 				logContent := fmt.Sprintf("模型倍率 %.2f,分组倍率 %.2f", modelRatio, groupRatio)
-				model.RecordConsumeLog(ctx, userId, 0, 0, imageModel, tokenName, quota, logContent)
+				model.RecordConsumeLog(ctx, userId, channelId, 0, 0, imageModel, tokenName, quota, logContent)
 				model.UpdateUserUsedQuotaAndRequestCount(userId, quota)
 				channelId := c.GetInt("channel_id")
 				model.UpdateChannelUsedQuota(channelId, quota)
@@ -38,6 +38,7 @@ func init() {
 
 func relayTextHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode {
 	channelType := c.GetInt("channel")
+	channelId := c.GetInt("channel_id")
 	tokenId := c.GetInt("token_id")
 	userId := c.GetInt("id")
 	consumeQuota := c.GetBool("consume_quota")
@@ -364,7 +365,6 @@ func relayTextHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode {
 
 	var textResponse TextResponse
 	tokenName := c.GetString("token_name")
-	channelId := c.GetInt("channel_id")
 
 	defer func(ctx context.Context) {
 		// c.Writer.Flush()
@@ -397,7 +397,7 @@ func relayTextHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode {
 				}
 				if quota != 0 {
 					logContent := fmt.Sprintf("模型倍率 %.2f,分组倍率 %.2f", modelRatio, groupRatio)
-					model.RecordConsumeLog(ctx, userId, promptTokens, completionTokens, textRequest.Model, tokenName, quota, logContent)
+					model.RecordConsumeLog(ctx, userId, channelId, promptTokens, completionTokens, textRequest.Model, tokenName, quota, logContent)
 					model.UpdateUserUsedQuotaAndRequestCount(userId, quota)
 					model.UpdateChannelUsedQuota(channelId, quota)
 				}
@@ -541,24 +541,26 @@ func relayTextHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode {
 			return nil
 		}
 	case APITypeXunfei:
-		if isStream {
-			auth := c.Request.Header.Get("Authorization")
-			auth = strings.TrimPrefix(auth, "Bearer ")
-			splits := strings.Split(auth, "|")
-			if len(splits) != 3 {
-				return errorWrapper(errors.New("invalid auth"), "invalid_auth", http.StatusBadRequest)
-			}
-			err, usage := xunfeiStreamHandler(c, textRequest, splits[0], splits[1], splits[2])
-			if err != nil {
-				return err
-			}
-			if usage != nil {
-				textResponse.Usage = *usage
-			}
-			return nil
-		} else {
-			return errorWrapper(errors.New("xunfei api does not support non-stream mode"), "invalid_api_type", http.StatusBadRequest)
+		auth := c.Request.Header.Get("Authorization")
+		auth = strings.TrimPrefix(auth, "Bearer ")
+		splits := strings.Split(auth, "|")
+		if len(splits) != 3 {
+			return errorWrapper(errors.New("invalid auth"), "invalid_auth", http.StatusBadRequest)
 		}
+		var err *OpenAIErrorWithStatusCode
+		var usage *Usage
+		if isStream {
+			err, usage = xunfeiStreamHandler(c, textRequest, splits[0], splits[1], splits[2])
+		} else {
+			err, usage = xunfeiHandler(c, textRequest, splits[0], splits[1], splits[2])
+		}
+		if err != nil {
+			return err
+		}
+		if usage != nil {
+			textResponse.Usage = *usage
+		}
+		return nil
 	case APITypeAIProxyLibrary:
 		if isStream {
 			err, usage := aiProxyLibraryStreamHandler(c, resp)
@@ -9,44 +9,53 @@ import (
 	"net/http"
 	"one-api/common"
 	"strconv"
+	"strings"
 )
 
 var stopFinishReason = "stop"
 
+// tokenEncoderMap won't grow after initialization
 var tokenEncoderMap = map[string]*tiktoken.Tiktoken{}
+var defaultTokenEncoder *tiktoken.Tiktoken
 
 func InitTokenEncoders() {
 	common.SysLog("initializing token encoders")
-	fallbackTokenEncoder, err := tiktoken.EncodingForModel("gpt-3.5-turbo")
+	gpt35TokenEncoder, err := tiktoken.EncodingForModel("gpt-3.5-turbo")
 	if err != nil {
-		common.FatalLog(fmt.Sprintf("failed to get fallback token encoder: %s", err.Error()))
+		common.FatalLog(fmt.Sprintf("failed to get gpt-3.5-turbo token encoder: %s", err.Error()))
+	}
+	defaultTokenEncoder = gpt35TokenEncoder
+	gpt4TokenEncoder, err := tiktoken.EncodingForModel("gpt-4")
+	if err != nil {
+		common.FatalLog(fmt.Sprintf("failed to get gpt-4 token encoder: %s", err.Error()))
 	}
 	for model, _ := range common.ModelRatio {
-		tokenEncoder, err := tiktoken.EncodingForModel(model)
-		if err != nil {
-			common.SysError(fmt.Sprintf("using fallback encoder for model %s", model))
-			tokenEncoderMap[model] = fallbackTokenEncoder
-			continue
+		if strings.HasPrefix(model, "gpt-3.5") {
+			tokenEncoderMap[model] = gpt35TokenEncoder
+		} else if strings.HasPrefix(model, "gpt-4") {
+			tokenEncoderMap[model] = gpt4TokenEncoder
+		} else {
+			tokenEncoderMap[model] = nil
 		}
-		tokenEncoderMap[model] = tokenEncoder
 	}
 	common.SysLog("token encoders initialized")
 }
 
 func getTokenEncoder(model string) *tiktoken.Tiktoken {
-	if tokenEncoder, ok := tokenEncoderMap[model]; ok {
+	tokenEncoder, ok := tokenEncoderMap[model]
+	if ok && tokenEncoder != nil {
 		return tokenEncoder
 	}
-	tokenEncoder, err := tiktoken.EncodingForModel(model)
-	if err != nil {
-		common.SysError(fmt.Sprintf("failed to get token encoder for model %s: %s, using encoder for gpt-3.5-turbo", model, err.Error()))
-		tokenEncoder, err = tiktoken.EncodingForModel("gpt-3.5-turbo")
+	if ok {
+		tokenEncoder, err := tiktoken.EncodingForModel(model)
 		if err != nil {
-			common.FatalLog(fmt.Sprintf("failed to get token encoder for model gpt-3.5-turbo: %s", err.Error()))
+			common.SysError(fmt.Sprintf("failed to get token encoder for model %s: %s, using encoder for gpt-3.5-turbo", model, err.Error()))
+			tokenEncoder = defaultTokenEncoder
 		}
+		tokenEncoderMap[model] = tokenEncoder
+		return tokenEncoder
 	}
-	tokenEncoderMap[model] = tokenEncoder
-	return tokenEncoder
+	return defaultTokenEncoder
 }
 
 func getTokenNum(tokenEncoder *tiktoken.Tiktoken, text string) int {
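The rewritten `getTokenEncoder` above distinguishes three cache states: a non-nil cached encoder (hit), a nil entry for a known model (loaded lazily on first use, falling back to the default encoder on error), and a missing key (default encoder, map left untouched). The lookup logic can be outlined as follows, with the tiktoken encoder replaced by a placeholder type so the sketch is self-contained (types and the `loadEncoder` stub are illustrative assumptions):

```go
package main

import "fmt"

// encoder stands in for *tiktoken.Tiktoken in this sketch.
type encoder struct{ name string }

var defaultEncoder = &encoder{name: "gpt-3.5-turbo"}

// Known models start as nil entries; their encoders are loaded on first use.
// The map's key set never grows after initialization.
var encoderMap = map[string]*encoder{
	"gpt-3.5-turbo": nil,
	"gpt-4":         nil,
}

// loadEncoder stands in for tiktoken.EncodingForModel.
func loadEncoder(model string) (*encoder, error) {
	return &encoder{name: model}, nil
}

func getEncoder(model string) *encoder {
	enc, ok := encoderMap[model]
	if ok && enc != nil {
		return enc // cache hit
	}
	if ok {
		// Known model, not loaded yet: load once and cache it.
		enc, err := loadEncoder(model)
		if err != nil {
			enc = defaultEncoder // fall back rather than fail
		}
		encoderMap[model] = enc
		return enc
	}
	// Unknown model: use the default encoder without growing the map.
	return defaultEncoder
}

func main() {
	fmt.Println(getEncoder("gpt-4").name)         // prints gpt-4: loaded lazily
	fmt.Println(getEncoder("unknown-model").name) // prints gpt-3.5-turbo: default fallback
}
```

Because unknown models never insert new keys, concurrent readers only ever see writes to pre-existing entries, which is what the "won't grow after initialization" comment in the diff is relying on.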
@@ -118,6 +118,7 @@ func responseXunfei2OpenAI(response *XunfeiChatResponse) *OpenAITextResponse {
 			Role:    "assistant",
 			Content: response.Payload.Choices.Text[0].Content,
 		},
+		FinishReason: stopFinishReason,
 	}
 	fullTextResponse := OpenAITextResponse{
 		Object:  "chat.completion",
@@ -177,33 +178,82 @@ func buildXunfeiAuthUrl(hostUrl string, apiKey, apiSecret string) string {
 }
 
 func xunfeiStreamHandler(c *gin.Context, textRequest GeneralOpenAIRequest, appId string, apiSecret string, apiKey string) (*OpenAIErrorWithStatusCode, *Usage) {
+	domain, authUrl := getXunfeiAuthUrl(c, apiKey, apiSecret)
+	dataChan, stopChan, err := xunfeiMakeRequest(textRequest, domain, authUrl, appId)
+	if err != nil {
+		return errorWrapper(err, "make xunfei request err", http.StatusInternalServerError), nil
+	}
+	setEventStreamHeaders(c)
 	var usage Usage
-	query := c.Request.URL.Query()
-	apiVersion := query.Get("api-version")
-	if apiVersion == "" {
-		apiVersion = c.GetString("api_version")
-	}
-	if apiVersion == "" {
-		apiVersion = "v1.1"
-		common.SysLog("api_version not found, use default: " + apiVersion)
-	}
-	domain := "general"
-	if apiVersion == "v2.1" {
-		domain = "generalv2"
-	}
-	hostUrl := fmt.Sprintf("wss://spark-api.xf-yun.com/%s/chat", apiVersion)
+	c.Stream(func(w io.Writer) bool {
+		select {
+		case xunfeiResponse := <-dataChan:
+			usage.PromptTokens += xunfeiResponse.Payload.Usage.Text.PromptTokens
+			usage.CompletionTokens += xunfeiResponse.Payload.Usage.Text.CompletionTokens
+			usage.TotalTokens += xunfeiResponse.Payload.Usage.Text.TotalTokens
+			response := streamResponseXunfei2OpenAI(&xunfeiResponse)
+			jsonResponse, err := json.Marshal(response)
+			if err != nil {
+				common.SysError("error marshalling stream response: " + err.Error())
+				return true
+			}
+			c.Render(-1, common.CustomEvent{Data: "data: " + string(jsonResponse)})
+			return true
+		case <-stopChan:
+			c.Render(-1, common.CustomEvent{Data: "data: [DONE]"})
+			return false
+		}
+	})
+	return nil, &usage
+}
+
+func xunfeiHandler(c *gin.Context, textRequest GeneralOpenAIRequest, appId string, apiSecret string, apiKey string) (*OpenAIErrorWithStatusCode, *Usage) {
+	domain, authUrl := getXunfeiAuthUrl(c, apiKey, apiSecret)
+	dataChan, stopChan, err := xunfeiMakeRequest(textRequest, domain, authUrl, appId)
+	if err != nil {
+		return errorWrapper(err, "make xunfei request err", http.StatusInternalServerError), nil
+	}
+	var usage Usage
+	var content string
+	var xunfeiResponse XunfeiChatResponse
+	stop := false
+	for !stop {
+		select {
+		case xunfeiResponse = <-dataChan:
+			content += xunfeiResponse.Payload.Choices.Text[0].Content
+			usage.PromptTokens += xunfeiResponse.Payload.Usage.Text.PromptTokens
+			usage.CompletionTokens += xunfeiResponse.Payload.Usage.Text.CompletionTokens
+			usage.TotalTokens += xunfeiResponse.Payload.Usage.Text.TotalTokens
+		case stop = <-stopChan:
+		}
+	}
+
+	xunfeiResponse.Payload.Choices.Text[0].Content = content
+
+	response := responseXunfei2OpenAI(&xunfeiResponse)
+	jsonResponse, err := json.Marshal(response)
+	if err != nil {
+		return errorWrapper(err, "marshal_response_body_failed", http.StatusInternalServerError), nil
+	}
+	c.Writer.Header().Set("Content-Type", "application/json")
+	_, _ = c.Writer.Write(jsonResponse)
+	return nil, &usage
+}
+
+func xunfeiMakeRequest(textRequest GeneralOpenAIRequest, domain, authUrl, appId string) (chan XunfeiChatResponse, chan bool, error) {
 	d := websocket.Dialer{
 		HandshakeTimeout: 5 * time.Second,
 	}
-	conn, resp, err := d.Dial(buildXunfeiAuthUrl(hostUrl, apiKey, apiSecret), nil)
+	conn, resp, err := d.Dial(authUrl, nil)
 	if err != nil || resp.StatusCode != 101 {
-		return errorWrapper(err, "dial_failed", http.StatusInternalServerError), nil
+		return nil, nil, err
 	}
 	data := requestOpenAI2Xunfei(textRequest, appId, domain)
 	err = conn.WriteJSON(data)
 	if err != nil {
-		return errorWrapper(err, "write_json_failed", http.StatusInternalServerError), nil
+		return nil, nil, err
 	}
+
 	dataChan := make(chan XunfeiChatResponse)
 	stopChan := make(chan bool)
 	go func() {
@@ -230,61 +280,24 @@ func xunfeiStreamHandler(c *gin.Context, textRequest GeneralOpenAIRequest, appId
 		}
 		stopChan <- true
 	}()
-	setEventStreamHeaders(c)
-	c.Stream(func(w io.Writer) bool {
-		select {
-		case xunfeiResponse := <-dataChan:
-			usage.PromptTokens += xunfeiResponse.Payload.Usage.Text.PromptTokens
-			usage.CompletionTokens += xunfeiResponse.Payload.Usage.Text.CompletionTokens
-			usage.TotalTokens += xunfeiResponse.Payload.Usage.Text.TotalTokens
-			response := streamResponseXunfei2OpenAI(&xunfeiResponse)
-			jsonResponse, err := json.Marshal(response)
-			if err != nil {
-				common.SysError("error marshalling stream response: " + err.Error())
-				return true
-			}
-			c.Render(-1, common.CustomEvent{Data: "data: " + string(jsonResponse)})
-			return true
-		case <-stopChan:
-			c.Render(-1, common.CustomEvent{Data: "data: [DONE]"})
-			return false
-		}
-	})
-	return nil, &usage
+
+	return dataChan, stopChan, nil
 }
 
-func xunfeiHandler(c *gin.Context, resp *http.Response) (*OpenAIErrorWithStatusCode, *Usage) {
-	var xunfeiResponse XunfeiChatResponse
-	responseBody, err := io.ReadAll(resp.Body)
-	if err != nil {
-		return errorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
-	}
-	err = resp.Body.Close()
-	if err != nil {
-		return errorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
-	}
-	err = json.Unmarshal(responseBody, &xunfeiResponse)
-	if err != nil {
-		return errorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
-	}
-	if xunfeiResponse.Header.Code != 0 {
-		return &OpenAIErrorWithStatusCode{
-			OpenAIError: OpenAIError{
-				Message: xunfeiResponse.Header.Message,
-				Type:    "xunfei_error",
-				Param:   "",
-				Code:    xunfeiResponse.Header.Code,
-			},
-			StatusCode: resp.StatusCode,
-		}, nil
-	}
-	fullTextResponse := responseXunfei2OpenAI(&xunfeiResponse)
-	jsonResponse, err := json.Marshal(fullTextResponse)
-	if err != nil {
-		return errorWrapper(err, "marshal_response_body_failed", http.StatusInternalServerError), nil
-	}
-	c.Writer.Header().Set("Content-Type", "application/json")
-	c.Writer.WriteHeader(resp.StatusCode)
-	_, err = c.Writer.Write(jsonResponse)
-	return nil, &fullTextResponse.Usage
+func getXunfeiAuthUrl(c *gin.Context, apiKey string, apiSecret string) (string, string) {
+	query := c.Request.URL.Query()
+	apiVersion := query.Get("api-version")
+	if apiVersion == "" {
+		apiVersion = c.GetString("api_version")
+	}
+	if apiVersion == "" {
+		apiVersion = "v1.1"
+		common.SysLog("api_version not found, use default: " + apiVersion)
+	}
+	domain := "general"
+	if apiVersion == "v2.1" {
+		domain = "generalv2"
+	}
+	authUrl := buildXunfeiAuthUrl(fmt.Sprintf("wss://spark-api.xf-yun.com/%s/chat", apiVersion), apiKey, apiSecret)
+	return domain, authUrl
 }
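The refactor extracts `xunfeiMakeRequest`, which returns a data channel fed by a reader goroutine plus a stop channel signalled once the stream ends; `xunfeiHandler` then drains both in a `select` loop. A self-contained sketch of that producer/drain pattern — the string payload here is a simplification of `XunfeiChatResponse`:

```go
package main

import "fmt"

// makeRequest stands in for xunfeiMakeRequest: a goroutine pushes each
// chunk onto dataChan, then signals completion on stopChan.
func makeRequest(chunks []string) (chan string, chan bool) {
	dataChan := make(chan string)
	stopChan := make(chan bool)
	go func() {
		for _, c := range chunks {
			dataChan <- c
		}
		stopChan <- true
	}()
	return dataChan, stopChan
}

func main() {
	dataChan, stopChan := makeRequest([]string{"Hello", ", ", "world"})
	var content string
	stop := false
	for !stop {
		select {
		case chunk := <-dataChan:
			content += chunk // accumulate, as xunfeiHandler does
		case stop = <-stopChan:
		}
	}
	fmt.Println(content) // Hello, world
}
```

Because both channels are unbuffered, the consumer can only observe `stopChan` after every chunk has been received, so the loop exits exactly once the stream is fully drained — which is what lets the same `xunfeiMakeRequest` back both the streaming and the non-streaming handler.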
main.go
@@ -2,6 +2,7 @@ package main
 
 import (
 	"embed"
+	"fmt"
 	"github.com/gin-contrib/sessions"
 	"github.com/gin-contrib/sessions/cookie"
 	"github.com/gin-gonic/gin"
@@ -50,18 +51,17 @@ func main() {
 	// Initialize options
 	model.InitOptionMap()
 	if common.RedisEnabled {
+		// for compatibility with old versions
+		common.MemoryCacheEnabled = true
+	}
+	if common.MemoryCacheEnabled {
+		common.SysLog("memory cache enabled")
+		common.SysError(fmt.Sprintf("sync frequency: %d seconds", common.SyncFrequency))
 		model.InitChannelCache()
 	}
-	if os.Getenv("SYNC_FREQUENCY") != "" {
-		frequency, err := strconv.Atoi(os.Getenv("SYNC_FREQUENCY"))
-		if err != nil {
-			common.FatalLog("failed to parse SYNC_FREQUENCY: " + err.Error())
-		}
-		common.SyncFrequency = frequency
-		go model.SyncOptions(frequency)
-		if common.RedisEnabled {
-			go model.SyncChannelCache(frequency)
-		}
+	if common.MemoryCacheEnabled {
+		go model.SyncOptions(common.SyncFrequency)
+		go model.SyncChannelCache(common.SyncFrequency)
 	}
 	if os.Getenv("CHANNEL_UPDATE_FREQUENCY") != "" {
 		frequency, err := strconv.Atoi(os.Getenv("CHANNEL_UPDATE_FREQUENCY"))
@@ -94,7 +94,7 @@ func TokenAuth() func(c *gin.Context) {
 			abortWithMessage(c, http.StatusUnauthorized, err.Error())
 			return
 		}
-		userEnabled, err := model.IsUserEnabled(token.UserId)
+		userEnabled, err := model.CacheIsUserEnabled(token.UserId)
 		if err != nil {
 			abortWithMessage(c, http.StatusInternalServerError, err.Error())
 			return
@@ -82,9 +82,9 @@ func Distribute() func(c *gin.Context) {
 		c.Set("channel", channel.Type)
 		c.Set("channel_id", channel.Id)
 		c.Set("channel_name", channel.Name)
-		c.Set("model_mapping", channel.ModelMapping)
+		c.Set("model_mapping", channel.GetModelMapping())
 		c.Request.Header.Set("Authorization", fmt.Sprintf("Bearer %s", channel.Key))
-		c.Set("base_url", channel.BaseURL)
+		c.Set("base_url", channel.GetBaseURL())
 		switch channel.Type {
 		case common.ChannelTypeAzure:
 			c.Set("api_version", channel.Other)
@@ -10,15 +10,18 @@ type Ability struct {
 	Model     string `json:"model" gorm:"primaryKey;autoIncrement:false"`
 	ChannelId int    `json:"channel_id" gorm:"primaryKey;autoIncrement:false;index"`
 	Enabled   bool   `json:"enabled"`
+	Priority  *int64 `json:"priority" gorm:"bigint;default:0;index"`
 }
 
 func GetRandomSatisfiedChannel(group string, model string) (*Channel, error) {
 	ability := Ability{}
 	var err error = nil
+	maxPrioritySubQuery := DB.Model(&Ability{}).Select("MAX(priority)").Where("`group` = ? and model = ? and enabled = 1", group, model)
+	channelQuery := DB.Where("`group` = ? and model = ? and enabled = 1 and priority = (?)", group, model, maxPrioritySubQuery)
 	if common.UsingSQLite {
-		err = DB.Where("`group` = ? and model = ? and enabled = 1", group, model).Order("RANDOM()").Limit(1).First(&ability).Error
+		err = channelQuery.Order("RANDOM()").First(&ability).Error
 	} else {
-		err = DB.Where("`group` = ? and model = ? and enabled = 1", group, model).Order("RAND()").Limit(1).First(&ability).Error
+		err = channelQuery.Order("RAND()").First(&ability).Error
 	}
 	if err != nil {
 		return nil, err
@@ -40,6 +43,7 @@ func (channel *Channel) AddAbilities() error {
 				Model:     model,
 				ChannelId: channel.Id,
 				Enabled:   channel.Status == common.ChannelStatusEnabled,
+				Priority:  channel.Priority,
 			}
 			abilities = append(abilities, ability)
 		}
@@ -6,6 +6,7 @@ import (
 	"fmt"
 	"math/rand"
 	"one-api/common"
+	"sort"
 	"strconv"
 	"strings"
 	"sync"
@@ -159,6 +160,17 @@ func InitChannelCache() {
 			}
 		}
 	}
+
+	// sort by priority
+	for group, model2channels := range newGroup2model2channels {
+		for model, channels := range model2channels {
+			sort.Slice(channels, func(i, j int) bool {
+				return channels[i].GetPriority() > channels[j].GetPriority()
+			})
+			newGroup2model2channels[group][model] = channels
+		}
+	}
+
 	channelSyncLock.Lock()
 	group2model2channels = newGroup2model2channels
 	channelSyncLock.Unlock()
@@ -174,7 +186,7 @@ func SyncChannelCache(frequency int) {
 }
 
 func CacheGetRandomSatisfiedChannel(group string, model string) (*Channel, error) {
-	if !common.RedisEnabled {
+	if !common.MemoryCacheEnabled {
 		return GetRandomSatisfiedChannel(group, model)
 	}
 	channelSyncLock.RLock()
@@ -183,6 +195,17 @@ func CacheGetRandomSatisfiedChannel(group string, model string) (*Channel, error
 	if len(channels) == 0 {
 		return nil, errors.New("channel not found")
 	}
-	idx := rand.Intn(len(channels))
+	endIdx := len(channels)
+	// choose by priority
+	firstChannel := channels[0]
+	if firstChannel.GetPriority() > 0 {
+		for i := range channels {
+			if channels[i].GetPriority() != firstChannel.GetPriority() {
+				endIdx = i
+				break
+			}
+		}
+	}
+	idx := rand.Intn(endIdx)
 	return channels[idx], nil
 }
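The two cache hunks above work together: `InitChannelCache` sorts each channel list by descending priority, so `CacheGetRandomSatisfiedChannel` only needs to find where the first priority band ends and pick uniformly inside it. A sketch of that selection logic under the same assumptions (sorted input, priority 0 meaning "no preference"):

```go
package main

import (
	"fmt"
	"math/rand"
	"sort"
)

type channel struct {
	Name     string
	Priority int64
}

// pickChannel mirrors the cache logic: sort by priority descending,
// then choose uniformly among the channels sharing the top priority.
// When the top priority is 0, all channels stay eligible.
func pickChannel(channels []channel) channel {
	sort.Slice(channels, func(i, j int) bool {
		return channels[i].Priority > channels[j].Priority
	})
	endIdx := len(channels)
	if channels[0].Priority > 0 {
		for i := range channels {
			if channels[i].Priority != channels[0].Priority {
				endIdx = i // first channel outside the top band
				break
			}
		}
	}
	return channels[rand.Intn(endIdx)]
}

func main() {
	chs := []channel{{"a", 0}, {"b", 10}, {"c", 10}}
	picked := pickChannel(chs)
	fmt.Println(picked.Priority) // always the top band (10)
}
```

So higher-priority channels act as a strict tier: lower tiers are only reachable when no channel has a positive priority, while ties within the top tier are still load-balanced randomly.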
@@ -15,14 +15,15 @@ type Channel struct {
 	CreatedTime        int64   `json:"created_time" gorm:"bigint"`
 	TestTime           int64   `json:"test_time" gorm:"bigint"`
 	ResponseTime       int     `json:"response_time"` // in milliseconds
-	BaseURL            string  `json:"base_url" gorm:"column:base_url"`
+	BaseURL            *string `json:"base_url" gorm:"column:base_url;default:''"`
 	Other              string  `json:"other"`
 	Balance            float64 `json:"balance"` // in USD
 	BalanceUpdatedTime int64   `json:"balance_updated_time" gorm:"bigint"`
 	Models             string  `json:"models"`
 	Group              string  `json:"group" gorm:"type:varchar(32);default:'default'"`
 	UsedQuota          int64   `json:"used_quota" gorm:"bigint;default:0"`
-	ModelMapping       string  `json:"model_mapping" gorm:"type:varchar(1024);default:''"`
+	ModelMapping       *string `json:"model_mapping" gorm:"type:varchar(1024);default:''"`
+	Priority           *int64  `json:"priority" gorm:"bigint;default:0"`
 }
 
 func GetAllChannels(startIdx int, num int, selectAll bool) ([]*Channel, error) {
@@ -78,6 +79,27 @@ func BatchInsertChannels(channels []Channel) error {
 	return nil
 }
 
+func (channel *Channel) GetPriority() int64 {
+	if channel.Priority == nil {
+		return 0
+	}
+	return *channel.Priority
+}
+
+func (channel *Channel) GetBaseURL() string {
+	if channel.BaseURL == nil {
+		return ""
+	}
+	return *channel.BaseURL
+}
+
+func (channel *Channel) GetModelMapping() string {
+	if channel.ModelMapping == nil {
+		return ""
+	}
+	return *channel.ModelMapping
+}
+
 func (channel *Channel) Insert() error {
 	var err error
 	err = DB.Create(channel).Error
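`BaseURL`, `ModelMapping`, and `Priority` become pointer fields so that rows predating the migration (NULL in the database) can be told apart from explicit empty values, with nil-safe getters supplying the defaults. The pattern in isolation — a minimal sketch, not the full struct:

```go
package main

import "fmt"

type Channel struct {
	BaseURL  *string // nil when the DB column is NULL (legacy row)
	Priority *int64
}

// Nil-safe getters: callers never touch the pointers directly,
// so a legacy row behaves like one with the zero-value defaults.
func (c *Channel) GetBaseURL() string {
	if c.BaseURL == nil {
		return ""
	}
	return *c.BaseURL
}

func (c *Channel) GetPriority() int64 {
	if c.Priority == nil {
		return 0
	}
	return *c.Priority
}

func main() {
	legacy := &Channel{} // row migrated with NULL columns
	url := "https://api.example.com"
	fresh := &Channel{BaseURL: &url}
	fmt.Println(legacy.GetBaseURL() == "", legacy.GetPriority()) // true 0
	fmt.Println(fresh.GetBaseURL())
}
```

This is also why the `Distribute` middleware above switches from reading `channel.BaseURL` to calling `channel.GetBaseURL()`: dereferencing the raw pointer would panic on legacy rows.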
										21
									
								
								model/log.go
									
									
									
									
									
								
							
							
						
						
									
										21
									
								
								model/log.go
									
									
									
									
									
								
							@@ -19,6 +19,7 @@ type Log struct {
 | 
				
			|||||||
	Quota            int    `json:"quota" gorm:"default:0"`
 | 
						Quota            int    `json:"quota" gorm:"default:0"`
 | 
				
			||||||
	PromptTokens     int    `json:"prompt_tokens" gorm:"default:0"`
 | 
						PromptTokens     int    `json:"prompt_tokens" gorm:"default:0"`
 | 
				
			||||||
	CompletionTokens int    `json:"completion_tokens" gorm:"default:0"`
 | 
						CompletionTokens int    `json:"completion_tokens" gorm:"default:0"`
 | 
				
			||||||
 | 
						Channel          int    `json:"channel" gorm:"default:0"`
 | 
				
			||||||
}
 | 
					}
 | 
				
			||||||
 | 
					
 | 
				
			||||||
const (
 | 
					const (
 | 
				
			||||||
@@ -46,8 +47,9 @@ func RecordLog(userId int, logType int, content string) {
 | 
				
			|||||||
	}
 | 
						}
 | 
				
			||||||
}
 | 
					}
 | 
				
			||||||
 | 
					
 | 
				
			||||||
func RecordConsumeLog(ctx context.Context, userId int, promptTokens int, completionTokens int, modelName string, tokenName string, quota int, content string) {
 | 
					
 | 
				
			||||||
	common.LogInfo(ctx, fmt.Sprintf("record consume log: userId=%d, promptTokens=%d, completionTokens=%d, modelName=%s, tokenName=%s, quota=%d, content=%s", userId, promptTokens, completionTokens, modelName, tokenName, quota, content))
 | 
					func RecordConsumeLog(ctx context.Context, userId int, channelId int, promptTokens int, completionTokens int, modelName string, tokenName string, quota int, content string) {
 | 
				
			||||||
 | 
						common.LogInfo(ctx, fmt.Sprintf("record consume log: userId=%d, channelId=%d, promptTokens=%d, completionTokens=%d, modelName=%s, tokenName=%s, quota=%d, content=%s", userId, channelId, promptTokens, completionTokens, modelName, tokenName, quota, content))
 | 
				
			||||||
	if !common.LogConsumeEnabled {
 | 
						if !common.LogConsumeEnabled {
 | 
				
			||||||
		return
 | 
							return
 | 
				
			||||||
	}
 | 
						}
 | 
				
			||||||
@@ -62,6 +64,7 @@ func RecordConsumeLog(ctx context.Context, userId int, promptTokens int, complet
 | 
				
			|||||||
		TokenName:        tokenName,
 | 
							TokenName:        tokenName,
 | 
				
			||||||
		ModelName:        modelName,
 | 
							ModelName:        modelName,
 | 
				
			||||||
		Quota:            quota,
 | 
							Quota:            quota,
 | 
				
			||||||
 | 
							Channel:          channelId,
 | 
				
			||||||
	}
 | 
						}
 | 
				
			||||||
	err := DB.Create(log).Error
 | 
						err := DB.Create(log).Error
 | 
				
			||||||
	if err != nil {
 | 
						if err != nil {
 | 
				
			||||||
@@ -69,7 +72,7 @@ func RecordConsumeLog(ctx context.Context, userId int, promptTokens int, complet
 | 
				
			|||||||
	}
 | 
						}
 | 
				
			||||||
}
 | 
					}
 | 
				
			||||||
 | 
					
 | 
				
			||||||
func GetAllLogs(logType int, startTimestamp int64, endTimestamp int64, modelName string, username string, tokenName string, startIdx int, num int) (logs []*Log, err error) {
 | 
					func GetAllLogs(logType int, startTimestamp int64, endTimestamp int64, modelName string, username string, tokenName string, startIdx int, num int, channel int) (logs []*Log, err error) {
 | 
				
			||||||
	var tx *gorm.DB
 | 
						var tx *gorm.DB
 | 
				
			||||||
	if logType == LogTypeUnknown {
 | 
						if logType == LogTypeUnknown {
 | 
				
			||||||
		tx = DB
 | 
							tx = DB
 | 
				
			||||||
@@ -91,6 +94,9 @@ func GetAllLogs(logType int, startTimestamp int64, endTimestamp int64, modelName
 | 
				
			|||||||
	if endTimestamp != 0 {
 | 
						if endTimestamp != 0 {
 | 
				
			||||||
		tx = tx.Where("created_at <= ?", endTimestamp)
 | 
							tx = tx.Where("created_at <= ?", endTimestamp)
 | 
				
			||||||
	}
 | 
						}
 | 
				
			||||||
 | 
						if channel != 0 {
 | 
				
			||||||
 | 
							tx = tx.Where("channel = ?", channel)
 | 
				
			||||||
 | 
						}
 | 
				
			||||||
	err = tx.Order("id desc").Limit(num).Offset(startIdx).Find(&logs).Error
 | 
						err = tx.Order("id desc").Limit(num).Offset(startIdx).Find(&logs).Error
 | 
				
			||||||
	return logs, err
 | 
						return logs, err
 | 
				
			||||||
}
 | 
					}
 | 
				
			||||||
@@ -128,8 +134,8 @@ func SearchUserLogs(userId int, keyword string) (logs []*Log, err error) {
 | 
				
			|||||||
	return logs, err
 | 
						return logs, err
 | 
				
			||||||
}
 | 
					}
 | 
				
			||||||
 | 
					
 | 
				
			||||||
func SumUsedQuota(logType int, startTimestamp int64, endTimestamp int64, modelName string, username string, tokenName string) (quota int) {
 | 
					func SumUsedQuota(logType int, startTimestamp int64, endTimestamp int64, modelName string, username string, tokenName string, channel int) (quota int) {
 | 
				
			||||||
	tx := DB.Table("logs").Select("sum(quota)")
 | 
						tx := DB.Table("logs").Select("ifnull(sum(quota),0)")
 | 
				
			||||||
	if username != "" {
 | 
						if username != "" {
 | 
				
			||||||
		tx = tx.Where("username = ?", username)
 | 
							tx = tx.Where("username = ?", username)
 | 
				
			||||||
	}
 | 
						}
 | 
				
			||||||
@@ -145,12 +151,15 @@ func SumUsedQuota(logType int, startTimestamp int64, endTimestamp int64, modelNa
 	if modelName != "" {
 		tx = tx.Where("model_name = ?", modelName)
 	}
+	if channel != 0 {
+		tx = tx.Where("channel = ?", channel)
+	}
 	tx.Where("type = ?", LogTypeConsume).Scan(&quota)
 	return quota
 }
 
 func SumUsedToken(logType int, startTimestamp int64, endTimestamp int64, modelName string, username string, tokenName string) (token int) {
-	tx := DB.Table("logs").Select("sum(prompt_tokens) + sum(completion_tokens)")
+	tx := DB.Table("logs").Select("ifnull(sum(prompt_tokens),0) + ifnull(sum(completion_tokens),0)")
 	if username != "" {
 		tx = tx.Where("username = ?", username)
 	}

			|||||||
@@ -1,5 +1,5 @@
 | 
				
			|||||||
import React, { useEffect, useState } from 'react';
 | 
					import React, { useEffect, useState } from 'react';
 | 
				
			||||||
import { Button, Form, Label, Pagination, Popup, Table } from 'semantic-ui-react';
 | 
					import {Button, Form, Input, Label, Pagination, Popup, Table} from 'semantic-ui-react';
 | 
				
			||||||
import { Link } from 'react-router-dom';
 | 
					import { Link } from 'react-router-dom';
 | 
				
			||||||
import { API, showError, showInfo, showNotice, showSuccess, timestamp2string } from '../helpers';
 | 
					import { API, showError, showInfo, showNotice, showSuccess, timestamp2string } from '../helpers';
 | 
				
			||||||
 | 
					
 | 
				
			||||||
@@ -24,7 +24,7 @@ function renderType(type) {
     }
     type2label[0] = { value: 0, text: '未知类型', color: 'grey' };
   }
-  return <Label basic color={type2label[type].color}>{type2label[type].text}</Label>;
+  return <Label basic color={type2label[type]?.color}>{type2label[type]?.text}</Label>;
 }
 
 function renderBalance(type, balance) {
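The hunk above swaps plain property access for optional chaining, so a log whose type has no entry in the lookup map no longer throws while rendering. A minimal standalone sketch of the difference (the `type2label` shape below is assumed for illustration, not copied from the component):

```javascript
// Sketch: why `type2label[type]?.color` is safer than `type2label[type].color`.
// The map shape here is an assumption for illustration.
const type2label = {
  1: { value: 1, text: '充值', color: 'green' },
  0: { value: 0, text: '未知类型', color: 'grey' }
};

function labelProps(type) {
  // Plain access on a missing key would throw: type2label[999].color -> TypeError.
  // Optional chaining yields undefined instead, so the UI degrades gracefully.
  return { color: type2label[type]?.color, text: type2label[type]?.text };
}

console.log(labelProps(1));   // known type: { color: 'green', text: '充值' }
console.log(labelProps(999)); // unknown type: { color: undefined, text: undefined }
```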
@@ -96,7 +96,7 @@ const ChannelsTable = () => {
       });
   }, []);
 
-  const manageChannel = async (id, action, idx) => {
+  const manageChannel = async (id, action, idx, priority) => {
     let data = { id };
     let res;
     switch (action) {
@@ -111,6 +111,13 @@ const ChannelsTable = () => {
         data.status = 2;
         res = await API.put('/api/channel/', data);
         break;
+      case 'priority':
+        if (priority === '') {
+          return;
+        }
+        data.priority = parseInt(priority);
+        res = await API.put('/api/channel/', data);
+        break;
     }
     const { success, message } = res.data;
     if (success) {
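The new `'priority'` branch returns early on an empty input before calling `parseInt`. The guard matters because `parseInt('')` is `NaN`, which would end up in the PUT payload. A quick standalone check (`parsePriority` is a hypothetical helper mirroring the guard, not code from the component):

```javascript
// Hypothetical helper mirroring the empty-string guard in the 'priority' case.
function parsePriority(raw) {
  if (raw === '') {
    return null; // mirror the early `return` above: skip the update entirely
  }
  return parseInt(raw, 10);
}

console.log(Number.isNaN(parseInt('', 10))); // true: '' parses to NaN
console.log(parsePriority('5'));             // 5
console.log(parsePriority(''));              // null: no update sent
```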
@@ -335,6 +342,14 @@ const ChannelsTable = () => {
             >
               余额
             </Table.HeaderCell>
+            <Table.HeaderCell
+                style={{ cursor: 'pointer' }}
+                onClick={() => {
+                  sortChannel('priority');
+                }}
+            >
+              优先级
+            </Table.HeaderCell>
             <Table.HeaderCell>操作</Table.HeaderCell>
           </Table.Row>
         </Table.Header>
@@ -373,6 +388,22 @@ const ChannelsTable = () => {
                       basic
                     />
                   </Table.Cell>
+                  <Table.Cell>
+                    <Popup
+                        trigger={<Input type="number" defaultValue={channel.priority} onBlur={(event) => {
+                          manageChannel(
+                              channel.id,
+                              'priority',
+                              idx,
+                              event.target.value,
+                          );
+                        }}>
+                          <input style={{ maxWidth: '60px' }} />
+                        </Input>}
+                        content='渠道选择优先级,越高越优先'
+                        basic
+                    />
+                  </Table.Cell>
                   <Table.Cell>
                     <div>
                       <Button
@@ -441,7 +472,7 @@ const ChannelsTable = () => {
 
         <Table.Footer>
           <Table.Row>
-            <Table.HeaderCell colSpan='8'>
+            <Table.HeaderCell colSpan='9'>
               <Button size='small' as={Link} to='/channel/add' loading={loading}>
                 添加新的渠道
               </Button>

			|||||||
@@ -56,9 +56,10 @@ const LogsTable = () => {
 | 
				
			|||||||
    token_name: '',
 | 
					    token_name: '',
 | 
				
			||||||
    model_name: '',
 | 
					    model_name: '',
 | 
				
			||||||
    start_timestamp: timestamp2string(0),
 | 
					    start_timestamp: timestamp2string(0),
 | 
				
			||||||
    end_timestamp: timestamp2string(now.getTime() / 1000 + 3600)
 | 
					    end_timestamp: timestamp2string(now.getTime() / 1000 + 3600),
 | 
				
			||||||
 | 
					    channel: ''
 | 
				
			||||||
  });
 | 
					  });
 | 
				
			||||||
  const { username, token_name, model_name, start_timestamp, end_timestamp } = inputs;
 | 
					  const { username, token_name, model_name, start_timestamp, end_timestamp, channel } = inputs;
 | 
				
			||||||
 | 
					
 | 
				
			||||||
  const [stat, setStat] = useState({
 | 
					  const [stat, setStat] = useState({
 | 
				
			||||||
    quota: 0,
 | 
					    quota: 0,
 | 
				
			||||||
@@ -84,7 +85,7 @@ const LogsTable = () => {
   const getLogStat = async () => {
     let localStartTimestamp = Date.parse(start_timestamp) / 1000;
     let localEndTimestamp = Date.parse(end_timestamp) / 1000;
-    let res = await API.get(`/api/log/stat?type=${logType}&username=${username}&token_name=${token_name}&model_name=${model_name}&start_timestamp=${localStartTimestamp}&end_timestamp=${localEndTimestamp}`);
+    let res = await API.get(`/api/log/stat?type=${logType}&username=${username}&token_name=${token_name}&model_name=${model_name}&start_timestamp=${localStartTimestamp}&end_timestamp=${localEndTimestamp}&channel=${channel}`);
     const { success, message, data } = res.data;
     if (success) {
       setStat(data);
@@ -109,7 +110,7 @@ const LogsTable = () => {
     let localStartTimestamp = Date.parse(start_timestamp) / 1000;
     let localEndTimestamp = Date.parse(end_timestamp) / 1000;
     if (isAdminUser) {
-      url = `/api/log/?p=${startIdx}&type=${logType}&username=${username}&token_name=${token_name}&model_name=${model_name}&start_timestamp=${localStartTimestamp}&end_timestamp=${localEndTimestamp}`;
+      url = `/api/log/?p=${startIdx}&type=${logType}&username=${username}&token_name=${token_name}&model_name=${model_name}&start_timestamp=${localStartTimestamp}&end_timestamp=${localEndTimestamp}&channel=${channel}`;
     } else {
       url = `/api/log/self/?p=${startIdx}&type=${logType}&token_name=${token_name}&model_name=${model_name}&start_timestamp=${localStartTimestamp}&end_timestamp=${localEndTimestamp}`;
     }
@@ -205,16 +206,9 @@ const LogsTable = () => {
         </Header>
         <Form>
           <Form.Group>
-            {
-              isAdminUser && (
-                <Form.Input fluid label={'用户名称'} width={2} value={username}
-                            placeholder={'可选值'} name='username'
-                            onChange={handleInputChange} />
-              )
-            }
-            <Form.Input fluid label={'令牌名称'} width={isAdminUser ? 2 : 3} value={token_name}
+            <Form.Input fluid label={'令牌名称'} width={3} value={token_name}
                         placeholder={'可选值'} name='token_name' onChange={handleInputChange} />
-            <Form.Input fluid label='模型名称' width={isAdminUser ? 2 : 3} value={model_name} placeholder='可选值'
+            <Form.Input fluid label='模型名称' width={3} value={model_name} placeholder='可选值'
                         name='model_name'
                         onChange={handleInputChange} />
             <Form.Input fluid label='起始时间' width={4} value={start_timestamp} type='datetime-local'
@@ -225,6 +219,19 @@ const LogsTable = () => {
                         onChange={handleInputChange} />
             <Form.Button fluid label='操作' width={2} onClick={refresh}>查询</Form.Button>
           </Form.Group>
+          {
+            isAdminUser && <>
+              <Form.Group>
+                <Form.Input fluid label={'渠道 ID'} width={3} value={channel}
+                            placeholder='可选值' name='channel'
+                            onChange={handleInputChange} />
+                <Form.Input fluid label={'用户名称'} width={3} value={username}
+                            placeholder={'可选值'} name='username'
+                            onChange={handleInputChange} />
+
+              </Form.Group>
+            </>
+          }
         </Form>
         <Table basic compact size='small'>
           <Table.Header>
@@ -238,6 +245,17 @@ const LogsTable = () => {
               >
                 时间
               </Table.HeaderCell>
+              {
+                isAdminUser && <Table.HeaderCell
+                  style={{ cursor: 'pointer' }}
+                  onClick={() => {
+                    sortLog('channel');
+                  }}
+                  width={1}
+                >
+                  渠道
+                </Table.HeaderCell>
+              }
               {
                 isAdminUser && <Table.HeaderCell
                   style={{ cursor: 'pointer' }}
@@ -299,16 +317,16 @@ const LogsTable = () => {
                 onClick={() => {
                   sortLog('quota');
                 }}
-                width={2}
+                width={1}
               >
-                消耗额度
+                额度
               </Table.HeaderCell>
               <Table.HeaderCell
                 style={{ cursor: 'pointer' }}
                 onClick={() => {
                   sortLog('content');
                 }}
-                width={isAdminUser ? 4 : 5}
+                width={isAdminUser ? 4 : 6}
               >
                 详情
               </Table.HeaderCell>
@@ -326,6 +344,11 @@ const LogsTable = () => {
                 return (
                   <Table.Row key={log.id}>
                     <Table.Cell>{renderTimestamp(log.created_at)}</Table.Cell>
+                    {
+                      isAdminUser && (
+                        <Table.Cell>{log.channel ? <Label basic>{log.channel}</Label> : ''}</Table.Cell>
+                      )
+                    }
                     {
                       isAdminUser && (
                         <Table.Cell>{log.username ? <Label>{log.username}</Label> : ''}</Table.Cell>
@@ -345,7 +368,7 @@ const LogsTable = () => {
 
           <Table.Footer>
             <Table.Row>
-              <Table.HeaderCell colSpan={'9'}>
+              <Table.HeaderCell colSpan={'10'}>
                 <Select
                   placeholder='选择明细分类'
                   options={LOG_OPTIONS}

			|||||||
@@ -67,7 +67,7 @@ const EditChannel = () => {
 | 
				
			|||||||
          localModels = ['ERNIE-Bot', 'ERNIE-Bot-turbo', 'Embedding-V1'];
 | 
					          localModels = ['ERNIE-Bot', 'ERNIE-Bot-turbo', 'Embedding-V1'];
 | 
				
			||||||
          break;
 | 
					          break;
 | 
				
			||||||
        case 17:
 | 
					        case 17:
 | 
				
			||||||
          localModels = ['qwen-v1', 'qwen-plus-v1', 'text-embedding-v1'];
 | 
					          localModels = ['qwen-turbo', 'qwen-plus', 'text-embedding-v1'];
 | 
				
			||||||
          break;
 | 
					          break;
 | 
				
			||||||
        case 16:
 | 
					        case 16:
 | 
				
			||||||
          localModels = ['chatglm_pro', 'chatglm_std', 'chatglm_lite'];
 | 
					          localModels = ['chatglm_pro', 'chatglm_std', 'chatglm_lite'];
 | 
				
			||||||
@@ -174,7 +174,7 @@ const EditChannel = () => {
       return;
     }
     let localInputs = inputs;
-    if (localInputs.base_url.endsWith('/')) {
+    if (localInputs.base_url && localInputs.base_url.endsWith('/')) {
       localInputs.base_url = localInputs.base_url.slice(0, localInputs.base_url.length - 1);
     }
     if (localInputs.type === 3 && localInputs.other === '') {
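The added `localInputs.base_url &&` guard above avoids a TypeError when `base_url` is undefined or empty, since only strings have `.endsWith`. A standalone sketch of the guarded logic (`stripTrailingSlash` is a hypothetical name, not a function in the repo):

```javascript
// Hypothetical helper showing the guarded trailing-slash strip.
function stripTrailingSlash(baseUrl) {
  // Without the truthiness check, undefined.endsWith('/') would throw a TypeError.
  if (baseUrl && baseUrl.endsWith('/')) {
    return baseUrl.slice(0, baseUrl.length - 1);
  }
  return baseUrl;
}

console.log(stripTrailingSlash('https://example.com/')); // trailing slash removed
console.log(stripTrailingSlash(undefined));              // passes through, no crash
```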
@@ -183,9 +183,6 @@ const EditChannel = () => {
     if (localInputs.type === 18 && localInputs.other === '') {
       localInputs.other = 'v2.1';
     }
-    if (localInputs.model_mapping === '') {
-      localInputs.model_mapping = '{}';
-    }
     let res;
     localInputs.models = localInputs.models.join(',');
     localInputs.group = localInputs.groups.join(',');