Compare commits

...

33 Commits

Author SHA1 Message Date
JustSong
38668e7331 chore: update gpt3.5 completion ratio 2023-06-14 09:41:06 +08:00
JustSong
323f3d263a feat: add new released models 2023-06-14 09:12:14 +08:00
JustSong
0c34ed4c61 docs: update README 2023-06-13 17:45:01 +08:00
JustSong
7c7eb6b7ec fix: the input field can now be array type (close #149) 2023-06-12 16:11:57 +08:00
JustSong
8b2ef666ef fix: fix OpenAI-SB balance not correct 2023-06-12 09:40:49 +08:00
JustSong
955d5f8707 fix: fix group list not correct (close #147) 2023-06-12 09:11:48 +08:00
quzard
47ca449e32 feat: add support for updating balance of channel type OpenAI-SB (#146, close #125)
* Add support for updating channel balance in OpenAISB

* fix: handle error

---------

Co-authored-by: JustSong <songquanpeng@foxmail.com>
2023-06-11 21:04:41 +08:00
JustSong
39481eb6c0 chore: add trailing slash for API calling 2023-06-11 16:33:40 +08:00
JustSong
69153e7231 docs: update README 2023-06-11 12:37:15 +08:00
JustSong
cdef10cad8 docs: update README 2023-06-11 11:11:47 +08:00
JustSong
077853416d chore: record ratio detail in log 2023-06-11 11:11:19 +08:00
JustSong
596446dba4 feat: able to set group ratio now (close #62, close #142) 2023-06-11 11:08:16 +08:00
JustSong
9d0bec83df chore: update prompt 2023-06-11 09:55:50 +08:00
JustSong
f97a9ce597 fix: correct OpenAI error code's type 2023-06-11 09:49:57 +08:00
JustSong
4339f45f74 feat: support /v1/moderations now (close #117) 2023-06-11 09:37:36 +08:00
JustSong
e398e0756b docs: update README 2023-06-10 20:43:32 +08:00
JustSong
64db39320a feat: now able to check all user's log 2023-06-10 20:40:23 +08:00
JustSong
0b4bf30908 docs: update README 2023-06-10 16:34:14 +08:00
JustSong
d29c273073 chore: add more log types 2023-06-10 16:31:40 +08:00
JustSong
74f508e847 feat: now user can check its topup & consume history (close #78, close #95) 2023-06-10 16:04:04 +08:00
JustSong
145bb14cb2 fix: fix not using proxy when update balance 2023-06-09 18:57:27 +08:00
JustSong
8901f03864 feat: support set proxy for channel OpenAI (close #139) 2023-06-09 18:30:01 +08:00
JustSong
813bf0bd66 refactor: enable model configuration on default group (close #143) 2023-06-09 18:05:51 +08:00
JustSong
45e9fd66e7 feat: able to check topup history & consumption history (#78, #95) 2023-06-09 16:59:00 +08:00
JustSong
e0d0674f81 fix: fix redemption code's quota not updated 2023-06-08 15:19:55 +08:00
JustSong
4b6adaec0b feat: support /v1/completions (close #115) 2023-06-08 14:54:02 +08:00
JustSong
9301b3fed3 chore: update test logic 2023-06-08 14:09:39 +08:00
JustSong
c6edb78ac9 docs: update README 2023-06-08 09:44:47 +08:00
JustSong
521ede2469 fix: able to manage root user now 2023-06-08 09:28:06 +08:00
JustSong
2c53424db8 feat: able to manage group now 2023-06-08 09:26:54 +08:00
JustSong
2ad22e1425 feat: support group now (close #17, close #72, close #85, close #104, close #136)
Co-authored-by: quzard <1191890118@qq.com>
2023-06-07 23:26:00 +08:00
JustSong
502515bbbd docs: update README 2023-06-07 15:31:18 +08:00
JustSong
1e1c6a828f fix: prompt user the feat is not implemented (#125) 2023-06-03 11:09:14 +08:00
35 changed files with 1223 additions and 115 deletions

View File

@@ -40,14 +40,17 @@ _✨ All in one 的 OpenAI 接口,整合各种 API 访问方式,开箱即用
<a href="https://openai.justsong.cn/">在线演示</a>
·
<a href="https://github.com/songquanpeng/one-api#常见问题">常见问题</a>
·
<a href="https://iamazing.cn/page/reward">赞赏支持</a>
</p>
> **Warning**:从 `v0.2` 版本升级到 `v0.3` 版本需要手动迁移数据库,请手动执行[数据库迁移脚本](./bin/migration_v0.2-v0.3.sql)
> **Note**:使用 Docker 拉取的最新镜像可能是 `alpha` 版本,如果追求稳定性请手动指定版本
> **Warning**:从 `v0.3` 版本升级到 `v0.4` 版本需要手动迁移数据库,请手动执行[数据库迁移脚本](./bin/migration_v0.3-v0.4.sql)。
## 功能
1. 支持多种 API 访问渠道,欢迎 PR 或提 issue 添加更多渠道:
+ [x] OpenAI 官方通道
+ [x] OpenAI 官方通道(支持配置代理)
+ [x] **Azure OpenAI API**
+ [x] [API2D](https://api2d.com/r/197971)
+ [x] [OhMyGPT](https://aigptx.top?aff=uFpUl2Kf)
@@ -56,23 +59,26 @@ _✨ All in one 的 OpenAI 接口,整合各种 API 访问方式,开箱即用
+ [x] [OpenAI Max](https://openaimax.com)
+ [x] [OpenAI-SB](https://openai-sb.com)
+ [x] [CloseAI](https://console.openai-asia.com/r/2412)
+ [x] 自定义渠道:例如使用自行搭建的 OpenAI 代理
+ [x] 自定义渠道:例如各种未收录的第三方代理服务
2. 支持通过**负载均衡**的方式访问多个渠道。
3. 支持 **stream 模式**,可以通过流式传输实现打字机效果。
4. 支持**多机部署**[详见此处](#多机部署)。
5. 支持**令牌管理**,设置令牌的过期时间和使用次数。
6. 支持**兑换码管理**,支持批量生成和导出兑换码,可使用兑换码为账户进行充值。
7. 支持**通道管理**,批量创建通道。
8. 支持发布公告,设置充值链接,设置新用户初始额度
9. 支持丰富的**自定义**设置,
1. 支持自定义系统名称、logo 以及页脚
2. 支持自定义首页和关于页面,可以选择使用 HTML & Markdown 代码进行自定义,或者使用一个单独的网页通过 iframe 嵌入
10. 支持通过系统访问令牌访问管理 API。
11. 支持用户管理,支持**多种用户登录注册方式**
8. 支持**用户分组**以及**渠道分组**,支持为不同分组设置不同的倍率
9. 支持渠道**设置模型列表**。
10. 支持**查看额度明细**
11. 支持发布公告,设置充值链接,设置新用户初始额度
12. 支持丰富的**自定义**设置,
1. 支持自定义系统名称、logo 以及页脚。
2. 支持自定义首页和关于页面,可以选择使用 HTML & Markdown 代码进行自定义,或者使用一个单独的网页通过 iframe 嵌入。
13. 支持通过系统访问令牌访问管理 API。
14. 支持用户管理,支持**多种用户登录注册方式**
+ 邮箱登录注册以及通过邮箱进行密码重置。
+ [GitHub 开放授权](https://github.com/settings/applications/new)。
+ 微信公众号授权(需要额外部署 [WeChat Server](https://github.com/songquanpeng/wechat-server))。
12. 未来其他大模型开放 API 后,将第一时间支持,并将其封装成同样的 API 访问方式。
15. 未来其他大模型开放 API 后,将第一时间支持,并将其封装成同样的 API 访问方式。
## 部署
### 基于 Docker 进行部署
@@ -111,6 +117,8 @@ sudo certbot --nginx
sudo service nginx restart
```
初始账号用户名为 `root`,密码为 `123456`
### 手动部署
1. 从 [GitHub Releases](https://github.com/songquanpeng/one-api/releases/latest) 下载可执行文件或者从源码编译:
```shell
@@ -194,4 +202,12 @@ https://openai.justsong.cn
+ 请检查你的令牌额度是否足够,这个和账户额度是分开的。
+ 令牌额度仅供用户设置最大使用量,用户可自由设置。
2. 宝塔部署后访问出现空白页面?
+ 自动配置的问题,详见[#97](https://github.com/songquanpeng/one-api/issues/97)。
+ 自动配置的问题,详见[#97](https://github.com/songquanpeng/one-api/issues/97)。
3. 提示无可用渠道?
+ 请检查你的用户分组和渠道分组设置。
+ 以及渠道的模型设置。
## 注意
本项目为开源项目,请在遵循 OpenAI 的[使用条款](https://openai.com/policies/terms-of-use)以及法律法规的情况下使用,不得用于非法用途。
本项目依据 MIT 协议开源,请以某种方式保留 One API 的版权信息。

View File

@@ -0,0 +1,17 @@
INSERT INTO abilities (`group`, model, channel_id, enabled)
SELECT c.`group`, m.model, c.id, 1
FROM channels c
CROSS JOIN (
SELECT 'gpt-3.5-turbo' AS model UNION ALL
SELECT 'gpt-3.5-turbo-0301' AS model UNION ALL
SELECT 'gpt-4' AS model UNION ALL
SELECT 'gpt-4-0314' AS model
) AS m
WHERE c.status = 1
AND NOT EXISTS (
SELECT 1
FROM abilities a
WHERE a.`group` = c.`group`
AND a.model = m.model
AND a.channel_id = c.id
);

View File

@@ -25,6 +25,7 @@ var OptionMap map[string]string
var OptionMapRWMutex sync.RWMutex
var ItemsPerPage = 10
var MaxRecentItems = 100
var PasswordLoginEnabled = true
var PasswordRegisterEnabled = true

26
common/gin.go Normal file
View File

@@ -0,0 +1,26 @@
package common
import (
"bytes"
"encoding/json"
"github.com/gin-gonic/gin"
"io"
)
func UnmarshalBodyReusable(c *gin.Context, v any) error {
requestBody, err := io.ReadAll(c.Request.Body)
if err != nil {
return err
}
err = c.Request.Body.Close()
if err != nil {
return err
}
err = json.Unmarshal(requestBody, &v)
if err != nil {
return err
}
// Reset request body
c.Request.Body = io.NopCloser(bytes.NewBuffer(requestBody))
return nil
}
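UnmarshalBodyReusable is what lets the new Distribute middleware peek at the request's model field while the relay handler can still unmarshal the full body afterwards: the helper reads the body, closes it, and puts it back as a NopCloser over the same bytes. Below is a minimal sketch of that double read, assuming the `one-api/common` import path used elsewhere in this diff; the handler name and the map standing in for GeneralOpenAIRequest are hypothetical.

```go
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"

	"one-api/common"
)

// peekThenRelay is a hypothetical handler: first read pulls out just the model
// name (as the distributor middleware does), second read re-parses the same
// body, which works because UnmarshalBodyReusable restored c.Request.Body.
func peekThenRelay(c *gin.Context) {
	var peek struct {
		Model string `json:"model"`
	}
	if err := common.UnmarshalBodyReusable(c, &peek); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid body"})
		return
	}
	var full map[string]any // stands in for GeneralOpenAIRequest
	if err := common.UnmarshalBodyReusable(c, &full); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid body"})
		return
	}
	c.JSON(http.StatusOK, gin.H{"model": peek.Model, "fields": len(full)})
}

func main() {
	r := gin.Default()
	r.POST("/v1/chat/completions", peekThenRelay)
	_ = r.Run(":8080")
}
```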

31
common/group-ratio.go Normal file
View File

@@ -0,0 +1,31 @@
package common
import "encoding/json"
var GroupRatio = map[string]float64{
"default": 1,
"vip": 1,
"svip": 1,
}
func GroupRatio2JSONString() string {
jsonBytes, err := json.Marshal(GroupRatio)
if err != nil {
SysError("Error marshalling model ratio: " + err.Error())
}
return string(jsonBytes)
}
func UpdateGroupRatioByJSONString(jsonStr string) error {
GroupRatio = make(map[string]float64)
return json.Unmarshal([]byte(jsonStr), &GroupRatio)
}
func GetGroupRatio(name string) float64 {
ratio, ok := GroupRatio[name]
if !ok {
SysError("Group ratio not found: " + name)
return 1
}
return ratio
}
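Group ratios are stored as a JSON option (see the GroupRatio handling added to model/option.go and the 分组倍率 textarea later in this diff) and looked up per request; unknown groups fall back to a ratio of 1 while logging a SysError. A small sketch of how these helpers behave, with a made-up ratio table:

```go
package main

import (
	"fmt"

	"one-api/common"
)

func main() {
	// Hypothetical option value, as an admin might save it from the
	// 分组倍率 textarea added in this changeset.
	if err := common.UpdateGroupRatioByJSONString(`{"default": 1, "vip": 0.5, "svip": 0.3}`); err != nil {
		panic(err)
	}
	fmt.Println(common.GetGroupRatio("vip"))     // 0.5
	fmt.Println(common.GetGroupRatio("unknown")) // falls back to 1 and logs a SysError
}
```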

View File

@@ -2,16 +2,23 @@ package common
import "encoding/json"
// ModelRatio
// https://platform.openai.com/docs/models/model-endpoint-compatibility
// https://openai.com/pricing
// TODO: when a new api is enabled, check the pricing here
// 1 === $0.002 / 1K tokens
var ModelRatio = map[string]float64{
"gpt-4": 15,
"gpt-4-0314": 15,
"gpt-4-0613": 15,
"gpt-4-32k": 30,
"gpt-4-32k-0314": 30,
"gpt-3.5-turbo": 1,
"gpt-3.5-turbo-0301": 1,
"gpt-4-32k-0613": 30,
"gpt-3.5-turbo": 0.75, // $0.0015 / 1K tokens
"gpt-3.5-turbo-0301": 0.75,
"gpt-3.5-turbo-0613": 0.75,
"gpt-3.5-turbo-16k": 1.5, // $0.003 / 1K tokens
"gpt-3.5-turbo-16k-0613": 1.5,
"text-ada-001": 0.2,
"text-babbage-001": 0.25,
"text-curie-001": 1,
@@ -26,8 +33,8 @@ var ModelRatio = map[string]float64{
"ada": 10,
"text-embedding-ada-002": 0.2,
"text-search-ada-doc-001": 10,
"text-moderation-stable": 10,
"text-moderation-latest": 10,
"text-moderation-stable": 0.1,
"text-moderation-latest": 0.1,
}
func ModelRatio2JSONString() string {
@@ -39,6 +46,7 @@ func ModelRatio2JSONString() string {
}
func UpdateModelRatioByJSONString(jsonStr string) error {
ModelRatio = make(map[string]float64)
return json.Unmarshal([]byte(jsonStr), &ModelRatio)
}
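The updated comments pin the ratio unit at 1 === $0.002 / 1K tokens, so the new gpt-3.5-turbo ratio of 0.75 corresponds to $0.0015 / 1K and the 16k variants at 1.5 correspond to $0.003 / 1K. A throwaway helper (not part of the codebase) just to make that arithmetic explicit:

```go
package main

import "fmt"

// dollarsPer1K is a hypothetical illustration of the pricing comment
// "1 === $0.002 / 1K tokens": dollars per 1K tokens = ratio * 0.002.
func dollarsPer1K(ratio float64) float64 {
	return ratio * 0.002
}

func main() {
	fmt.Println(dollarsPer1K(0.75)) // gpt-3.5-turbo       -> 0.0015
	fmt.Println(dollarsPer1K(1.5))  // gpt-3.5-turbo-16k   -> 0.003
	fmt.Println(dollarsPer1K(15))   // gpt-4 prompt tokens -> 0.03
}
```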

View File

@@ -37,32 +37,77 @@ type OpenAIUsageResponse struct {
TotalUsage float64 `json:"total_usage"` // unit: 0.01 dollar
}
func updateChannelBalance(channel *model.Channel) (float64, error) {
baseURL := common.ChannelBaseURLs[channel.Type]
switch channel.Type {
case common.ChannelTypeAzure:
return 0, errors.New("尚未实现")
case common.ChannelTypeCustom:
baseURL = channel.BaseURL
}
url := fmt.Sprintf("%s/v1/dashboard/billing/subscription", baseURL)
type OpenAISBUsageResponse struct {
Msg string `json:"msg"`
Data *struct {
Credit string `json:"credit"`
} `json:"data"`
}
func GetResponseBody(method, url string, channel *model.Channel) ([]byte, error) {
client := &http.Client{}
req, err := http.NewRequest("GET", url, nil)
req, err := http.NewRequest(method, url, nil)
if err != nil {
return 0, err
return nil, err
}
auth := fmt.Sprintf("Bearer %s", channel.Key)
req.Header.Add("Authorization", auth)
res, err := client.Do(req)
if err != nil {
return 0, err
return nil, err
}
body, err := io.ReadAll(res.Body)
if err != nil {
return 0, err
return nil, err
}
err = res.Body.Close()
if err != nil {
return nil, err
}
return body, nil
}
func updateChannelOpenAISBBalance(channel *model.Channel) (float64, error) {
url := fmt.Sprintf("https://api.openai-sb.com/sb-api/user/status?api_key=%s", channel.Key)
body, err := GetResponseBody("GET", url, channel)
if err != nil {
return 0, err
}
response := OpenAISBUsageResponse{}
err = json.Unmarshal(body, &response)
if err != nil {
return 0, err
}
if response.Data == nil {
return 0, errors.New(response.Msg)
}
balance, err := strconv.ParseFloat(response.Data.Credit, 64)
if err != nil {
return 0, err
}
channel.UpdateBalance(balance)
return balance, nil
}
func updateChannelBalance(channel *model.Channel) (float64, error) {
baseURL := common.ChannelBaseURLs[channel.Type]
switch channel.Type {
case common.ChannelTypeOpenAI:
if channel.BaseURL != "" {
baseURL = channel.BaseURL
}
case common.ChannelTypeAzure:
return 0, errors.New("尚未实现")
case common.ChannelTypeCustom:
baseURL = channel.BaseURL
case common.ChannelTypeOpenAISB:
return updateChannelOpenAISBBalance(channel)
default:
return 0, errors.New("尚未实现")
}
url := fmt.Sprintf("%s/v1/dashboard/billing/subscription", baseURL)
body, err := GetResponseBody("GET", url, channel)
if err != nil {
return 0, err
}
@@ -78,20 +123,7 @@ func updateChannelBalance(channel *model.Channel) (float64, error) {
startDate = now.AddDate(0, 0, -100).Format("2006-01-02")
}
url = fmt.Sprintf("%s/v1/dashboard/billing/usage?start_date=%s&end_date=%s", baseURL, startDate, endDate)
req, err = http.NewRequest("GET", url, nil)
if err != nil {
return 0, err
}
req.Header.Add("Authorization", auth)
res, err = client.Do(req)
if err != nil {
return 0, err
}
body, err = io.ReadAll(res.Body)
if err != nil {
return 0, err
}
err = res.Body.Close()
body, err = GetResponseBody("GET", url, channel)
if err != nil {
return 0, err
}
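The refactor extracts the HTTP plumbing into GetResponseBody so that provider-specific balance updaters such as updateChannelOpenAISBBalance stay short. As an illustration only, a balance updater for some other provider would follow the same shape; the endpoint path, response fields, and function names below are invented, and the sketch is meant to sit alongside the functions above in the controller package.

```go
package controller

import (
	"encoding/json"
	"fmt"

	"one-api/model"
)

// exampleUsageResponse and updateChannelExampleBalance are hypothetical; they
// only show how a new provider could reuse GetResponseBody, mirroring
// updateChannelOpenAISBBalance.
type exampleUsageResponse struct {
	Balance float64 `json:"balance"`
}

func updateChannelExampleBalance(channel *model.Channel) (float64, error) {
	url := fmt.Sprintf("%s/api/balance", channel.BaseURL) // invented endpoint
	body, err := GetResponseBody("GET", url, channel)
	if err != nil {
		return 0, err
	}
	response := exampleUsageResponse{}
	if err = json.Unmarshal(body, &response); err != nil {
		return 0, err
	}
	channel.UpdateBalance(response.Balance)
	return response.Balance, nil
}
```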

View File

@@ -27,6 +27,8 @@ func testChannel(channel *model.Channel, request *ChatRequest) error {
} else {
if channel.Type == common.ChannelTypeCustom {
requestURL = channel.BaseURL
} else if channel.Type == common.ChannelTypeOpenAI && channel.BaseURL != "" {
requestURL = channel.BaseURL
}
requestURL += "/v1/chat/completions"
}
@@ -56,8 +58,8 @@ func testChannel(channel *model.Channel, request *ChatRequest) error {
if err != nil {
return err
}
if response.Error.Message != "" || response.Error.Code != "" {
return errors.New(fmt.Sprintf("type %s, code %s, message %s", response.Error.Type, response.Error.Code, response.Error.Message))
if response.Usage.CompletionTokens == 0 {
return errors.New(fmt.Sprintf("type %s, code %v, message %s", response.Error.Type, response.Error.Code, response.Error.Message))
}
return nil
}

19
controller/group.go Normal file
View File

@@ -0,0 +1,19 @@
package controller
import (
"github.com/gin-gonic/gin"
"net/http"
"one-api/common"
)
func GetGroups(c *gin.Context) {
groupNames := make([]string, 0)
for groupName, _ := range common.GroupRatio {
groupNames = append(groupNames, groupName)
}
c.JSON(http.StatusOK, gin.H{
"success": true,
"message": "",
"data": groupNames,
})
}

86
controller/log.go Normal file
View File

@@ -0,0 +1,86 @@
package controller
import (
"github.com/gin-gonic/gin"
"one-api/common"
"one-api/model"
"strconv"
)
func GetAllLogs(c *gin.Context) {
p, _ := strconv.Atoi(c.Query("p"))
if p < 0 {
p = 0
}
logType, _ := strconv.Atoi(c.Query("type"))
logs, err := model.GetAllLogs(logType, p*common.ItemsPerPage, common.ItemsPerPage)
if err != nil {
c.JSON(200, gin.H{
"success": false,
"message": err.Error(),
})
return
}
c.JSON(200, gin.H{
"success": true,
"message": "",
"data": logs,
})
}
func GetUserLogs(c *gin.Context) {
p, _ := strconv.Atoi(c.Query("p"))
if p < 0 {
p = 0
}
userId := c.GetInt("id")
logType, _ := strconv.Atoi(c.Query("type"))
logs, err := model.GetUserLogs(userId, logType, p*common.ItemsPerPage, common.ItemsPerPage)
if err != nil {
c.JSON(200, gin.H{
"success": false,
"message": err.Error(),
})
return
}
c.JSON(200, gin.H{
"success": true,
"message": "",
"data": logs,
})
}
func SearchAllLogs(c *gin.Context) {
keyword := c.Query("keyword")
logs, err := model.SearchAllLogs(keyword)
if err != nil {
c.JSON(200, gin.H{
"success": false,
"message": err.Error(),
})
return
}
c.JSON(200, gin.H{
"success": true,
"message": "",
"data": logs,
})
}
func SearchUserLogs(c *gin.Context) {
keyword := c.Query("keyword")
userId := c.GetInt("id")
logs, err := model.SearchUserLogs(userId, keyword)
if err != nil {
c.JSON(200, gin.H{
"success": false,
"message": err.Error(),
})
return
}
c.JSON(200, gin.H{
"success": true,
"message": "",
"data": logs,
})
}

View File

@@ -71,6 +71,33 @@ func init() {
Root: "gpt-3.5-turbo-0301",
Parent: nil,
},
{
Id: "gpt-3.5-turbo-0613",
Object: "model",
Created: 1677649963,
OwnedBy: "openai",
Permission: permission,
Root: "gpt-3.5-turbo-0613",
Parent: nil,
},
{
Id: "gpt-3.5-turbo-16k",
Object: "model",
Created: 1677649963,
OwnedBy: "openai",
Permission: permission,
Root: "gpt-3.5-turbo-16k",
Parent: nil,
},
{
Id: "gpt-3.5-turbo-16k-0613",
Object: "model",
Created: 1677649963,
OwnedBy: "openai",
Permission: permission,
Root: "gpt-3.5-turbo-16k-0613",
Parent: nil,
},
{
Id: "gpt-4",
Object: "model",
@@ -89,6 +116,15 @@ func init() {
Root: "gpt-4-0314",
Parent: nil,
},
{
Id: "gpt-4-0613",
Object: "model",
Created: 1677649963,
OwnedBy: "openai",
Permission: permission,
Root: "gpt-4-0613",
Parent: nil,
},
{
Id: "gpt-4-32k",
Object: "model",
@@ -107,6 +143,15 @@ func init() {
Root: "gpt-4-32k-0314",
Parent: nil,
},
{
Id: "gpt-4-32k-0613",
Object: "model",
Created: 1677649963,
OwnedBy: "openai",
Permission: permission,
Root: "gpt-4-32k-0613",
Parent: nil,
},
{
Id: "text-embedding-ada-002",
Object: "model",
@@ -116,6 +161,69 @@ func init() {
Root: "text-embedding-ada-002",
Parent: nil,
},
{
Id: "text-davinci-003",
Object: "model",
Created: 1677649963,
OwnedBy: "openai",
Permission: permission,
Root: "text-davinci-003",
Parent: nil,
},
{
Id: "text-davinci-002",
Object: "model",
Created: 1677649963,
OwnedBy: "openai",
Permission: permission,
Root: "text-davinci-002",
Parent: nil,
},
{
Id: "text-curie-001",
Object: "model",
Created: 1677649963,
OwnedBy: "openai",
Permission: permission,
Root: "text-curie-001",
Parent: nil,
},
{
Id: "text-babbage-001",
Object: "model",
Created: 1677649963,
OwnedBy: "openai",
Permission: permission,
Root: "text-babbage-001",
Parent: nil,
},
{
Id: "text-ada-001",
Object: "model",
Created: 1677649963,
OwnedBy: "openai",
Permission: permission,
Root: "text-ada-001",
Parent: nil,
},
{
Id: "text-moderation-latest",
Object: "model",
Created: 1677649963,
OwnedBy: "openai",
Permission: permission,
Root: "text-moderation-latest",
Parent: nil,
},
{
Id: "text-moderation-stable",
Object: "model",
Created: 1677649963,
OwnedBy: "openai",
Permission: permission,
Root: "text-moderation-stable",
Parent: nil,
},
}
openAIModelsMap = make(map[string]OpenAIModels)
for _, model := range openAIModels {

View File

@@ -58,6 +58,20 @@ func countTokenMessages(messages []Message, model string) int {
return tokenNum
}
func countTokenInput(input any, model string) int {
switch input.(type) {
case string:
return countTokenText(input.(string), model)
case []string:
text := ""
for _, s := range input.([]string) {
text += s
}
return countTokenText(text, model)
}
return 0
}
func countTokenText(text string, model string) int {
tokenEncoder := getTokenEncoder(model)
token := tokenEncoder.Encode(text, nil, nil)
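countTokenInput is the piece behind the #149 fix: the moderation endpoint's input may be a single string or an array of strings, and both shapes now get counted. A hypothetical call site is sketched below; gpt-3.5-turbo is used only because its token encoder is certainly available, and the function name is made up.

```go
package controller

// exampleModerationTokenCount illustrates both accepted shapes of the
// "input" field; the model argument only selects a token encoder.
func exampleModerationTokenCount() (int, int) {
	single := countTokenInput("flag this text", "gpt-3.5-turbo")
	batch := countTokenInput([]string{"first input", "second input"}, "gpt-3.5-turbo")
	return single, batch
}
```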

View File

@@ -19,6 +19,14 @@ type Message struct {
Name *string `json:"name,omitempty"`
}
const (
RelayModeUnknown = iota
RelayModeChatCompletions
RelayModeCompletions
RelayModeEmbeddings
RelayModeModeration
)
// https://platform.openai.com/docs/api-reference/chat
type GeneralOpenAIRequest struct {
@@ -30,6 +38,7 @@ type GeneralOpenAIRequest struct {
Temperature float64 `json:"temperature"`
TopP float64 `json:"top_p"`
N int `json:"n"`
Input any `json:"input"`
}
type ChatRequest struct {
@@ -56,7 +65,7 @@ type OpenAIError struct {
Message string `json:"message"`
Type string `json:"type"`
Param string `json:"param"`
Code string `json:"code"`
Code any `json:"code"`
}
type OpenAIErrorWithStatusCode struct {
@@ -69,7 +78,7 @@ type TextResponse struct {
Error OpenAIError `json:"error"`
}
type StreamResponse struct {
type ChatCompletionsStreamResponse struct {
Choices []struct {
Delta struct {
Content string `json:"content"`
@@ -78,11 +87,28 @@ type StreamResponse struct {
} `json:"choices"`
}
type CompletionsStreamResponse struct {
Choices []struct {
Text string `json:"text"`
FinishReason string `json:"finish_reason"`
} `json:"choices"`
}
func Relay(c *gin.Context) {
err := relayHelper(c)
relayMode := RelayModeUnknown
if strings.HasPrefix(c.Request.URL.Path, "/v1/chat/completions") {
relayMode = RelayModeChatCompletions
} else if strings.HasPrefix(c.Request.URL.Path, "/v1/completions") {
relayMode = RelayModeCompletions
} else if strings.HasPrefix(c.Request.URL.Path, "/v1/embeddings") {
relayMode = RelayModeEmbeddings
} else if strings.HasPrefix(c.Request.URL.Path, "/v1/moderations") {
relayMode = RelayModeModeration
}
err := relayHelper(c, relayMode)
if err != nil {
if err.StatusCode == http.StatusTooManyRequests {
err.OpenAIError.Message = "负载已,请稍后再试,或升级账户以提升服务质量。"
err.OpenAIError.Message = "当前分组负载已饱和,请稍后再试,或升级账户以提升服务质量。"
}
c.JSON(err.StatusCode, gin.H{
"error": err.OpenAIError,
@@ -110,31 +136,29 @@ func errorWrapper(err error, code string, statusCode int) *OpenAIErrorWithStatus
}
}
func relayHelper(c *gin.Context) *OpenAIErrorWithStatusCode {
func relayHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode {
channelType := c.GetInt("channel")
tokenId := c.GetInt("token_id")
consumeQuota := c.GetBool("consume_quota")
group := c.GetString("group")
var textRequest GeneralOpenAIRequest
if consumeQuota || channelType == common.ChannelTypeAzure || channelType == common.ChannelTypePaLM {
requestBody, err := io.ReadAll(c.Request.Body)
err := common.UnmarshalBodyReusable(c, &textRequest)
if err != nil {
return errorWrapper(err, "read_request_body_failed", http.StatusBadRequest)
return errorWrapper(err, "bind_request_body_failed", http.StatusBadRequest)
}
err = c.Request.Body.Close()
if err != nil {
return errorWrapper(err, "close_request_body_failed", http.StatusBadRequest)
}
err = json.Unmarshal(requestBody, &textRequest)
if err != nil {
return errorWrapper(err, "unmarshal_request_body_failed", http.StatusBadRequest)
}
// Reset request body
c.Request.Body = io.NopCloser(bytes.NewBuffer(requestBody))
}
if relayMode == RelayModeModeration && textRequest.Model == "" {
textRequest.Model = "text-moderation-latest"
}
baseURL := common.ChannelBaseURLs[channelType]
requestURL := c.Request.URL.String()
if channelType == common.ChannelTypeCustom {
baseURL = c.GetString("base_url")
} else if channelType == common.ChannelTypeOpenAI {
if c.GetString("base_url") != "" {
baseURL = c.GetString("base_url")
}
}
fullRequestURL := fmt.Sprintf("%s%s", baseURL, requestURL)
if channelType == common.ChannelTypeAzure {
@@ -158,13 +182,22 @@ func relayHelper(c *gin.Context) *OpenAIErrorWithStatusCode {
err := relayPaLM(textRequest, c)
return err
}
promptTokens := countTokenMessages(textRequest.Messages, textRequest.Model)
var promptTokens int
switch relayMode {
case RelayModeChatCompletions:
promptTokens = countTokenMessages(textRequest.Messages, textRequest.Model)
case RelayModeCompletions:
promptTokens = countTokenText(textRequest.Prompt, textRequest.Model)
case RelayModeModeration:
promptTokens = countTokenInput(textRequest.Input, textRequest.Model)
}
preConsumedTokens := common.PreConsumedQuota
if textRequest.MaxTokens != 0 {
preConsumedTokens = promptTokens + textRequest.MaxTokens
}
ratio := common.GetModelRatio(textRequest.Model)
modelRatio := common.GetModelRatio(textRequest.Model)
groupRatio := common.GetGroupRatio(group)
ratio := modelRatio * groupRatio
preConsumedQuota := int(float64(preConsumedTokens) * ratio)
if consumeQuota {
err := model.PreConsumeTokenQuota(tokenId, preConsumedQuota)
@@ -206,23 +239,27 @@ func relayHelper(c *gin.Context) *OpenAIErrorWithStatusCode {
defer func() {
if consumeQuota {
quota := 0
usingGPT4 := strings.HasPrefix(textRequest.Model, "gpt-4")
completionRatio := 1
if usingGPT4 {
completionRatio := 1.34 // default for gpt-3
if strings.HasPrefix(textRequest.Model, "gpt-4") {
completionRatio = 2
}
if isStream {
responseTokens := countTokenText(streamResponseText, textRequest.Model)
quota = promptTokens + responseTokens*completionRatio
quota = promptTokens + int(float64(responseTokens)*completionRatio)
} else {
quota = textResponse.Usage.PromptTokens + textResponse.Usage.CompletionTokens*completionRatio
quota = textResponse.Usage.PromptTokens + int(float64(textResponse.Usage.CompletionTokens)*completionRatio)
}
quota = int(float64(quota) * ratio)
if ratio != 0 && quota <= 0 {
quota = 1
}
quotaDelta := quota - preConsumedQuota
err := model.PostConsumeTokenQuota(tokenId, quotaDelta)
if err != nil {
common.SysError("Error consuming token remain quota: " + err.Error())
}
userId := c.GetInt("id")
model.RecordLog(userId, model.LogTypeConsume, fmt.Sprintf("使用模型 %s 消耗 %d 点额度(模型倍率 %.2f,分组倍率 %.2f,补全倍率 %.2f", textRequest.Model, quota, modelRatio, groupRatio, completionRatio))
}
}()
@@ -255,14 +292,27 @@ func relayHelper(c *gin.Context) *OpenAIErrorWithStatusCode {
dataChan <- data
data = data[6:]
if !strings.HasPrefix(data, "[DONE]") {
var streamResponse StreamResponse
err = json.Unmarshal([]byte(data), &streamResponse)
if err != nil {
common.SysError("Error unmarshalling stream response: " + err.Error())
return
}
for _, choice := range streamResponse.Choices {
streamResponseText += choice.Delta.Content
switch relayMode {
case RelayModeChatCompletions:
var streamResponse ChatCompletionsStreamResponse
err = json.Unmarshal([]byte(data), &streamResponse)
if err != nil {
common.SysError("Error unmarshalling stream response: " + err.Error())
return
}
for _, choice := range streamResponse.Choices {
streamResponseText += choice.Delta.Content
}
case RelayModeCompletions:
var streamResponse CompletionsStreamResponse
err = json.Unmarshal([]byte(data), &streamResponse)
if err != nil {
common.SysError("Error unmarshalling stream response: " + err.Error())
return
}
for _, choice := range streamResponse.Choices {
streamResponseText += choice.Text
}
}
}
}
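The billing change is easiest to read as arithmetic: completion tokens are weighted by a per-model completion ratio (1.34 by default, 2 for gpt-4 models), and the result is scaled by modelRatio * groupRatio, with a floor of 1 when the ratio is non-zero. The following is a sketch of that formula with two worked examples, not a verbatim extract of relayHelper:

```go
package main

import "fmt"

// quotaFor sketches the new quota formula:
// quota = (promptTokens + completionTokens*completionRatio) * modelRatio * groupRatio.
func quotaFor(promptTokens, completionTokens int, modelRatio, groupRatio, completionRatio float64) int {
	quota := promptTokens + int(float64(completionTokens)*completionRatio)
	quota = int(float64(quota) * modelRatio * groupRatio)
	if modelRatio*groupRatio != 0 && quota <= 0 {
		quota = 1
	}
	return quota
}

func main() {
	// gpt-3.5-turbo: model ratio 0.75, default group ratio 1, completion ratio 1.34
	fmt.Println(quotaFor(1000, 1000, 0.75, 1, 1.34)) // (1000 + 1340) * 0.75 = 1755
	// gpt-4: model ratio 15, default group ratio 1, completion ratio 2
	fmt.Println(quotaFor(1000, 1000, 15, 1, 2)) // (1000 + 2000) * 15 = 45000
}
```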

View File

@@ -2,6 +2,7 @@ package controller
import (
"encoding/json"
"fmt"
"github.com/gin-contrib/sessions"
"github.com/gin-gonic/gin"
"net/http"
@@ -228,7 +229,7 @@ func GetUser(c *gin.Context) {
return
}
myRole := c.GetInt("role")
if myRole <= user.Role {
if myRole <= user.Role && myRole != common.RoleRootUser {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "无权获取同级或更高等级用户的信息",
@@ -326,14 +327,14 @@ func UpdateUser(c *gin.Context) {
return
}
myRole := c.GetInt("role")
if myRole <= originUser.Role {
if myRole <= originUser.Role && myRole != common.RoleRootUser {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "无权更新同权限等级或更高权限等级的用户信息",
})
return
}
if myRole <= updatedUser.Role {
if myRole <= updatedUser.Role && myRole != common.RoleRootUser {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "无权将其他用户权限等级提升到大于等于自己的权限等级",
@@ -351,6 +352,9 @@ func UpdateUser(c *gin.Context) {
})
return
}
if originUser.Quota != updatedUser.Quota {
model.RecordLog(originUser.Id, model.LogTypeManage, fmt.Sprintf("管理员将用户额度从 %d 点修改为 %d 点", originUser.Quota, updatedUser.Quota))
}
c.JSON(http.StatusOK, gin.H{
"success": true,
"message": "",

View File

@@ -7,10 +7,18 @@ import (
"one-api/common"
"one-api/model"
"strconv"
"strings"
)
type ModelRequest struct {
Model string `json:"model"`
}
func Distribute() func(c *gin.Context) {
return func(c *gin.Context) {
userId := c.GetInt("id")
userGroup, _ := model.GetUserGroup(userId)
c.Set("group", userGroup)
var channel *model.Channel
channelId, ok := c.Get("channelId")
if ok {
@@ -48,8 +56,24 @@ func Distribute() func(c *gin.Context) {
}
} else {
// Select a channel for the user
var err error
channel, err = model.GetRandomChannel()
var modelRequest ModelRequest
err := common.UnmarshalBodyReusable(c, &modelRequest)
if err != nil {
c.JSON(200, gin.H{
"error": gin.H{
"message": "无效的请求",
"type": "one_api_error",
},
})
c.Abort()
return
}
if strings.HasPrefix(c.Request.URL.Path, "/v1/moderations") {
if modelRequest.Model == "" {
modelRequest.Model = "text-moderation-stable"
}
}
channel, err = model.GetRandomSatisfiedChannel(userGroup, modelRequest.Model)
if err != nil {
c.JSON(200, gin.H{
"error": gin.H{
@@ -65,11 +89,9 @@ func Distribute() func(c *gin.Context) {
c.Set("channel_id", channel.Id)
c.Set("channel_name", channel.Name)
c.Request.Header.Set("Authorization", fmt.Sprintf("Bearer %s", channel.Key))
if channel.Type == common.ChannelTypeCustom || channel.Type == common.ChannelTypeAzure {
c.Set("base_url", channel.BaseURL)
if channel.Type == common.ChannelTypeAzure {
c.Set("api_version", channel.Other)
}
c.Set("base_url", channel.BaseURL)
if channel.Type == common.ChannelTypeAzure {
c.Set("api_version", channel.Other)
}
c.Next()
}

69
model/ability.go Normal file
View File

@@ -0,0 +1,69 @@
package model
import (
"one-api/common"
"strings"
)
type Ability struct {
Group string `json:"group" gorm:"type:varchar(32);primaryKey;autoIncrement:false"`
Model string `json:"model" gorm:"primaryKey;autoIncrement:false"`
ChannelId int `json:"channel_id" gorm:"primaryKey;autoIncrement:false;index"`
Enabled bool `json:"enabled"`
}
func GetRandomSatisfiedChannel(group string, model string) (*Channel, error) {
ability := Ability{}
var err error = nil
if common.UsingSQLite {
err = DB.Where("`group` = ? and model = ? and enabled = 1", group, model).Order("RANDOM()").Limit(1).First(&ability).Error
} else {
err = DB.Where("`group` = ? and model = ? and enabled = 1", group, model).Order("RAND()").Limit(1).First(&ability).Error
}
if err != nil {
return nil, err
}
channel := Channel{}
err = DB.First(&channel, "id = ?", ability.ChannelId).Error
return &channel, err
}
func (channel *Channel) AddAbilities() error {
models_ := strings.Split(channel.Models, ",")
abilities := make([]Ability, 0, len(models_))
for _, model := range models_ {
ability := Ability{
Group: channel.Group,
Model: model,
ChannelId: channel.Id,
Enabled: channel.Status == common.ChannelStatusEnabled,
}
abilities = append(abilities, ability)
}
return DB.Create(&abilities).Error
}
func (channel *Channel) DeleteAbilities() error {
return DB.Where("channel_id = ?", channel.Id).Delete(&Ability{}).Error
}
// UpdateAbilities updates abilities of this channel.
// Make sure the channel is completed before calling this function.
func (channel *Channel) UpdateAbilities() error {
// A quick and dirty way to update abilities
// First delete all abilities of this channel
err := channel.DeleteAbilities()
if err != nil {
return err
}
// Then add new abilities
err = channel.AddAbilities()
if err != nil {
return err
}
return nil
}
func UpdateAbilityStatus(channelId int, status bool) error {
return DB.Model(&Ability{}).Where("channel_id = ?", channelId).Select("enabled").Update("enabled", status).Error
}
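The new abilities table holds one row per (group, model, channel): channel writes fan the comma-separated Models list out into rows, and GetRandomSatisfiedChannel picks a random enabled row at request time. A hypothetical end-to-end flow is sketched below; it assumes InitDB has already been called and that Channel's Name and Status fields are as in the rest of the codebase.

```go
package main

import (
	"fmt"

	"one-api/common"
	"one-api/model"
)

func main() {
	// Assumes model.InitDB() has already set up the database connection.
	channel := model.Channel{
		Name:   "example",
		Models: "gpt-3.5-turbo,gpt-4",
		Group:  "vip",
		Status: common.ChannelStatusEnabled,
	}
	if err := channel.Insert(); err != nil { // Insert also calls AddAbilities
		panic(err)
	}
	picked, err := model.GetRandomSatisfiedChannel("vip", "gpt-4")
	if err != nil {
		panic(err)
	}
	fmt.Println(picked.Id, picked.Name)
}
```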

View File

@@ -1,7 +1,6 @@
package model
import (
_ "gorm.io/driver/sqlite"
"one-api/common"
)
@@ -19,6 +18,8 @@ type Channel struct {
Other string `json:"other"`
Balance float64 `json:"balance"` // in USD
BalanceUpdatedTime int64 `json:"balance_updated_time" gorm:"bigint"`
Models string `json:"models"`
Group string `json:"group" gorm:"type:varchar(32);default:'default'"`
}
func GetAllChannels(startIdx int, num int, selectAll bool) ([]*Channel, error) {
@@ -49,13 +50,12 @@ func GetChannelById(id int, selectAll bool) (*Channel, error) {
}
func GetRandomChannel() (*Channel, error) {
// TODO: consider weight
channel := Channel{}
var err error = nil
if common.UsingSQLite {
err = DB.Where("status = ?", common.ChannelStatusEnabled).Order("RANDOM()").Limit(1).First(&channel).Error
err = DB.Where("status = ? and `group` = ?", common.ChannelStatusEnabled, "default").Order("RANDOM()").Limit(1).First(&channel).Error
} else {
err = DB.Where("status = ?", common.ChannelStatusEnabled).Order("RAND()").Limit(1).First(&channel).Error
err = DB.Where("status = ? and `group` = ?", common.ChannelStatusEnabled, "default").Order("RAND()").Limit(1).First(&channel).Error
}
return &channel, err
}
@@ -63,18 +63,36 @@ func GetRandomChannel() (*Channel, error) {
func BatchInsertChannels(channels []Channel) error {
var err error
err = DB.Create(&channels).Error
return err
if err != nil {
return err
}
for _, channel_ := range channels {
err = channel_.AddAbilities()
if err != nil {
return err
}
}
return nil
}
func (channel *Channel) Insert() error {
var err error
err = DB.Create(channel).Error
if err != nil {
return err
}
err = channel.AddAbilities()
return err
}
func (channel *Channel) Update() error {
var err error
err = DB.Model(channel).Updates(channel).Error
if err != nil {
return err
}
DB.Model(channel).First(channel, "id = ?", channel.Id)
err = channel.UpdateAbilities()
return err
}
@@ -101,11 +119,19 @@ func (channel *Channel) UpdateBalance(balance float64) {
func (channel *Channel) Delete() error {
var err error
err = DB.Delete(channel).Error
if err != nil {
return err
}
err = channel.DeleteAbilities()
return err
}
func UpdateChannelStatusById(id int, status int) {
err := DB.Model(&Channel{}).Where("id = ?", id).Update("status", status).Error
err := UpdateAbilityStatus(id, status == common.ChannelStatusEnabled)
if err != nil {
common.SysError("failed to update ability status: " + err.Error())
}
err = DB.Model(&Channel{}).Where("id = ?", id).Update("status", status).Error
if err != nil {
common.SysError("failed to update channel status: " + err.Error())
}

67
model/log.go Normal file
View File

@@ -0,0 +1,67 @@
package model
import (
"gorm.io/gorm"
"one-api/common"
)
type Log struct {
Id int `json:"id"`
UserId int `json:"user_id" gorm:"index"`
CreatedAt int64 `json:"created_at" gorm:"bigint"`
Type int `json:"type" gorm:"index"`
Content string `json:"content"`
}
const (
LogTypeUnknown = iota
LogTypeTopup
LogTypeConsume
LogTypeManage
LogTypeSystem
)
func RecordLog(userId int, logType int, content string) {
log := &Log{
UserId: userId,
CreatedAt: common.GetTimestamp(),
Type: logType,
Content: content,
}
err := DB.Create(log).Error
if err != nil {
common.SysError("failed to record log: " + err.Error())
}
}
func GetAllLogs(logType int, startIdx int, num int) (logs []*Log, err error) {
var tx *gorm.DB
if logType == LogTypeUnknown {
tx = DB
} else {
tx = DB.Where("type = ?", logType)
}
err = tx.Order("id desc").Limit(num).Offset(startIdx).Find(&logs).Error
return logs, err
}
func GetUserLogs(userId int, logType int, startIdx int, num int) (logs []*Log, err error) {
var tx *gorm.DB
if logType == LogTypeUnknown {
tx = DB.Where("user_id = ?", userId)
} else {
tx = DB.Where("user_id = ? and type = ?", userId, logType)
}
err = tx.Order("id desc").Limit(num).Offset(startIdx).Omit("id").Find(&logs).Error
return logs, err
}
func SearchAllLogs(keyword string) (logs []*Log, err error) {
err = DB.Where("type = ? or content LIKE ?", keyword, keyword+"%").Order("id desc").Limit(common.MaxRecentItems).Find(&logs).Error
return logs, err
}
func SearchUserLogs(userId int, keyword string) (logs []*Log, err error) {
err = DB.Where("user_id = ? and type = ?", userId, keyword).Order("id desc").Limit(common.MaxRecentItems).Omit("id").Find(&logs).Error
return logs, err
}

View File

@@ -75,6 +75,14 @@ func InitDB() (err error) {
if err != nil {
return err
}
err = db.AutoMigrate(&Ability{})
if err != nil {
return err
}
err = db.AutoMigrate(&Log{})
if err != nil {
return err
}
err = createRootAccountIfNeed()
return err
} else {

View File

@@ -58,6 +58,7 @@ func InitOptionMap() {
common.OptionMap["QuotaRemindThreshold"] = strconv.Itoa(common.QuotaRemindThreshold)
common.OptionMap["PreConsumedQuota"] = strconv.Itoa(common.PreConsumedQuota)
common.OptionMap["ModelRatio"] = common.ModelRatio2JSONString()
common.OptionMap["GroupRatio"] = common.GroupRatio2JSONString()
common.OptionMap["TopUpLink"] = common.TopUpLink
common.OptionMapRWMutex.Unlock()
loadOptionsFromDatabase()
@@ -177,6 +178,8 @@ func updateOptionMap(key string, value string) (err error) {
common.PreConsumedQuota, _ = strconv.Atoi(value)
case "ModelRatio":
err = common.UpdateModelRatioByJSONString(value)
case "GroupRatio":
err = common.UpdateGroupRatioByJSONString(value)
case "TopUpLink":
common.TopUpLink = value
case "ChannelDisableThreshold":

View File

@@ -2,7 +2,7 @@ package model
import (
"errors"
_ "gorm.io/driver/sqlite"
"fmt"
"one-api/common"
)
@@ -66,6 +66,7 @@ func Redeem(key string, userId int) (quota int, err error) {
if err != nil {
common.SysError("更新兑换码状态失败:" + err.Error())
}
RecordLog(userId, LogTypeTopup, fmt.Sprintf("通过兑换码充值 %d 点额度", redemption.Quota))
}()
return redemption.Quota, nil
}
@@ -84,7 +85,7 @@ func (redemption *Redemption) SelectUpdate() error {
// Update Make sure your token's fields is completed, because this will update non-zero values
func (redemption *Redemption) Update() error {
var err error
err = DB.Model(redemption).Select("name", "status", "redeemed_time").Updates(redemption).Error
err = DB.Model(redemption).Select("name", "status", "quota", "redeemed_time").Updates(redemption).Error
return err
}

View File

@@ -3,7 +3,6 @@ package model
import (
"errors"
"fmt"
_ "gorm.io/driver/sqlite"
"gorm.io/gorm"
"one-api/common"
)

View File

@@ -2,6 +2,7 @@ package model
import (
"errors"
"fmt"
"gorm.io/gorm"
"one-api/common"
"strings"
@@ -22,6 +23,7 @@ type User struct {
VerificationCode string `json:"verification_code" gorm:"-:all"` // this field is only for Email verification, don't save it to database!
AccessToken string `json:"access_token" gorm:"type:char(32);column:access_token;uniqueIndex"` // this token is for system management
Quota int `json:"quota" gorm:"type:int;default:0"`
Group string `json:"group" gorm:"type:varchar(32);default:'default'"`
}
func GetMaxUserId() int {
@@ -72,8 +74,14 @@ func (user *User) Insert() error {
}
user.Quota = common.QuotaForNewUser
user.AccessToken = common.GetUUID()
err = DB.Create(user).Error
return err
result := DB.Create(user)
if result.Error != nil {
return result.Error
}
if common.QuotaForNewUser > 0 {
RecordLog(user.Id, LogTypeSystem, fmt.Sprintf("新用户注册赠送 %d 点额度", common.QuotaForNewUser))
}
return nil
}
func (user *User) Update(updatePassword bool) error {
@@ -229,6 +237,11 @@ func GetUserEmail(id int) (email string, err error) {
return email, err
}
func GetUserGroup(id int) (group string, err error) {
err = DB.Model(&User{}).Where("id = ?", id).Select("`group`").Find(&group).Error
return group, err
}
func IncreaseUserQuota(id int, quota int) (err error) {
if quota < 0 {
return errors.New("quota 不能为负数!")

View File

@@ -63,6 +63,7 @@ func SetApiRouter(router *gin.Engine) {
{
channelRoute.GET("/", controller.GetAllChannels)
channelRoute.GET("/search", controller.SearchChannels)
channelRoute.GET("/models", controller.ListModels)
channelRoute.GET("/:id", controller.GetChannel)
channelRoute.GET("/test", controller.TestAllChannels)
channelRoute.GET("/test/:id", controller.TestChannel)
@@ -92,5 +93,15 @@ func SetApiRouter(router *gin.Engine) {
redemptionRoute.PUT("/", controller.UpdateRedemption)
redemptionRoute.DELETE("/:id", controller.DeleteRedemption)
}
logRoute := apiRouter.Group("/log")
logRoute.GET("/", middleware.AdminAuth(), controller.GetAllLogs)
logRoute.GET("/search", middleware.AdminAuth(), controller.SearchAllLogs)
logRoute.GET("/self", middleware.UserAuth(), controller.GetUserLogs)
logRoute.GET("/self/search", middleware.UserAuth(), controller.SearchUserLogs)
groupRoute := apiRouter.Group("/group")
groupRoute.Use(middleware.AdminAuth())
{
groupRoute.GET("/", controller.GetGroups)
}
}
}

View File

@@ -8,12 +8,16 @@ import (
func SetRelayRouter(router *gin.Engine) {
// https://platform.openai.com/docs/api-reference/introduction
modelsRouter := router.Group("/v1/models")
modelsRouter.Use(middleware.TokenAuth())
{
modelsRouter.GET("/", controller.ListModels)
modelsRouter.GET("/:model", controller.RetrieveModel)
}
relayV1Router := router.Group("/v1")
relayV1Router.Use(middleware.TokenAuth(), middleware.Distribute())
{
relayV1Router.GET("/models", controller.ListModels)
relayV1Router.GET("/models/:model", controller.RetrieveModel)
relayV1Router.POST("/completions", controller.RelayNotImplemented)
relayV1Router.POST("/completions", controller.Relay)
relayV1Router.POST("/chat/completions", controller.Relay)
relayV1Router.POST("/edits", controller.RelayNotImplemented)
relayV1Router.POST("/images/generations", controller.RelayNotImplemented)
@@ -33,6 +37,6 @@ func SetRelayRouter(router *gin.Engine) {
relayV1Router.POST("/fine-tunes/:id/cancel", controller.RelayNotImplemented)
relayV1Router.GET("/fine-tunes/:id/events", controller.RelayNotImplemented)
relayV1Router.DELETE("/models/:model", controller.RelayNotImplemented)
relayV1Router.POST("/moderations", controller.RelayNotImplemented)
relayV1Router.POST("/moderations", controller.Relay)
}
}

View File

@@ -22,6 +22,7 @@ import EditChannel from './pages/Channel/EditChannel';
import Redemption from './pages/Redemption';
import EditRedemption from './pages/Redemption/EditRedemption';
import TopUp from './pages/TopUp';
import Log from './pages/Log';
const Home = lazy(() => import('./pages/Home'));
const About = lazy(() => import('./pages/About'));
@@ -250,6 +251,14 @@ function App() {
</PrivateRoute>
}
/>
<Route
path='/log'
element={
<PrivateRoute>
<Log />
</PrivateRoute>
}
/>
<Route
path='/about'
element={

View File

@@ -4,6 +4,7 @@ import { Link } from 'react-router-dom';
import { API, showError, showInfo, showSuccess, timestamp2string } from '../helpers';
import { CHANNEL_OPTIONS, ITEMS_PER_PAGE } from '../constants';
import { renderGroup } from '../helpers/render';
function renderTimestamp(timestamp) {
return (
@@ -26,6 +27,13 @@ function renderType(type) {
return <Label basic color={type2label[type].color}>{type2label[type].text}</Label>;
}
function renderBalance(type, balance) {
if (type === 5) {
return <span>¥{(balance / 10000).toFixed(2)}</span>
}
return <span>${balance.toFixed(2)}</span>
}
const ChannelsTable = () => {
const [channels, setChannels] = useState([]);
const [loading, setLoading] = useState(true);
@@ -264,6 +272,14 @@ const ChannelsTable = () => {
>
名称
</Table.HeaderCell>
<Table.HeaderCell
style={{ cursor: 'pointer' }}
onClick={() => {
sortChannel('group');
}}
>
分组
</Table.HeaderCell>
<Table.HeaderCell
style={{ cursor: 'pointer' }}
onClick={() => {
@@ -312,6 +328,7 @@ const ChannelsTable = () => {
<Table.Row key={channel.id}>
<Table.Cell>{channel.id}</Table.Cell>
<Table.Cell>{channel.name ? channel.name : '无'}</Table.Cell>
<Table.Cell>{renderGroup(channel.group)}</Table.Cell>
<Table.Cell>{renderType(channel.type)}</Table.Cell>
<Table.Cell>{renderStatus(channel.status)}</Table.Cell>
<Table.Cell>
@@ -326,7 +343,7 @@ const ChannelsTable = () => {
<Popup
content={channel.balance_updated_time ? renderTimestamp(channel.balance_updated_time) : '未更新'}
key={channel.id}
trigger={<span>${channel.balance.toFixed(2)}</span>}
trigger={renderBalance(channel.type, channel.balance)}
basic
/>
</Table.Cell>
@@ -398,7 +415,7 @@ const ChannelsTable = () => {
<Table.Footer>
<Table.Row>
<Table.HeaderCell colSpan='7'>
<Table.HeaderCell colSpan='8'>
<Button size='small' as={Link} to='/channel/add' loading={loading}>
添加新的渠道
</Button>

View File

@@ -41,6 +41,11 @@ const headerButtons = [
icon: 'user',
admin: true,
},
{
name: '日志',
to: '/log',
icon: 'book',
},
{
name: '设置',
to: '/setting',

View File

@@ -0,0 +1,256 @@
import React, { useEffect, useState } from 'react';
import { Button, Label, Pagination, Select, Table } from 'semantic-ui-react';
import { API, isAdmin, showError, timestamp2string } from '../helpers';
import { ITEMS_PER_PAGE } from '../constants';
function renderTimestamp(timestamp) {
return (
<>
{timestamp2string(timestamp)}
</>
);
}
const MODE_OPTIONS = [
{ key: 'all', text: '全部用户', value: 'all' },
{ key: 'self', text: '当前用户', value: 'self' },
];
const LOG_OPTIONS = [
{ key: '0', text: '全部', value: 0 },
{ key: '1', text: '充值', value: 1 },
{ key: '2', text: '消费', value: 2 },
{ key: '3', text: '管理', value: 3 },
{ key: '4', text: '系统', value: 4 }
];
function renderType(type) {
switch (type) {
case 1:
return <Label basic color='green'> 充值 </Label>;
case 2:
return <Label basic color='olive'> 消费 </Label>;
case 3:
return <Label basic color='orange'> 管理 </Label>;
case 4:
return <Label basic color='purple'> 系统 </Label>;
default:
return <Label basic color='black'> 未知 </Label>;
}
}
const LogsTable = () => {
const [logs, setLogs] = useState([]);
const [loading, setLoading] = useState(true);
const [activePage, setActivePage] = useState(1);
const [searchKeyword, setSearchKeyword] = useState('');
const [searching, setSearching] = useState(false);
const [logType, setLogType] = useState(0);
const [mode, setMode] = useState('self'); // all, self
const showModePanel = isAdmin();
const loadLogs = async (startIdx) => {
let url = `/api/log/self/?p=${startIdx}&type=${logType}`;
if (mode === 'all') {
url = `/api/log/?p=${startIdx}&type=${logType}`;
}
const res = await API.get(url);
const { success, message, data } = res.data;
if (success) {
if (startIdx === 0) {
setLogs(data);
} else {
let newLogs = logs;
newLogs.push(...data);
setLogs(newLogs);
}
} else {
showError(message);
}
setLoading(false);
};
const onPaginationChange = (e, { activePage }) => {
(async () => {
if (activePage === Math.ceil(logs.length / ITEMS_PER_PAGE) + 1) {
// In this case we have to load more data and then append them.
await loadLogs(activePage - 1);
}
setActivePage(activePage);
})();
};
const refresh = async () => {
setLoading(true);
await loadLogs(0);
};
useEffect(() => {
loadLogs(0)
.then()
.catch((reason) => {
showError(reason);
});
}, []);
useEffect(() => {
refresh().then();
}, [mode, logType]);
const searchLogs = async () => {
if (searchKeyword === '') {
// if keyword is blank, load files instead.
await loadLogs(0);
setActivePage(1);
return;
}
setSearching(true);
const res = await API.get(`/api/log/self/search?keyword=${searchKeyword}`);
const { success, message, data } = res.data;
if (success) {
setLogs(data);
setActivePage(1);
} else {
showError(message);
}
setSearching(false);
};
const handleKeywordChange = async (e, { value }) => {
setSearchKeyword(value.trim());
};
const sortLog = (key) => {
if (logs.length === 0) return;
setLoading(true);
let sortedLogs = [...logs];
sortedLogs.sort((a, b) => {
return ('' + a[key]).localeCompare(b[key]);
});
if (sortedLogs[0].id === logs[0].id) {
sortedLogs.reverse();
}
setLogs(sortedLogs);
setLoading(false);
};
return (
<>
<Table basic>
<Table.Header>
<Table.Row>
<Table.HeaderCell
style={{ cursor: 'pointer' }}
onClick={() => {
sortLog('created_time');
}}
width={3}
>
时间
</Table.HeaderCell>
{
showModePanel && (
<Table.HeaderCell
style={{ cursor: 'pointer' }}
onClick={() => {
sortLog('user_id');
}}
width={1}
>
用户
</Table.HeaderCell>
)
}
<Table.HeaderCell
style={{ cursor: 'pointer' }}
onClick={() => {
sortLog('type');
}}
width={2}
>
类型
</Table.HeaderCell>
<Table.HeaderCell
style={{ cursor: 'pointer' }}
onClick={() => {
sortLog('content');
}}
width={showModePanel ? 10 : 11}
>
详情
</Table.HeaderCell>
</Table.Row>
</Table.Header>
<Table.Body>
{logs
.slice(
(activePage - 1) * ITEMS_PER_PAGE,
activePage * ITEMS_PER_PAGE
)
.map((log, idx) => {
if (log.deleted) return <></>;
return (
<Table.Row key={log.created_at}>
<Table.Cell>{renderTimestamp(log.created_at)}</Table.Cell>
{
showModePanel && (
<Table.Cell><Label>{log.user_id}</Label></Table.Cell>
)
}
<Table.Cell>{renderType(log.type)}</Table.Cell>
<Table.Cell>{log.content}</Table.Cell>
</Table.Row>
);
})}
</Table.Body>
<Table.Footer>
<Table.Row>
<Table.HeaderCell colSpan={showModePanel ? '5' : '4'}>
{
showModePanel && (
<Select
placeholder='选择模式'
options={MODE_OPTIONS}
style={{ marginRight: '8px' }}
name='mode'
value={mode}
onChange={(e, { name, value }) => {
setMode(value);
}}
/>
)
}
<Select
placeholder='选择明细分类'
options={LOG_OPTIONS}
style={{ marginRight: '8px' }}
name='logType'
value={logType}
onChange={(e, { name, value }) => {
setLogType(value);
}}
/>
<Button size='small' onClick={refresh} loading={loading}>刷新</Button>
<Pagination
floated='right'
activePage={activePage}
onPageChange={onPaginationChange}
size='small'
siblingRange={1}
totalPages={
Math.ceil(logs.length / ITEMS_PER_PAGE) +
(logs.length % ITEMS_PER_PAGE === 0 ? 1 : 0)
}
/>
</Table.HeaderCell>
</Table.Row>
</Table.Footer>
</Table>
</>
);
};
export default LogsTable;

View File

@@ -30,6 +30,7 @@ const SystemSetting = () => {
QuotaRemindThreshold: 0,
PreConsumedQuota: 0,
ModelRatio: '',
GroupRatio: '',
TopUpLink: '',
AutomaticDisableChannelEnabled: '',
ChannelDisableThreshold: 0,
@@ -101,6 +102,7 @@ const SystemSetting = () => {
name === 'QuotaRemindThreshold' ||
name === 'PreConsumedQuota' ||
name === 'ModelRatio' ||
name === 'GroupRatio' ||
name === 'TopUpLink'
) {
setInputs((inputs) => ({ ...inputs, [name]: value }));
@@ -131,6 +133,13 @@ const SystemSetting = () => {
}
await updateOption('ModelRatio', inputs.ModelRatio);
}
if (originInputs['GroupRatio'] !== inputs.GroupRatio) {
if (!verifyJSON(inputs.GroupRatio)) {
showError('分组倍率不是合法的 JSON 字符串');
return;
}
await updateOption('GroupRatio', inputs.GroupRatio);
}
if (originInputs['TopUpLink'] !== inputs.TopUpLink) {
await updateOption('TopUpLink', inputs.TopUpLink);
}
@@ -329,6 +338,17 @@ const SystemSetting = () => {
placeholder='为一个 JSON 文本,键为模型名称,值为倍率'
/>
</Form.Group>
<Form.Group widths='equal'>
<Form.TextArea
label='分组倍率'
name='GroupRatio'
onChange={handleInputChange}
style={{ minHeight: 250, fontFamily: 'JetBrains Mono, Consolas' }}
autoComplete='new-password'
value={inputs.GroupRatio}
placeholder='为一个 JSON 文本,键为分组名称,值为倍率'
/>
</Form.Group>
<Form.Button onClick={submitOperationConfig}>保存运营设置</Form.Button>
<Divider />
<Header as='h3'>

View File

@@ -4,7 +4,7 @@ import { Link } from 'react-router-dom';
import { API, showError, showSuccess } from '../helpers';
import { ITEMS_PER_PAGE } from '../constants';
import { renderText } from '../helpers/render';
import { renderGroup, renderText } from '../helpers/render';
function renderRole(role) {
switch (role) {
@@ -175,6 +175,14 @@ const UsersTable = () => {
>
用户名
</Table.HeaderCell>
<Table.HeaderCell
style={{ cursor: 'pointer' }}
onClick={() => {
sortUser('group');
}}
>
分组
</Table.HeaderCell>
<Table.HeaderCell
style={{ cursor: 'pointer' }}
onClick={() => {
@@ -231,6 +239,7 @@ const UsersTable = () => {
hoverable
/>
</Table.Cell>
<Table.Cell>{renderGroup(user.group)}</Table.Cell>
<Table.Cell>{user.email ? renderText(user.email, 30) : '无'}</Table.Cell>
<Table.Cell>{user.quota}</Table.Cell>
<Table.Cell>{renderRole(user.role)}</Table.Cell>
@@ -293,7 +302,6 @@ const UsersTable = () => {
size={'small'}
as={Link}
to={'/user/edit/' + user.id}
disabled={user.role === 100}
>
编辑
</Button>
@@ -306,7 +314,7 @@ const UsersTable = () => {
<Table.Footer>
<Table.Row>
<Table.HeaderCell colSpan='7'>
<Table.HeaderCell colSpan='8'>
<Button size='small' as={Link} to='/user/add' loading={loading}>
添加新的用户
</Button>

View File

@@ -1,6 +1,19 @@
import { Label } from 'semantic-ui-react';
export function renderText(text, limit) {
if (text.length > limit) {
return text.slice(0, limit - 3) + '...';
}
return text;
}
export function renderGroup(group) {
if (group === "") {
return <Label>default</Label>
} else if (group === "vip" || group === "pro") {
return <Label color='yellow'>{group}</Label>
} else if (group === "svip" || group === "premium") {
return <Label color='red'>{group}</Label>
}
return <Label>{group}</Label>
}

View File

@@ -1,7 +1,7 @@
import React, { useEffect, useState } from 'react';
import { Button, Form, Header, Message, Segment } from 'semantic-ui-react';
import { useParams } from 'react-router-dom';
import { API, showError, showSuccess } from '../../helpers';
import { API, showError, showInfo, showSuccess } from '../../helpers';
import { CHANNEL_OPTIONS } from '../../constants';
const EditChannel = () => {
@@ -14,12 +14,17 @@ const EditChannel = () => {
type: 1,
key: '',
base_url: '',
other: ''
other: '',
group: 'default',
models: [],
};
const [batch, setBatch] = useState(false);
const [inputs, setInputs] = useState(originInputs);
const [modelOptions, setModelOptions] = useState([]);
const [groupOptions, setGroupOptions] = useState([]);
const [basicModels, setBasicModels] = useState([]);
const [fullModels, setFullModels] = useState([]);
const handleInputChange = (e, { name, value }) => {
console.log(name, value);
setInputs((inputs) => ({ ...inputs, [name]: value }));
};
@@ -27,21 +32,59 @@ const EditChannel = () => {
let res = await API.get(`/api/channel/${channelId}`);
const { success, message, data } = res.data;
if (success) {
data.password = '';
if (data.models === "") {
data.models = []
} else {
data.models = data.models.split(",")
}
setInputs(data);
} else {
showError(message);
}
setLoading(false);
};
const fetchModels = async () => {
try {
let res = await API.get(`/api/channel/models`);
setModelOptions(res.data.data.map((model) => ({
key: model.id,
text: model.id,
value: model.id,
})));
setFullModels(res.data.data.map((model) => model.id));
setBasicModels(res.data.data.filter((model) => !model.id.startsWith("gpt-4")).map((model) => model.id));
} catch (error) {
showError(error.message);
}
};
const fetchGroups = async () => {
try {
let res = await API.get(`/api/group/`);
setGroupOptions(res.data.data.map((group) => ({
key: group,
text: group,
value: group,
})));
} catch (error) {
showError(error.message);
}
};
useEffect(() => {
if (isEdit) {
loadChannel().then();
}
fetchModels().then();
fetchGroups().then();
}, []);
const submit = async () => {
if (!isEdit && (inputs.name === '' || inputs.key === '')) return;
if (!isEdit && (inputs.name === '' || inputs.key === '')) {
showInfo('请填写渠道名称和渠道密钥!');
return;
}
let localInputs = inputs;
if (localInputs.base_url.endsWith('/')) {
localInputs.base_url = localInputs.base_url.slice(0, localInputs.base_url.length - 1);
@@ -50,6 +93,7 @@ const EditChannel = () => {
localInputs.other = '2023-03-15-preview';
}
let res;
localInputs.models = localInputs.models.join(",")
if (isEdit) {
res = await API.put(`/api/channel/`, { ...localInputs, id: parseInt(channelId) });
} else {
@@ -137,6 +181,58 @@ const EditChannel = () => {
autoComplete='new-password'
/>
</Form.Field>
<Form.Field>
<Form.Dropdown
label='分组'
placeholder={'请选择分组'}
name='group'
fluid
search
selection
allowAdditions
additionLabel={'请在系统设置页面编辑分组倍率以添加新的分组:'}
onChange={handleInputChange}
value={inputs.group}
autoComplete='new-password'
options={groupOptions}
/>
</Form.Field>
<Form.Field>
<Form.Dropdown
label='模型'
placeholder={'请选择该通道所支持的模型'}
name='models'
fluid
multiple
selection
onChange={handleInputChange}
value={inputs.models}
autoComplete='new-password'
options={modelOptions}
/>
</Form.Field>
<div style={{ lineHeight: '40px', marginBottom: '12px'}}>
<Button type={'button'} onClick={() => {
handleInputChange(null, { name: 'models', value: basicModels });
}}>填入基础模型</Button>
<Button type={'button'} onClick={() => {
handleInputChange(null, { name: 'models', value: fullModels });
}}>填入所有模型</Button>
</div>
{
inputs.type === 1 && (
<Form.Field>
<Form.Input
label='代理'
name='base_url'
placeholder={'请输入 OpenAI API 代理地址,如果不需要请留空,格式为:https://api.openai.com'}
onChange={handleInputChange}
value={inputs.base_url}
autoComplete='new-password'
/>
</Form.Field>
)
}
{
batch ? <Form.Field>
<Form.TextArea

View File

@@ -0,0 +1,14 @@
import React from 'react';
import { Header, Segment } from 'semantic-ui-react';
import LogsTable from '../../components/LogsTable';
const Token = () => (
<>
<Segment>
<Header as='h3'>额度明细</Header>
<LogsTable />
</Segment>
</>
);
export default Token;

View File

@@ -15,12 +15,26 @@ const EditUser = () => {
wechat_id: '',
email: '',
quota: 0,
group: 'default'
});
const { username, display_name, password, github_id, wechat_id, email, quota } =
const [groupOptions, setGroupOptions] = useState([]);
const { username, display_name, password, github_id, wechat_id, email, quota, group } =
inputs;
const handleInputChange = (e, { name, value }) => {
setInputs((inputs) => ({ ...inputs, [name]: value }));
};
const fetchGroups = async () => {
try {
let res = await API.get(`/api/group/`);
setGroupOptions(res.data.data.map((group) => ({
key: group,
text: group,
value: group,
})));
} catch (error) {
showError(error.message);
}
};
const loadUser = async () => {
let res = undefined;
@@ -40,6 +54,9 @@ const EditUser = () => {
};
useEffect(() => {
loadUser().then();
if (userId) {
fetchGroups().then();
}
}, []);
const submit = async () => {
@@ -98,7 +115,23 @@ const EditUser = () => {
/>
</Form.Field>
{
userId && (
userId && <>
<Form.Field>
<Form.Dropdown
label='分组'
placeholder={'请选择分组'}
name='group'
fluid
search
selection
allowAdditions
additionLabel={'请在系统设置页面编辑分组倍率以添加新的分组:'}
onChange={handleInputChange}
value={inputs.group}
autoComplete='new-password'
options={groupOptions}
/>
</Form.Field>
<Form.Field>
<Form.Input
label='剩余额度'
@@ -110,7 +143,7 @@ const EditUser = () => {
autoComplete='new-password'
/>
</Form.Field>
)
</>
}
<Form.Field>
<Form.Input