Mirror of https://github.com/songquanpeng/one-api.git
Synced 2025-10-23 09:53:42 +08:00

Compare commits: v0.4.2 ... v0.4.5-alp (38 commits)
Commit SHA1 list:
6855d0dc39, a43b1e2add, 46c43396d8, 6dcffca065, d754620ef7, 21111126a2, d91e7dcfdc, d79289ccdd, f89f6c7fa6, b7d71b4f0a, 70ed126ccb, 57b213a035, 549e944b95, 0cdab80a6e, 760183a970, 58fb18aace, 630156dc0a, 5f23f59d1c, 538a5d7a9b, 593e1926e9, e87ad1f402, 07cccdc8c0, f71f01662c, 54d7a1c2e8, f426f31bd7, 2930577cd6, e09512177a, d6dbaff3c2, 7f9577a386, 38668e7331, 323f3d263a, 0c34ed4c61, 7c7eb6b7ec, 8b2ef666ef, 955d5f8707, 47ca449e32, 39481eb6c0, 69153e7231
.github/ISSUE_TEMPLATE/bug_report.md (vendored, new file, 23 lines)
@@ -0,0 +1,23 @@
---
name: Report a problem
about: Describe the problem you ran into in concise but detailed language
title: ''
labels: bug
assignees: ''

---

**Routine checks**
+ [ ] I have confirmed there is no similar existing issue
+ [ ] I have confirmed I have upgraded to the latest version
+ [ ] I understand and am willing to follow up on this issue, help test it, and provide feedback
+ [ ] I understand and accept the above, and I understand that the maintainers' time is limited; issues that do not follow the rules may be ignored or closed directly

**Problem description**

**Steps to reproduce**

**Expected result**

**Related screenshots**
If there are none, please delete this section.
.github/ISSUE_TEMPLATE/config.yml (vendored, new file, 11 lines)
@@ -0,0 +1,11 @@
blank_issues_enabled: false
contact_links:
  - name: Project group chat
    url: https://openai.justsong.cn/
    about: QQ group 828520184, auto-approved; add the note "One API"
  - name: Sponsor the author
    url: https://iamazing.cn/page/reward
    about: Buy the author a coffee to encourage continued development
  - name: Paid deployment or custom features
    url: https://openai.justsong.cn/
    about: Contact the group owner after joining the group
.github/ISSUE_TEMPLATE/feature_request.md (vendored, new file, 18 lines)
@@ -0,0 +1,18 @@
---
name: Feature request
about: Describe the new feature you would like in concise but detailed language
title: ''
labels: enhancement
assignees: ''

---

**Routine checks**
+ [ ] I have confirmed there is no similar existing issue
+ [ ] I have confirmed I have upgraded to the latest version
+ [ ] I understand and am willing to follow up on this issue, help test it, and provide feedback
+ [ ] I understand and accept the above, and I understand that the maintainers' time is limited; issues that do not follow the rules may be ignored or closed directly

**Feature description**

**Use case**
README.md (92 lines changed)
@@ -29,10 +29,10 @@ _✨ An all-in-one OpenAI interface, integrating various API access methods, ready to use out of the box
</p>

<p align="center">
  <a href="https://github.com/songquanpeng/one-api/releases">Download</a>
  ·
  <a href="https://github.com/songquanpeng/one-api#部署">Deployment tutorial</a>
  ·
  <a href="https://github.com/songquanpeng/one-api#使用方法">Usage</a>
  ·
  <a href="https://github.com/songquanpeng/one-api/issues">Feedback</a>
  ·
  <a href="https://github.com/songquanpeng/one-api#截图展示">Screenshots</a>
@@ -55,10 +55,11 @@ _✨ An all-in-one OpenAI interface, integrating various API access methods, ready to use out of the box
+ [x] [API2D](https://api2d.com/r/197971)
+ [x] [OhMyGPT](https://aigptx.top?aff=uFpUl2Kf)
+ [x] [AI Proxy](https://aiproxy.io/?i=OneAPI) (invite code: `OneAPI`)
+ [x] [OpenAI-SB](https://openai-sb.com)
+ [x] [API2GPT](http://console.api2gpt.com/m/00002S)
+ [x] [CloseAI](https://console.openai-asia.com/r/2412)
+ [x] [AI.LS](https://ai.ls)
+ [x] [OpenAI Max](https://openaimax.com)
+ [x] [OpenAI-SB](https://openai-sb.com)
+ [x] [CloseAI](https://console.openai-asia.com/r/2412)
+ [x] Custom channels: e.g. various unlisted third-party proxy services
2. Supports accessing multiple channels via **load balancing**.
3. Supports **stream mode**, enabling a typewriter effect through streaming responses.
@@ -74,15 +75,18 @@ _✨ An all-in-one OpenAI interface, integrating various API access methods, ready to use out of the box
1. Supports customizing the system name, logo, and footer.
2. Supports customizing the home page and the about page, either with HTML & Markdown or by embedding a standalone web page via iframe.
13. Supports accessing the management API via a system access token.
14. Supports user management with **multiple login and registration methods**:
14. Supports Cloudflare Turnstile user verification.
15. Supports user management with **multiple login and registration methods**:
  + Email login/registration and password reset via email.
  + [GitHub OAuth](https://github.com/settings/applications/new).
  + WeChat Official Account authorization (requires additionally deploying [WeChat Server](https://github.com/songquanpeng/wechat-server)).
15. Once other large models open up their APIs, they will be supported as soon as possible and wrapped in the same API access format.
16. Once other large models open up their APIs, they will be supported as soon as possible and wrapped in the same API access format.

## Deployment
### Deploying with Docker
Run: `docker run -d --restart always -p 3000:3000 -v /home/ubuntu/data/one-api:/data justsong/one-api`
Deployment command: `docker run --name one-api -d --restart always -p 3000:3000 -v /home/ubuntu/data/one-api:/data justsong/one-api`

Update command: `docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower -cR`

The first `3000` in `-p 3000:3000` is the host port and can be changed as needed.

@@ -117,6 +121,8 @@ sudo certbot --nginx
sudo service nginx restart
```

The initial account username is `root` and the password is `123456`.

### Manual deployment
1. Download the executable from [GitHub Releases](https://github.com/songquanpeng/one-api/releases/latest) or build from source:
```shell
@@ -149,6 +155,47 @@ sudo service nginx restart

See [here](#环境变量) for details on how to use the environment variables.

### Deploying third-party services to work with One API
> PRs adding more examples are welcome.

#### ChatGPT Next Web
Project home page: https://github.com/Yidadaa/ChatGPT-Next-Web

```bash
docker run --name chat-next-web -d -p 3001:3000 -e BASE_URL=https://openai.justsong.cn yidadaa/chatgpt-next-web
```

Remember to adjust the port and `BASE_URL`.

#### ChatGPT Web
Project home page: https://github.com/Chanzhaoyu/chatgpt-web

```bash
docker run --name chatgpt-web -d -p 3002:3002 -e OPENAI_API_BASE_URL=https://openai.justsong.cn -e OPENAI_API_KEY=sk-xxx chenzhaoyu94/chatgpt-web
```

Remember to adjust the port, `OPENAI_API_BASE_URL`, and `OPENAI_API_KEY`.

### Deploying to third-party platforms
<details>
<summary><strong>Deploy to Zeabur</strong></summary>
<div>

> Zeabur's servers are located overseas, which avoids network issues, and the free quota is enough for personal use.

1. First, fork the repository.
2. Go to [Zeabur](https://zeabur.com/), log in, and open the console.
3. Create a new Project; under Service -> Add Service choose Marketplace, select MySQL, and note the connection parameters (username, password, address, port).
4. Using those connection parameters, run ```create database `one-api` ``` to create the database.
5. Then under Service -> Add Service choose Git (authorization is required on first use) and select your forked repository.
6. Deployment starts automatically; cancel it first. Open the Variable section below, add a `PORT` with value `3000`, then add a `SQL_DSN` with value `<username>:<password>@tcp(<addr>:<port>)/one-api`, and save. Note that if `SQL_DSN` is not set, data will not be persisted and will be lost after a redeploy.
7. Choose Redeploy.
8. Open the Domains section below and pick a suitable domain prefix, e.g. "my-one-api"; the final domain will be "my-one-api.zeabur.app". You can also CNAME your own domain.
9. Wait for the deployment to finish, then click the generated domain to open One API.

</div>
</details>

## Configuration
The system works out of the box.

@@ -156,11 +203,24 @@ sudo service nginx restart

Once the system is up, log in as the `root` user and complete further configuration.

## How to use
Add your API keys on the `渠道` (Channels) page, then create an access token on the `令牌` (Tokens) page.
## Usage
Add your API keys on the `渠道` (Channels) page, then create access tokens on the `令牌` (Tokens) page.

After that you can use your token to access One API, in exactly the same way as the [OpenAI API](https://platform.openai.com/docs/api-reference/introduction).

Wherever the OpenAI API is used, set the API Base to your One API deployment address, e.g. `https://openai.justsong.cn`, and use the token generated in One API as the API Key.

Note that the exact API Base format depends on the client you are using.

```mermaid
graph LR
    A(User)
    A --->|Request| B(One API)
    B -->|Relayed request| C(OpenAI)
    B -->|Relayed request| D(Azure)
    B -->|Relayed request| E(Other downstream channels)
```

You can specify which channel handles a request by appending the channel ID to the token, e.g. `Authorization: Bearer ONE_API_KEY-CHANNEL_ID`.
Note that only tokens created by an administrator can specify a channel ID.
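The usage section above maps directly onto a plain HTTP call. Below is a hedged Go sketch of such a call (not part of the repository): the base URL and `ONE_API_KEY` are placeholders for your own deployment and token, and the optional `-CHANNEL_ID` suffix only works with admin-created tokens, as noted above.

```go
// Minimal illustrative client for a One API deployment; any OpenAI-compatible client works the same way.
package main

import (
    "bytes"
    "fmt"
    "io"
    "net/http"
)

func main() {
    payload := []byte(`{"model":"gpt-3.5-turbo","messages":[{"role":"user","content":"Hello"}]}`)
    req, err := http.NewRequest("POST", "https://openai.justsong.cn/v1/chat/completions", bytes.NewReader(payload))
    if err != nil {
        panic(err)
    }
    req.Header.Set("Content-Type", "application/json")
    // Token generated on the 令牌 (Tokens) page; an admin token may append "-CHANNEL_ID" to pin a channel.
    req.Header.Set("Authorization", "Bearer ONE_API_KEY")
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    fmt.Println(string(body))
}
```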
@@ -171,7 +231,7 @@ sudo service nginx restart
  + Example: `REDIS_CONN_STRING=redis://default:redispw@localhost:49153`
2. `SESSION_SECRET`: when set, a fixed session secret is used, so logged-in users' cookies remain valid after a restart.
  + Example: `SESSION_SECRET=random_string`
3. `SQL_DSN`: when set, the specified database is used instead of SQLite.
3. `SQL_DSN`: when set, the specified database is used instead of SQLite; please use MySQL 8.0.
  + Example: `SQL_DSN=root:123456@tcp(localhost:3306)/one-api`
4. `FRONTEND_BASE_URL`: when set, the specified frontend address is used instead of the backend address.
  + Example: `FRONTEND_BASE_URL=https://openai.justsong.cn`
@@ -200,4 +260,14 @@ https://openai.justsong.cn
  + Check whether your token quota is sufficient; it is separate from the account quota.
  + The token quota only lets the user set a maximum usage; the user can set it freely.
2. Blank page after deploying with BaoTa (宝塔)?
  + Caused by its automatic configuration; see [#97](https://github.com/songquanpeng/one-api/issues/97).
3. "No available channel" error?
  + Check your user group and channel group settings.
  + Also check the channel's model settings.

## Notice
This is an open-source project. Please use it in compliance with OpenAI's [Terms of Use](https://openai.com/policies/terms-of-use) and applicable laws and regulations, and do not use it for illegal purposes.

This project is open-sourced under the MIT license; the One API copyright notice must be retained in some form.

Under the MIT license, users bear the risks and responsibilities of using this project themselves; the developers of this open-source project are not liable.
@@ -1,9 +1,10 @@
package common

import (
    "github.com/google/uuid"
    "sync"
    "time"

    "github.com/google/uuid"
)

var StartTime = time.Now().Unix() // unit: second
@@ -35,6 +36,8 @@ var WeChatAuthEnabled = false
var TurnstileCheckEnabled = false
var RegisterEnabled = true

var LogConsumeEnabled = true

var SMTPServer = ""
var SMTPPort = 587
var SMTPAccount = ""
@@ -131,6 +134,7 @@ const (
    ChannelTypeAILS    = 9
    ChannelTypeAIProxy = 10
    ChannelTypePaLM    = 11
    ChannelTypeAPI2GPT = 12
)

var ChannelBaseURLs = []string{
@@ -146,4 +150,5 @@ var ChannelBaseURLs = []string{
    "https://api.caipacity.com", // 9
    "https://api.aiproxy.io",    // 10
    "",                          // 11
    "https://api.api2gpt.com",   // 12
}
@@ -17,6 +17,7 @@ func GroupRatio2JSONString() string {
}

func UpdateGroupRatioByJSONString(jsonStr string) error {
    GroupRatio = make(map[string]float64)
    return json.Unmarshal([]byte(jsonStr), &GroupRatio)
}
@@ -2,16 +2,23 @@ package common

import "encoding/json"

// ModelRatio
// https://platform.openai.com/docs/models/model-endpoint-compatibility
// https://openai.com/pricing
// TODO: when a new api is enabled, check the pricing here
// 1 === $0.002 / 1K tokens
var ModelRatio = map[string]float64{
    "gpt-4":                  15,
    "gpt-4-0314":             15,
    "gpt-4-0613":             15,
    "gpt-4-32k":              30,
    "gpt-4-32k-0314":         30,
    "gpt-3.5-turbo":          1, // $0.002 / 1K tokens
    "gpt-3.5-turbo-0301":     1,
    "gpt-4-32k-0613":         30,
    "gpt-3.5-turbo":          0.75, // $0.0015 / 1K tokens
    "gpt-3.5-turbo-0301":     0.75,
    "gpt-3.5-turbo-0613":     0.75,
    "gpt-3.5-turbo-16k":      1.5, // $0.003 / 1K tokens
    "gpt-3.5-turbo-16k-0613": 1.5,
    "text-ada-001":           0.2,
    "text-babbage-001":       0.25,
    "text-curie-001":         1,
@@ -39,6 +46,7 @@ func ModelRatio2JSONString() string {
}

func UpdateModelRatioByJSONString(jsonStr string) error {
    ModelRatio = make(map[string]float64)
    return json.Unmarshal([]byte(jsonStr), &ModelRatio)
}
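As the comment in the hunk above states, a ratio of 1 corresponds to $0.002 per 1K tokens. A small illustrative Go sketch (not part of the commit) of what the updated values mean in dollar terms:

```go
// Converts a One API model ratio into a USD price per 1K tokens, per the "1 === $0.002 / 1K tokens" convention.
package main

import "fmt"

func usdPer1KTokens(ratio float64) float64 {
    return ratio * 0.002
}

func main() {
    fmt.Printf("gpt-3.5-turbo: $%.4f per 1K tokens\n", usdPer1KTokens(0.75)) // 0.0015
    fmt.Printf("gpt-4:         $%.4f per 1K tokens\n", usdPer1KTokens(15))   // 0.0300
}
```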
@@ -4,13 +4,14 @@ import (
    "encoding/json"
    "errors"
    "fmt"
    "github.com/gin-gonic/gin"
    "io"
    "net/http"
    "one-api/common"
    "one-api/model"
    "strconv"
    "time"

    "github.com/gin-gonic/gin"
)

// https://github.com/songquanpeng/one-api/issues/79

@@ -37,6 +38,119 @@ type OpenAIUsageResponse struct {
    TotalUsage float64 `json:"total_usage"` // unit: 0.01 dollar
}

type OpenAISBUsageResponse struct {
    Msg  string `json:"msg"`
    Data *struct {
        Credit string `json:"credit"`
    } `json:"data"`
}

type AIProxyUserOverviewResponse struct {
    Success   bool   `json:"success"`
    Message   string `json:"message"`
    ErrorCode int    `json:"error_code"`
    Data      struct {
        TotalPoints float64 `json:"totalPoints"`
    } `json:"data"`
}

type API2GPTUsageResponse struct {
    Object         string  `json:"object"`
    TotalGranted   float64 `json:"total_granted"`
    TotalUsed      float64 `json:"total_used"`
    TotalRemaining float64 `json:"total_remaining"`
}

// GetAuthHeader get auth header
func GetAuthHeader(token string) http.Header {
    h := http.Header{}
    h.Add("Authorization", fmt.Sprintf("Bearer %s", token))
    return h
}

func GetResponseBody(method, url string, channel *model.Channel, headers http.Header) ([]byte, error) {
    client := &http.Client{}
    req, err := http.NewRequest(method, url, nil)
    if err != nil {
        return nil, err
    }
    for k := range headers {
        req.Header.Add(k, headers.Get(k))
    }
    res, err := client.Do(req)
    if err != nil {
        return nil, err
    }
    body, err := io.ReadAll(res.Body)
    if err != nil {
        return nil, err
    }
    err = res.Body.Close()
    if err != nil {
        return nil, err
    }
    return body, nil
}

func updateChannelOpenAISBBalance(channel *model.Channel) (float64, error) {
    url := fmt.Sprintf("https://api.openai-sb.com/sb-api/user/status?api_key=%s", channel.Key)
    body, err := GetResponseBody("GET", url, channel, GetAuthHeader(channel.Key))
    if err != nil {
        return 0, err
    }
    response := OpenAISBUsageResponse{}
    err = json.Unmarshal(body, &response)
    if err != nil {
        return 0, err
    }
    if response.Data == nil {
        return 0, errors.New(response.Msg)
    }
    balance, err := strconv.ParseFloat(response.Data.Credit, 64)
    if err != nil {
        return 0, err
    }
    channel.UpdateBalance(balance)
    return balance, nil
}

func updateChannelAIProxyBalance(channel *model.Channel) (float64, error) {
    url := "https://aiproxy.io/api/report/getUserOverview"
    headers := http.Header{}
    headers.Add("Api-Key", channel.Key)
    body, err := GetResponseBody("GET", url, channel, headers)
    if err != nil {
        return 0, err
    }
    response := AIProxyUserOverviewResponse{}
    err = json.Unmarshal(body, &response)
    if err != nil {
        return 0, err
    }
    if !response.Success {
        return 0, fmt.Errorf("code: %d, message: %s", response.ErrorCode, response.Message)
    }
    channel.UpdateBalance(response.Data.TotalPoints)
    return response.Data.TotalPoints, nil
}

func updateChannelAPI2GPTBalance(channel *model.Channel) (float64, error) {
    url := "https://api.api2gpt.com/dashboard/billing/credit_grants"
    body, err := GetResponseBody("GET", url, channel, GetAuthHeader(channel.Key))

    if err != nil {
        return 0, err
    }
    response := API2GPTUsageResponse{}
    err = json.Unmarshal(body, &response)
    fmt.Print(response)
    if err != nil {
        return 0, err
    }
    channel.UpdateBalance(response.TotalRemaining)
    return response.TotalRemaining, nil
}

func updateChannelBalance(channel *model.Channel) (float64, error) {
    baseURL := common.ChannelBaseURLs[channel.Type]
    switch channel.Type {
@@ -48,27 +162,18 @@ func updateChannelBalance(channel *model.Channel) (float64, error) {
        return 0, errors.New("尚未实现")
    case common.ChannelTypeCustom:
        baseURL = channel.BaseURL
    case common.ChannelTypeOpenAISB:
        return updateChannelOpenAISBBalance(channel)
    case common.ChannelTypeAIProxy:
        return updateChannelAIProxyBalance(channel)
    case common.ChannelTypeAPI2GPT:
        return updateChannelAPI2GPTBalance(channel)
    default:
        return 0, errors.New("尚未实现")
    }
    url := fmt.Sprintf("%s/v1/dashboard/billing/subscription", baseURL)

    client := &http.Client{}
    req, err := http.NewRequest("GET", url, nil)
    if err != nil {
        return 0, err
    }
    auth := fmt.Sprintf("Bearer %s", channel.Key)
    req.Header.Add("Authorization", auth)
    res, err := client.Do(req)
    if err != nil {
        return 0, err
    }
    body, err := io.ReadAll(res.Body)
    if err != nil {
        return 0, err
    }
    err = res.Body.Close()
    body, err := GetResponseBody("GET", url, channel, GetAuthHeader(channel.Key))
    if err != nil {
        return 0, err
    }
@@ -84,20 +189,7 @@ func updateChannelBalance(channel *model.Channel) (float64, error) {
        startDate = now.AddDate(0, 0, -100).Format("2006-01-02")
    }
    url = fmt.Sprintf("%s/v1/dashboard/billing/usage?start_date=%s&end_date=%s", baseURL, startDate, endDate)
    req, err = http.NewRequest("GET", url, nil)
    if err != nil {
        return 0, err
    }
    req.Header.Add("Authorization", auth)
    res, err = client.Do(req)
    if err != nil {
        return 0, err
    }
    body, err = io.ReadAll(res.Body)
    if err != nil {
        return 0, err
    }
    err = res.Body.Close()
    body, err = GetResponseBody("GET", url, channel, GetAuthHeader(channel.Key))
    if err != nil {
        return 0, err
    }
@@ -71,6 +71,33 @@ func init() {
        Root:       "gpt-3.5-turbo-0301",
        Parent:     nil,
    },
    {
        Id:         "gpt-3.5-turbo-0613",
        Object:     "model",
        Created:    1677649963,
        OwnedBy:    "openai",
        Permission: permission,
        Root:       "gpt-3.5-turbo-0613",
        Parent:     nil,
    },
    {
        Id:         "gpt-3.5-turbo-16k",
        Object:     "model",
        Created:    1677649963,
        OwnedBy:    "openai",
        Permission: permission,
        Root:       "gpt-3.5-turbo-16k",
        Parent:     nil,
    },
    {
        Id:         "gpt-3.5-turbo-16k-0613",
        Object:     "model",
        Created:    1677649963,
        OwnedBy:    "openai",
        Permission: permission,
        Root:       "gpt-3.5-turbo-16k-0613",
        Parent:     nil,
    },
    {
        Id:         "gpt-4",
        Object:     "model",
@@ -89,6 +116,15 @@ func init() {
        Root:       "gpt-4-0314",
        Parent:     nil,
    },
    {
        Id:         "gpt-4-0613",
        Object:     "model",
        Created:    1677649963,
        OwnedBy:    "openai",
        Permission: permission,
        Root:       "gpt-4-0613",
        Parent:     nil,
    },
    {
        Id:         "gpt-4-32k",
        Object:     "model",
@@ -107,6 +143,15 @@ func init() {
        Root:       "gpt-4-32k-0314",
        Parent:     nil,
    },
    {
        Id:         "gpt-4-32k-0613",
        Object:     "model",
        Created:    1677649963,
        OwnedBy:    "openai",
        Permission: permission,
        Root:       "gpt-4-32k-0613",
        Parent:     nil,
    },
    {
        Id:         "text-embedding-ada-002",
        Object:     "model",
@@ -58,6 +58,20 @@ func countTokenMessages(messages []Message, model string) int {
    return tokenNum
}

func countTokenInput(input any, model string) int {
    switch input.(type) {
    case string:
        return countTokenText(input.(string), model)
    case []string:
        text := ""
        for _, s := range input.([]string) {
            text += s
        }
        return countTokenText(text, model)
    }
    return 0
}

func countTokenText(text string, model string) int {
    tokenEncoder := getTokenEncoder(model)
    token := tokenEncoder.Encode(text, nil, nil)
@@ -32,13 +32,13 @@
type GeneralOpenAIRequest struct {
    Model       string    `json:"model"`
    Messages    []Message `json:"messages"`
    Prompt      string    `json:"prompt"`
    Prompt      any       `json:"prompt"`
    Stream      bool      `json:"stream"`
    MaxTokens   int       `json:"max_tokens"`
    Temperature float64   `json:"temperature"`
    TopP        float64   `json:"top_p"`
    N           int       `json:"n"`
    Input       string    `json:"input"`
    Input       any       `json:"input"`
}

type ChatRequest struct {
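Changing `Prompt` and `Input` from `string` to `any` lets request bodies where these fields are either a single string or an array of strings unmarshal without error; the `countTokenInput` helper added earlier in this diff then switches on the concrete type. A minimal standalone sketch (illustrative only, not part of the commit):

```go
// Demonstrates that an `any` field accepts both JSON shapes the OpenAI completions/moderation APIs allow.
package main

import (
    "encoding/json"
    "fmt"
)

type request struct {
    Prompt any `json:"prompt"`
}

func main() {
    var a, b request
    _ = json.Unmarshal([]byte(`{"prompt":"Say hi"}`), &a)            // single string
    _ = json.Unmarshal([]byte(`{"prompt":["Say hi","Say bye"]}`), &b) // array of strings
    fmt.Printf("%T %T\n", a.Prompt, b.Prompt) // string []interface {}
}
```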
@@ -177,6 +177,7 @@ func relayHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode {
    // https://github.com/songquanpeng/one-api/issues/67
    model_ = strings.TrimSuffix(model_, "-0301")
    model_ = strings.TrimSuffix(model_, "-0314")
    model_ = strings.TrimSuffix(model_, "-0613")
    fullRequestURL = fmt.Sprintf("%s/openai/deployments/%s/%s", baseURL, model_, task)
} else if channelType == common.ChannelTypePaLM {
    err := relayPaLM(textRequest, c)
@@ -187,9 +188,9 @@ func relayHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode {
case RelayModeChatCompletions:
    promptTokens = countTokenMessages(textRequest.Messages, textRequest.Model)
case RelayModeCompletions:
    promptTokens = countTokenText(textRequest.Prompt, textRequest.Model)
    promptTokens = countTokenInput(textRequest.Prompt, textRequest.Model)
case RelayModeModeration:
    promptTokens = countTokenText(textRequest.Input, textRequest.Model)
    promptTokens = countTokenInput(textRequest.Input, textRequest.Model)
}
preConsumedTokens := common.PreConsumedQuota
if textRequest.MaxTokens != 0 {
@@ -239,16 +240,15 @@ func relayHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode {
defer func() {
    if consumeQuota {
        quota := 0
        usingGPT4 := strings.HasPrefix(textRequest.Model, "gpt-4")
        completionRatio := 1
        if usingGPT4 {
        completionRatio := 1.34 // default for gpt-3
        if strings.HasPrefix(textRequest.Model, "gpt-4") {
            completionRatio = 2
        }
        if isStream {
            responseTokens := countTokenText(streamResponseText, textRequest.Model)
            quota = promptTokens + responseTokens*completionRatio
            quota = promptTokens + int(float64(responseTokens)*completionRatio)
        } else {
            quota = textResponse.Usage.PromptTokens + textResponse.Usage.CompletionTokens*completionRatio
            quota = textResponse.Usage.PromptTokens + int(float64(textResponse.Usage.CompletionTokens)*completionRatio)
        }
        quota = int(float64(quota) * ratio)
        if ratio != 0 && quota <= 0 {
@@ -259,8 +259,12 @@ func relayHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode {
        if err != nil {
            common.SysError("Error consuming token remain quota: " + err.Error())
        }
        tokenName := c.GetString("token_name")
        userId := c.GetInt("id")
        model.RecordLog(userId, model.LogTypeConsume, fmt.Sprintf("使用模型 %s 消耗 %d 点额度(模型倍率 %.2f,分组倍率 %.2f)", textRequest.Model, quota, modelRatio, groupRatio))
        model.RecordLog(userId, model.LogTypeConsume, fmt.Sprintf("通过令牌「%s」使用模型 %s 消耗 %d 点额度(模型倍率 %.2f,分组倍率 %.2f)", tokenName, textRequest.Model, quota, modelRatio, groupRatio))
        model.UpdateUserUsedQuotaAndRequestCount(userId, quota)
        channelId := c.GetInt("channel_id")
        model.UpdateChannelUsedQuota(channelId, quota)
    }
}()

@@ -395,3 +399,15 @@ func RelayNotImplemented(c *gin.Context) {
        "error": err,
    })
}

func RelayNotFound(c *gin.Context) {
    err := OpenAIError{
        Message: fmt.Sprintf("API not found: %s:%s", c.Request.Method, c.Request.URL.Path),
        Type:    "one_api_error",
        Param:   "",
        Code:    "api_not_found",
    }
    c.JSON(http.StatusOK, gin.H{
        "error": err,
    })
}
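The quota bookkeeping above can be read as a formula: completion tokens are weighted by `completionRatio` (1.34 by default, 2 for gpt-4 models), and the sum is then scaled by `ratio`. A hedged Go sketch of that calculation, assuming (as the log message suggests) that `ratio` is the model ratio multiplied by the group ratio; this is an illustration, not part of the commit:

```go
// Restates the quota formula introduced in this hunk for a single request.
package main

import (
    "fmt"
    "strings"
)

func computeQuota(model string, promptTokens, completionTokens int, modelRatio, groupRatio float64) int {
    completionRatio := 1.34 // default for gpt-3.5 models
    if strings.HasPrefix(model, "gpt-4") {
        completionRatio = 2
    }
    quota := promptTokens + int(float64(completionTokens)*completionRatio)
    return int(float64(quota) * modelRatio * groupRatio)
}

func main() {
    // 100 prompt + 200 completion tokens on gpt-3.5-turbo (model ratio 0.75, default group ratio 1):
    fmt.Println(computeQuota("gpt-3.5-turbo", 100, 200, 0.75, 1)) // 276
}
```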
@@ -112,6 +112,7 @@ func TokenAuth() func(c *gin.Context) {
    }
    c.Set("id", token.UserId)
    c.Set("token_id", token.Id)
    c.Set("token_name", token.Name)
    requestURL := c.Request.URL.String()
    consumeQuota := true
    if strings.HasPrefix(requestURL, "/v1/models") {
@@ -10,6 +10,6 @@ func CORS() gin.HandlerFunc {
    config.AllowAllOrigins = true
    config.AllowCredentials = true
    config.AllowMethods = []string{"GET", "POST", "PUT", "DELETE", "OPTIONS"}
    config.AllowHeaders = []string{"Origin", "Content-Length", "Content-Type", "Authorization", "Accept", "Connection"}
    config.AllowHeaders = []string{"Origin", "Content-Length", "Content-Type", "Authorization", "Accept", "Connection", "x-requested-with"}
    return cors.New(config)
}
@@ -30,15 +30,18 @@ func GetRandomSatisfiedChannel(group string, model string) (*Channel, error) {

func (channel *Channel) AddAbilities() error {
    models_ := strings.Split(channel.Models, ",")
    groups_ := strings.Split(channel.Group, ",")
    abilities := make([]Ability, 0, len(models_))
    for _, model := range models_ {
        ability := Ability{
            Group: channel.Group,
            Model: model,
            ChannelId: channel.Id,
            Enabled: channel.Status == common.ChannelStatusEnabled,
        for _, group := range groups_ {
            ability := Ability{
                Group:     group,
                Model:     model,
                ChannelId: channel.Id,
                Enabled:   channel.Status == common.ChannelStatusEnabled,
            }
            abilities = append(abilities, ability)
        }
        abilities = append(abilities, ability)
    }
    return DB.Create(&abilities).Error
}
@@ -1,6 +1,7 @@
package model

import (
    "gorm.io/gorm"
    "one-api/common"
)

@@ -20,6 +21,7 @@ type Channel struct {
    BalanceUpdatedTime int64  `json:"balance_updated_time" gorm:"bigint"`
    Models             string `json:"models"`
    Group              string `json:"group" gorm:"type:varchar(32);default:'default'"`
    UsedQuota          int64  `json:"used_quota" gorm:"bigint;default:0"`
}

func GetAllChannels(startIdx int, num int, selectAll bool) ([]*Channel, error) {
@@ -136,3 +138,10 @@ func UpdateChannelStatusById(id int, status int) {
        common.SysError("failed to update channel status: " + err.Error())
    }
}

func UpdateChannelUsedQuota(id int, quota int) {
    err := DB.Model(&Channel{}).Where("id = ?", id).Update("used_quota", gorm.Expr("used_quota + ?", quota)).Error
    if err != nil {
        common.SysError("failed to update channel used quota: " + err.Error())
    }
}
@@ -22,6 +22,9 @@ const (
)

func RecordLog(userId int, logType int, content string) {
    if logType == LogTypeConsume && !common.LogConsumeEnabled {
        return
    }
    log := &Log{
        UserId:    userId,
        CreatedAt: common.GetTimestamp(),
@@ -34,6 +34,7 @@ func InitOptionMap() {
    common.OptionMap["TurnstileCheckEnabled"] = strconv.FormatBool(common.TurnstileCheckEnabled)
    common.OptionMap["RegisterEnabled"] = strconv.FormatBool(common.RegisterEnabled)
    common.OptionMap["AutomaticDisableChannelEnabled"] = strconv.FormatBool(common.AutomaticDisableChannelEnabled)
    common.OptionMap["LogConsumeEnabled"] = strconv.FormatBool(common.LogConsumeEnabled)
    common.OptionMap["ChannelDisableThreshold"] = strconv.FormatFloat(common.ChannelDisableThreshold, 'f', -1, 64)
    common.OptionMap["SMTPServer"] = ""
    common.OptionMap["SMTPFrom"] = ""
@@ -134,6 +135,8 @@ func updateOptionMap(key string, value string) (err error) {
        common.RegisterEnabled = boolValue
    case "AutomaticDisableChannelEnabled":
        common.AutomaticDisableChannelEnabled = boolValue
    case "LogConsumeEnabled":
        common.LogConsumeEnabled = boolValue
    }
}
switch key {
@@ -23,6 +23,8 @@ type User struct {
    VerificationCode string `json:"verification_code" gorm:"-:all"` // this field is only for Email verification, don't save it to database!
    AccessToken      string `json:"access_token" gorm:"type:char(32);column:access_token;uniqueIndex"` // this token is for system management
    Quota            int    `json:"quota" gorm:"type:int;default:0"`
    UsedQuota        int    `json:"used_quota" gorm:"type:int;default:0;column:used_quota"` // used quota
    RequestCount     int    `json:"request_count" gorm:"type:int;default:0;"` // request number
    Group            string `json:"group" gorm:"type:varchar(32);default:'default'"`
}

@@ -262,3 +264,15 @@ func GetRootUserEmail() (email string) {
    DB.Model(&User{}).Where("role = ?", common.RoleRootUser).Select("email").Find(&email)
    return email
}

func UpdateUserUsedQuotaAndRequestCount(id int, quota int) {
    err := DB.Model(&User{}).Where("id = ?", id).Updates(
        map[string]interface{}{
            "used_quota":    gorm.Expr("used_quota + ?", quota),
            "request_count": gorm.Expr("request_count + ?", 1),
        },
    ).Error
    if err != nil {
        common.SysError("Failed to update user used quota and request count: " + err.Error())
    }
}
@@ -7,7 +7,9 @@ import (
    "github.com/gin-gonic/gin"
    "net/http"
    "one-api/common"
    "one-api/controller"
    "one-api/middleware"
    "strings"
)

func SetWebRouter(router *gin.Engine, buildFS embed.FS, indexPage []byte) {
@@ -16,6 +18,10 @@ func SetWebRouter(router *gin.Engine, buildFS embed.FS, indexPage []byte) {
    router.Use(middleware.Cache())
    router.Use(static.Serve("/", common.EmbedFolder(buildFS, "web/build")))
    router.NoRoute(func(c *gin.Context) {
        if strings.HasPrefix(c.Request.RequestURI, "/v1") {
            controller.RelayNotFound(c)
            return
        }
        c.Header("Cache-Control", "no-cache")
        c.Data(http.StatusOK, "text/html; charset=utf-8", indexPage)
    })
@@ -4,7 +4,7 @@ import { Link } from 'react-router-dom';
import { API, showError, showInfo, showSuccess, timestamp2string } from '../helpers';

import { CHANNEL_OPTIONS, ITEMS_PER_PAGE } from '../constants';
import { renderGroup } from '../helpers/render';
import { renderGroup, renderNumber } from '../helpers/render';

function renderTimestamp(timestamp) {
  return (
@@ -27,6 +27,22 @@ function renderType(type) {
  return <Label basic color={type2label[type].color}>{type2label[type].text}</Label>;
}

function renderBalance(type, balance) {
  switch (type) {
    case 1: // OpenAI
    case 8: // 自定义
      return <span>${balance.toFixed(2)}</span>;
    case 5: // OpenAI-SB
      return <span>¥{(balance / 10000).toFixed(2)}</span>;
    case 10: // AI Proxy
      return <span>{renderNumber(balance)}</span>;
    case 12: // API2GPT
      return <span>¥{balance.toFixed(2)}</span>;
    default:
      return <span>不支持</span>;
  }
}

const ChannelsTable = () => {
  const [channels, setChannels] = useState([]);
  const [loading, setLoading] = useState(true);
@@ -336,7 +352,7 @@ const ChannelsTable = () => {
  <Popup
    content={channel.balance_updated_time ? renderTimestamp(channel.balance_updated_time) : '未更新'}
    key={channel.id}
    trigger={<span>${channel.balance.toFixed(2)}</span>}
    trigger={renderBalance(channel.type, channel.balance)}
    basic
  />
  </Table.Cell>
@@ -415,7 +431,8 @@ const ChannelsTable = () => {
  <Button size='small' loading={loading} onClick={testAllChannels}>
    测试所有已启用通道
  </Button>
  <Button size='small' onClick={updateAllChannelsBalance} loading={loading || updatingBalance}>更新所有已启用通道余额</Button>
  <Button size='small' onClick={updateAllChannelsBalance}
          loading={loading || updatingBalance}>更新所有已启用通道余额</Button>
  <Pagination
    floated='right'
    activePage={activePage}
@@ -1,11 +1,31 @@
import React from 'react';
import React, { useEffect, useState } from 'react';

import { Container, Segment } from 'semantic-ui-react';
import { getFooterHTML, getSystemName } from '../helpers';

const Footer = () => {
  const systemName = getSystemName();
  const footer = getFooterHTML();
  const [footer, setFooter] = useState(getFooterHTML());
  let remainCheckTimes = 5;

  const loadFooter = () => {
    let footer_html = localStorage.getItem('footer_html');
    if (footer_html) {
      setFooter(footer_html);
    }
  };

  useEffect(() => {
    const timer = setInterval(() => {
      if (remainCheckTimes <= 0) {
        clearInterval(timer);
        return;
      }
      remainCheckTimes--;
      loadFooter();
    }, 200);
    return () => clearTimeout(timer);
  }, []);

  return (
    <Segment vertical>
@@ -1,5 +1,5 @@
import React, { useEffect, useState } from 'react';
import { Button, Divider, Form, Grid, Header, Modal } from 'semantic-ui-react';
import { Button, Divider, Form, Grid, Header, Message, Modal } from 'semantic-ui-react';
import { API, showError, showSuccess } from '../helpers';
import { marked } from 'marked';

@@ -10,13 +10,13 @@ const OtherSetting = () => {
  About: '',
  SystemName: '',
  Logo: '',
  HomePageContent: '',
  HomePageContent: ''
});
let [loading, setLoading] = useState(false);
const [showUpdateModal, setShowUpdateModal] = useState(false);
const [updateData, setUpdateData] = useState({
  tag_name: '',
  content: '',
  content: ''
});

const getOptions = async () => {
@@ -43,7 +43,7 @@ const OtherSetting = () => {
  setLoading(true);
  const res = await API.put('/api/option/', {
    key,
    value,
    value
  });
  const { success, message } = res.data;
  if (success) {
@@ -97,7 +97,7 @@ const OtherSetting = () => {
  } else {
    setUpdateData({
      tag_name: tag_name,
      content: marked.parse(body),
      content: marked.parse(body)
    });
    setShowUpdateModal(true);
  }
@@ -153,7 +153,7 @@ const OtherSetting = () => {
  style={{ minHeight: 150, fontFamily: 'JetBrains Mono, Consolas' }}
  />
  </Form.Group>
  <Form.Button onClick={()=>submitOption('HomePageContent')}>保存首页内容</Form.Button>
  <Form.Button onClick={() => submitOption('HomePageContent')}>保存首页内容</Form.Button>
  <Form.Group widths='equal'>
  <Form.TextArea
    label='关于'
@@ -165,6 +165,7 @@ const OtherSetting = () => {
  />
  </Form.Group>
  <Form.Button onClick={submitAbout}>保存关于</Form.Button>
  <Message>移除 One API 的版权标识必须首先获得授权,后续版本将通过授权码强制执行。</Message>
  <Form.Group widths='equal'>
  <Form.Input
    label='页脚'
@@ -34,6 +34,7 @@ const SystemSetting = () => {
  TopUpLink: '',
  AutomaticDisableChannelEnabled: '',
  ChannelDisableThreshold: 0,
  LogConsumeEnabled: '',
});
const [originInputs, setOriginInputs] = useState({});
let [loading, setLoading] = useState(false);
@@ -68,6 +69,7 @@ const SystemSetting = () => {
  case 'TurnstileCheckEnabled':
  case 'RegisterEnabled':
  case 'AutomaticDisableChannelEnabled':
  case 'LogConsumeEnabled':
    value = inputs[key] === 'true' ? 'false' : 'true';
    break;
  default:
@@ -349,6 +351,12 @@ const SystemSetting = () => {
  placeholder='为一个 JSON 文本,键为分组名称,值为倍率'
  />
  </Form.Group>
  <Form.Checkbox
    checked={inputs.LogConsumeEnabled === 'true'}
    label='启用额度消费日志记录'
    name='LogConsumeEnabled'
    onChange={handleInputChange}
  />
  <Form.Button onClick={submitOperationConfig}>保存运营设置</Form.Button>
  <Divider />
  <Header as='h3'>
@@ -4,7 +4,7 @@ import { Link } from 'react-router-dom';
import { API, showError, showSuccess } from '../helpers';

import { ITEMS_PER_PAGE } from '../constants';
import { renderGroup, renderText } from '../helpers/render';
import { renderGroup, renderNumber, renderText } from '../helpers/render';

function renderRole(role) {
  switch (role) {
@@ -197,7 +197,7 @@ const UsersTable = () => {
  sortUser('quota');
  }}
  >
  剩余额度
  统计信息
  </Table.HeaderCell>
  <Table.HeaderCell
    style={{ cursor: 'pointer' }}
@@ -241,7 +241,11 @@ const UsersTable = () => {
  </Table.Cell>
  <Table.Cell>{renderGroup(user.group)}</Table.Cell>
  <Table.Cell>{user.email ? renderText(user.email, 30) : '无'}</Table.Cell>
  <Table.Cell>{user.quota}</Table.Cell>
  <Table.Cell>
    <Popup content='剩余额度' trigger={<Label>{renderNumber(user.quota)}</Label>} />
    <Popup content='已用额度' trigger={<Label>{renderNumber(user.used_quota)}</Label>} />
    <Popup content='请求次数' trigger={<Label>{renderNumber(user.request_count)}</Label>} />
  </Table.Cell>
  <Table.Cell>{renderRole(user.role)}</Table.Cell>
  <Table.Cell>{renderStatus(user.status)}</Table.Cell>
  <Table.Cell>
@@ -8,5 +8,6 @@ export const CHANNEL_OPTIONS = [
  { key: 6, text: 'OpenAI Max', value: 6, color: 'violet' },
  { key: 7, text: 'OhMyGPT', value: 7, color: 'purple' },
  { key: 9, text: 'AI.LS', value: 9, color: 'yellow' },
  { key: 10, text: 'AI Proxy', value: 10, color: 'purple' }
  { key: 10, text: 'AI Proxy', value: 10, color: 'purple' },
  { key: 12, text: 'API2GPT', value: 12, color: 'blue' }
];
@@ -8,12 +8,31 @@ export function renderText(text, limit) {
}

export function renderGroup(group) {
  if (group === "") {
    return <Label>default</Label>
  } else if (group === "vip" || group === "pro") {
    return <Label color='yellow'>{group}</Label>
  } else if (group === "svip" || group === "premium") {
    return <Label color='red'>{group}</Label>
  if (group === '') {
    return <Label>default</Label>;
  }
  let groups = group.split(',');
  groups.sort();
  return <>
    {groups.map((group) => {
      if (group === 'vip' || group === 'pro') {
        return <Label color='yellow'>{group}</Label>;
      } else if (group === 'svip' || group === 'premium') {
        return <Label color='red'>{group}</Label>;
      }
      return <Label>{group}</Label>;
    })}
  </>;
}

export function renderNumber(num) {
  if (num >= 1000000000) {
    return (num / 1000000000).toFixed(1) + 'B';
  } else if (num >= 1000000) {
    return (num / 1000000).toFixed(1) + 'M';
  } else if (num >= 10000) {
    return (num / 1000).toFixed(1) + 'k';
  } else {
    return num;
  }
  return <Label>{group}</Label>
}
@@ -15,8 +15,8 @@ const EditChannel = () => {
  key: '',
  base_url: '',
  other: '',
  group: 'default',
  models: [],
  groups: ['default']
};
const [batch, setBatch] = useState(false);
const [inputs, setInputs] = useState(originInputs);
@@ -37,6 +37,11 @@ const EditChannel = () => {
  } else {
    data.models = data.models.split(",")
  }
  if (data.group === "") {
    data.groups = []
  } else {
    data.groups = data.group.split(",")
  }
  setInputs(data);
} else {
  showError(message);
@@ -61,7 +66,7 @@ const EditChannel = () => {

const fetchGroups = async () => {
  try {
    let res = await API.get(`/api/group`);
    let res = await API.get(`/api/group/`);
    setGroupOptions(res.data.data.map((group) => ({
      key: group,
      text: group,
@@ -94,6 +99,7 @@ const EditChannel = () => {
  }
  let res;
  localInputs.models = localInputs.models.join(",")
  localInputs.group = localInputs.groups.join(",")
  if (isEdit) {
    res = await API.put(`/api/channel/`, { ...localInputs, id: parseInt(channelId) });
  } else {
@@ -185,14 +191,14 @@ const EditChannel = () => {
  <Form.Dropdown
    label='分组'
    placeholder={'请选择分组'}
    name='group'
    name='groups'
    fluid
    search
    multiple
    selection
    allowAdditions
    additionLabel={'请在系统设置页面编辑分组倍率以添加新的分组:'}
    onChange={handleInputChange}
    value={inputs.group}
    value={inputs.groups}
    autoComplete='new-password'
    options={groupOptions}
  />
@@ -25,7 +25,7 @@ const EditUser = () => {
};
const fetchGroups = async () => {
  try {
    let res = await API.get(`/api/group`);
    let res = await API.get(`/api/group/`);
    setGroupOptions(res.data.data.map((group) => ({
      key: group,
      text: group,