Compare commits

...

25 Commits

Author SHA1 Message Date
JustSong
0cdab80a6e feat: record channel's used quota (close #137) 2023-06-16 16:02:00 +08:00
JustSong
760183a970 feat: record used quota & request count (close #102, #165) 2023-06-16 15:20:06 +08:00
JustSong
58fb18aace fix: do not record completion ratio anymore 2023-06-16 14:24:16 +08:00
张城铭
630156dc0a fix: the prompt field can be array type now (close #166, #167)
* fix: the prompt field can be array type now (close #166)

* fix: fix prompt type

---------

Co-authored-by: JustSong <songquanpeng@foxmail.com>
2023-06-16 14:20:25 +08:00
JustSong
5f23f59d1c docs: update notice 2023-06-16 00:24:38 +08:00
JustSong
538a5d7a9b docs: update issue template (#164) 2023-06-15 21:51:30 +08:00
JustSong
593e1926e9 feat: able to disable quota consumption recording (close #156) 2023-06-15 16:32:16 +08:00
quzard
e87ad1f402 chore: remove -0613 suffix for Azure (#163) 2023-06-14 16:33:03 +08:00
JustSong
07cccdc8c0 docs: update issue template 2023-06-14 15:13:05 +08:00
JustSong
f71f01662c docs: update issue template 2023-06-14 15:03:51 +08:00
JustSong
54d7a1c2e8 docs: update issue template 2023-06-14 15:02:36 +08:00
JustSong
f426f31bd7 docs: update issue template 2023-06-14 14:59:24 +08:00
JustSong
2930577cd6 docs: update issue template 2023-06-14 14:51:48 +08:00
JustSong
e09512177a docs: add issue templates 2023-06-14 14:48:31 +08:00
JustSong
d6dbaff3c2 fix: fix file not committed 2023-06-14 12:52:56 +08:00
JustSong
7f9577a386 feat: now one channel can belong to multiple groups (close #153) 2023-06-14 12:14:08 +08:00
JustSong
38668e7331 chore: update gpt3.5 completion ratio 2023-06-14 09:41:06 +08:00
JustSong
323f3d263a feat: add new released models 2023-06-14 09:12:14 +08:00
JustSong
0c34ed4c61 docs: update README 2023-06-13 17:45:01 +08:00
JustSong
7c7eb6b7ec fix: now the input field can be array type now (close #149) 2023-06-12 16:11:57 +08:00
JustSong
8b2ef666ef fix: fix OpenAI-SB balance not correct 2023-06-12 09:40:49 +08:00
JustSong
955d5f8707 fix: fix group list not correct (close #147) 2023-06-12 09:11:48 +08:00
quzard
47ca449e32 feat: add support for updating balance of channel type OpenAI-SB (#146, close #125)
* Add support for updating channel balance in OpenAISB

* fix: handel error

---------

Co-authored-by: JustSong <songquanpeng@foxmail.com>
2023-06-11 21:04:41 +08:00
JustSong
39481eb6c0 chore: add trailing slash for API calling 2023-06-11 16:33:40 +08:00
JustSong
69153e7231 docs: update README 2023-06-11 12:37:15 +08:00
22 changed files with 304 additions and 65 deletions

.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file
View File

@@ -0,0 +1,23 @@
---
name: 报告问题
about: 使用简练详细的语言描述你遇到的问题
title: ''
labels: bug
assignees: ''
---
**例行检查**
+ [ ] 我已确认目前没有类似 issue
+ [ ] 我已确认我已升级到最新版本
+ [ ] 我理解并愿意跟进此 issue，协助测试和提供反馈
+ [ ] 我理解并认可上述内容,并理解项目维护者精力有限,不遵循规则的 issue 可能会被无视或直接关闭
**问题描述**
**复现步骤**
**预期结果**
**相关截图**
如果没有的话,请删除此节。

.github/ISSUE_TEMPLATE/config.yml vendored Normal file
View File

@@ -0,0 +1,11 @@
blank_issues_enabled: false
contact_links:
- name: 项目群聊
url: https://openai.justsong.cn/
about: QQ 群：828520184，自动审核，备注 One API
- name: 赞赏支持
url: https://iamazing.cn/page/reward
about: 请作者喝杯咖啡,以激励作者持续开发
- name: 付费部署或定制功能
url: https://openai.justsong.cn/
about: 加群后联系群主

View File

@@ -0,0 +1,18 @@
---
name: 功能请求
about: 使用简练详细的语言描述希望加入的新功能
title: ''
labels: enhancement
assignees: ''
---
**例行检查**
+ [ ] 我已确认目前没有类似 issue
+ [ ] 我已确认我已升级到最新版本
+ [ ] 我理解并愿意跟进此 issue，协助测试和提供反馈
+ [ ] 我理解并认可上述内容,并理解项目维护者精力有限,不遵循规则的 issue 可能会被无视或直接关闭
**功能描述**
**应用场景**

View File

@@ -117,6 +117,8 @@ sudo certbot --nginx
sudo service nginx restart
```
初始账号用户名为 `root`,密码为 `123456`
### 手动部署
1. 从 [GitHub Releases](https://github.com/songquanpeng/one-api/releases/latest) 下载可执行文件或者从源码编译:
```shell
@@ -200,4 +202,14 @@ https://openai.justsong.cn
+ 请检查你的令牌额度是否足够,这个和账户额度是分开的。
+ 令牌额度仅供用户设置最大使用量,用户可自由设置。
2. 宝塔部署后访问出现空白页面?
+ 自动配置的问题,详见[#97](https://github.com/songquanpeng/one-api/issues/97)。
3. 提示无可用渠道?
+ 请检查的用户分组和渠道分组设置。
+ 以及渠道的模型设置。
## 注意
本项目为开源项目,请在遵循 OpenAI 的[使用条款](https://openai.com/policies/terms-of-use)以及法律法规的情况下使用,不得用于非法用途。
本项目使用 MIT 协议进行开源,请以某种方式保留 One API 的版权信息。
依据 MIT 协议,使用者需自行承担使用本项目的风险与责任,本开源项目开发者与此无关。

View File

@@ -35,6 +35,8 @@ var WeChatAuthEnabled = false
var TurnstileCheckEnabled = false
var RegisterEnabled = true
var LogConsumeEnabled = true
var SMTPServer = ""
var SMTPPort = 587
var SMTPAccount = ""

View File

@@ -17,6 +17,7 @@ func GroupRatio2JSONString() string {
}
func UpdateGroupRatioByJSONString(jsonStr string) error {
GroupRatio = make(map[string]float64)
return json.Unmarshal([]byte(jsonStr), &GroupRatio)
}
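
Note: UpdateGroupRatioByJSONString recreates the map before unmarshalling, so saving a new option value fully replaces the previous ratios rather than merging into them. A minimal standalone sketch of that behaviour (the JSON value is only an illustration, not a shipped default):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// GroupRatio maps a user group to its price multiplier, mirroring the variable
// updated by the function in the diff above.
var GroupRatio = map[string]float64{"default": 1}

// Same shape as UpdateGroupRatioByJSONString: the map is recreated first,
// so groups from a previously saved value do not linger.
func UpdateGroupRatioByJSONString(jsonStr string) error {
	GroupRatio = make(map[string]float64)
	return json.Unmarshal([]byte(jsonStr), &GroupRatio)
}

func main() {
	// Hypothetical option value: "vip" pays half price.
	if err := UpdateGroupRatioByJSONString(`{"default": 1, "vip": 0.5}`); err != nil {
		fmt.Println("invalid JSON:", err)
		return
	}
	fmt.Println(GroupRatio["vip"]) // 0.5
}
```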

View File

@@ -2,16 +2,23 @@ package common
import "encoding/json"
// ModelRatio
// https://platform.openai.com/docs/models/model-endpoint-compatibility
// https://openai.com/pricing
// TODO: when a new api is enabled, check the pricing here
// 1 === $0.002 / 1K tokens
var ModelRatio = map[string]float64{
"gpt-4": 15,
"gpt-4-0314": 15,
"gpt-4-0613": 15,
"gpt-4-32k": 30,
"gpt-4-32k-0314": 30,
"gpt-3.5-turbo": 1, // $0.002 / 1K tokens
"gpt-3.5-turbo-0301": 1,
"gpt-4-32k-0613": 30,
"gpt-3.5-turbo": 0.75, // $0.0015 / 1K tokens
"gpt-3.5-turbo-0301": 0.75,
"gpt-3.5-turbo-0613": 0.75,
"gpt-3.5-turbo-16k": 1.5, // $0.003 / 1K tokens
"gpt-3.5-turbo-16k-0613": 1.5,
"text-ada-001": 0.2,
"text-babbage-001": 0.25,
"text-curie-001": 1,
@@ -39,6 +46,7 @@ func ModelRatio2JSONString() string {
}
func UpdateModelRatioByJSONString(jsonStr string) error {
ModelRatio = make(map[string]float64)
return json.Unmarshal([]byte(jsonStr), &ModelRatio)
}
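
Per the "1 === $0.002 / 1K tokens" anchor in the comment above, the ratios translate to per-1K-token prices as below; this is only a sanity-check sketch, not project code:

```go
package main

import "fmt"

func main() {
	const dollarsPer1KTokensAtRatio1 = 0.002 // "1 === $0.002 / 1K tokens"

	models := []struct {
		name  string
		ratio float64
	}{
		{"gpt-3.5-turbo", 0.75},    // 0.75 * 0.002 = $0.0015 / 1K tokens
		{"gpt-3.5-turbo-16k", 1.5}, // 1.5  * 0.002 = $0.003  / 1K tokens
		{"gpt-4", 15},              // 15   * 0.002 = $0.03   / 1K tokens
		{"gpt-4-32k", 30},          // 30   * 0.002 = $0.06   / 1K tokens
	}
	for _, m := range models {
		fmt.Printf("%s: $%.4f / 1K tokens\n", m.name, m.ratio*dollarsPer1KTokensAtRatio1)
	}
}
```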

View File

@@ -37,6 +37,58 @@ type OpenAIUsageResponse struct {
TotalUsage float64 `json:"total_usage"` // unit: 0.01 dollar
}
type OpenAISBUsageResponse struct {
Msg string `json:"msg"`
Data *struct {
Credit string `json:"credit"`
} `json:"data"`
}
func GetResponseBody(method, url string, channel *model.Channel) ([]byte, error) {
client := &http.Client{}
req, err := http.NewRequest(method, url, nil)
if err != nil {
return nil, err
}
auth := fmt.Sprintf("Bearer %s", channel.Key)
req.Header.Add("Authorization", auth)
res, err := client.Do(req)
if err != nil {
return nil, err
}
body, err := io.ReadAll(res.Body)
if err != nil {
return nil, err
}
err = res.Body.Close()
if err != nil {
return nil, err
}
return body, nil
}
func updateChannelOpenAISBBalance(channel *model.Channel) (float64, error) {
url := fmt.Sprintf("https://api.openai-sb.com/sb-api/user/status?api_key=%s", channel.Key)
body, err := GetResponseBody("GET", url, channel)
if err != nil {
return 0, err
}
response := OpenAISBUsageResponse{}
err = json.Unmarshal(body, &response)
if err != nil {
return 0, err
}
if response.Data == nil {
return 0, errors.New(response.Msg)
}
balance, err := strconv.ParseFloat(response.Data.Credit, 64)
if err != nil {
return 0, err
}
channel.UpdateBalance(balance)
return balance, nil
}
func updateChannelBalance(channel *model.Channel) (float64, error) {
baseURL := common.ChannelBaseURLs[channel.Type]
switch channel.Type {
@@ -48,27 +100,14 @@ func updateChannelBalance(channel *model.Channel) (float64, error) {
return 0, errors.New("尚未实现")
case common.ChannelTypeCustom:
baseURL = channel.BaseURL
+ case common.ChannelTypeOpenAISB:
+ return updateChannelOpenAISBBalance(channel)
default:
return 0, errors.New("尚未实现")
}
url := fmt.Sprintf("%s/v1/dashboard/billing/subscription", baseURL)
- client := &http.Client{}
- req, err := http.NewRequest("GET", url, nil)
- if err != nil {
- return 0, err
- }
- auth := fmt.Sprintf("Bearer %s", channel.Key)
- req.Header.Add("Authorization", auth)
- res, err := client.Do(req)
- if err != nil {
- return 0, err
- }
- body, err := io.ReadAll(res.Body)
- if err != nil {
- return 0, err
- }
- err = res.Body.Close()
+ body, err = GetResponseBody("GET", url, channel)
if err != nil {
return 0, err
}
@@ -84,20 +123,7 @@ func updateChannelBalance(channel *model.Channel) (float64, error) {
startDate = now.AddDate(0, 0, -100).Format("2006-01-02")
}
url = fmt.Sprintf("%s/v1/dashboard/billing/usage?start_date=%s&end_date=%s", baseURL, startDate, endDate)
- req, err = http.NewRequest("GET", url, nil)
- if err != nil {
- return 0, err
- }
- req.Header.Add("Authorization", auth)
- res, err = client.Do(req)
- if err != nil {
- return 0, err
- }
- body, err = io.ReadAll(res.Body)
- if err != nil {
- return 0, err
- }
- err = res.Body.Close()
+ body, err = GetResponseBody("GET", url, channel)
if err != nil {
return 0, err
}
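
Judging from the OpenAISBUsageResponse struct above, a successful OpenAI-SB status response carries the remaining credit as a string under data.credit. The sketch below replays the same unmarshal-then-ParseFloat path on a made-up payload (the JSON body and the credit value are hypothetical):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

// Mirrors the struct added in the diff above.
type OpenAISBUsageResponse struct {
	Msg  string `json:"msg"`
	Data *struct {
		Credit string `json:"credit"`
	} `json:"data"`
}

func main() {
	// Hypothetical response body; the real request goes to
	// https://api.openai-sb.com/sb-api/user/status?api_key=... as shown above.
	body := []byte(`{"msg":"ok","data":{"credit":"1234.56"}}`)

	var response OpenAISBUsageResponse
	if err := json.Unmarshal(body, &response); err != nil {
		fmt.Println("bad JSON:", err)
		return
	}
	if response.Data == nil {
		// The controller treats a missing data field as an error and surfaces msg.
		fmt.Println("error:", response.Msg)
		return
	}
	balance, err := strconv.ParseFloat(response.Data.Credit, 64)
	if err != nil {
		fmt.Println("bad credit value:", err)
		return
	}
	fmt.Println(balance) // 1234.56; the real code then calls channel.UpdateBalance(balance)
}
```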

View File

@@ -71,6 +71,33 @@ func init() {
Root: "gpt-3.5-turbo-0301",
Parent: nil,
},
{
Id: "gpt-3.5-turbo-0613",
Object: "model",
Created: 1677649963,
OwnedBy: "openai",
Permission: permission,
Root: "gpt-3.5-turbo-0613",
Parent: nil,
},
{
Id: "gpt-3.5-turbo-16k",
Object: "model",
Created: 1677649963,
OwnedBy: "openai",
Permission: permission,
Root: "gpt-3.5-turbo-16k",
Parent: nil,
},
{
Id: "gpt-3.5-turbo-16k-0613",
Object: "model",
Created: 1677649963,
OwnedBy: "openai",
Permission: permission,
Root: "gpt-3.5-turbo-16k-0613",
Parent: nil,
},
{
Id: "gpt-4",
Object: "model",
@@ -89,6 +116,15 @@ func init() {
Root: "gpt-4-0314",
Parent: nil,
},
{
Id: "gpt-4-0613",
Object: "model",
Created: 1677649963,
OwnedBy: "openai",
Permission: permission,
Root: "gpt-4-0613",
Parent: nil,
},
{
Id: "gpt-4-32k",
Object: "model",
@@ -107,6 +143,15 @@ func init() {
Root: "gpt-4-32k-0314",
Parent: nil,
},
{
Id: "gpt-4-32k-0613",
Object: "model",
Created: 1677649963,
OwnedBy: "openai",
Permission: permission,
Root: "gpt-4-32k-0613",
Parent: nil,
},
{
Id: "text-embedding-ada-002",
Object: "model",

View File

@@ -58,6 +58,20 @@ func countTokenMessages(messages []Message, model string) int {
return tokenNum
}
func countTokenInput(input any, model string) int {
switch input.(type) {
case string:
return countTokenText(input.(string), model)
case []string:
text := ""
for _, s := range input.([]string) {
text += s
}
return countTokenText(text, model)
}
return 0
}
func countTokenText(text string, model string) int {
tokenEncoder := getTokenEncoder(model)
token := tokenEncoder.Encode(text, nil, nil)
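
countTokenInput lets the completions and moderation paths accept either a single string or an array of strings, now that Prompt and Input are typed as any. A rough usage sketch with the tiktoken-based countTokenText replaced by a stub, so the numbers are illustrative only:

```go
package main

import (
	"fmt"
	"strings"
)

// Stub standing in for the tiktoken-based countTokenText in the real code.
func countTokenText(text string, model string) int {
	return len(strings.Fields(text)) // crude word count, for illustration only
}

// Same shape as the countTokenInput added in the diff above.
func countTokenInput(input any, model string) int {
	switch v := input.(type) {
	case string:
		return countTokenText(v, model)
	case []string:
		text := ""
		for _, s := range v {
			text += s
		}
		return countTokenText(text, model)
	}
	return 0
}

func main() {
	fmt.Println(countTokenInput("hello world", "gpt-3.5-turbo"))               // single prompt
	fmt.Println(countTokenInput([]string{"hello ", "world"}, "gpt-3.5-turbo")) // array prompt
}
```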

View File

@@ -32,13 +32,13 @@ const (
type GeneralOpenAIRequest struct {
Model string `json:"model"`
Messages []Message `json:"messages"`
- Prompt string `json:"prompt"`
+ Prompt any `json:"prompt"`
Stream bool `json:"stream"`
MaxTokens int `json:"max_tokens"`
Temperature float64 `json:"temperature"`
TopP float64 `json:"top_p"`
N int `json:"n"`
- Input string `json:"input"`
+ Input any `json:"input"`
}
type ChatRequest struct {
@@ -177,6 +177,7 @@ func relayHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode {
// https://github.com/songquanpeng/one-api/issues/67
model_ = strings.TrimSuffix(model_, "-0301")
model_ = strings.TrimSuffix(model_, "-0314")
model_ = strings.TrimSuffix(model_, "-0613")
fullRequestURL = fmt.Sprintf("%s/openai/deployments/%s/%s", baseURL, model_, task)
} else if channelType == common.ChannelTypePaLM {
err := relayPaLM(textRequest, c)
@@ -187,9 +188,9 @@ func relayHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode {
case RelayModeChatCompletions:
promptTokens = countTokenMessages(textRequest.Messages, textRequest.Model)
case RelayModeCompletions:
- promptTokens = countTokenText(textRequest.Prompt, textRequest.Model)
+ promptTokens = countTokenInput(textRequest.Prompt, textRequest.Model)
case RelayModeModeration:
- promptTokens = countTokenText(textRequest.Input, textRequest.Model)
+ promptTokens = countTokenInput(textRequest.Input, textRequest.Model)
}
preConsumedTokens := common.PreConsumedQuota
if textRequest.MaxTokens != 0 {
@@ -239,16 +240,15 @@ func relayHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode {
defer func() {
if consumeQuota {
quota := 0
- usingGPT4 := strings.HasPrefix(textRequest.Model, "gpt-4")
- completionRatio := 1
- if usingGPT4 {
+ completionRatio := 1.34 // default for gpt-3
+ if strings.HasPrefix(textRequest.Model, "gpt-4") {
completionRatio = 2
}
if isStream {
responseTokens := countTokenText(streamResponseText, textRequest.Model)
- quota = promptTokens + responseTokens*completionRatio
+ quota = promptTokens + int(float64(responseTokens)*completionRatio)
} else {
- quota = textResponse.Usage.PromptTokens + textResponse.Usage.CompletionTokens*completionRatio
+ quota = textResponse.Usage.PromptTokens + int(float64(textResponse.Usage.CompletionTokens)*completionRatio)
}
quota = int(float64(quota) * ratio)
if ratio != 0 && quota <= 0 {
@@ -261,6 +261,9 @@ func relayHelper(c *gin.Context, relayMode int) *OpenAIErrorWithStatusCode {
}
userId := c.GetInt("id")
model.RecordLog(userId, model.LogTypeConsume, fmt.Sprintf("使用模型 %s 消耗 %d 点额度(模型倍率 %.2f,分组倍率 %.2f", textRequest.Model, quota, modelRatio, groupRatio))
model.UpdateUserUsedQuotaAndRequestCount(userId, quota)
channelId := c.GetInt("channel_id")
model.UpdateChannelUsedQuota(channelId, quota)
}
}()
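
Taken together, the charge per request works out to roughly (promptTokens + completionTokens × completionRatio) × modelRatio × groupRatio, with completionRatio now 1.34 by default and 2 for gpt-4* models; the log line suggests ratio is the product of the model and group ratios, though that product is computed outside the lines shown here. A worked example under assumed token counts:

```go
package main

import "fmt"

func main() {
	// Assumed inputs for illustration only.
	promptTokens := 100
	completionTokens := 200
	modelRatio := 0.75      // gpt-3.5-turbo after this change
	groupRatio := 1.0       // assuming the "default" group has ratio 1
	completionRatio := 1.34 // 2 would be used for gpt-4* models

	// Mirrors the non-stream branch of the deferred quota calculation.
	quota := promptTokens + int(float64(completionTokens)*completionRatio)
	quota = int(float64(quota) * modelRatio * groupRatio)

	fmt.Println(quota) // (100 + 268) * 0.75 = 276
}
```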

View File

@@ -30,15 +30,18 @@ func GetRandomSatisfiedChannel(group string, model string) (*Channel, error) {
func (channel *Channel) AddAbilities() error {
models_ := strings.Split(channel.Models, ",")
+ groups_ := strings.Split(channel.Group, ",")
abilities := make([]Ability, 0, len(models_))
for _, model := range models_ {
- ability := Ability{
- Group: channel.Group,
- Model: model,
- ChannelId: channel.Id,
- Enabled: channel.Status == common.ChannelStatusEnabled,
+ for _, group := range groups_ {
+ ability := Ability{
+ Group: group,
+ Model: model,
+ ChannelId: channel.Id,
+ Enabled: channel.Status == common.ChannelStatusEnabled,
+ }
+ abilities = append(abilities, ability)
}
- abilities = append(abilities, ability)
}
return DB.Create(&abilities).Error
}
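
With the nested loop above, one channel now expands into one Ability row per (model, group) pair instead of one per model. A small self-contained sketch of that expansion (channel values made up; the real code bulk-inserts the slice with DB.Create):

```go
package main

import (
	"fmt"
	"strings"
)

// Minimal stand-ins for the model types, keeping only the fields used here.
type Ability struct {
	Group     string
	Model     string
	ChannelId int
	Enabled   bool
}

type Channel struct {
	Id     int
	Models string
	Group  string
}

func main() {
	channel := Channel{Id: 42, Models: "gpt-3.5-turbo,gpt-4", Group: "default,vip"}

	models_ := strings.Split(channel.Models, ",")
	groups_ := strings.Split(channel.Group, ",")
	abilities := make([]Ability, 0, len(models_))
	for _, model := range models_ {
		for _, group := range groups_ {
			abilities = append(abilities, Ability{Group: group, Model: model, ChannelId: channel.Id, Enabled: true})
		}
	}
	// 2 models x 2 groups -> 4 ability rows.
	fmt.Println(len(abilities)) // 4
}
```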

View File

@@ -1,6 +1,7 @@
package model
import (
"gorm.io/gorm"
"one-api/common"
)
@@ -20,6 +21,7 @@ type Channel struct {
BalanceUpdatedTime int64 `json:"balance_updated_time" gorm:"bigint"`
Models string `json:"models"`
Group string `json:"group" gorm:"type:varchar(32);default:'default'"`
UsedQuota int64 `json:"used_quota" gorm:"bigint;default:0"`
}
func GetAllChannels(startIdx int, num int, selectAll bool) ([]*Channel, error) {
@@ -136,3 +138,10 @@ func UpdateChannelStatusById(id int, status int) {
common.SysError("failed to update channel status: " + err.Error())
}
}
func UpdateChannelUsedQuota(id int, quota int) {
err := DB.Model(&Channel{}).Where("id = ?", id).Update("used_quota", gorm.Expr("used_quota + ?", quota)).Error
if err != nil {
common.SysError("failed to update channel used quota: " + err.Error())
}
}

View File

@@ -22,6 +22,9 @@ const (
)
func RecordLog(userId int, logType int, content string) {
if logType == LogTypeConsume && !common.LogConsumeEnabled {
return
}
log := &Log{
UserId: userId,
CreatedAt: common.GetTimestamp(),

View File

@@ -34,6 +34,7 @@ func InitOptionMap() {
common.OptionMap["TurnstileCheckEnabled"] = strconv.FormatBool(common.TurnstileCheckEnabled)
common.OptionMap["RegisterEnabled"] = strconv.FormatBool(common.RegisterEnabled)
common.OptionMap["AutomaticDisableChannelEnabled"] = strconv.FormatBool(common.AutomaticDisableChannelEnabled)
common.OptionMap["LogConsumeEnabled"] = strconv.FormatBool(common.LogConsumeEnabled)
common.OptionMap["ChannelDisableThreshold"] = strconv.FormatFloat(common.ChannelDisableThreshold, 'f', -1, 64)
common.OptionMap["SMTPServer"] = ""
common.OptionMap["SMTPFrom"] = ""
@@ -134,6 +135,8 @@ func updateOptionMap(key string, value string) (err error) {
common.RegisterEnabled = boolValue
case "AutomaticDisableChannelEnabled":
common.AutomaticDisableChannelEnabled = boolValue
case "LogConsumeEnabled":
common.LogConsumeEnabled = boolValue
}
}
switch key {

View File

@@ -23,6 +23,8 @@ type User struct {
VerificationCode string `json:"verification_code" gorm:"-:all"` // this field is only for Email verification, don't save it to database!
AccessToken string `json:"access_token" gorm:"type:char(32);column:access_token;uniqueIndex"` // this token is for system management
Quota int `json:"quota" gorm:"type:int;default:0"`
UsedQuota int `json:"used_quota" gorm:"type:int;default:0;column:used_quota"` // used quota
RequestCount int `json:"request_count" gorm:"type:int;default:0;"` // request number
Group string `json:"group" gorm:"type:varchar(32);default:'default'"`
}
@@ -262,3 +264,15 @@ func GetRootUserEmail() (email string) {
DB.Model(&User{}).Where("role = ?", common.RoleRootUser).Select("email").Find(&email)
return email
}
func UpdateUserUsedQuotaAndRequestCount(id int, quota int) {
err := DB.Model(&User{}).Where("id = ?", id).Updates(
map[string]interface{}{
"used_quota": gorm.Expr("used_quota + ?", quota),
"request_count": gorm.Expr("request_count + ?", 1),
},
).Error
if err != nil {
common.SysError("Failed to update user used quota and request count: " + err.Error())
}
}

View File

@@ -27,6 +27,13 @@ function renderType(type) {
return <Label basic color={type2label[type].color}>{type2label[type].text}</Label>;
}
function renderBalance(type, balance) {
if (type === 5) {
return <span>¥{(balance / 10000).toFixed(2)}</span>
}
return <span>${balance.toFixed(2)}</span>
}
const ChannelsTable = () => {
const [channels, setChannels] = useState([]);
const [loading, setLoading] = useState(true);
@@ -336,7 +343,7 @@ const ChannelsTable = () => {
<Popup
content={channel.balance_updated_time ? renderTimestamp(channel.balance_updated_time) : '未更新'}
key={channel.id}
- trigger={<span>${channel.balance.toFixed(2)}</span>}
+ trigger={renderBalance(channel.type, channel.balance)}
basic
/>
</Table.Cell>

View File

@@ -34,6 +34,7 @@ const SystemSetting = () => {
TopUpLink: '',
AutomaticDisableChannelEnabled: '',
ChannelDisableThreshold: 0,
LogConsumeEnabled: '',
});
const [originInputs, setOriginInputs] = useState({});
let [loading, setLoading] = useState(false);
@@ -68,6 +69,7 @@ const SystemSetting = () => {
case 'TurnstileCheckEnabled':
case 'RegisterEnabled':
case 'AutomaticDisableChannelEnabled':
case 'LogConsumeEnabled':
value = inputs[key] === 'true' ? 'false' : 'true';
break;
default:
@@ -349,6 +351,12 @@ const SystemSetting = () => {
placeholder='为一个 JSON 文本,键为分组名称,值为倍率'
/>
</Form.Group>
<Form.Checkbox
checked={inputs.LogConsumeEnabled === 'true'}
label='启用额度消费日志记录'
name='LogConsumeEnabled'
onChange={handleInputChange}
/>
<Form.Button onClick={submitOperationConfig}>保存运营设置</Form.Button>
<Divider />
<Header as='h3'>

View File

@@ -4,7 +4,7 @@ import { Link } from 'react-router-dom';
import { API, showError, showSuccess } from '../helpers';
import { ITEMS_PER_PAGE } from '../constants';
- import { renderGroup, renderText } from '../helpers/render';
+ import { renderGroup, renderNumber, renderText } from '../helpers/render';
function renderRole(role) {
switch (role) {
@@ -197,7 +197,7 @@ const UsersTable = () => {
sortUser('quota');
}}
>
- 剩余额度
+ 统计信息
</Table.HeaderCell>
<Table.HeaderCell
style={{ cursor: 'pointer' }}
@@ -241,7 +241,11 @@ const UsersTable = () => {
</Table.Cell>
<Table.Cell>{renderGroup(user.group)}</Table.Cell>
<Table.Cell>{user.email ? renderText(user.email, 30) : '无'}</Table.Cell>
- <Table.Cell>{user.quota}</Table.Cell>
+ <Table.Cell>
+ <Popup content='剩余额度' trigger={<Label>{renderNumber(user.quota)}</Label>} />
+ <Popup content='已用额度' trigger={<Label>{renderNumber(user.used_quota)}</Label>} />
+ <Popup content='请求次数' trigger={<Label>{renderNumber(user.request_count)}</Label>} />
+ </Table.Cell>
<Table.Cell>{renderRole(user.role)}</Table.Cell>
<Table.Cell>{renderStatus(user.status)}</Table.Cell>
<Table.Cell>

View File

@@ -8,12 +8,31 @@ export function renderText(text, limit) {
}
export function renderGroup(group) {
if (group === "") {
return <Label>default</Label>
} else if (group === "vip" || group === "pro") {
return <Label color='yellow'>{group}</Label>
} else if (group === "svip" || group === "premium") {
return <Label color='red'>{group}</Label>
if (group === '') {
return <Label>default</Label>;
}
let groups = group.split(',');
groups.sort();
return <>
{groups.map((group) => {
if (group === 'vip' || group === 'pro') {
return <Label color='yellow'>{group}</Label>;
} else if (group === 'svip' || group === 'premium') {
return <Label color='red'>{group}</Label>;
}
return <Label>{group}</Label>;
})}
</>;
}
export function renderNumber(num) {
if (num >= 1000000000) {
return (num / 1000000000).toFixed(1) + 'B';
} else if (num >= 1000000) {
return (num / 1000000).toFixed(1) + 'M';
} else if (num >= 10000) {
return (num / 1000).toFixed(1) + 'k';
} else {
return num;
}
return <Label>{group}</Label>
}

View File

@@ -15,8 +15,8 @@ const EditChannel = () => {
key: '',
base_url: '',
other: '',
- group: 'default',
models: [],
+ groups: ['default']
};
const [batch, setBatch] = useState(false);
const [inputs, setInputs] = useState(originInputs);
@@ -37,6 +37,11 @@ const EditChannel = () => {
} else {
data.models = data.models.split(",")
}
if (data.group === "") {
data.groups = []
} else {
data.groups = data.group.split(",")
}
setInputs(data);
} else {
showError(message);
@@ -61,7 +66,7 @@ const EditChannel = () => {
const fetchGroups = async () => {
try {
- let res = await API.get(`/api/group`);
+ let res = await API.get(`/api/group/`);
setGroupOptions(res.data.data.map((group) => ({
key: group,
text: group,
@@ -94,6 +99,7 @@ const EditChannel = () => {
}
let res;
localInputs.models = localInputs.models.join(",")
localInputs.group = localInputs.groups.join(",")
if (isEdit) {
res = await API.put(`/api/channel/`, { ...localInputs, id: parseInt(channelId) });
} else {
@@ -185,14 +191,14 @@ const EditChannel = () => {
<Form.Dropdown
label='分组'
placeholder={'请选择分组'}
- name='group'
+ name='groups'
fluid
search
multiple
selection
allowAdditions
additionLabel={'请在系统设置页面编辑分组倍率以添加新的分组:'}
onChange={handleInputChange}
- value={inputs.group}
+ value={inputs.groups}
autoComplete='new-password'
options={groupOptions}
/>

View File

@@ -25,7 +25,7 @@ const EditUser = () => {
};
const fetchGroups = async () => {
try {
- let res = await API.get(`/api/group`);
+ let res = await API.get(`/api/group/`);
setGroupOptions(res.data.data.map((group) => ({
key: group,
text: group,