Compare commits

...

37 Commits

Author SHA1 Message Date
1808837298@qq.com
1568d6481a fix: model prices 2024-05-21 21:07:32 +08:00
1808837298@qq.com
d05a786b4c chore: remove unused code 2024-05-21 20:50:48 +08:00
1808837298@qq.com
01160658a5 chore: remove unused code 2024-05-21 20:01:32 +08:00
Calcium-Ion
f421699e1b Merge pull request #266 from Calcium-Ion/custom-channel
feat: custom channel feature changes
2024-05-21 19:57:51 +08:00
Calcium-Ion
f0c884cb55 Merge pull request #272 from hepeichun/main
fix: remove the issue of displayed model ratios being doubled
2024-05-21 19:57:31 +08:00
1808837298@qq.com
51e0754ade fix: log page error (close #270) 2024-05-21 19:57:50 +08:00
hepeichun
1ab93717bb fix: remove the issue of displayed model ratios being doubled 2024-05-21 18:14:23 +08:00
CaIon
d6c1e3f37c feat: update SettingsMagnification 2024-05-18 23:04:55 +08:00
CaIon
774ce7195c feat: update model ratio 2024-05-18 18:32:10 +08:00
CaIon
dbaa9390d3 feat: update model ratio 2024-05-18 17:51:53 +08:00
CaIon
84da88506f feat: custom channel feature changes (#262) 2024-05-18 16:06:12 +08:00
CaIon
98a991306d chore: update minimax url 2024-05-18 15:15:20 +08:00
CaIon
a3de309175 chore: token counter 2024-05-18 15:14:49 +08:00
Calcium-Ion
de81eba90b Merge pull request #265 from jimmyshjj/original
Update Perplexity and 01AI models
2024-05-18 13:54:33 +08:00
Jiayun Shen
ea0c99ac1b Update Perplexity and 01 models
Update the Perplexity and 01.AI (Yi) models and add the corresponding model prices. For pricing, the price * ratio approach from one-api has been adopted; it is currently applied only to the new models, pending further testing.
2024-05-17 19:37:18 +08:00
CaIon
095121673d chore: update model list 2024-05-16 19:08:37 +08:00
CaIon
039fda91f2 feat: support minimax 2024-05-16 19:06:35 +08:00
CaIon
e0df8bbbda feat: support minimax 2024-05-16 19:03:42 +08:00
CaIon
5e07ff85eb feat: pre to delete custom channel type 2024-05-16 18:31:03 +08:00
CaIon
71dcf43c71 feat: show retry info in logs 2024-05-16 16:41:08 +08:00
CaIon
7003a4ed94 fix: try to fix sqlite database migration (#231) 2024-05-16 16:10:25 +08:00
Calcium-Ion
e3b885b7f3 Merge pull request #257 from p3psi-boo/main
Fix channel testing not going through the model mapping
2024-05-16 15:55:13 +08:00
Calcium-Ion
55962acf7c Merge pull request #259 from jimmyshjj/original
Add Baidu Default Behavior and Update Baidu & 360 Models & Prices
2024-05-16 15:54:31 +08:00
Akarin
d33b802dac Squashed commit of the following:
commit 5a6a0df45dee3dfbf2f65591a79fe5f2b74a49e6
Author: Akarin <jimmyshjj@gmail.com>
Date:   Thu May 16 14:05:28 2024 +0800

    Revert "Update docker-image-amd64.yml"

    This reverts commit 581343a78783bbd779e65b476e125af0e2b64ce5.

commit a0aec1bd030da2c6b25d9541199d598f16813a60
Merge: 5b46c7d 58abb38
Author: Jiayun Shen <jimmyshjj@gmail.com>
Date:   Thu May 16 06:46:51 2024 +0800

    Merge branch 'main' of https://github.com/jimmyshjj/new-api

commit 58abb3864a89294d82f812cda9fe49ccf7e2dd91
Merge: 7d2c026 93858c3
Author: Akarin <jimmyshjj@gmail.com>
Date:   Thu May 16 06:46:00 2024 +0800

    Merge branch 'Calcium-Ion:main' into main

commit 5b46c7dd8e6132d2be3b59c7b2ed6a4b84b93cef
Author: Jiayun Shen <jimmyshjj@gmail.com>
Date:   Thu May 16 06:45:00 2024 +0800

    Update constants.go

    Remove replaced Baidu models

commit 7d2c02679cd90b8b53f4145f83969b980a8c2095
Author: Akarin <jimmyshjj@gmail.com>
Date:   Wed May 15 23:40:50 2024 +0800

    Update adaptor.go - Normalize model name to lowercase

    Baidu's official model names may include mixed case letters, but their model APIs are case-sensitive and accept only lowercase. To ensure compatibility, the default behavior has been updated to convert model names to lowercase before constructing API requests.

commit 6bc168a39d9a6194d66f2f32b175e56de9295b2e
Merge: bb9fecd 910e76a
Author: Jiayun Shen <jimmyshjj@gmail.com>
Date:   Wed May 15 21:51:52 2024 +0800

    Merge branch 'main' of https://github.com/jimmyshjj/new-api

commit 910e76ac94d7f5dca6254abb4d0669cbb762e724
Merge: 581343a ff044de
Author: Akarin <jimmyshjj@gmail.com>
Date:   Wed May 15 21:51:13 2024 +0800

    Merge branch 'Calcium-Ion:main' into main

commit bb9fecd5bf2bd9f1859a4017e7e68f80bdb6685a
Author: Jiayun Shen <jimmyshjj@gmail.com>
Date:   Wed May 15 21:50:08 2024 +0800

    update Baidu and 360 models

    Add Baidu and 360 new models. Add Baidu completion ratio

commit 581343a78783bbd779e65b476e125af0e2b64ce5
Author: Akarin <jimmyshjj@gmail.com>
Date:   Wed May 15 19:41:34 2024 +0800

    Update docker-image-amd64.yml

commit de17e2d95eec80f1eeae66e82dec4e9601cdee43
Merge: 046f653 a3b3e6c
Author: Akarin <jimmyshjj@gmail.com>
Date:   Wed May 15 19:22:09 2024 +0800

    Merge branch 'Calcium-Ion:main' into main

commit 046f6537913ae8ad8ecf21019b64c0379331b3fd
Merge: 4164d51 7b58305
Author: Akarin <jimmyshjj@gmail.com>
Date:   Wed May 15 15:32:38 2024 +0800

    Merge branch 'Calcium-Ion:main' into main

commit 4164d51207808283a18ca2728241fd5cddc4855f
Merge: ef35b07 c222bc8
Author: Akarin <jimmyshjj@gmail.com>
Date:   Wed May 15 11:19:13 2024 +0800

    Merge branch 'Calcium-Ion:main' into main

commit ef35b072824b5095ecd2d1ed7ca9fa11673da2c4
Author: Jiayun Shen <jimmyshjj@gmail.com>
Date:   Tue May 14 19:17:32 2024 +0800

    Update adaptor.go

    Update frequently used model names from Baidu official docs and support custom models
2024-05-16 14:05:44 +08:00
Bubu
63d68ce7bf Merge branch 'Calcium-Ion:main' into main 2024-05-16 08:26:49 +08:00
Boo p3psi
95ac7c343b Fix channel testing not going through the model mapping 2024-05-16 08:24:42 +08:00
CaIon
93858c32d9 feat: improve the model price retrieval logic 2024-05-15 23:56:26 +08:00
CaIon
ff044de42a feat: improve the model pricing page 2024-05-15 20:17:27 +08:00
CaIon
a3b3e6cc38 chore: update InitTokenEncoders (#255) 2024-05-15 16:32:00 +08:00
CaIon
7b5830522a Merge remote-tracking branch 'origin/main' 2024-05-15 14:18:43 +08:00
CaIon
9dcec2772d chore: update tiktoken (#254) 2024-05-15 14:18:29 +08:00
Calcium-Ion
8faf5d2517 Merge pull request #252 from utopeadia/main
add gemini-1.5-flash-latest support
2024-05-15 14:15:16 +08:00
HowieWu
a3a6733fb5 Update constant.go 2024-05-15 10:28:30 +08:00
HowieWu
0f11461af3 Update model-ratio.go 2024-05-15 10:27:30 +08:00
HowieWu
a5b84ba524 Update adaptor.go 2024-05-15 10:25:01 +08:00
Calcium-Ion
c222bc8752 Merge pull request #251 from congyijiu/main
Update constant.go
2024-05-14 21:24:19 +08:00
congyijiu
3dd2a5bfc5 Update constant.go
To prevent the default testing of GPT-4o
2024-05-14 20:47:06 +08:00
36 changed files with 438 additions and 200 deletions

View File

@@ -208,6 +208,7 @@ const (
ChannelTypeLingYiWanWu = 31
ChannelTypeAws = 33
ChannelTypeCohere = 34
ChannelTypeMiniMax = 35
ChannelTypeDummy // this one is only for count, do not add any channel after this
)
@@ -248,4 +249,5 @@ var ChannelBaseURLs = []string{
"", //32
"", //33
"https://api.cohere.ai", //34
"https://api.minimax.chat", //35
}

View File

@@ -5,6 +5,13 @@ import (
"strings"
)
// from songquanpeng/one-api
const (
USD2RMB = 7.3 // tentatively 1 USD = 7.3 RMB
USD = 500 // $0.002 = 1 -> $1 = 500
RMB = USD / USD2RMB
)
// modelRatio
// https://platform.openai.com/docs/models/model-endpoint-compatibility
// https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Blfmc9dlf
@@ -33,79 +40,100 @@ var DefaultModelRatio = map[string]float64{
"gpt-4-turbo-2024-04-09": 5, // $0.01 / 1K tokens
"gpt-3.5-turbo": 0.25, // $0.0015 / 1K tokens
//"gpt-3.5-turbo-0301": 0.75, //deprecated
"gpt-3.5-turbo-0613": 0.75,
"gpt-3.5-turbo-16k": 1.5, // $0.003 / 1K tokens
"gpt-3.5-turbo-16k-0613": 1.5,
"gpt-3.5-turbo-instruct": 0.75, // $0.0015 / 1K tokens
"gpt-3.5-turbo-1106": 0.5, // $0.001 / 1K tokens
"gpt-3.5-turbo-0125": 0.25,
"babbage-002": 0.2, // $0.0004 / 1K tokens
"davinci-002": 1, // $0.002 / 1K tokens
"text-ada-001": 0.2,
"text-babbage-001": 0.25,
"text-curie-001": 1,
"text-davinci-002": 10,
"text-davinci-003": 10,
"text-davinci-edit-001": 10,
"code-davinci-edit-001": 10,
"whisper-1": 15, // $0.006 / minute -> $0.006 / 150 words -> $0.006 / 200 tokens -> $0.03 / 1k tokens
"tts-1": 7.5, // 1k characters -> $0.015
"tts-1-1106": 7.5, // 1k characters -> $0.015
"tts-1-hd": 15, // 1k characters -> $0.03
"tts-1-hd-1106": 15, // 1k characters -> $0.03
"davinci": 10,
"curie": 10,
"babbage": 10,
"ada": 10,
"text-embedding-3-small": 0.01,
"text-embedding-3-large": 0.065,
"text-embedding-ada-002": 0.05,
"text-search-ada-doc-001": 10,
"text-moderation-stable": 0.1,
"text-moderation-latest": 0.1,
"claude-instant-1": 0.4, // $0.8 / 1M tokens
"claude-2.0": 4, // $8 / 1M tokens
"claude-2.1": 4, // $8 / 1M tokens
"claude-3-haiku-20240307": 0.125, // $0.25 / 1M tokens
"claude-3-sonnet-20240229": 1.5, // $3 / 1M tokens
"claude-3-opus-20240229": 7.5, // $15 / 1M tokens
"ERNIE-Bot": 0.8572, // ¥0.012 / 1k tokens
"ERNIE-Bot-turbo": 0.5715, // ¥0.008 / 1k tokens
"ERNIE-Bot-4": 8.572, // ¥0.12 / 1k tokens
"Embedding-V1": 0.1429, // ¥0.002 / 1k tokens
"PaLM-2": 1,
"gemini-pro": 1, // $0.00025 / 1k characters -> $0.001 / 1k tokens
"gemini-pro-vision": 1, // $0.00025 / 1k characters -> $0.001 / 1k tokens
"gemini-1.0-pro-vision-001": 1,
"gemini-1.0-pro-001": 1,
"gemini-1.5-pro-latest": 1,
"gemini-1.0-pro-latest": 1,
"gemini-1.0-pro-vision-latest": 1,
"gemini-ultra": 1,
"chatglm_turbo": 0.3572, // 0.005 / 1k tokens
"chatglm_pro": 0.7143, // 0.01 / 1k tokens
"chatglm_std": 0.3572, // ¥0.005 / 1k tokens
"chatglm_lite": 0.1429, // ¥0.002 / 1k tokens
"glm-4": 7.143, // ¥0.1 / 1k tokens
"glm-4v": 7.143, // ¥0.1 / 1k tokens
"glm-3-turbo": 0.3572,
"qwen-turbo": 0.8572, // ¥0.012 / 1k tokens
"qwen-plus": 10, // ¥0.14 / 1k tokens
"text-embedding-v1": 0.05, // ¥0.0007 / 1k tokens
"SparkDesk-v1.1": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v2.1": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v3.1": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v3.5": 1.2858, // ¥0.018 / 1k tokens
"360GPT_S2_V9": 0.8572, // ¥0.012 / 1k tokens
"embedding-bert-512-v1": 0.0715, // ¥0.001 / 1k tokens
"embedding_s1_v1": 0.0715, // ¥0.001 / 1k tokens
"semantic_similarity_s1_v1": 0.0715, // ¥0.001 / 1k tokens
"hunyuan": 7.143, // ¥0.1 / 1k tokens // https://cloud.tencent.com/document/product/1729/97731#e0e6be58-60c8-469f-bdeb-6c264ce3b4d0
"gpt-3.5-turbo-0613": 0.75,
"gpt-3.5-turbo-16k": 1.5, // $0.003 / 1K tokens
"gpt-3.5-turbo-16k-0613": 1.5,
"gpt-3.5-turbo-instruct": 0.75, // $0.0015 / 1K tokens
"gpt-3.5-turbo-1106": 0.5, // $0.001 / 1K tokens
"gpt-3.5-turbo-0125": 0.25,
"babbage-002": 0.2, // $0.0004 / 1K tokens
"davinci-002": 1, // $0.002 / 1K tokens
"text-ada-001": 0.2,
"text-babbage-001": 0.25,
"text-curie-001": 1,
"text-davinci-002": 10,
"text-davinci-003": 10,
"text-davinci-edit-001": 10,
"code-davinci-edit-001": 10,
"whisper-1": 15, // $0.006 / minute -> $0.006 / 150 words -> $0.006 / 200 tokens -> $0.03 / 1k tokens
"tts-1": 7.5, // 1k characters -> $0.015
"tts-1-1106": 7.5, // 1k characters -> $0.015
"tts-1-hd": 15, // 1k characters -> $0.03
"tts-1-hd-1106": 15, // 1k characters -> $0.03
"davinci": 10,
"curie": 10,
"babbage": 10,
"ada": 10,
"text-embedding-3-small": 0.01,
"text-embedding-3-large": 0.065,
"text-embedding-ada-002": 0.05,
"text-search-ada-doc-001": 10,
"text-moderation-stable": 0.1,
"text-moderation-latest": 0.1,
"claude-instant-1": 0.4, // $0.8 / 1M tokens
"claude-2.0": 4, // $8 / 1M tokens
"claude-2.1": 4, // $8 / 1M tokens
"claude-3-haiku-20240307": 0.125, // $0.25 / 1M tokens
"claude-3-sonnet-20240229": 1.5, // $3 / 1M tokens
"claude-3-opus-20240229": 7.5, // $15 / 1M tokens
"ERNIE-Bot": 0.8572, // ¥0.012 / 1k tokens //renamed to ERNIE-3.5-8K
"ERNIE-Bot-turbo": 0.5715, // ¥0.008 / 1k tokens //renamed to ERNIE-Lite-8K
"ERNIE-Bot-4": 8.572, // ¥0.12 / 1k tokens //renamed to ERNIE-4.0-8K
"ERNIE-4.0-8K": 8.572, // ¥0.12 / 1k tokens
"ERNIE-3.5-8K": 0.8572, // ¥0.012 / 1k tokens
"ERNIE-Speed-8K": 0.2858, // 0.004 / 1k tokens
"ERNIE-Speed-128K": 0.2858, // 0.004 / 1k tokens
"ERNIE-Lite-8K": 0.2143, // ¥0.003 / 1k tokens
"ERNIE-Tiny-8K": 0.0715, // ¥0.001 / 1k tokens
"ERNIE-Character-8K": 0.2858, // ¥0.004 / 1k tokens
"ERNIE-Functions-8K": 0.2858, // ¥0.004 / 1k tokens
"Embedding-V1": 0.1429, // ¥0.002 / 1k tokens
"PaLM-2": 1,
"gemini-pro": 1, // $0.00025 / 1k characters -> $0.001 / 1k tokens
"gemini-pro-vision": 1, // $0.00025 / 1k characters -> $0.001 / 1k tokens
"gemini-1.0-pro-vision-001": 1,
"gemini-1.0-pro-001": 1,
"gemini-1.5-pro-latest": 1,
"gemini-1.5-flash-latest": 1,
"gemini-1.0-pro-latest": 1,
"gemini-1.0-pro-vision-latest": 1,
"gemini-ultra": 1,
"chatglm_turbo": 0.3572, // ¥0.005 / 1k tokens
"chatglm_pro": 0.7143, // ¥0.01 / 1k tokens
"chatglm_std": 0.3572, // ¥0.005 / 1k tokens
"chatglm_lite": 0.1429, // ¥0.002 / 1k tokens
"glm-4": 7.143, // ¥0.1 / 1k tokens
"glm-4v": 7.143, // 0.1 / 1k tokens
"glm-3-turbo": 0.3572,
"qwen-turbo": 0.8572, // 0.012 / 1k tokens
"qwen-plus": 10, // 0.14 / 1k tokens
"text-embedding-v1": 0.05, // 0.0007 / 1k tokens
"SparkDesk-v1.1": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v2.1": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v3.1": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v3.5": 1.2858, // ¥0.018 / 1k tokens
"360GPT_S2_V9": 0.8572, // ¥0.012 / 1k tokens
"360gpt-turbo": 0.0858, // ¥0.0012 / 1k tokens
"360gpt-turbo-responsibility-8k": 0.8572, // ¥0.012 / 1k tokens
"360gpt-pro": 0.8572, // ¥0.012 / 1k tokens
"embedding-bert-512-v1": 0.0715, // ¥0.001 / 1k tokens
"embedding_s1_v1": 0.0715, // ¥0.001 / 1k tokens
"semantic_similarity_s1_v1": 0.0715, // ¥0.001 / 1k tokens
"hunyuan": 7.143, // ¥0.1 / 1k tokens // https://cloud.tencent.com/document/product/1729/97731#e0e6be58-60c8-469f-bdeb-6c264ce3b4d0
// https://platform.lingyiwanwu.com/docs#-计费单元
// USD prices already converted at an exchange rate of 7.2
"yi-34b-chat-0205": 0.018,
"yi-34b-chat-200k": 0.0864,
"yi-vl-plus": 0.0432,
"yi-34b-chat-0205": 0.18,
"yi-34b-chat-200k": 0.864,
"yi-vl-plus": 0.432,
"yi-large": 20.0 / 1000 * RMB,
"yi-medium": 2.5 / 1000 * RMB,
"yi-vision": 6.0 / 1000 * RMB,
"yi-medium-200k": 12.0 / 1000 * RMB,
"yi-spark": 1.0 / 1000 * RMB,
"yi-large-rag": 25.0 / 1000 * RMB,
"yi-large-turbo": 12.0 / 1000 * RMB,
"yi-large-preview": 20.0 / 1000 * RMB,
"yi-large-rag-preview": 25.0 / 1000 * RMB,
"command": 0.5,
"command-nightly": 0.5,
"command-light": 0.5,
@@ -114,6 +142,11 @@ var DefaultModelRatio = map[string]float64{
"command-r-plus ": 1.5,
"deepseek-chat": 0.07,
"deepseek-coder": 0.07,
// Perplexity online models charge extra for search; adjust as needed, the search fee is not included here
"llama-3-sonar-small-32k-chat": 0.2 / 1000 * USD,
"llama-3-sonar-small-32k-online": 0.2 / 1000 * USD,
"llama-3-sonar-large-32k-chat": 1 / 1000 * USD,
"llama-3-sonar-large-32k-online": 1 / 1000 * USD,
}
var DefaultModelPrice = map[string]float64{
@@ -256,7 +289,7 @@ func GetCompletionRatio(name string) float64 {
}
return 4.0 / 3.0
}
if strings.HasPrefix(name, "gpt-4") && name != "gpt-4-all" && name != "gpt-4-gizmo-*" {
if strings.HasPrefix(name, "gpt-4") && !strings.HasSuffix(name, "-all") && !strings.HasSuffix(name, "-gizmo-*") {
if strings.HasPrefix(name, "gpt-4-turbo") || strings.HasSuffix(name, "preview") || strings.HasPrefix(name, "gpt-4o") {
return 3
}
@@ -288,6 +321,15 @@ func GetCompletionRatio(name string) float64 {
if strings.HasPrefix(name, "deepseek") {
return 2
}
if strings.HasPrefix(name, "ERNIE-Speed-") {
return 2
} else if strings.HasPrefix(name, "ERNIE-Lite-") {
return 2
} else if strings.HasPrefix(name, "ERNIE-Character") {
return 2
} else if strings.HasPrefix(name, "ERNIE-Functions") {
return 2
}
switch name {
case "llama2-70b-4096":
return 0.8 / 0.64
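
For reference, a minimal standalone sketch of the ratio convention used in the map above, reusing the USD/RMB constants from this file. The two sample computations mirror entries already present in the hunk (llama-3-sonar-small-32k-chat at 0.2 / 1000 * USD and yi-large at 20.0 / 1000 * RMB); everything else is illustrative arithmetic, not project code:

package main

import "fmt"

const (
	USD2RMB = 7.3 // tentatively 1 USD = 7.3 RMB
	USD     = 500 // ratio 1 == $0.002 per 1K tokens, so $1 per 1K tokens == ratio 500
	RMB     = USD / USD2RMB
)

func main() {
	// $0.2 per 1M tokens, e.g. llama-3-sonar-small-32k-chat:
	fmt.Println(0.2 / 1000 * USD) // 0.1
	// ¥20 per 1M tokens, e.g. yi-large:
	fmt.Println(20.0 / 1000 * RMB) // ≈ 1.37
}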

View File

@@ -258,3 +258,12 @@ func MapToJsonStrFloat(m map[string]float64) string {
}
return string(bytes)
}
func StrToMap(str string) map[string]interface{} {
m := make(map[string]interface{})
err := json.Unmarshal([]byte(str), &m)
if err != nil {
return nil
}
return m
}

View File

@@ -64,7 +64,21 @@ func testChannel(channel *model.Channel, testModel string) (err error, openaiErr
} else {
testModel = adaptor.GetModelList()[0]
}
} else {
modelMapping := *channel.ModelMapping
if modelMapping != "" && modelMapping != "{}" {
modelMap := make(map[string]string)
err := json.Unmarshal([]byte(modelMapping), &modelMap)
if err != nil {
openaiErr := service.OpenAIErrorWrapperLocal(err, "unmarshal_model_mapping_failed", http.StatusInternalServerError).Error
return err, &openaiErr
}
if modelMap[testModel] != "" {
testModel = modelMap[testModel]
}
}
}
request := buildTestRequest()
request.Model = testModel
meta.UpstreamModelName = testModel
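
A minimal sketch of the substitution the test path above now performs, assuming a hypothetical channel whose ModelMapping is {"gpt-4o":"gpt-4o-2024-05-13"}; the names are examples, not part of the change:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	modelMapping := `{"gpt-4o":"gpt-4o-2024-05-13"}` // hypothetical channel.ModelMapping
	testModel := "gpt-4o"

	modelMap := make(map[string]string)
	if err := json.Unmarshal([]byte(modelMapping), &modelMap); err == nil {
		if mapped := modelMap[testModel]; mapped != "" {
			testModel = mapped // the test request is built with the mapped upstream name
		}
	}
	fmt.Println(testModel) // gpt-4o-2024-05-13
}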

View File

@@ -11,6 +11,7 @@ import (
"one-api/relay"
"one-api/relay/channel/ai360"
"one-api/relay/channel/lingyiwanwu"
"one-api/relay/channel/minimax"
"one-api/relay/channel/moonshot"
relaycommon "one-api/relay/common"
relayconstant "one-api/relay/constant"
@@ -79,7 +80,7 @@ func init() {
Id: modelName,
Object: "model",
Created: 1626777600,
OwnedBy: "moonshot",
OwnedBy: moonshot.ChannelName,
Permission: permission,
Root: modelName,
Parent: nil,
@@ -90,7 +91,18 @@ func init() {
Id: modelName,
Object: "model",
Created: 1626777600,
OwnedBy: "lingyiwanwu",
OwnedBy: lingyiwanwu.ChannelName,
Permission: permission,
Root: modelName,
Parent: nil,
})
}
for _, modelName := range minimax.ModelList {
openAIModels = append(openAIModels, dto.OpenAIModels{
Id: modelName,
Object: "model",
Created: 1626777600,
OwnedBy: minimax.ChannelName,
Permission: permission,
Root: modelName,
Parent: nil,
@@ -108,8 +120,8 @@ func init() {
})
}
openAIModelsMap = make(map[string]dto.OpenAIModels)
for _, model := range openAIModels {
openAIModelsMap[model.Id] = model
for _, aiModel := range openAIModels {
openAIModelsMap[aiModel.Id] = aiModel
}
channelId2Models = make(map[int][]string)
for i := 1; i <= common.ChannelTypeDummy; i++ {
@@ -174,8 +186,8 @@ func DashboardListModels(c *gin.Context) {
func RetrieveModel(c *gin.Context) {
modelId := c.Param("model")
if model, ok := openAIModelsMap[modelId]; ok {
c.JSON(200, model)
if aiModel, ok := openAIModelsMap[modelId]; ok {
c.JSON(200, aiModel)
} else {
openAIError := dto.OpenAIError{
Message: fmt.Sprintf("The model '%s' does not exist", modelId),
@@ -191,12 +203,12 @@ func RetrieveModel(c *gin.Context) {
func GetPricing(c *gin.Context) {
userId := c.GetInt("id")
user, _ := model.GetUserById(userId, true)
group, err := model.CacheGetUserGroup(userId)
groupRatio := common.GetGroupRatio("default")
if user != nil {
groupRatio = common.GetGroupRatio(user.Group)
if err != nil {
groupRatio = common.GetGroupRatio(group)
}
pricing := model.GetPricing(user, openAIModels)
pricing := model.GetPricing(group)
c.JSON(200, gin.H{
"success": true,
"data": pricing,

View File

@@ -43,7 +43,7 @@ func Relay(c *gin.Context) {
group := c.GetString("group")
originalModel := c.GetString("original_model")
openaiErr := relayHandler(c, relayMode)
useChannel := []int{channelId}
c.Set("use_channel", []string{fmt.Sprintf("%d", channelId)})
if openaiErr != nil {
go processChannelError(c, channelId, openaiErr)
} else {
@@ -56,7 +56,9 @@ func Relay(c *gin.Context) {
break
}
channelId = channel.Id
useChannel = append(useChannel, channelId)
useChannel := c.GetStringSlice("use_channel")
useChannel = append(useChannel, fmt.Sprintf("%d", channelId))
c.Set("use_channel", useChannel)
common.LogInfo(c.Request.Context(), fmt.Sprintf("using channel #%d to retry (remain times %d)", channel.Id, i))
middleware.SetupContextForSelectedChannel(c, channel, originalModel)
@@ -67,6 +69,7 @@ func Relay(c *gin.Context) {
go processChannelError(c, channelId, openaiErr)
}
}
useChannel := c.GetStringSlice("use_channel")
if len(useChannel) > 1 {
retryLogStr := fmt.Sprintf("重试:%s", strings.Trim(strings.Join(strings.Fields(fmt.Sprint(useChannel)), "->"), "[]"))
common.LogInfo(c.Request.Context(), retryLogStr)
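
A minimal sketch, assuming a *gin.Context named c and hypothetical channel IDs, of the pattern the retry loop above uses to accumulate the channel chain in the request context so it can later be written into the log's admin_info:

package main

import (
	"fmt"
	"net/http/httptest"

	"github.com/gin-gonic/gin"
)

func main() {
	c, _ := gin.CreateTestContext(httptest.NewRecorder())

	// First attempt on a hypothetical channel #12.
	c.Set("use_channel", []string{"12"})

	// A retry on channel #7 reads the stored slice, appends, and writes it back.
	useChannel := c.GetStringSlice("use_channel")
	useChannel = append(useChannel, "7")
	c.Set("use_channel", useChannel)

	// Later (e.g. in postConsumeQuota) the whole chain is read back for admin_info.
	fmt.Println(c.GetStringSlice("use_channel")) // [12 7]
}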

go.mod (6 changed lines)
View File

@@ -17,11 +17,11 @@ require (
github.com/go-playground/validator/v10 v10.19.0
github.com/go-redis/redis/v8 v8.11.5
github.com/golang-jwt/jwt v3.2.2+incompatible
github.com/google/uuid v1.3.0
github.com/google/uuid v1.6.0
github.com/gorilla/websocket v1.5.0
github.com/jinzhu/copier v0.4.0
github.com/linux-do/tiktoken-go v0.7.0
github.com/pkg/errors v0.9.1
github.com/pkoukk/tiktoken-go v0.1.6
github.com/samber/lo v1.39.0
github.com/shirou/gopsutil v3.21.11+incompatible
golang.org/x/crypto v0.21.0
@@ -42,7 +42,7 @@ require (
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311 // indirect
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
github.com/dlclark/regexp2 v1.10.0 // indirect
github.com/dlclark/regexp2 v1.11.0 // indirect
github.com/gabriel-vasile/mimetype v1.4.3 // indirect
github.com/gin-contrib/sse v0.1.0 // indirect
github.com/go-ole/go-ole v1.2.6 // indirect

go.sum (12 changed lines)
View File

@@ -32,8 +32,8 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/dlclark/regexp2 v1.10.0 h1:+/GIL799phkJqYW+3YbOd8LCcbHzT0Pbo8zl70MHsq0=
github.com/dlclark/regexp2 v1.10.0/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8=
github.com/dlclark/regexp2 v1.11.0 h1:G/nrcoOa7ZXlpoa/91N3X7mM3r8eIlMBBJZvsz/mxKI=
github.com/dlclark/regexp2 v1.11.0/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8=
github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4=
github.com/gabriel-vasile/mimetype v1.4.3 h1:in2uUcidCuFcDKtdcBxlR0rJ1+fsokWf+uqxgUFjbI0=
github.com/gabriel-vasile/mimetype v1.4.3/go.mod h1:d8uq/6HKRL6CGdk+aubisF/M5GcPfT7nKyLpA0lbSSk=
@@ -81,8 +81,8 @@ github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaS
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.8 h1:e6P7q2lk1O+qJJb4BtCQXlK8vWEO8V1ZeuEdJNOqZyg=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/context v1.1.1 h1:AWwleXJkX/nhcU9bZSnZoi3h/qGYqQAGhq6zZe/aQW8=
github.com/gorilla/context v1.1.1/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg=
github.com/gorilla/securecookie v1.1.1 h1:miw7JPhV+b/lAHSXz4qd/nN9jRiAFV5FwjeKyCS8BvQ=
@@ -124,6 +124,8 @@ github.com/leodido/go-urn v1.2.0/go.mod h1:+8+nEpDfqqsY+g338gtMEUOtuK+4dEMhiQEgx
github.com/leodido/go-urn v1.2.1/go.mod h1:zt4jvISO2HfUBqxjfIshjdMTYS56ZS/qv49ictyFfxY=
github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
github.com/linux-do/tiktoken-go v0.7.0 h1:Kcm/miJ5gp77srtF8GQWnfq7W9kTaXEuHZg/g9IVEu8=
github.com/linux-do/tiktoken-go v0.7.0/go.mod h1:9Vkdtp0ngi4USmrdSx984iuIQ5IMr0hnUdz4jZZTJb8=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
@@ -148,8 +150,6 @@ github.com/pelletier/go-toml/v2 v2.0.8/go.mod h1:vuYfssBdrU2XDZ9bYydBu6t+6a6PYNc
github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkoukk/tiktoken-go v0.1.6 h1:JF0TlJzhTbrI30wCvFuiw6FzP2+/bR+FIxUdgEAcUsw=
github.com/pkoukk/tiktoken-go v0.1.6/go.mod h1:9NiV+i9mJKGj1rYOT+njbv+ZwA/zJxYdewGl6qVatpg=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=

View File

@@ -142,6 +142,15 @@ func GetUserLogs(userId int, logType int, startTimestamp int64, endTimestamp int
tx = tx.Where("created_at <= ?", endTimestamp)
}
err = tx.Order("id desc").Limit(num).Offset(startIdx).Omit("id").Find(&logs).Error
for i := range logs {
var otherMap map[string]interface{}
otherMap = common.StrToMap(logs[i].Other)
if otherMap != nil {
// delete admin
delete(otherMap, "admin_info")
}
logs[i].Other = common.MapToJsonStr(otherMap)
}
return logs, err
}
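
A minimal sketch of the stripping done above, with standard-library equivalents of the StrToMap helper added earlier in this diff and the existing MapToJsonStr inlined so the snippet runs on its own; the log payload is hypothetical:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical log "other" payload as stored in the database.
	other := `{"model_ratio":5,"admin_info":{"use_channel":["12","7"]}}`

	// common.StrToMap equivalent: unmarshal into a generic map.
	m := make(map[string]interface{})
	if err := json.Unmarshal([]byte(other), &m); err != nil {
		return
	}

	// Remove the admin-only retry info before returning logs to a regular user.
	delete(m, "admin_info")

	// common.MapToJsonStr equivalent: marshal back to a string.
	cleaned, _ := json.Marshal(m)
	fmt.Println(string(cleaned)) // {"model_ratio":5}
}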

View File

@@ -93,12 +93,12 @@ func InitDB() (err error) {
if !common.IsMasterNode {
return nil
}
if common.UsingMySQL {
_, _ = sqlDB.Exec("DROP INDEX idx_channels_key ON channels;") // TODO: delete this line when most users have upgraded
_, _ = sqlDB.Exec("ALTER TABLE midjourneys MODIFY action VARCHAR(40);") // TODO: delete this line when most users have upgraded
_, _ = sqlDB.Exec("ALTER TABLE midjourneys MODIFY progress VARCHAR(30);") // TODO: delete this line when most users have upgraded
_, _ = sqlDB.Exec("ALTER TABLE midjourneys MODIFY status VARCHAR(20);") // TODO: delete this line when most users have upgraded
}
//if common.UsingMySQL {
// _, _ = sqlDB.Exec("DROP INDEX idx_channels_key ON channels;") // TODO: delete this line when most users have upgraded
// _, _ = sqlDB.Exec("ALTER TABLE midjourneys MODIFY action VARCHAR(40);") // TODO: delete this line when most users have upgraded
// _, _ = sqlDB.Exec("ALTER TABLE midjourneys MODIFY progress VARCHAR(30);") // TODO: delete this line when most users have upgraded
// _, _ = sqlDB.Exec("ALTER TABLE midjourneys MODIFY status VARCHAR(20);") // TODO: delete this line when most users have upgraded
//}
common.SysLog("database migration started")
err = db.AutoMigrate(&Channel{})
if err != nil {

View File

@@ -13,16 +13,16 @@ var (
updatePricingLock sync.Mutex
)
func GetPricing(user *User, openAIModels []dto.OpenAIModels) []dto.ModelPricing {
func GetPricing(group string) []dto.ModelPricing {
updatePricingLock.Lock()
defer updatePricingLock.Unlock()
if time.Since(lastGetPricingTime) > time.Minute*1 || len(pricingMap) == 0 {
updatePricing(openAIModels)
updatePricing()
}
if user != nil {
if group != "" {
userPricingMap := make([]dto.ModelPricing, 0)
models := GetGroupModels(user.Group)
models := GetGroupModels(group)
for _, pricing := range pricingMap {
if !common.StringsContains(models, pricing.ModelName) {
pricing.Available = false
@@ -34,28 +34,19 @@ func GetPricing(user *User, openAIModels []dto.OpenAIModels) []dto.ModelPricing
return pricingMap
}
func updatePricing(openAIModels []dto.OpenAIModels) {
modelRatios := common.GetModelRatios()
func updatePricing() {
//modelRatios := common.GetModelRatios()
enabledModels := GetEnabledModels()
allModels := make(map[string]string)
for _, openAIModel := range openAIModels {
if common.StringsContains(enabledModels, openAIModel.Id) {
allModels[openAIModel.Id] = openAIModel.OwnedBy
}
}
for model, _ := range modelRatios {
if common.StringsContains(enabledModels, model) {
if _, ok := allModels[model]; !ok {
allModels[model] = "custom"
}
}
allModels := make(map[string]int)
for i, model := range enabledModels {
allModels[model] = i
}
pricingMap = make([]dto.ModelPricing, 0)
for model, ownerBy := range allModels {
for model, _ := range allModels {
pricing := dto.ModelPricing{
Available: true,
ModelName: model,
OwnerBy: ownerBy,
}
modelPrice, findPrice := common.GetModelPrice(model, false)
if findPrice {

View File

@@ -11,7 +11,7 @@ import (
type Token struct {
Id int `json:"id"`
UserId int `json:"user_id"`
UserId int `json:"user_id" gorm:"index"`
Key string `json:"key" gorm:"type:char(48);uniqueIndex"`
Status int `json:"status" gorm:"default:1"`
Name string `json:"name" gorm:"index" `

View File

@@ -1,6 +1,9 @@
package ai360
var ModelList = []string{
"360gpt-turbo",
"360gpt-turbo-responsibility-8k",
"360gpt-pro",
"360GPT_S2_V9",
"embedding-bert-512-v1",
"embedding_s1_v1",

View File

@@ -9,6 +9,7 @@ import (
"one-api/relay/channel"
relaycommon "one-api/relay/common"
"one-api/relay/constant"
"strings"
)
type Adaptor struct {
@@ -33,8 +34,24 @@ func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
fullRequestURL = "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/eb-instant"
case "BLOOMZ-7B":
fullRequestURL = "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/bloomz_7b1"
case "ERNIE-4.0-8K":
fullRequestURL = "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/completions_pro"
case "ERNIE-3.5-8K":
fullRequestURL = "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/completions"
case "ERNIE-Speed-8K":
fullRequestURL = "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/ernie_speed"
case "ERNIE-Character-8K":
fullRequestURL = "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/ernie-char-8k"
case "ERNIE-Functions-8K":
fullRequestURL = "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/ernie-func-8k"
case "ERNIE-Lite-8K-0922":
fullRequestURL = "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/eb-instant"
case "Yi-34B-Chat":
fullRequestURL = "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/yi_34b_chat"
case "Embedding-V1":
fullRequestURL = "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/embeddings/embedding-v1"
default:
fullRequestURL = "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/" + strings.ToLower(info.UpstreamModelName)
}
var accessToken string
var err error
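
A minimal sketch of the default branch above, which is what the squashed commit earlier in this compare describes as normalizing model names to lowercase; the model name is hypothetical and the resulting path is not claimed to be a real Baidu endpoint:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hypothetical mixed-case model name as a user might configure it.
	upstreamModelName := "ERNIE-Custom-8K"

	// Models without an explicit case entry fall through to the lowercase default.
	fullRequestURL := "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/" +
		strings.ToLower(upstreamModelName)

	fmt.Println(fullRequestURL) // .../wenxinworkshop/chat/ernie-custom-8k
}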

View File

@@ -1,11 +1,19 @@
package baidu
var ModelList = []string{
"ERNIE-Bot-4",
"ERNIE-Bot-8K",
"ERNIE-Bot",
"ERNIE-Speed",
"ERNIE-Bot-turbo",
"ERNIE-3.5-8K",
"ERNIE-4.0-8K",
"ERNIE-Speed-8K",
"ERNIE-Speed-128K",
"ERNIE-Lite-8K",
"ERNIE-Tiny-8K",
"ERNIE-Character-8K",
"ERNIE-Functions-8K",
//"ERNIE-Bot-4",
//"ERNIE-Bot-8K",
//"ERNIE-Bot",
//"ERNIE-Speed",
//"ERNIE-Bot-turbo",
"Embedding-V1",
}

View File

@@ -21,6 +21,7 @@ func (a *Adaptor) Init(info *relaycommon.RelayInfo, request dto.GeneralOpenAIReq
// define a map storing model names and their corresponding API versions
var modelVersionMap = map[string]string{
"gemini-1.5-pro-latest": "v1beta",
"gemini-1.5-flash-latest": "v1beta",
"gemini-ultra": "v1beta",
}

View File

@@ -5,7 +5,7 @@ const (
)
var ModelList = []string{
"gemini-1.0-pro-latest", "gemini-1.0-pro-001", "gemini-1.5-pro-latest", "gemini-ultra",
"gemini-1.0-pro-latest", "gemini-1.0-pro-001", "gemini-1.5-pro-latest", "gemini-1.5-flash-latest", "gemini-ultra",
"gemini-1.0-pro-vision-latest", "gemini-1.0-pro-vision-001",
}

View File

@@ -1,9 +1,9 @@
package lingyiwanwu
// https://platform.lingyiwanwu.com/docs
var ModelList = []string{
"yi-34b-chat-0205",
"yi-34b-chat-200k",
"yi-vl-plus",
}
package lingyiwanwu
// https://platform.lingyiwanwu.com/docs
var ModelList = []string{
"yi-large", "yi-medium", "yi-vision", "yi-medium-200k", "yi-spark", "yi-large-rag", "yi-large-turbo", "yi-large-preview", "yi-large-rag-preview",
}
var ChannelName = "lingyiwanwu"

View File

@@ -0,0 +1,13 @@
package minimax
// https://www.minimaxi.com/document/guides/chat-model/V2?id=65e0736ab2845de20908e2dd
var ModelList = []string{
"abab6.5-chat",
"abab6.5s-chat",
"abab6-chat",
"abab5.5-chat",
"abab5.5s-chat",
}
var ChannelName = "minimax"

View File

@@ -0,0 +1,10 @@
package minimax
import (
"fmt"
relaycommon "one-api/relay/common"
)
func GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
return fmt.Sprintf("%s/v1/text/chatcompletion_v2", info.BaseUrl), nil
}

View File

@@ -5,3 +5,5 @@ var ModelList = []string{
"moonshot-v1-32k",
"moonshot-v1-128k",
}
var ChannelName = "moonshot"

View File

@@ -11,6 +11,7 @@ import (
"one-api/relay/channel"
"one-api/relay/channel/ai360"
"one-api/relay/channel/lingyiwanwu"
"one-api/relay/channel/minimax"
"one-api/relay/channel/moonshot"
relaycommon "one-api/relay/common"
"one-api/service"
@@ -26,7 +27,8 @@ func (a *Adaptor) Init(info *relaycommon.RelayInfo, request dto.GeneralOpenAIReq
}
func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
if info.ChannelType == common.ChannelTypeAzure {
switch info.ChannelType {
case common.ChannelTypeAzure:
// https://learn.microsoft.com/en-us/azure/cognitive-services/openai/chatgpt-quickstart?pivots=rest-api&tabs=command-line#rest-api
requestURL := strings.Split(info.RequestURLPath, "?")[0]
requestURL = fmt.Sprintf("%s?api-version=%s", requestURL, info.ApiVersion)
@@ -37,8 +39,15 @@ func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
requestURL = fmt.Sprintf("/openai/deployments/%s/%s", model_, task)
return relaycommon.GetFullRequestURL(info.BaseUrl, requestURL, info.ChannelType), nil
case common.ChannelTypeMiniMax:
return minimax.GetRequestURL(info)
case common.ChannelTypeCustom:
url := info.BaseUrl
url = strings.Replace(url, "{model}", info.UpstreamModelName, -1)
return url, nil
default:
return relaycommon.GetFullRequestURL(info.BaseUrl, info.RequestURLPath, info.ChannelType), nil
}
return relaycommon.GetFullRequestURL(info.BaseUrl, info.RequestURLPath, info.ChannelType), nil
}
func (a *Adaptor) SetupRequestHeader(c *gin.Context, req *http.Request, info *relaycommon.RelayInfo) error {
@@ -90,11 +99,24 @@ func (a *Adaptor) GetModelList() []string {
return moonshot.ModelList
case common.ChannelTypeLingYiWanWu:
return lingyiwanwu.ModelList
case common.ChannelTypeMiniMax:
return minimax.ModelList
default:
return ModelList
}
}
func (a *Adaptor) GetChannelName() string {
return ChannelName
switch a.ChannelType {
case common.ChannelType360:
return ai360.ChannelName
case common.ChannelTypeMoonshot:
return moonshot.ChannelName
case common.ChannelTypeLingYiWanWu:
return lingyiwanwu.ChannelName
case common.ChannelTypeMiniMax:
return minimax.ChannelName
default:
return ChannelName
}
}

View File

@@ -1,7 +1,6 @@
package openai
var ModelList = []string{
"gpt-4o",
"gpt-3.5-turbo", "gpt-3.5-turbo-0301", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-1106", "gpt-3.5-turbo-0125",
"gpt-3.5-turbo-16k", "gpt-3.5-turbo-16k-0613",
"gpt-3.5-turbo-instruct",
@@ -9,6 +8,7 @@ var ModelList = []string{
"gpt-4-32k", "gpt-4-32k-0314", "gpt-4-32k-0613",
"gpt-4-turbo-preview", "gpt-4-turbo", "gpt-4-turbo-2024-04-09",
"gpt-4-vision-preview",
"gpt-4o", "gpt-4o-2024-05-13",
"text-embedding-ada-002", "text-embedding-3-small", "text-embedding-3-large",
"text-curie-001", "text-babbage-001", "text-ada-001", "text-davinci-002", "text-davinci-003",
"text-moderation-latest", "text-moderation-stable",

View File

@@ -1,7 +1,7 @@
package perplexity
var ModelList = []string{
"sonar-small-chat", "sonar-small-online", "sonar-medium-chat", "sonar-medium-online", "mistral-7b-instruct", "mixtral-8x7b-instruct",
"llama-3-sonar-small-32k-chat", "llama-3-sonar-small-32k-online", "llama-3-sonar-large-32k-chat", "llama-3-sonar-large-32k-online", "llama-3-8b-instruct", "llama-3-70b-instruct", "mixtral-8x7b-instruct",
}
var ChannelName = "perplexity"

View File

@@ -15,7 +15,7 @@ const (
APITypeAIProxyLibrary
APITypeTencent
APITypeGemini
APITypeZhipu_v4
APITypeZhipuV4
APITypeOllama
APITypePerplexity
APITypeAws
@@ -48,7 +48,7 @@ func ChannelType2APIType(channelType int) (int, bool) {
case common.ChannelTypeGemini:
apiType = APITypeGemini
case common.ChannelTypeZhipu_v4:
apiType = APITypeZhipu_v4
apiType = APITypeZhipuV4
case common.ChannelTypeOllama:
apiType = APITypeOllama
case common.ChannelTypePerplexity:

View File

@@ -320,6 +320,9 @@ func postConsumeQuota(ctx *gin.Context, relayInfo *relaycommon.RelayInfo, textRe
other["group_ratio"] = groupRatio
other["completion_ratio"] = completionRatio
other["model_price"] = modelPrice
adminInfo := make(map[string]interface{})
adminInfo["use_channel"] = ctx.GetStringSlice("use_channel")
other["admin_info"] = adminInfo
model.RecordConsumeLog(ctx, relayInfo.UserId, relayInfo.ChannelId, promptTokens, completionTokens, logModel, tokenName, quota, logContent, relayInfo.TokenId, userQuota, int(useTimeSeconds), relayInfo.IsStream, other)
//if quota != 0 {

View File

@@ -41,7 +41,7 @@ func GetAdaptor(apiType int) channel.Adaptor {
return &xunfei.Adaptor{}
case constant.APITypeZhipu:
return &zhipu.Adaptor{}
case constant.APITypeZhipu_v4:
case constant.APITypeZhipuV4:
return &zhipu_4v.Adaptor{}
case constant.APITypeOllama:
return &ollama.Adaptor{}

View File

@@ -20,7 +20,7 @@ func SetApiRouter(router *gin.Engine) {
apiRouter.GET("/about", controller.GetAbout)
//apiRouter.GET("/midjourney", controller.GetMidjourney)
apiRouter.GET("/home_page_content", controller.GetHomePageContent)
apiRouter.GET("/pricing", middleware.CriticalRateLimit(), middleware.TryUserAuth(), controller.GetPricing)
apiRouter.GET("/pricing", middleware.TryUserAuth(), controller.GetPricing)
apiRouter.GET("/verification", middleware.CriticalRateLimit(), middleware.TurnstileCheck(), controller.SendEmailVerification)
apiRouter.GET("/reset_password", middleware.CriticalRateLimit(), middleware.TurnstileCheck(), controller.SendPasswordResetEmail)
apiRouter.POST("/user/reset", middleware.CriticalRateLimit(), controller.ResetPassword)

View File

@@ -4,7 +4,7 @@ import (
"encoding/json"
"errors"
"fmt"
"github.com/pkoukk/tiktoken-go"
"github.com/linux-do/tiktoken-go"
"image"
"log"
"math"
@@ -26,6 +26,7 @@ func InitTokenEncoders() {
}
defaultTokenEncoder = gpt35TokenEncoder
gpt4TokenEncoder, err := tiktoken.EncodingForModel("gpt-4")
gpt4oTokenEncoder, err := tiktoken.EncodingForModel("gpt-4o")
if err != nil {
common.FatalLog(fmt.Sprintf("failed to get gpt-4 token encoder: %s", err.Error()))
}
@@ -33,7 +34,11 @@ func InitTokenEncoders() {
if strings.HasPrefix(model, "gpt-3.5") {
tokenEncoderMap[model] = gpt35TokenEncoder
} else if strings.HasPrefix(model, "gpt-4") {
tokenEncoderMap[model] = gpt4TokenEncoder
if strings.HasPrefix(model, "gpt-4o") {
tokenEncoderMap[model] = gpt4oTokenEncoder
} else {
tokenEncoderMap[model] = gpt4TokenEncoder
}
} else {
tokenEncoderMap[model] = nil
}
@@ -62,7 +67,11 @@ func getTokenNum(tokenEncoder *tiktoken.Tiktoken, text string) int {
return len(tokenEncoder.Encode(text, nil, nil))
}
func getImageToken(imageUrl *dto.MessageImageUrl) (int, error) {
func getImageToken(imageUrl *dto.MessageImageUrl, model string, stream bool) (int, error) {
// TODO: do not count image tokens in non-stream mode
if model == "glm-4v" {
return 1047, nil
}
if imageUrl.Detail == "low" {
return 85, nil
}
@@ -118,7 +127,7 @@ func getImageToken(imageUrl *dto.MessageImageUrl) (int, error) {
func CountTokenChatRequest(request dto.GeneralOpenAIRequest, model string, checkSensitive bool) (int, error, bool) {
tkm := 0
msgTokens, err, b := CountTokenMessages(request.Messages, model, checkSensitive)
msgTokens, err, b := CountTokenMessages(request.Messages, model, request.Stream, checkSensitive)
if err != nil {
return 0, err, b
}
@@ -151,7 +160,7 @@ func CountTokenChatRequest(request dto.GeneralOpenAIRequest, model string, check
return tkm, nil, false
}
func CountTokenMessages(messages []dto.Message, model string, checkSensitive bool) (int, error, bool) {
func CountTokenMessages(messages []dto.Message, model string, stream bool, checkSensitive bool) (int, error, bool) {
//recover when panic
tokenEncoder := getTokenEncoder(model)
// Reference:
@@ -188,19 +197,13 @@ func CountTokenMessages(messages []dto.Message, model string, checkSensitive boo
tokenNum += getTokenNum(tokenEncoder, *message.Name)
}
} else {
var err error
arrayContent := message.ParseContent()
for _, m := range arrayContent {
if m.Type == "image_url" {
var imageTokenNum int
if model == "glm-4v" {
imageTokenNum = 1047
} else {
imageUrl := m.ImageUrl.(dto.MessageImageUrl)
imageTokenNum, err = getImageToken(&imageUrl)
if err != nil {
return 0, err, false
}
imageUrl := m.ImageUrl.(dto.MessageImageUrl)
imageTokenNum, err := getImageToken(&imageUrl, model, stream)
if err != nil {
return 0, err, false
}
tokenNum += imageTokenNum
log.Printf("image token num: %d", imageTokenNum)
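
A minimal standalone sketch of the encoder-selection rule introduced above (gpt-4o models get their own encoder, other gpt-4 models keep the gpt-4 encoder); the string values stand in for the *tiktoken.Tiktoken encoders loaded in InitTokenEncoders so the snippet runs without the library:

package main

import (
	"fmt"
	"strings"
)

// pickEncoder mirrors the branch added in InitTokenEncoders.
func pickEncoder(model string) string {
	switch {
	case strings.HasPrefix(model, "gpt-3.5"):
		return "gpt-3.5 encoder"
	case strings.HasPrefix(model, "gpt-4o"):
		return "gpt-4o encoder" // checked before the generic gpt-4 prefix
	case strings.HasPrefix(model, "gpt-4"):
		return "gpt-4 encoder"
	default:
		return "default encoder"
	}
}

func main() {
	fmt.Println(pickEncoder("gpt-4o-2024-05-13")) // gpt-4o encoder
	fmt.Println(pickEncoder("gpt-4-turbo"))       // gpt-4 encoder
}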

View File

@@ -6,6 +6,7 @@ import {
showError,
showInfo,
showSuccess,
showWarning,
timestamp2string,
} from '../helpers';
@@ -309,6 +310,12 @@ const ChannelsTable = () => {
const setChannelFormat = (channels) => {
for (let i = 0; i < channels.length; i++) {
// if (channels[i].type === 8) {
// showWarning(
// '检测到您使用了“自定义渠道”类型请更换为“OpenAI”渠道类型',
// );
// showWarning('下个版本将不再支持“自定义渠道”类型!');
// }
channels[i].key = '' + channels[i].id;
let test_models = [];
channels[i].models.split(',').forEach((item, index) => {

View File

@@ -294,11 +294,39 @@ const LogsTable = () => {
);
},
},
{
title: '重试',
dataIndex: 'retry',
className: isAdmin() ? 'tableShow' : 'tableHiddle',
render: (text, record, index) => {
let content = '渠道:' + record.channel;
if (record.other !== '') {
let other = JSON.parse(record.other);
if (other === null) {
return <></>
}
if (other.admin_info !== undefined) {
if (
other.admin_info.use_channel !== null &&
other.admin_info.use_channel !== undefined &&
other.admin_info.use_channel !== ''
) {
// channel id array
let useChannel = other.admin_info.use_channel;
let useChannelStr = useChannel.join('->');
content = `渠道:${useChannelStr}`;
}
}
}
return isAdminUser ? <div>{content}</div> : <></>;
},
},
{
title: '详情',
dataIndex: 'content',
render: (text, record, index) => {
if (record.other === '') {
let other = JSON.parse(record.other);
if (other == null) {
return (
<Paragraph
ellipsis={{
@@ -314,7 +342,6 @@ const LogsTable = () => {
</Paragraph>
);
}
let other = JSON.parse(record.other);
let content = renderModelPrice(
record.prompt_tokens,
record.completion_tokens,

View File

@@ -1,7 +1,16 @@
import React, { useContext, useEffect, useState } from 'react';
import React, { useContext, useEffect, useRef, useState } from 'react';
import { API, copy, showError, showSuccess } from '../helpers';
import { Banner, Layout, Modal, Table, Tag, Tooltip } from '@douyinfe/semi-ui';
import {
Banner,
Input,
Layout,
Modal,
Space,
Table,
Tag,
Tooltip,
} from '@douyinfe/semi-ui';
import { stringToColor } from '../helpers/render.js';
import { UserContext } from '../context/User/index.js';
import Text from '@douyinfe/semi-ui/lib/es/typography/text';
@@ -45,6 +54,27 @@ function renderAvailable(available) {
}
const ModelPricing = () => {
const [filteredValue, setFilteredValue] = useState([]);
const compositionRef = useRef({ isComposition: false });
const handleChange = (value) => {
if (compositionRef.current.isComposition) {
return;
}
const newFilteredValue = value ? [value] : [];
setFilteredValue(newFilteredValue);
};
const handleCompositionStart = () => {
compositionRef.current.isComposition = true;
};
const handleCompositionEnd = (event) => {
compositionRef.current.isComposition = false;
const value = event.target.value;
const newFilteredValue = value ? [value] : [];
setFilteredValue(newFilteredValue);
};
const columns = [
{
title: '可用性',
@@ -52,28 +82,28 @@ const ModelPricing = () => {
render: (text, record, index) => {
return renderAvailable(text);
},
sorter: (a, b) => a.available - b.available,
},
{
title: '提供者',
dataIndex: 'owner_by',
render: (text, record, index) => {
return (
<>
<Tag color={stringToColor(text)} size='large'>
{text}
</Tag>
</>
);
},
},
{
title: '模型名称',
title: (
<Space>
<span>模型名称</span>
<Input
placeholder='模糊搜索'
style={{ width: 200 }}
onCompositionStart={handleCompositionStart}
onCompositionEnd={handleCompositionEnd}
onChange={handleChange}
showClear
/>
</Space>
),
dataIndex: 'model_name', // finish_time was used as the dataIndex here
render: (text, record, index) => {
return (
<>
<Tag
color={stringToColor(record.owner_by)}
color={stringToColor(text)}
size='large'
onClick={() => {
copyText(text);
@@ -84,6 +114,8 @@ const ModelPricing = () => {
</>
);
},
onFilter: (value, record) => record.model_name.includes(value),
filteredValue,
},
{
title: '计费类型',
@@ -91,6 +123,7 @@ const ModelPricing = () => {
render: (text, record, index) => {
return renderQuotaType(parseInt(text));
},
sorter: (a, b) => a.quota_type - b.quota_type,
},
{
title: '模型倍率',
@@ -113,11 +146,10 @@ const ModelPricing = () => {
render: (text, record, index) => {
let content = text;
if (record.quota_type === 0) {
let inputRatioPrice = record.model_ratio * 2.0 * record.group_ratio;
let inputRatioPrice = record.model_ratio * 2 * record.group_ratio;
let completionRatioPrice =
record.model_ratio *
record.completion_ratio *
2.0 *
record.group_ratio;
content = (
<>
@@ -150,14 +182,17 @@ const ModelPricing = () => {
return a.quota_type - b.quota_type;
});
// sort by owner_by, openai is max, other use localeCompare
// sort by model_name, start with gpt is max, other use localeCompare
models.sort((a, b) => {
if (a.owner_by === 'openai') {
if (a.model_name.startsWith('gpt') && !b.model_name.startsWith('gpt')) {
return -1;
} else if (b.owner_by === 'openai') {
} else if (
!a.model_name.startsWith('gpt') &&
b.model_name.startsWith('gpt')
) {
return 1;
} else {
return a.owner_by.localeCompare(b.owner_by);
return a.model_name.localeCompare(b.model_name);
}
});

View File

@@ -36,13 +36,6 @@ export const CHANNEL_OPTIONS = [
color: 'teal',
label: 'Azure OpenAI',
},
{
key: 11,
text: 'Google PaLM2',
value: 11,
color: 'orange',
label: 'Google PaLM2',
},
{
key: 24,
text: 'Google Gemini',
@@ -92,10 +85,18 @@ export const CHANNEL_OPTIONS = [
color: 'purple',
label: '智谱 GLM-4V',
},
{
key: 11,
text: 'Google PaLM2',
value: 11,
color: 'orange',
label: 'Google PaLM2',
},
{ key: 25, text: 'Moonshot', value: 25, color: 'green', label: 'Moonshot' },
{ key: 19, text: '360 智脑', value: 19, color: 'blue', label: '360 智脑' },
{ key: 23, text: '腾讯混元', value: 23, color: 'teal', label: '腾讯混元' },
{ key: 31, text: '零一万物', value: 31, color: 'green', label: '零一万物' },
{ key: 35, text: 'MiniMax', value: 35, color: 'green', label: 'MiniMax' },
{ key: 8, text: '自定义渠道', value: 8, color: 'pink', label: '自定义渠道' },
{
key: 22,

View File

@@ -150,7 +150,7 @@ export function renderModelPrice(
completionRatio = 0;
}
let inputRatioPrice = modelRatio * 2.0 * groupRatio;
let completionRatioPrice = modelRatio * completionRatio * 2.0 * groupRatio;
let completionRatioPrice = modelRatio * completionRatio * groupRatio;
let price =
(inputTokens / 1000000) * inputRatioPrice +
(completionTokens / 1000000) * completionRatioPrice;

View File

@@ -301,17 +301,18 @@ const EditChannel = (props) => {
const addCustomModels = () => {
if (customModel.trim() === '') return;
// split the string on commas, then trim whitespace around each model name
const modelArray = customModel.split(',').map(model => model.trim());
const modelArray = customModel.split(',').map((model) => model.trim());
let localModels = [...inputs.models];
let localModelOptions = [...modelOptions];
let hasError = false;
modelArray.forEach(model => {
modelArray.forEach((model) => {
// check that the model does not already exist and the name is non-empty
if (model && !localModels.includes(model)) {
localModels.push(model); // add to the model list
localModelOptions.push({ // add to the dropdown options
localModelOptions.push({
// add to the dropdown options
key: model,
text: model,
value: model,
@@ -330,7 +331,6 @@ const EditChannel = (props) => {
handleInputChange('models', localModels);
};
return (
<>
<SideSheet
@@ -433,11 +433,15 @@ const EditChannel = (props) => {
{inputs.type === 8 && (
<>
<div style={{ marginTop: 10 }}>
<Typography.Text strong>Base URL</Typography.Text>
<Typography.Text strong>
完整的 Base URL,支持变量{'{model}'}
</Typography.Text>
</div>
<Input
name='base_url'
placeholder={'请输入自定义渠道的 Base URL'}
placeholder={
'请输入完整的URL例如https://api.openai.com/v1/chat/completions'
}
onChange={(value) => {
handleInputChange('base_url', value);
}}

View File

@@ -135,7 +135,7 @@ export default function SettingsMagnification(props) {
<Row gutter={16}>
<Col span={16}>
<Form.TextArea
label={'模型补全倍率'}
label={'模型补全倍率(仅对自定义模型有效)'}
extraText={'仅对自定义模型有效'}
placeholder={'为一个 JSON 文本,键为模型名称,值为倍率'}
field={'CompletionRatio'}