Compare commits

..

71 Commits

Author SHA1 Message Date
suziheng
974331a028 feat: gemini file 2025-09-15 17:16:50 +08:00
suziheng
a529eab39e build:dockerfile 2025-09-15 16:18:27 +08:00
suziheng
2174039fce build:dockerfile 2025-09-15 15:58:46 +08:00
suziheng
9ce714ac8d build:dockerfile 2025-09-15 15:33:48 +08:00
suziheng
7d2fc27c0f build:dockerfile 2025-09-15 15:30:56 +08:00
suziheng
4d3add220e build: npm version 2025-09-15 15:17:55 +08:00
suziheng
04de01c798 feat: support openai format enable deepseek v3.1 thinking 2025-09-15 14:43:19 +08:00
suziheng
cf0ce425e6 feat: adjust gemini tools 2025-05-28 15:08:48 +08:00
suziheng
e1ee4fe7d9 feat: support gemini thinking process 2025-05-21 16:19:36 +08:00
suziheng
1e19c333c9 fix: get gemini adapter bug 2025-05-16 10:18:07 +08:00
suziheng
93d54a7ef5 feat: add gpt-4.1 2025-04-23 10:27:45 +08:00
suziheng
9a7967e9bb feat: adjust azure deployment 2025-04-23 10:25:35 +08:00
suziheng
c7742de0fc build: switch from parallel to sequential builds 2025-04-22 11:52:57 +08:00
suziheng
a8a303b4ee feat: restore table schema 2025-04-22 11:12:23 +08:00
suziheng
abf9d113af feat: restore table schema 2025-04-22 11:10:21 +08:00
suziheng
5f5521bc9a feat: update model ratios 2025-04-22 09:27:58 +08:00
suziheng
77267aa1b8 feat: merge main 2025-04-21 16:36:54 +08:00
suziheng
dfcf8868fe Merge branch 'feat/coze-v3' into feat/transcribe 2025-04-21 16:03:35 +08:00
suziheng
c2bd301e0a feat: fix transcribe bug 2025-04-21 15:59:44 +08:00
JustSong
8df4a2670b docs: update ByteDance Doubao model link in README
2025-02-21 19:30:16 +08:00
longkeyy
7ac553541b feat: update openrouter models and price 20250213 (#2084)
2025-02-16 18:01:59 +08:00
longkeyy
a5c517c27a feat: update ali models and price 20250213 (#2086) 2025-02-16 18:01:24 +08:00
JustSong
3f421c4f04 feat: support Gemini openai compatible api 2025-02-16 17:59:39 +08:00
JustSong
1ce6a226f6 chore: update prompt 2025-02-16 17:42:20 +08:00
JustSong
cafd0a0327 feat: add OpenAI compatible channel (close #2091) 2025-02-16 17:38:06 +08:00
JustSong
8b8cd03e85 feat: add balance not supported message in ChannelsTable
2025-02-12 01:20:28 +08:00
JustSong
54c38de813 style: improve code formatting and structure in ChannelsTable and render helpers 2025-02-12 01:15:45 +08:00
JustSong
d6284bf6b0 feat: enhance error handling for utils.js 2025-02-12 00:46:13 +08:00
DobyAsa
df5d2ca93d docs: fix README typo (#2060) 2025-02-12 00:35:29 +08:00
Laisky.Cai
fef7ae048b feat: support gemini-2.0-flash (#2055)
* feat: support gemini-2.0-flash

- Enhance model support by adding new entries and refining checks for system instruction compatibility.
- Update logging display behavior and adjust default quotas for better user experience.
- Revamp pricing structures in the billing system to reflect current model values and deprecate outdated entries.
- Streamline code by replacing hardcoded values with configurations for maintainability.

* feat: add new Gemini 2.0 flash models to adapter and billing ratio

* fix: update GetRequestURL to support gemini-1.5 model in versioning
2025-02-12 00:34:25 +08:00
JustSong
6916debf66 feat: update TestPrompt to specify output format for model name 2025-02-12 00:28:23 +08:00
JustSong
53da209134 feat: add AliBailian adaptor and update channel options 2025-02-12 00:15:43 +08:00
JustSong
517f6ad211 feat: update date range to display at least 7 days of data in Dashboard
2025-02-11 01:48:26 +08:00
JustSong
10aba11f18 style: improve code formatting in ChannelsTable component 2025-02-11 00:38:15 +08:00
JustSong
4d011c5f98 feat: add OpenRouter balance update functionality and improve code formatting 2025-02-11 00:35:06 +08:00
JustSong
eb96aa635e feat: update OpenRouter channel name and add model list for OpenRouter adaptor 2025-02-11 00:20:55 +08:00
JustSong
c715f2bc1d feat: add new models for xai
2025-02-09 21:21:28 +08:00
JustSong
aed090dd55 fix: fix cannot select test model when searching
2025-02-09 19:09:53 +08:00
JustSong
696265774e feat: add MiniMax model constants to the adaptor 2025-02-09 18:55:32 +08:00
JustSong
974729426d feat: refactor Xunfei API version handling and update model list 2025-02-09 18:50:51 +08:00
JustSong
57c1367ec8 feat: add Xunfei V2 channel support and update related configurations 2025-02-09 18:31:54 +08:00
JustSong
44233d5c04 feat: add completion tokens details and reasoning effort fields to model (close #2050) 2025-02-09 18:14:01 +08:00
JustSong
bf45a955c3 fix: update system prompt handling by renaming field and ensuring proper usage in request processing (close #2069) 2025-02-09 14:41:42 +08:00
JustSong
20435fcbfc fix: simplify Docker build configuration by removing unnecessary platform and architecture settings 2025-02-09 14:33:25 +08:00
JustSong
6e7a1c2323 fix: format channel options for consistency and improve tips for user guidance 2025-02-09 12:42:31 +08:00
JustSong
dd65b997dd feat: add Baidu V2 channel support and improve model handling 2025-02-09 12:37:26 +08:00
JustSong
0b6d03d6c6 fix: update channel name from '火山引擎' to '字节火山引擎' for consistency 2025-02-09 12:08:40 +08:00
JustSong
4375246e24 feat: enhance channel options with tips and descriptions for better user guidance 2025-02-09 12:03:31 +08:00
longkeyy
3e3b8230ac fix: add read/write locks for ModelRatio and GroupRatio to prevent concurrent map read/write issues (#2067)
2025-02-09 11:02:45 +08:00
JustSong
07808122a6 fix: fix Debugf not using DebugEnabled (close #2068) 2025-02-09 10:57:22 +08:00
牡丹凤凰
c96895e35b docs: add related project CherryStudio (#2059)
* Update README.md

Add related project CherryStudio

* Update README.en.md

* Update README.ja.md
2025-02-08 00:07:55 +08:00
JustSong
2552c68249 fix: update doubao channel name
2025-02-07 01:51:28 +08:00
JustSong
5c81e40612 fix: update Dockerfile and workflow for improved multi-architecture support 2025-02-07 01:35:53 +08:00
JustSong
0d5318b1b7 revert: fix: revert sqlite build related changes
This reverts commit db65db2807.
2025-02-07 01:15:33 +08:00
JustSong
db65db2807 fix: revert sqlite build related changes 2025-02-07 00:48:23 +08:00
JustSong
e0b7e6a9e2 fix: unify version retrieval in Dockerfile build commands 2025-02-07 00:39:55 +08:00
JustSong
27c2abe80f fix: update Docker setup actions to latest versions 2025-02-07 00:33:15 +08:00
JustSong
2c867251b5 fix: improve code formatting and readability in Dashboard component 2025-02-07 00:23:13 +08:00
JustSong
108111ebd3 fix: exclude preview tags from release workflows 2025-02-07 00:19:23 +08:00
JustSong
293ba93ad6 fix: remove outdated model from ModelList and add new deepseek models 2025-02-07 00:13:57 +08:00
JustSong
faced40d5b fix: update Docker image workflow to conditionally include arm64 platform 2025-02-07 00:06:32 +08:00
suziheng
cbf8413a39 feat: support CozeV3 2025-01-22 21:37:17 +08:00
suziheng
dde3cff708 feat: support CozeV3 2025-01-22 19:49:24 +08:00
suziheng
aca72dc979 feat: support CozeV3 2025-01-22 18:10:25 +08:00
suziheng
533f9853ac feat: support CozeV3 2025-01-22 16:43:40 +08:00
zephyrs988
9746803a2f Merge branch 'songquanpeng:main' into main 2025-01-16 11:58:45 +08:00
suziheng
6eb4e788c7 feat: add migration switch 2024-12-24 10:08:38 +08:00
suziheng
f8fcb1d258 feat: update code 2024-12-24 10:03:50 +08:00
suziheng
9c931b7d43 modify dockerfile 2024-12-02 11:08:36 +08:00
suziheng
4882fd60ab modify dockerfile 2024-12-02 11:01:30 +08:00
suziheng
a9f42abb59 set migration switch 2024-12-02 10:32:39 +08:00
72 changed files with 1955 additions and 455 deletions

View File

@@ -32,10 +32,10 @@ jobs:
git describe --tags > VERSION
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
uses: docker/setup-buildx-action@v3
- name: Log in to Docker Hub
uses: docker/login-action@v2
@@ -62,8 +62,7 @@ jobs:
uses: docker/build-push-action@v3
with:
context: .
# platforms: linux/amd64,linux/arm64
platforms: linux/amd64 # TODO disable arm64 for now, because it cause error
platforms: ${{ contains(github.ref, 'alpha') && 'linux/amd64' || 'linux/amd64' }}
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}

View File

@@ -7,6 +7,7 @@ on:
tags:
- 'v*.*.*'
- '!*-alpha*'
- '!*-preview*'
workflow_dispatch:
inputs:
name:

View File

@@ -7,6 +7,7 @@ on:
tags:
- 'v*.*.*'
- '!*-alpha*'
- '!*-preview*'
workflow_dispatch:
inputs:
name:

View File

@@ -7,6 +7,7 @@ on:
tags:
- 'v*.*.*'
- '!*-alpha*'
- '!*-preview*'
workflow_dispatch:
inputs:
name:

View File

@@ -9,23 +9,22 @@ RUN npm install --prefix /web/default & \
npm install --prefix /web/air & \
wait
RUN DISABLE_ESLINT_PLUGIN='true' REACT_APP_VERSION=$(cat /web/default/VERSION) npm run build --prefix /web/default & \
DISABLE_ESLINT_PLUGIN='true' REACT_APP_VERSION=$(cat /web/berry/VERSION) npm run build --prefix /web/berry & \
DISABLE_ESLINT_PLUGIN='true' REACT_APP_VERSION=$(cat /web/air/VERSION) npm run build --prefix /web/air & \
RUN DISABLE_ESLINT_PLUGIN='true' REACT_APP_VERSION=$(cat ./VERSION) npm run build --prefix /web/default & \
DISABLE_ESLINT_PLUGIN='true' REACT_APP_VERSION=$(cat ./VERSION) npm run build --prefix /web/berry & \
DISABLE_ESLINT_PLUGIN='true' REACT_APP_VERSION=$(cat ./VERSION) npm run build --prefix /web/air & \
wait
FROM golang AS builder2
FROM golang:alpine AS builder2
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
sqlite3 libsqlite3-dev \
&& rm -rf /var/lib/apt/lists/*
RUN apk add --no-cache \
gcc \
musl-dev \
sqlite-dev \
build-base
ENV GO111MODULE=on \
CGO_ENABLED=1 \
GOOS=linux \
CGO_CFLAGS="-I/usr/include" \
CGO_LDFLAGS="-L/usr/lib"
GOOS=linux
WORKDIR /build
@@ -35,14 +34,11 @@ RUN go mod download
COPY . .
COPY --from=builder /web/build ./web/build
RUN go build -trimpath -ldflags "-s -w -X 'github.com/songquanpeng/one-api/common.Version=$(cat VERSION)'" -o one-api
RUN go build -trimpath -ldflags "-s -w -X 'github.com/songquanpeng/one-api/common.Version=$(cat VERSION)' -linkmode external -extldflags '-static'" -o one-api
# Final runtime image
FROM ubuntu:22.04
FROM alpine:latest
RUN apt-get update && apt-get install -y --no-install-recommends \
ca-certificates tzdata bash \
&& rm -rf /var/lib/apt/lists/*
RUN apk add --no-cache ca-certificates tzdata
COPY --from=builder2 /build/one-api /

View File

@@ -315,6 +315,7 @@ If the channel ID is not provided, load balancing will be used to distribute the
* [FastGPT](https://github.com/labring/FastGPT): Knowledge question answering system based on the LLM
* [VChart](https://github.com/VisActor/VChart): More than just a cross-platform charting library, but also an expressive data storyteller.
* [VMind](https://github.com/VisActor/VMind): Not just automatic, but also fantastic. Open-source solution for intelligent visualization.
* * [CherryStudio](https://github.com/CherryHQ/cherry-studio): A cross-platform AI client that integrates multiple service providers and supports local knowledge base management.
## Note
This project is an open-source project. Please use it in compliance with OpenAI's [Terms of Use](https://openai.com/policies/terms-of-use) and **applicable laws and regulations**. It must not be used for illegal purposes.

View File

@@ -287,8 +287,8 @@ graph LR
+ インターフェイスアドレスと API Key が正しいか再確認してください。
## 関連プロジェクト
[FastGPT](https://github.com/labring/FastGPT): LLM に基づく知識質問応答システム
* [FastGPT](https://github.com/labring/FastGPT): LLM に基づく知識質問応答システム
* [CherryStudio](https://github.com/CherryHQ/cherry-studio): マルチプラットフォーム対応のAIクライアント。複数のサービスプロバイダーを統合管理し、ローカル知識ベースをサポートします。
## 注
本プロジェクトはオープンソースプロジェクトです。OpenAI の[利用規約](https://openai.com/policies/terms-of-use)および**適用される法令**を遵守してご利用ください。違法な目的での利用はご遠慮ください。

View File

@@ -72,7 +72,7 @@ _✨ 通过标准的 OpenAI API 格式访问所有的大模型,开箱即用
+ [x] [Anthropic Claude 系列模型](https://anthropic.com) (支持 AWS Claude)
+ [x] [Google PaLM2/Gemini 系列模型](https://developers.generativeai.google)
+ [x] [Mistral 系列模型](https://mistral.ai/)
+ [x] [字节跳动豆包大模型](https://console.volcengine.com/ark/region:ark+cn-beijing/model)
+ [x] [字节跳动豆包大模型(火山引擎)](https://www.volcengine.com/experience/ark?utm_term=202502dsinvite&ac=DSASUQY5&rc=2QXCA1VI)
+ [x] [百度文心一言系列模型](https://cloud.baidu.com/doc/WENXINWORKSHOP/index.html)
+ [x] [阿里通义千问系列模型](https://help.aliyun.com/document_detail/2400395.html)
+ [x] [讯飞星火认知大模型](https://www.xfyun.cn/doc/spark/Web.html)
@@ -115,7 +115,7 @@ _✨ 通过标准的 OpenAI API 格式访问所有的大模型,开箱即用
19. 支持丰富的**自定义**设置,
1. 支持自定义系统名称logo 以及页脚。
2. 支持自定义首页和关于页面,可以选择使用 HTML & Markdown 代码进行自定义,或者使用一个单独的网页通过 iframe 嵌入。
20. 支持通过系统访问令牌调用管理 API进而**在无需二开的情况下扩展和自定义** One API 的功能,详情请参考此处 [API 文档](./docs/API.md)。
20. 支持通过系统访问令牌调用管理 API进而**在无需二开的情况下扩展和自定义** One API 的功能,详情请参考此处 [API 文档](./docs/API.md)。
21. 支持 Cloudflare Turnstile 用户校验。
22. 支持用户管理,支持**多种用户登录注册方式**
+ 邮箱登录注册(支持注册邮箱白名单)以及通过邮箱进行密码重置。
@@ -469,6 +469,7 @@ https://openai.justsong.cn
* [ChatGPT Next Web](https://github.com/Yidadaa/ChatGPT-Next-Web): 一键拥有你自己的跨平台 ChatGPT 应用
* [VChart](https://github.com/VisActor/VChart): 不只是开箱即用的多端图表库,更是生动灵活的数据故事讲述者。
* [VMind](https://github.com/VisActor/VMind): 不仅自动,还很智能。开源智能可视化解决方案。
* [CherryStudio](https://github.com/CherryHQ/cherry-studio): 全平台支持的AI客户端, 多服务商集成管理、本地知识库支持。
## 注意

View File

@@ -163,4 +163,4 @@ var UserContentRequestProxy = env.String("USER_CONTENT_REQUEST_PROXY", "")
var UserContentRequestTimeout = env.Int("USER_CONTENT_REQUEST_TIMEOUT", 30)
var EnforceIncludeUsage = env.Bool("ENFORCE_INCLUDE_USAGE", false)
var TestPrompt = env.String("TEST_PROMPT", "Print your model name exactly and do not output without any other text.")
var TestPrompt = env.String("TEST_PROMPT", "Output only your specific model name with no additional text.")

common/file/file.go (new file, 23 lines)
View File

@@ -0,0 +1,23 @@
package file
import (
"bytes"
"encoding/base64"
"net/http"
)
func GetFileFromUrl(url string) (mimeType string, data string, err error) {
resp, err := http.Get(url)
if err != nil {
return
}
defer resp.Body.Close()
buffer := bytes.NewBuffer(nil)
_, err = buffer.ReadFrom(resp.Body)
if err != nil {
return
}
mimeType = resp.Header.Get("Content-Type")
data = base64.StdEncoding.EncodeToString(buffer.Bytes())
return
}
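
For context, a minimal standalone sketch of calling the new file helper, assuming the module's import path; the URL and printed fields are illustrative only.

package main

import (
	"fmt"

	"github.com/songquanpeng/one-api/common/file"
)

func main() {
	// Fetch a remote file: the helper returns the Content-Type header and the
	// body base64-encoded, ready to embed as Gemini inline data.
	mimeType, data, err := file.GetFileFromUrl("https://example.com/sample.pdf") // placeholder URL
	if err != nil {
		fmt.Println("fetch failed:", err)
		return
	}
	fmt.Println("mime type:", mimeType)
	fmt.Println("base64 payload length:", len(data))
}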

View File

@@ -93,6 +93,9 @@ func Error(ctx context.Context, msg string) {
}
func Debugf(ctx context.Context, format string, a ...any) {
if !config.DebugEnabled {
return
}
logHelper(ctx, loggerDEBUG, fmt.Sprintf(format, a...))
}

common/utils/array.go (new file, 13 lines)
View File

@@ -0,0 +1,13 @@
package utils
func DeDuplication(slice []string) []string {
m := make(map[string]bool)
for _, v := range slice {
m[v] = true
}
result := make([]string, 0, len(m))
for v := range m {
result = append(result, v)
}
return result
}
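
A small usage sketch of the new helper (the model list is illustrative). Because the result is rebuilt from a map, the input order is not preserved.

package main

import (
	"fmt"

	"github.com/songquanpeng/one-api/common/utils"
)

func main() {
	models := []string{"gpt-4o", "deepseek-r1", "gpt-4o", "gpt-4o-mini", "deepseek-r1"}
	unique := utils.DeDuplication(models)
	fmt.Println(unique) // three entries, order not guaranteed
}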

View File

@@ -112,6 +112,13 @@ type DeepSeekUsageResponse struct {
} `json:"balance_infos"`
}
type OpenRouterResponse struct {
Data struct {
TotalCredits float64 `json:"total_credits"`
TotalUsage float64 `json:"total_usage"`
} `json:"data"`
}
// GetAuthHeader get auth header
func GetAuthHeader(token string) http.Header {
h := http.Header{}
@@ -285,6 +292,22 @@ func updateChannelDeepSeekBalance(channel *model.Channel) (float64, error) {
return balance, nil
}
func updateChannelOpenRouterBalance(channel *model.Channel) (float64, error) {
url := "https://openrouter.ai/api/v1/credits"
body, err := GetResponseBody("GET", url, channel, GetAuthHeader(channel.Key))
if err != nil {
return 0, err
}
response := OpenRouterResponse{}
err = json.Unmarshal(body, &response)
if err != nil {
return 0, err
}
balance := response.Data.TotalCredits - response.Data.TotalUsage
channel.UpdateBalance(balance)
return balance, nil
}
func updateChannelBalance(channel *model.Channel) (float64, error) {
baseURL := channeltype.ChannelBaseURLs[channel.Type]
if channel.GetBaseURL() == "" {
@@ -313,6 +336,8 @@ func updateChannelBalance(channel *model.Channel) (float64, error) {
return updateChannelSiliconFlowBalance(channel)
case channeltype.DeepSeek:
return updateChannelDeepSeekBalance(channel)
case channeltype.OpenRouter:
return updateChannelOpenRouterBalance(channel)
default:
return 0, errors.New("尚未实现")
}
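
For reference, a standalone sketch of how the new OpenRouter credits response is decoded and the balance derived; the struct is copied from the diff and the JSON payload is made up for illustration.

package main

import (
	"encoding/json"
	"fmt"
)

// Copy of the OpenRouterResponse shape added above.
type OpenRouterResponse struct {
	Data struct {
		TotalCredits float64 `json:"total_credits"`
		TotalUsage   float64 `json:"total_usage"`
	} `json:"data"`
}

func main() {
	// Illustrative payload shaped like GET https://openrouter.ai/api/v1/credits.
	raw := `{"data":{"total_credits":50,"total_usage":42.5}}`
	var r OpenRouterResponse
	if err := json.Unmarshal([]byte(raw), &r); err != nil {
		panic(err)
	}
	fmt.Println("remaining balance:", r.Data.TotalCredits-r.Data.TotalUsage) // 7.5
}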

View File

@@ -153,6 +153,7 @@ func testChannel(ctx context.Context, channel *model.Channel, request *relaymode
rawResponse := w.Body.String()
_, responseMessage, err = parseTestResponse(rawResponse)
if err != nil {
logger.SysError(fmt.Sprintf("failed to parse error: %s, \nresponse: %s", err.Error(), rawResponse))
return "", err, nil
}
result := w.Result()

View File

@@ -2,10 +2,13 @@ package model
import (
"context"
"github.com/songquanpeng/one-api/common"
"gorm.io/gorm"
"sort"
"strings"
"gorm.io/gorm"
"github.com/songquanpeng/one-api/common"
"github.com/songquanpeng/one-api/common/utils"
)
type Ability struct {
@@ -49,6 +52,7 @@ func GetRandomSatisfiedChannel(group string, model string, ignoreFirstPriority b
func (channel *Channel) AddAbilities() error {
models_ := strings.Split(channel.Models, ",")
models_ = utils.DeDuplication(models_)
groups_ := strings.Split(channel.Group, ",")
abilities := make([]Ability, 0, len(models_))
for _, model := range models_ {

View File

@@ -135,30 +135,32 @@ func InitDB() {
}
func migrateDB() error {
var err error
if err = DB.AutoMigrate(&Channel{}); err != nil {
return err
}
if err = DB.AutoMigrate(&Token{}); err != nil {
return err
}
if err = DB.AutoMigrate(&User{}); err != nil {
return err
}
if err = DB.AutoMigrate(&Option{}); err != nil {
return err
}
if err = DB.AutoMigrate(&Redemption{}); err != nil {
return err
}
if err = DB.AutoMigrate(&Ability{}); err != nil {
return err
}
if err = DB.AutoMigrate(&Log{}); err != nil {
return err
}
if err = DB.AutoMigrate(&Channel{}); err != nil {
return err
if env.Bool("StartSqlMigration", false) {
var err error
if err = DB.AutoMigrate(&Channel{}); err != nil {
return err
}
if err = DB.AutoMigrate(&Token{}); err != nil {
return err
}
if err = DB.AutoMigrate(&User{}); err != nil {
return err
}
if err = DB.AutoMigrate(&Option{}); err != nil {
return err
}
if err = DB.AutoMigrate(&Redemption{}); err != nil {
return err
}
if err = DB.AutoMigrate(&Ability{}); err != nil {
return err
}
if err = DB.AutoMigrate(&Log{}); err != nil {
return err
}
if err = DB.AutoMigrate(&Channel{}); err != nil {
return err
}
}
return nil
}
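
A minimal sketch of the new migration gate, using a simplified stand-in for the project's env.Bool helper (assumed here to parse the variable with strconv.ParseBool): AutoMigrate only runs when StartSqlMigration is set to a truthy value.

package main

import (
	"fmt"
	"os"
	"strconv"
)

// Simplified stand-in for env.Bool; the real helper may differ in details.
func envBool(key string, defaultValue bool) bool {
	v, err := strconv.ParseBool(os.Getenv(key))
	if err != nil {
		return defaultValue
	}
	return v
}

func main() {
	if envBool("StartSqlMigration", false) {
		fmt.Println("running AutoMigrate for all tables")
		return
	}
	fmt.Println("skipping schema migration")
}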

View File

@@ -64,6 +64,9 @@ func GetAdaptor(apiType int) adaptor.Adaptor {
return &proxy.Adaptor{}
case apitype.Replicate:
return &replicate.Adaptor{}
case apitype.CozeV3:
return &coze.AdaptorV3{}
}
return nil
}

View File

@@ -14,10 +14,14 @@ var ModelList = []string{
"qwen2-72b-instruct", "qwen2-57b-a14b-instruct", "qwen2-7b-instruct", "qwen2-1.5b-instruct", "qwen2-0.5b-instruct",
"qwen1.5-110b-chat", "qwen1.5-72b-chat", "qwen1.5-32b-chat", "qwen1.5-14b-chat", "qwen1.5-7b-chat", "qwen1.5-1.8b-chat", "qwen1.5-0.5b-chat",
"qwen-72b-chat", "qwen-14b-chat", "qwen-7b-chat", "qwen-1.8b-chat", "qwen-1.8b-longcontext-chat",
"qvq-72b-preview",
"qwen2.5-vl-72b-instruct", "qwen2.5-vl-7b-instruct", "qwen2.5-vl-2b-instruct", "qwen2.5-vl-1b-instruct", "qwen2.5-vl-0.5b-instruct",
"qwen2-vl-7b-instruct", "qwen2-vl-2b-instruct", "qwen-vl-v1", "qwen-vl-chat-v1",
"qwen2-audio-instruct", "qwen-audio-chat",
"qwen2.5-math-72b-instruct", "qwen2.5-math-7b-instruct", "qwen2.5-math-1.5b-instruct", "qwen2-math-72b-instruct", "qwen2-math-7b-instruct", "qwen2-math-1.5b-instruct",
"qwen2.5-coder-32b-instruct", "qwen2.5-coder-14b-instruct", "qwen2.5-coder-7b-instruct", "qwen2.5-coder-3b-instruct", "qwen2.5-coder-1.5b-instruct", "qwen2.5-coder-0.5b-instruct",
"text-embedding-v1", "text-embedding-v3", "text-embedding-v2", "text-embedding-async-v2", "text-embedding-async-v1",
"ali-stable-diffusion-xl", "ali-stable-diffusion-v1.5", "wanx-v1",
"qwen-mt-plus", "qwen-mt-turbo",
"deepseek-r1", "deepseek-v3", "deepseek-r1-distill-qwen-1.5b", "deepseek-r1-distill-qwen-7b", "deepseek-r1-distill-qwen-14b", "deepseek-r1-distill-qwen-32b", "deepseek-r1-distill-llama-8b", "deepseek-r1-distill-llama-70b",
}

View File

@@ -36,6 +36,12 @@ func ConvertRequest(request model.GeneralOpenAIRequest) *ChatRequest {
enableSearch = true
aliModel = strings.TrimSuffix(aliModel, EnableSearchModelSuffix)
}
enableThinking := false
if request.ReasoningEffort != nil {
enableThinking = true
}
request.TopP = helper.Float64PtrMax(request.TopP, 0.9999)
return &ChatRequest{
Model: aliModel,
@@ -52,6 +58,7 @@ func ConvertRequest(request model.GeneralOpenAIRequest) *ChatRequest {
TopK: request.TopK,
ResultFormat: "message",
Tools: request.Tools,
EnableThinking: enableThinking,
},
}
}

View File

@@ -25,6 +25,7 @@ type Parameters struct {
Temperature *float64 `json:"temperature,omitempty"`
ResultFormat string `json:"result_format,omitempty"`
Tools []model.Tool `json:"tools,omitempty"`
EnableThinking bool `json:"enable_thinking,omitempty"`
}
type ChatRequest struct {

View File

@@ -0,0 +1,21 @@
package alibailian
// https://help.aliyun.com/zh/model-studio/getting-started/models
var ModelList = []string{
"qwen-turbo",
"qwen-plus",
"qwen-long",
"qwen-max",
"qwen-coder-plus",
"qwen-coder-plus-latest",
"qwen-coder-turbo",
"qwen-coder-turbo-latest",
"qwen-mt-plus",
"qwen-mt-turbo",
"qwq-32b-preview",
"deepseek-r1",
"deepseek-v3",
"deepseek-v3.1",
}

View File

@@ -0,0 +1,19 @@
package alibailian
import (
"fmt"
"github.com/songquanpeng/one-api/relay/meta"
"github.com/songquanpeng/one-api/relay/relaymode"
)
func GetRequestURL(meta *meta.Meta) (string, error) {
switch meta.Mode {
case relaymode.ChatCompletions:
return fmt.Sprintf("%s/compatible-mode/v1/chat/completions", meta.BaseURL), nil
case relaymode.Embeddings:
return fmt.Sprintf("%s/compatible-mode/v1/embeddings", meta.BaseURL), nil
default:
}
return "", fmt.Errorf("unsupported relay mode %d for ali bailian", meta.Mode)
}

View File

@@ -0,0 +1,30 @@
package baiduv2
// https://console.bce.baidu.com/support/?_=1692863460488&timestamp=1739074632076#/api?product=QIANFAN&project=%E5%8D%83%E5%B8%86ModelBuilder&parent=%E5%AF%B9%E8%AF%9DChat%20V2&api=v2%2Fchat%2Fcompletions&method=post
// https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Fm2vrveyu#%E6%94%AF%E6%8C%81%E6%A8%A1%E5%9E%8B%E5%88%97%E8%A1%A8
var ModelList = []string{
"ernie-4.0-8k-latest",
"ernie-4.0-8k-preview",
"ernie-4.0-8k",
"ernie-4.0-turbo-8k-latest",
"ernie-4.0-turbo-8k-preview",
"ernie-4.0-turbo-8k",
"ernie-4.0-turbo-128k",
"ernie-3.5-8k-preview",
"ernie-3.5-8k",
"ernie-3.5-128k",
"ernie-speed-8k",
"ernie-speed-128k",
"ernie-speed-pro-128k",
"ernie-lite-8k",
"ernie-lite-pro-128k",
"ernie-tiny-8k",
"ernie-char-8k",
"ernie-char-fiction-8k",
"ernie-novel-8k",
"deepseek-v3",
"deepseek-r1",
"deepseek-r1-distill-qwen-32b",
"deepseek-r1-distill-qwen-14b",
}

View File

@@ -0,0 +1,17 @@
package baiduv2
import (
"fmt"
"github.com/songquanpeng/one-api/relay/meta"
"github.com/songquanpeng/one-api/relay/relaymode"
)
func GetRequestURL(meta *meta.Meta) (string, error) {
switch meta.Mode {
case relaymode.ChatCompletions:
return fmt.Sprintf("%s/v2/chat/completions", meta.BaseURL), nil
default:
}
return "", fmt.Errorf("unsupported relay mode %d for baidu v2", meta.Mode)
}

View File

@@ -0,0 +1,75 @@
package coze
import (
"errors"
"fmt"
"github.com/gin-gonic/gin"
"github.com/songquanpeng/one-api/relay/adaptor"
"github.com/songquanpeng/one-api/relay/adaptor/openai"
"github.com/songquanpeng/one-api/relay/meta"
"github.com/songquanpeng/one-api/relay/model"
"io"
"net/http"
)
type AdaptorV3 struct {
meta *meta.Meta
}
func (a *AdaptorV3) Init(meta *meta.Meta) {
a.meta = meta
}
func (a *AdaptorV3) GetRequestURL(meta *meta.Meta) (string, error) {
return fmt.Sprintf("%s/v3/chat", meta.BaseURL), nil
}
func (a *AdaptorV3) SetupRequestHeader(c *gin.Context, req *http.Request, meta *meta.Meta) error {
adaptor.SetupCommonRequestHeader(c, req, meta)
req.Header.Set("Authorization", "Bearer "+meta.APIKey)
return nil
}
func (a *AdaptorV3) ConvertRequest(c *gin.Context, relayMode int, request *model.GeneralOpenAIRequest) (any, error) {
if request == nil {
return nil, errors.New("request is nil")
}
request.User = a.meta.Config.UserID
return V3ConvertRequest(*request), nil
}
func (a *AdaptorV3) ConvertImageRequest(request *model.ImageRequest) (any, error) {
if request == nil {
return nil, errors.New("request is nil")
}
return request, nil
}
func (a *AdaptorV3) DoRequest(c *gin.Context, meta *meta.Meta, requestBody io.Reader) (*http.Response, error) {
return adaptor.DoRequestHelper(a, c, meta, requestBody)
}
func (a *AdaptorV3) DoResponse(c *gin.Context, resp *http.Response, meta *meta.Meta) (usage *model.Usage, err *model.ErrorWithStatusCode) {
var responseText *string
if meta.IsStream {
err, responseText = V3StreamHandler(c, resp)
} else {
err, responseText = V3Handler(c, resp, meta.PromptTokens, meta.ActualModelName)
}
if responseText != nil {
usage = openai.ResponseText2Usage(*responseText, meta.ActualModelName, meta.PromptTokens)
} else {
usage = &model.Usage{}
}
usage.PromptTokens = meta.PromptTokens
usage.TotalTokens = usage.PromptTokens + usage.CompletionTokens
return
}
func (a *AdaptorV3) GetModelList() []string {
return ModelList
}
func (a *AdaptorV3) GetChannelName() string {
return "CozeV3"
}

View File

@@ -1,6 +1,9 @@
package coze
import "github.com/songquanpeng/one-api/relay/adaptor/coze/constant/event"
import (
"github.com/songquanpeng/one-api/relay/adaptor/coze/constant/event"
"strings"
)
func event2StopReason(e *string) string {
if e == nil || *e == event.Message {
@@ -8,3 +11,16 @@ func event2StopReason(e *string) string {
}
return "stop"
}
func splitOnDoubleNewline(data []byte, atEOF bool) (advance int, token []byte, err error) {
if atEOF && len(data) == 0 {
return 0, nil, nil
}
if i := strings.Index(string(data), "\n\n"); i >= 0 {
return i + 1, data[0:i], nil
}
if atEOF {
return len(data), data, nil
}
return 0, nil, nil
}
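
A standalone sketch of this split function driving a bufio.Scanner over an SSE-style stream; the payload is illustrative and the function is copied here so the example compiles on its own. Successive Scan calls yield the event:/data: pairs, each possibly with a leading newline that V3StreamHandler trims.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// Copy of splitOnDoubleNewline from the coze package, for illustration.
func splitOnDoubleNewline(data []byte, atEOF bool) (advance int, token []byte, err error) {
	if atEOF && len(data) == 0 {
		return 0, nil, nil
	}
	if i := strings.Index(string(data), "\n\n"); i >= 0 {
		return i + 1, data[0:i], nil
	}
	if atEOF {
		return len(data), data, nil
	}
	return 0, nil, nil
}

func main() {
	// Stream shaped like Coze v3 SSE output (made-up content).
	stream := "event: conversation.message.delta\ndata: {\"content\":\"Hel\"}\n\n" +
		"event: conversation.message.delta\ndata: {\"content\":\"lo\"}\n\n"
	scanner := bufio.NewScanner(strings.NewReader(stream))
	scanner.Split(splitOnDoubleNewline)
	for scanner.Scan() {
		fmt.Printf("chunk: %q\n", scanner.Text())
	}
}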

View File

@@ -4,19 +4,18 @@ import (
"bufio"
"encoding/json"
"fmt"
"github.com/songquanpeng/one-api/common/render"
"io"
"net/http"
"strings"
"github.com/gin-gonic/gin"
"github.com/songquanpeng/one-api/common"
"github.com/songquanpeng/one-api/common/conv"
"github.com/songquanpeng/one-api/common/helper"
"github.com/songquanpeng/one-api/common/logger"
"github.com/songquanpeng/one-api/common/render"
"github.com/songquanpeng/one-api/relay/adaptor/coze/constant/messagetype"
"github.com/songquanpeng/one-api/relay/adaptor/openai"
"github.com/songquanpeng/one-api/relay/model"
"io"
"net/http"
"strings"
)
// https://www.coze.com/open
@@ -57,6 +56,32 @@ func ConvertRequest(textRequest model.GeneralOpenAIRequest) *Request {
return &cozeRequest
}
func V3ConvertRequest(textRequest model.GeneralOpenAIRequest) *V3Request {
cozeRequest := V3Request{
UserId: textRequest.User,
Stream: textRequest.Stream,
BotId: strings.TrimPrefix(textRequest.Model, "bot-"),
}
if cozeRequest.UserId == "" {
cozeRequest.UserId = "any"
}
for i, message := range textRequest.Messages {
if i == len(textRequest.Messages)-1 {
cozeRequest.AdditionalMessages = append(cozeRequest.AdditionalMessages, Message{
Role: "user",
Content: message.CozeV3StringContent(),
})
continue
}
cozeMessage := Message{
Role: message.Role,
Content: message.CozeV3StringContent(),
}
cozeRequest.AdditionalMessages = append(cozeRequest.AdditionalMessages, cozeMessage)
}
return &cozeRequest
}
func StreamResponseCoze2OpenAI(cozeResponse *StreamResponse) (*openai.ChatCompletionsStreamResponse, *Response) {
var response *Response
var stopReason string
@@ -80,6 +105,28 @@ func StreamResponseCoze2OpenAI(cozeResponse *StreamResponse) (*openai.ChatComple
return &openaiResponse, response
}
func V3StreamResponseCoze2OpenAI(cozeResponse *V3StreamResponse) (*openai.ChatCompletionsStreamResponse, *Response) {
var response *Response
var choice openai.ChatCompletionsStreamResponseChoice
choice.Delta.Role = cozeResponse.Role
choice.Delta.Content = cozeResponse.Content
var openaiResponse openai.ChatCompletionsStreamResponse
openaiResponse.Object = "chat.completion.chunk"
openaiResponse.Choices = []openai.ChatCompletionsStreamResponseChoice{choice}
openaiResponse.Id = cozeResponse.ConversationId
if cozeResponse.Usage.TokenCount > 0 {
openaiResponse.Usage = &model.Usage{
PromptTokens: cozeResponse.Usage.InputCount,
CompletionTokens: cozeResponse.Usage.OutputCount,
TotalTokens: cozeResponse.Usage.TokenCount,
}
}
return &openaiResponse, response
}
func ResponseCoze2OpenAI(cozeResponse *Response) *openai.TextResponse {
var responseText string
for _, message := range cozeResponse.Messages {
@@ -107,6 +154,26 @@ func ResponseCoze2OpenAI(cozeResponse *Response) *openai.TextResponse {
return &fullTextResponse
}
func V3ResponseCoze2OpenAI(cozeResponse *V3Response) *openai.TextResponse {
choice := openai.TextResponseChoice{
Index: 0,
Message: model.Message{
Role: "assistant",
Content: cozeResponse.Data.Content,
Name: nil,
},
FinishReason: "stop",
}
fullTextResponse := openai.TextResponse{
Id: fmt.Sprintf("chatcmpl-%s", cozeResponse.Data.ConversationId),
Model: "coze-bot",
Object: "chat.completion",
Created: helper.GetTimestamp(),
Choices: []openai.TextResponseChoice{choice},
}
return &fullTextResponse
}
func StreamHandler(c *gin.Context, resp *http.Response) (*model.ErrorWithStatusCode, *string) {
var responseText string
createdTime := helper.GetTimestamp()
@@ -162,6 +229,63 @@ func StreamHandler(c *gin.Context, resp *http.Response) (*model.ErrorWithStatusC
return nil, &responseText
}
func V3StreamHandler(c *gin.Context, resp *http.Response) (*model.ErrorWithStatusCode, *string) {
var responseText string
createdTime := helper.GetTimestamp()
scanner := bufio.NewScanner(resp.Body)
scanner.Split(splitOnDoubleNewline)
common.SetEventStreamHeaders(c)
var modelName string
for scanner.Scan() {
part := scanner.Text()
part = strings.TrimPrefix(part, "\n")
parts := strings.Split(part, "\n")
if len(parts) != 2 {
continue
}
if !strings.HasPrefix(parts[0], "event:") || !strings.HasPrefix(parts[1], "data:") {
continue
}
event, data := strings.TrimSpace(parts[0][6:]), strings.TrimSpace(parts[1][5:])
if event == "conversation.message.delta" || event == "conversation.chat.completed" {
data = strings.TrimSuffix(data, "\r")
var cozeResponse V3StreamResponse
err := json.Unmarshal([]byte(data), &cozeResponse)
if err != nil {
logger.SysError("error unmarshalling stream response: " + err.Error())
continue
}
response, _ := V3StreamResponseCoze2OpenAI(&cozeResponse)
if response == nil {
continue
}
for _, choice := range response.Choices {
responseText += conv.AsString(choice.Delta.Content)
}
response.Model = modelName
response.Created = createdTime
err = render.ObjectData(c, response)
if err != nil {
logger.SysError(err.Error())
}
}
}
if err := scanner.Err(); err != nil {
logger.SysError("error reading stream: " + err.Error())
}
render.Done(c)
err := resp.Body.Close()
if err != nil {
return openai.ErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
}
return nil, &responseText
}
func Handler(c *gin.Context, resp *http.Response, promptTokens int, modelName string) (*model.ErrorWithStatusCode, *string) {
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
@@ -200,3 +324,42 @@ func Handler(c *gin.Context, resp *http.Response, promptTokens int, modelName st
}
return nil, &responseText
}
func V3Handler(c *gin.Context, resp *http.Response, promptTokens int, modelName string) (*model.ErrorWithStatusCode, *string) {
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return openai.ErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
}
err = resp.Body.Close()
if err != nil {
return openai.ErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
}
var cozeResponse V3Response
err = json.Unmarshal(responseBody, &cozeResponse)
if err != nil {
return openai.ErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
}
if cozeResponse.Code != 0 {
return &model.ErrorWithStatusCode{
Error: model.Error{
Message: cozeResponse.Msg,
Code: cozeResponse.Code,
},
StatusCode: resp.StatusCode,
}, nil
}
fullTextResponse := V3ResponseCoze2OpenAI(&cozeResponse)
fullTextResponse.Model = modelName
jsonResponse, err := json.Marshal(fullTextResponse)
if err != nil {
return openai.ErrorWrapper(err, "marshal_response_body_failed", http.StatusInternalServerError), nil
}
c.Writer.Header().Set("Content-Type", "application/json")
c.Writer.WriteHeader(resp.StatusCode)
_, err = c.Writer.Write(jsonResponse)
var responseText string
if len(fullTextResponse.Choices) > 0 {
responseText = fullTextResponse.Choices[0].Message.StringContent()
}
return nil, &responseText
}

View File

@@ -2,9 +2,9 @@ package coze
type Message struct {
Role string `json:"role"`
Type string `json:"type"`
Type string `json:"type,omitempty"`
Content string `json:"content"`
ContentType string `json:"content_type"`
ContentType string `json:"content_type,omitempty"`
}
type ErrorInformation struct {
@@ -36,3 +36,52 @@ type StreamResponse struct {
ConversationId string `json:"conversation_id,omitempty"`
ErrorInformation *ErrorInformation `json:"error_information,omitempty"`
}
type V3StreamResponse struct {
Id string `json:"id"`
ConversationId string `json:"conversation_id"`
BotId string `json:"bot_id"`
Role string `json:"role"`
Type string `json:"type"`
Content string `json:"content"`
ContentType string `json:"content_type"`
ChatId string `json:"chat_id"`
CreatedAt int `json:"created_at"`
CompletedAt int `json:"completed_at"`
LastError struct {
Code int `json:"code"`
Msg string `json:"msg"`
} `json:"last_error"`
Status string `json:"status"`
Usage struct {
TokenCount int `json:"token_count"`
OutputCount int `json:"output_count"`
InputCount int `json:"input_count"`
} `json:"usage"`
SectionId string `json:"section_id"`
}
type V3Response struct {
Data struct {
Id string `json:"id"`
ConversationId string `json:"conversation_id"`
BotId string `json:"bot_id"`
Content string `json:"content"`
ContentType string `json:"content_type"`
CreatedAt int `json:"created_at"`
LastError struct {
Code int `json:"code"`
Msg string `json:"msg"`
} `json:"last_error"`
Status string `json:"status"`
} `json:"data"`
Code int `json:"code"`
Msg string `json:"msg"`
}
type V3Request struct {
BotId string `json:"bot_id"`
UserId string `json:"user_id"`
AdditionalMessages []Message `json:"additional_messages"`
Stream bool `json:"stream"`
}
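
To show what the v3 request ends up looking like, a standalone sketch mirroring V3ConvertRequest: the bot- prefix of the requested model becomes bot_id and an empty user falls back to "any". The structs are copied from the diff; the IDs and message are placeholders.

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// Copies of the request types above, for a self-contained example.
type Message struct {
	Role        string `json:"role"`
	Type        string `json:"type,omitempty"`
	Content     string `json:"content"`
	ContentType string `json:"content_type,omitempty"`
}

type V3Request struct {
	BotId              string    `json:"bot_id"`
	UserId             string    `json:"user_id"`
	AdditionalMessages []Message `json:"additional_messages"`
	Stream             bool      `json:"stream"`
}

func main() {
	model := "bot-7355000000000000000" // placeholder bot model name
	req := V3Request{
		BotId:              strings.TrimPrefix(model, "bot-"),
		UserId:             "any",
		AdditionalMessages: []Message{{Role: "user", Content: "Hello"}},
		Stream:             true,
	}
	out, _ := json.MarshalIndent(req, "", "  ")
	fmt.Println(string(out))
}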

View File

@@ -5,9 +5,10 @@ import (
"fmt"
"io"
"net/http"
"strings"
"github.com/gin-gonic/gin"
"github.com/songquanpeng/one-api/common/config"
"github.com/songquanpeng/one-api/common/helper"
channelhelper "github.com/songquanpeng/one-api/relay/adaptor"
"github.com/songquanpeng/one-api/relay/adaptor/openai"
@@ -20,17 +21,12 @@ type Adaptor struct {
}
func (a *Adaptor) Init(meta *meta.Meta) {
}
func (a *Adaptor) GetRequestURL(meta *meta.Meta) (string, error) {
var defaultVersion string
switch meta.ActualModelName {
case "gemini-2.0-flash-exp",
"gemini-2.0-flash-thinking-exp",
"gemini-2.0-flash-thinking-exp-01-21":
defaultVersion = "v1beta"
default:
defaultVersion := config.GeminiVersion
if strings.Contains(meta.ActualModelName, "gemini-2") ||
strings.Contains(meta.ActualModelName, "gemini-1.5") {
defaultVersion = "v1beta"
}

View File

@@ -1,11 +1,35 @@
package gemini
import (
"github.com/songquanpeng/one-api/relay/adaptor/geminiv2"
)
// https://ai.google.dev/models/gemini
var ModelList = []string{
"gemini-pro", "gemini-1.0-pro",
"gemini-1.5-flash", "gemini-1.5-pro",
"text-embedding-004", "aqa",
"gemini-2.0-flash-exp",
"gemini-2.0-flash-thinking-exp", "gemini-2.0-flash-thinking-exp-01-21",
var ModelList = geminiv2.ModelList
// ModelsSupportSystemInstruction is the list of models that support system instruction.
//
// https://cloud.google.com/vertex-ai/generative-ai/docs/learn/prompts/system-instructions
var ModelsSupportSystemInstruction = []string{
// "gemini-1.0-pro-002",
// "gemini-1.5-flash", "gemini-1.5-flash-001", "gemini-1.5-flash-002",
// "gemini-1.5-flash-8b",
// "gemini-1.5-pro", "gemini-1.5-pro-001", "gemini-1.5-pro-002",
// "gemini-1.5-pro-experimental",
"gemini-2.0-flash", "gemini-2.0-flash-exp",
"gemini-2.0-flash-thinking-exp-01-21",
}
// IsModelSupportSystemInstruction check if the model support system instruction.
//
// Because the main version of Go is 1.20, slice.Contains cannot be used
func IsModelSupportSystemInstruction(model string) bool {
for _, m := range ModelsSupportSystemInstruction {
if m == model {
return true
}
}
return false
}

View File

@@ -12,6 +12,7 @@ import (
"github.com/songquanpeng/one-api/common"
"github.com/songquanpeng/one-api/common/config"
"github.com/songquanpeng/one-api/common/file"
"github.com/songquanpeng/one-api/common/helper"
"github.com/songquanpeng/one-api/common/image"
"github.com/songquanpeng/one-api/common/logger"
@@ -66,6 +67,23 @@ func ConvertRequest(textRequest model.GeneralOpenAIRequest) *ChatRequest {
MaxOutputTokens: textRequest.MaxTokens,
},
}
if textRequest.ReasoningEffort != nil {
var thinkBudget int
switch *textRequest.ReasoningEffort {
case "low":
thinkBudget = 1000
case "medium":
thinkBudget = 8000
case "high":
thinkBudget = 24000
}
geminiRequest.GenerationConfig.ThinkingConfig = &ThinkingConfig{
ThinkingBudget: thinkBudget,
IncludeThoughts: true,
}
}
if textRequest.ResponseFormat != nil {
if mimeType, ok := mimeTypeMap[textRequest.ResponseFormat.Type]; ok {
geminiRequest.GenerationConfig.ResponseMimeType = mimeType
@@ -76,22 +94,13 @@ func ConvertRequest(textRequest model.GeneralOpenAIRequest) *ChatRequest {
}
}
if textRequest.Tools != nil {
functions := make([]model.Function, 0, len(textRequest.Tools))
for _, tool := range textRequest.Tools {
functions = append(functions, tool.Function)
}
geminiRequest.Tools = []ChatTools{
{
FunctionDeclarations: functions,
},
}
} else if textRequest.Functions != nil {
geminiRequest.Tools = []ChatTools{
{
FunctionDeclarations: textRequest.Functions,
},
}
geminiRequest.Tools = textRequest.Tools
}
if textRequest.Functions != nil {
geminiRequest.Tools = textRequest.Functions
}
shouldAddDummyModelMessage := false
for _, message := range textRequest.Messages {
content := ChatContent{
@@ -110,6 +119,14 @@ func ConvertRequest(textRequest model.GeneralOpenAIRequest) *ChatRequest {
parts = append(parts, Part{
Text: part.Text,
})
} else if part.Type == model.ContentTypeInputFile {
mimeType, data, _ := file.GetFileFromUrl(part.File.FileData)
parts = append(parts, Part{
InlineData: &InlineData{
MimeType: mimeType,
Data: data,
},
})
} else if part.Type == model.ContentTypeImageURL {
imageNum += 1
if imageNum > VisionMaxImageNum {
@@ -132,9 +149,16 @@ func ConvertRequest(textRequest model.GeneralOpenAIRequest) *ChatRequest {
}
// Converting system prompt to prompt from user for the same reason
if content.Role == "system" {
content.Role = "user"
shouldAddDummyModelMessage = true
if IsModelSupportSystemInstruction(textRequest.Model) {
geminiRequest.SystemInstruction = &content
geminiRequest.SystemInstruction.Role = ""
continue
} else {
content.Role = "user"
}
}
geminiRequest.Contents = append(geminiRequest.Contents, content)
// If a system message is the last message, we need to add a dummy model message to make gemini happy
@@ -192,6 +216,21 @@ func (g *ChatResponse) GetResponseText() string {
return ""
}
func (g *ChatResponse) GetResponseTextAndThought() (content string, thought string) {
if g == nil {
return
}
if len(g.Candidates) > 0 && len(g.Candidates[0].Content.Parts) > 0 {
contentPart := g.Candidates[0].Content.Parts[0]
if contentPart.Thought {
thought = contentPart.Text
return
}
content = contentPart.Text
}
return
}
type ChatCandidate struct {
Content ChatContent `json:"content"`
FinishReason string `json:"finishReason"`
@@ -256,7 +295,11 @@ func responseGeminiChat2OpenAI(response *ChatResponse) *openai.TextResponse {
if i > 0 {
builder.WriteString("\n")
}
builder.WriteString(part.Text)
if part.Thought {
builder.WriteString(fmt.Sprintf("<think>%s</think>\n", part.Text))
} else {
builder.WriteString(part.Text)
}
}
choice.Message.Content = builder.String()
}
@@ -271,7 +314,7 @@ func responseGeminiChat2OpenAI(response *ChatResponse) *openai.TextResponse {
func streamResponseGeminiChat2OpenAI(geminiResponse *ChatResponse) *openai.ChatCompletionsStreamResponse {
var choice openai.ChatCompletionsStreamResponseChoice
choice.Delta.Content = geminiResponse.GetResponseText()
choice.Delta.Content, choice.Delta.ReasoningContent = geminiResponse.GetResponseTextAndThought()
//choice.FinishReason = &constant.StopFinishReason
var response openai.ChatCompletionsStreamResponse
response.Id = fmt.Sprintf("chatcmpl-%s", random.GetUUID())

View File

@@ -1,10 +1,11 @@
package gemini
type ChatRequest struct {
Contents []ChatContent `json:"contents"`
SafetySettings []ChatSafetySettings `json:"safety_settings,omitempty"`
GenerationConfig ChatGenerationConfig `json:"generation_config,omitempty"`
Tools []ChatTools `json:"tools,omitempty"`
Contents []ChatContent `json:"contents"`
SafetySettings []ChatSafetySettings `json:"safety_settings,omitempty"`
GenerationConfig ChatGenerationConfig `json:"generation_config,omitempty"`
Tools interface{} `json:"tools,omitempty"`
SystemInstruction *ChatContent `json:"system_instruction,omitempty"`
}
type EmbeddingRequest struct {
@@ -39,6 +40,11 @@ type InlineData struct {
Data string `json:"data"`
}
type FileData struct {
MimeType string `json:"mime_type"`
FileUri string `json:"file_uri"`
}
type FunctionCall struct {
FunctionName string `json:"name"`
Arguments any `json:"args"`
@@ -48,6 +54,8 @@ type Part struct {
Text string `json:"text,omitempty"`
InlineData *InlineData `json:"inlineData,omitempty"`
FunctionCall *FunctionCall `json:"functionCall,omitempty"`
Thought bool `json:"thought,omitempty"`
FileData *FileData `json:"fileData,omitempty"`
}
type ChatContent struct {
@@ -65,12 +73,18 @@ type ChatTools struct {
}
type ChatGenerationConfig struct {
ResponseMimeType string `json:"responseMimeType,omitempty"`
ResponseSchema any `json:"responseSchema,omitempty"`
Temperature *float64 `json:"temperature,omitempty"`
TopP *float64 `json:"topP,omitempty"`
TopK float64 `json:"topK,omitempty"`
MaxOutputTokens int `json:"maxOutputTokens,omitempty"`
CandidateCount int `json:"candidateCount,omitempty"`
StopSequences []string `json:"stopSequences,omitempty"`
ResponseMimeType string `json:"responseMimeType,omitempty"`
ResponseSchema any `json:"responseSchema,omitempty"`
Temperature *float64 `json:"temperature,omitempty"`
TopP *float64 `json:"topP,omitempty"`
TopK float64 `json:"topK,omitempty"`
MaxOutputTokens int `json:"maxOutputTokens,omitempty"`
CandidateCount int `json:"candidateCount,omitempty"`
StopSequences []string `json:"stopSequences,omitempty"`
ThinkingConfig *ThinkingConfig `json:"thinkingConfig,omitempty"`
}
type ThinkingConfig struct {
ThinkingBudget int `json:"thinkingBudget"`
IncludeThoughts bool `json:"includeThoughts"`
}
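
As a concrete illustration of the reasoning_effort handling above, a standalone sketch (struct copied from the diff) of the thinkingConfig payload produced for a "medium" effort request:

package main

import (
	"encoding/json"
	"fmt"
)

// Copy of the ThinkingConfig added above.
type ThinkingConfig struct {
	ThinkingBudget  int  `json:"thinkingBudget"`
	IncludeThoughts bool `json:"includeThoughts"`
}

func main() {
	// reasoning_effort -> thinking budget, per ConvertRequest: low=1000, medium=8000, high=24000.
	budgets := map[string]int{"low": 1000, "medium": 8000, "high": 24000}
	cfg := ThinkingConfig{ThinkingBudget: budgets["medium"], IncludeThoughts: true}
	out, _ := json.Marshal(cfg)
	fmt.Println(string(out)) // {"thinkingBudget":8000,"includeThoughts":true}
}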

View File

@@ -0,0 +1,15 @@
package geminiv2
// https://ai.google.dev/models/gemini
var ModelList = []string{
"gemini-pro", "gemini-1.0-pro",
// "gemma-2-2b-it", "gemma-2-9b-it", "gemma-2-27b-it",
"gemini-1.5-flash", "gemini-1.5-flash-8b",
"gemini-1.5-pro", "gemini-1.5-pro-experimental",
"text-embedding-004", "aqa",
"gemini-2.0-flash", "gemini-2.0-flash-exp",
"gemini-2.0-flash-lite-preview-02-05",
"gemini-2.0-flash-thinking-exp-01-21",
"gemini-2.0-pro-exp-02-05",
}

View File

@@ -0,0 +1,14 @@
package geminiv2
import (
"fmt"
"strings"
"github.com/songquanpeng/one-api/relay/meta"
)
func GetRequestURL(meta *meta.Meta) (string, error) {
baseURL := strings.TrimSuffix(meta.BaseURL, "/")
requestPath := strings.TrimPrefix(meta.RequestURLPath, "/v1")
return fmt.Sprintf("%s%s", baseURL, requestPath), nil
}
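
A small sketch of the rewrite rule used by geminiv2.GetRequestURL: trim a trailing slash from the channel base URL and drop the /v1 prefix from the relayed path. The base URL and path below are illustrative.

package main

import (
	"fmt"
	"strings"
)

// Same rewrite rule as geminiv2.GetRequestURL.
func buildURL(baseURL, requestPath string) string {
	return fmt.Sprintf("%s%s", strings.TrimSuffix(baseURL, "/"), strings.TrimPrefix(requestPath, "/v1"))
}

func main() {
	fmt.Println(buildURL(
		"https://generativelanguage.googleapis.com/v1beta/openai/", // example base URL
		"/v1/chat/completions",
	))
	// https://generativelanguage.googleapis.com/v1beta/openai/chat/completions
}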

View File

@@ -3,7 +3,6 @@ package groq
// https://console.groq.com/docs/models
var ModelList = []string{
"gemma-7b-it",
"gemma2-9b-it",
"llama-3.1-70b-versatile",
"llama-3.1-8b-instant",
@@ -23,4 +22,6 @@ var ModelList = []string{
"distil-whisper-large-v3-en",
"whisper-large-v3",
"whisper-large-v3-turbo",
"deepseek-r1-distill-llama-70b-specdec",
"deepseek-r1-distill-llama-70b",
}

View File

@@ -8,4 +8,6 @@ var ModelList = []string{
"abab6-chat",
"abab5.5-chat",
"abab5.5s-chat",
"MiniMax-VL-01",
"MiniMax-Text-01",
}

View File

@@ -8,8 +8,12 @@ import (
"strings"
"github.com/gin-gonic/gin"
"github.com/songquanpeng/one-api/relay/adaptor"
"github.com/songquanpeng/one-api/relay/adaptor/alibailian"
"github.com/songquanpeng/one-api/relay/adaptor/baiduv2"
"github.com/songquanpeng/one-api/relay/adaptor/doubao"
"github.com/songquanpeng/one-api/relay/adaptor/geminiv2"
"github.com/songquanpeng/one-api/relay/adaptor/minimax"
"github.com/songquanpeng/one-api/relay/adaptor/novita"
"github.com/songquanpeng/one-api/relay/channeltype"
@@ -41,7 +45,6 @@ func (a *Adaptor) GetRequestURL(meta *meta.Meta) (string, error) {
requestURL = fmt.Sprintf("%s?api-version=%s", requestURL, meta.Config.APIVersion)
task := strings.TrimPrefix(requestURL, "/v1/")
model_ := meta.ActualModelName
model_ = strings.Replace(model_, ".", "", -1)
//https://github.com/songquanpeng/one-api/issues/1191
// {your endpoint}/openai/deployments/{your azure_model}/chat/completions?api-version={api_version}
requestURL = fmt.Sprintf("/openai/deployments/%s/%s", model_, task)
@@ -52,6 +55,12 @@ func (a *Adaptor) GetRequestURL(meta *meta.Meta) (string, error) {
return doubao.GetRequestURL(meta)
case channeltype.Novita:
return novita.GetRequestURL(meta)
case channeltype.BaiduV2:
return baiduv2.GetRequestURL(meta)
case channeltype.AliBailian:
return alibailian.GetRequestURL(meta)
case channeltype.GeminiOpenAICompatible:
return geminiv2.GetRequestURL(meta)
default:
return GetFullRequestURL(meta.BaseURL, meta.RequestURLPath, meta.ChannelType), nil
}

View File

@@ -2,19 +2,24 @@ package openai
import (
"github.com/songquanpeng/one-api/relay/adaptor/ai360"
"github.com/songquanpeng/one-api/relay/adaptor/alibailian"
"github.com/songquanpeng/one-api/relay/adaptor/baichuan"
"github.com/songquanpeng/one-api/relay/adaptor/baiduv2"
"github.com/songquanpeng/one-api/relay/adaptor/deepseek"
"github.com/songquanpeng/one-api/relay/adaptor/doubao"
"github.com/songquanpeng/one-api/relay/adaptor/geminiv2"
"github.com/songquanpeng/one-api/relay/adaptor/groq"
"github.com/songquanpeng/one-api/relay/adaptor/lingyiwanwu"
"github.com/songquanpeng/one-api/relay/adaptor/minimax"
"github.com/songquanpeng/one-api/relay/adaptor/mistral"
"github.com/songquanpeng/one-api/relay/adaptor/moonshot"
"github.com/songquanpeng/one-api/relay/adaptor/novita"
"github.com/songquanpeng/one-api/relay/adaptor/openrouter"
"github.com/songquanpeng/one-api/relay/adaptor/siliconflow"
"github.com/songquanpeng/one-api/relay/adaptor/stepfun"
"github.com/songquanpeng/one-api/relay/adaptor/togetherai"
"github.com/songquanpeng/one-api/relay/adaptor/xai"
"github.com/songquanpeng/one-api/relay/adaptor/xunfeiv2"
"github.com/songquanpeng/one-api/relay/channeltype"
)
@@ -34,6 +39,8 @@ var CompatibleChannels = []int{
channeltype.Novita,
channeltype.SiliconFlow,
channeltype.XAI,
channeltype.BaiduV2,
channeltype.XunfeiV2,
}
func GetCompatibleChannelMeta(channelType int) (string, []string) {
@@ -68,6 +75,16 @@ func GetCompatibleChannelMeta(channelType int) (string, []string) {
return "siliconflow", siliconflow.ModelList
case channeltype.XAI:
return "xai", xai.ModelList
case channeltype.BaiduV2:
return "baiduv2", baiduv2.ModelList
case channeltype.XunfeiV2:
return "xunfeiv2", xunfeiv2.ModelList
case channeltype.OpenRouter:
return "openrouter", openrouter.ModelList
case channeltype.AliBailian:
return "alibailian", alibailian.ModelList
case channeltype.GeminiOpenAICompatible:
return "geminiv2", geminiv2.ModelList
default:
return "openai", ModelList
}

View File

@@ -4,7 +4,7 @@ var ModelList = []string{
"gpt-3.5-turbo", "gpt-3.5-turbo-0301", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-1106", "gpt-3.5-turbo-0125",
"gpt-3.5-turbo-16k", "gpt-3.5-turbo-16k-0613",
"gpt-3.5-turbo-instruct",
"gpt-4", "gpt-4-0314", "gpt-4-0613", "gpt-4-1106-preview", "gpt-4-0125-preview",
"gpt-4", "gpt-4.1", "gpt-4-0314", "gpt-4-0613", "gpt-4-1106-preview", "gpt-4-0125-preview",
"gpt-4-32k", "gpt-4-32k-0314", "gpt-4-32k-0613",
"gpt-4-turbo-preview", "gpt-4-turbo", "gpt-4-turbo-2024-04-09",
"gpt-4o", "gpt-4o-2024-05-13",

View File

@@ -17,6 +17,9 @@ func ResponseText2Usage(responseText string, modelName string, promptTokens int)
}
func GetFullRequestURL(baseURL string, requestURL string, channelType int) string {
if channelType == channeltype.OpenAICompatible {
return fmt.Sprintf("%s%s", strings.TrimSuffix(baseURL, "/"), strings.TrimPrefix(requestURL, "/v1"))
}
fullRequestURL := fmt.Sprintf("%s%s", baseURL, requestURL)
if strings.HasPrefix(baseURL, "https://gateway.ai.cloudflare.com") {

View File

@@ -0,0 +1,235 @@
package openrouter
var ModelList = []string{
"01-ai/yi-large",
"aetherwiing/mn-starcannon-12b",
"ai21/jamba-1-5-large",
"ai21/jamba-1-5-mini",
"ai21/jamba-instruct",
"aion-labs/aion-1.0",
"aion-labs/aion-1.0-mini",
"aion-labs/aion-rp-llama-3.1-8b",
"allenai/llama-3.1-tulu-3-405b",
"alpindale/goliath-120b",
"alpindale/magnum-72b",
"amazon/nova-lite-v1",
"amazon/nova-micro-v1",
"amazon/nova-pro-v1",
"anthracite-org/magnum-v2-72b",
"anthracite-org/magnum-v4-72b",
"anthropic/claude-2",
"anthropic/claude-2.0",
"anthropic/claude-2.0:beta",
"anthropic/claude-2.1",
"anthropic/claude-2.1:beta",
"anthropic/claude-2:beta",
"anthropic/claude-3-haiku",
"anthropic/claude-3-haiku:beta",
"anthropic/claude-3-opus",
"anthropic/claude-3-opus:beta",
"anthropic/claude-3-sonnet",
"anthropic/claude-3-sonnet:beta",
"anthropic/claude-3.5-haiku",
"anthropic/claude-3.5-haiku-20241022",
"anthropic/claude-3.5-haiku-20241022:beta",
"anthropic/claude-3.5-haiku:beta",
"anthropic/claude-3.5-sonnet",
"anthropic/claude-3.5-sonnet-20240620",
"anthropic/claude-3.5-sonnet-20240620:beta",
"anthropic/claude-3.5-sonnet:beta",
"cognitivecomputations/dolphin-mixtral-8x22b",
"cognitivecomputations/dolphin-mixtral-8x7b",
"cohere/command",
"cohere/command-r",
"cohere/command-r-03-2024",
"cohere/command-r-08-2024",
"cohere/command-r-plus",
"cohere/command-r-plus-04-2024",
"cohere/command-r-plus-08-2024",
"cohere/command-r7b-12-2024",
"databricks/dbrx-instruct",
"deepseek/deepseek-chat",
"deepseek/deepseek-chat-v2.5",
"deepseek/deepseek-chat:free",
"deepseek/deepseek-r1",
"deepseek/deepseek-r1-distill-llama-70b",
"deepseek/deepseek-r1-distill-llama-70b:free",
"deepseek/deepseek-r1-distill-llama-8b",
"deepseek/deepseek-r1-distill-qwen-1.5b",
"deepseek/deepseek-r1-distill-qwen-14b",
"deepseek/deepseek-r1-distill-qwen-32b",
"deepseek/deepseek-r1:free",
"eva-unit-01/eva-llama-3.33-70b",
"eva-unit-01/eva-qwen-2.5-32b",
"eva-unit-01/eva-qwen-2.5-72b",
"google/gemini-2.0-flash-001",
"google/gemini-2.0-flash-exp:free",
"google/gemini-2.0-flash-lite-preview-02-05:free",
"google/gemini-2.0-flash-thinking-exp-1219:free",
"google/gemini-2.0-flash-thinking-exp:free",
"google/gemini-2.0-pro-exp-02-05:free",
"google/gemini-exp-1206:free",
"google/gemini-flash-1.5",
"google/gemini-flash-1.5-8b",
"google/gemini-flash-1.5-8b-exp",
"google/gemini-pro",
"google/gemini-pro-1.5",
"google/gemini-pro-vision",
"google/gemma-2-27b-it",
"google/gemma-2-9b-it",
"google/gemma-2-9b-it:free",
"google/gemma-7b-it",
"google/learnlm-1.5-pro-experimental:free",
"google/palm-2-chat-bison",
"google/palm-2-chat-bison-32k",
"google/palm-2-codechat-bison",
"google/palm-2-codechat-bison-32k",
"gryphe/mythomax-l2-13b",
"gryphe/mythomax-l2-13b:free",
"huggingfaceh4/zephyr-7b-beta:free",
"infermatic/mn-inferor-12b",
"inflection/inflection-3-pi",
"inflection/inflection-3-productivity",
"jondurbin/airoboros-l2-70b",
"liquid/lfm-3b",
"liquid/lfm-40b",
"liquid/lfm-7b",
"mancer/weaver",
"meta-llama/llama-2-13b-chat",
"meta-llama/llama-2-70b-chat",
"meta-llama/llama-3-70b-instruct",
"meta-llama/llama-3-8b-instruct",
"meta-llama/llama-3-8b-instruct:free",
"meta-llama/llama-3.1-405b",
"meta-llama/llama-3.1-405b-instruct",
"meta-llama/llama-3.1-70b-instruct",
"meta-llama/llama-3.1-8b-instruct",
"meta-llama/llama-3.2-11b-vision-instruct",
"meta-llama/llama-3.2-11b-vision-instruct:free",
"meta-llama/llama-3.2-1b-instruct",
"meta-llama/llama-3.2-3b-instruct",
"meta-llama/llama-3.2-90b-vision-instruct",
"meta-llama/llama-3.3-70b-instruct",
"meta-llama/llama-3.3-70b-instruct:free",
"meta-llama/llama-guard-2-8b",
"microsoft/phi-3-medium-128k-instruct",
"microsoft/phi-3-medium-128k-instruct:free",
"microsoft/phi-3-mini-128k-instruct",
"microsoft/phi-3-mini-128k-instruct:free",
"microsoft/phi-3.5-mini-128k-instruct",
"microsoft/phi-4",
"microsoft/wizardlm-2-7b",
"microsoft/wizardlm-2-8x22b",
"minimax/minimax-01",
"mistralai/codestral-2501",
"mistralai/codestral-mamba",
"mistralai/ministral-3b",
"mistralai/ministral-8b",
"mistralai/mistral-7b-instruct",
"mistralai/mistral-7b-instruct-v0.1",
"mistralai/mistral-7b-instruct-v0.3",
"mistralai/mistral-7b-instruct:free",
"mistralai/mistral-large",
"mistralai/mistral-large-2407",
"mistralai/mistral-large-2411",
"mistralai/mistral-medium",
"mistralai/mistral-nemo",
"mistralai/mistral-nemo:free",
"mistralai/mistral-small",
"mistralai/mistral-small-24b-instruct-2501",
"mistralai/mistral-small-24b-instruct-2501:free",
"mistralai/mistral-tiny",
"mistralai/mixtral-8x22b-instruct",
"mistralai/mixtral-8x7b",
"mistralai/mixtral-8x7b-instruct",
"mistralai/pixtral-12b",
"mistralai/pixtral-large-2411",
"neversleep/llama-3-lumimaid-70b",
"neversleep/llama-3-lumimaid-8b",
"neversleep/llama-3-lumimaid-8b:extended",
"neversleep/llama-3.1-lumimaid-70b",
"neversleep/llama-3.1-lumimaid-8b",
"neversleep/noromaid-20b",
"nothingiisreal/mn-celeste-12b",
"nousresearch/hermes-2-pro-llama-3-8b",
"nousresearch/hermes-3-llama-3.1-405b",
"nousresearch/hermes-3-llama-3.1-70b",
"nousresearch/nous-hermes-2-mixtral-8x7b-dpo",
"nousresearch/nous-hermes-llama2-13b",
"nvidia/llama-3.1-nemotron-70b-instruct",
"nvidia/llama-3.1-nemotron-70b-instruct:free",
"openai/chatgpt-4o-latest",
"openai/gpt-3.5-turbo",
"openai/gpt-3.5-turbo-0125",
"openai/gpt-3.5-turbo-0613",
"openai/gpt-3.5-turbo-1106",
"openai/gpt-3.5-turbo-16k",
"openai/gpt-3.5-turbo-instruct",
"openai/gpt-4",
"openai/gpt-4-0314",
"openai/gpt-4-1106-preview",
"openai/gpt-4-32k",
"openai/gpt-4-32k-0314",
"openai/gpt-4-turbo",
"openai/gpt-4-turbo-preview",
"openai/gpt-4o",
"openai/gpt-4o-2024-05-13",
"openai/gpt-4o-2024-08-06",
"openai/gpt-4o-2024-11-20",
"openai/gpt-4o-mini",
"openai/gpt-4o-mini-2024-07-18",
"openai/gpt-4o:extended",
"openai/o1",
"openai/o1-mini",
"openai/o1-mini-2024-09-12",
"openai/o1-preview",
"openai/o1-preview-2024-09-12",
"openai/o3-mini",
"openai/o3-mini-high",
"openchat/openchat-7b",
"openchat/openchat-7b:free",
"openrouter/auto",
"perplexity/llama-3.1-sonar-huge-128k-online",
"perplexity/llama-3.1-sonar-large-128k-chat",
"perplexity/llama-3.1-sonar-large-128k-online",
"perplexity/llama-3.1-sonar-small-128k-chat",
"perplexity/llama-3.1-sonar-small-128k-online",
"perplexity/sonar",
"perplexity/sonar-reasoning",
"pygmalionai/mythalion-13b",
"qwen/qvq-72b-preview",
"qwen/qwen-2-72b-instruct",
"qwen/qwen-2-7b-instruct",
"qwen/qwen-2-7b-instruct:free",
"qwen/qwen-2-vl-72b-instruct",
"qwen/qwen-2-vl-7b-instruct",
"qwen/qwen-2.5-72b-instruct",
"qwen/qwen-2.5-7b-instruct",
"qwen/qwen-2.5-coder-32b-instruct",
"qwen/qwen-max",
"qwen/qwen-plus",
"qwen/qwen-turbo",
"qwen/qwen-vl-plus:free",
"qwen/qwen2.5-vl-72b-instruct:free",
"qwen/qwq-32b-preview",
"raifle/sorcererlm-8x22b",
"sao10k/fimbulvetr-11b-v2",
"sao10k/l3-euryale-70b",
"sao10k/l3-lunaris-8b",
"sao10k/l3.1-70b-hanami-x1",
"sao10k/l3.1-euryale-70b",
"sao10k/l3.3-euryale-70b",
"sophosympatheia/midnight-rose-70b",
"sophosympatheia/rogue-rose-103b-v0.2:free",
"teknium/openhermes-2.5-mistral-7b",
"thedrummer/rocinante-12b",
"thedrummer/unslopnemo-12b",
"undi95/remm-slerp-l2-13b",
"undi95/toppy-m-7b",
"undi95/toppy-m-7b:free",
"x-ai/grok-2-1212",
"x-ai/grok-2-vision-1212",
"x-ai/grok-beta",
"x-ai/grok-vision-beta",
"xwin-lm/xwin-lm-70b",
}

View File

@@ -16,10 +16,12 @@ import (
var ModelList = []string{
"gemini-pro", "gemini-pro-vision",
"gemini-1.5-pro-001", "gemini-1.5-flash-001",
"gemini-1.5-pro-002", "gemini-1.5-flash-002",
"gemini-2.0-flash-exp",
"gemini-2.0-flash-thinking-exp", "gemini-2.0-flash-thinking-exp-01-21",
"gemini-exp-1206",
"gemini-1.5-pro-001", "gemini-1.5-pro-002",
"gemini-1.5-flash-001", "gemini-1.5-flash-002",
"gemini-2.0-flash-exp", "gemini-2.0-flash-001",
"gemini-2.0-flash-lite-preview-02-05",
"gemini-2.0-flash-thinking-exp-01-21",
}
type Adaptor struct {

View File

@@ -1,5 +1,14 @@
package xai
//https://console.x.ai/
var ModelList = []string{
"grok-2",
"grok-vision-beta",
"grok-2-vision-1212",
"grok-2-vision",
"grok-2-vision-latest",
"grok-2-1212",
"grok-2-latest",
"grok-beta",
}

View File

@@ -1,12 +1,10 @@
package xunfei
var ModelList = []string{
"SparkDesk",
"SparkDesk-v1.1",
"SparkDesk-v2.1",
"SparkDesk-v3.1",
"SparkDesk-v3.1-128K",
"SparkDesk-v3.5",
"SparkDesk-v3.5-32K",
"SparkDesk-v4.0",
"Spark-Lite",
"Spark-Pro",
"Spark-Pro-128K",
"Spark-Max",
"Spark-Max-32K",
"Spark-4.0-Ultra",
}

View File

@@ -0,0 +1,97 @@
package xunfei
import (
"fmt"
"strings"
)
// https://www.xfyun.cn/doc/spark/Web.html#_1-%E6%8E%A5%E5%8F%A3%E8%AF%B4%E6%98%8E
//Spark4.0 Ultra 请求地址对应的domain参数为4.0Ultra
//
//wss://spark-api.xf-yun.com/v4.0/chat
//Spark Max-32K请求地址对应的domain参数为max-32k
//
//wss://spark-api.xf-yun.com/chat/max-32k
//Spark Max请求地址对应的domain参数为generalv3.5
//
//wss://spark-api.xf-yun.com/v3.5/chat
//Spark Pro-128K请求地址对应的domain参数为pro-128k
//
// wss://spark-api.xf-yun.com/chat/pro-128k
//Spark Pro请求地址对应的domain参数为generalv3
//
//wss://spark-api.xf-yun.com/v3.1/chat
//Spark Lite请求地址对应的domain参数为lite
//
//wss://spark-api.xf-yun.com/v1.1/chat
// Lite、Pro、Pro-128K、Max、Max-32K和4.0 Ultra
func parseAPIVersionByModelName(modelName string) string {
apiVersion := modelName2APIVersion(modelName)
if apiVersion != "" {
return apiVersion
}
index := strings.IndexAny(modelName, "-")
if index != -1 {
return modelName[index+1:]
}
return ""
}
func modelName2APIVersion(modelName string) string {
switch modelName {
case "Spark-Lite":
return "v1.1"
case "Spark-Pro":
return "v3.1"
case "Spark-Pro-128K":
return "v3.1-128K"
case "Spark-Max":
return "v3.5"
case "Spark-Max-32K":
return "v3.5-32K"
case "Spark-4.0-Ultra":
return "v4.0"
}
return ""
}
// https://www.xfyun.cn/doc/spark/Web.html#_1-%E6%8E%A5%E5%8F%A3%E8%AF%B4%E6%98%8E
func apiVersion2domain(apiVersion string) string {
switch apiVersion {
case "v1.1":
return "lite"
case "v2.1":
return "generalv2"
case "v3.1":
return "generalv3"
case "v3.1-128K":
return "pro-128k"
case "v3.5":
return "generalv3.5"
case "v3.5-32K":
return "max-32k"
case "v4.0":
return "4.0Ultra"
}
return "general" + apiVersion
}
func getXunfeiAuthUrl(apiVersion string, apiKey string, apiSecret string) (string, string) {
var authUrl string
domain := apiVersion2domain(apiVersion)
switch apiVersion {
case "v3.1-128K":
authUrl = buildXunfeiAuthUrl(fmt.Sprintf("wss://spark-api.xf-yun.com/chat/pro-128k"), apiKey, apiSecret)
break
case "v3.5-32K":
authUrl = buildXunfeiAuthUrl(fmt.Sprintf("wss://spark-api.xf-yun.com/chat/max-32k"), apiKey, apiSecret)
break
default:
authUrl = buildXunfeiAuthUrl(fmt.Sprintf("wss://spark-api.xf-yun.com/%s/chat", apiVersion), apiKey, apiSecret)
}
return domain, authUrl
}
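
The helpers above only translate names, so they are easy to exercise in isolation. A minimal table-driven test sketch, assuming it sits in the same xunfei package (buildXunfeiAuthUrl, which signs the websocket URL with the key and secret, is deliberately left out):

package xunfei

import "testing"

func TestSparkModelMapping(t *testing.T) {
	cases := []struct {
		model   string
		version string
		domain  string
	}{
		{"Spark-Lite", "v1.1", "lite"},
		{"Spark-Pro-128K", "v3.1-128K", "pro-128k"},
		{"Spark-Max-32K", "v3.5-32K", "max-32k"},
		{"Spark-4.0-Ultra", "v4.0", "4.0Ultra"},
		// legacy names still fall back to the suffix-based parsing
		{"SparkDesk-v3.5", "v3.5", "generalv3.5"},
	}
	for _, c := range cases {
		version := parseAPIVersionByModelName(c.model)
		if version != c.version {
			t.Errorf("version for %s: got %s, want %s", c.model, version, c.version)
		}
		if domain := apiVersion2domain(version); domain != c.domain {
			t.Errorf("domain for %s: got %s, want %s", c.model, domain, c.domain)
		}
	}
}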

View File

@@ -15,6 +15,7 @@ import (
"github.com/gin-gonic/gin"
"github.com/gorilla/websocket"
"github.com/songquanpeng/one-api/common"
"github.com/songquanpeng/one-api/common/helper"
"github.com/songquanpeng/one-api/common/logger"
@@ -270,48 +271,3 @@ func xunfeiMakeRequest(textRequest model.GeneralOpenAIRequest, domain, authUrl,
return dataChan, stopChan, nil
}
func parseAPIVersionByModelName(modelName string) string {
index := strings.IndexAny(modelName, "-")
if index != -1 {
return modelName[index+1:]
}
return ""
}
// https://www.xfyun.cn/doc/spark/Web.html#_1-%E6%8E%A5%E5%8F%A3%E8%AF%B4%E6%98%8E
func apiVersion2domain(apiVersion string) string {
switch apiVersion {
case "v1.1":
return "lite"
case "v2.1":
return "generalv2"
case "v3.1":
return "generalv3"
case "v3.1-128K":
return "pro-128k"
case "v3.5":
return "generalv3.5"
case "v3.5-32K":
return "max-32k"
case "v4.0":
return "4.0Ultra"
}
return "general" + apiVersion
}
func getXunfeiAuthUrl(apiVersion string, apiKey string, apiSecret string) (string, string) {
var authUrl string
domain := apiVersion2domain(apiVersion)
switch apiVersion {
case "v3.1-128K":
authUrl = buildXunfeiAuthUrl(fmt.Sprintf("wss://spark-api.xf-yun.com/chat/pro-128k"), apiKey, apiSecret)
break
case "v3.5-32K":
authUrl = buildXunfeiAuthUrl(fmt.Sprintf("wss://spark-api.xf-yun.com/chat/max-32k"), apiKey, apiSecret)
break
default:
authUrl = buildXunfeiAuthUrl(fmt.Sprintf("wss://spark-api.xf-yun.com/%s/chat", apiVersion), apiKey, apiSecret)
}
return domain, authUrl
}

View File

@@ -0,0 +1,12 @@
package xunfeiv2
// https://www.xfyun.cn/doc/spark/HTTP%E8%B0%83%E7%94%A8%E6%96%87%E6%A1%A3.html#_3-%E8%AF%B7%E6%B1%82%E8%AF%B4%E6%98%8E
var ModelList = []string{
"lite",
"generalv3",
"pro-128k",
"generalv3.5",
"max-32k",
"4.0Ultra",
}

View File

@@ -20,6 +20,6 @@ const (
VertexAI
Proxy
Replicate
CozeV3
Dummy // this one is only for count, do not add any channel after this
)

View File

@@ -3,8 +3,10 @@ package ratio
import (
"encoding/json"
"github.com/songquanpeng/one-api/common/logger"
"sync"
)
var groupRatioLock sync.RWMutex
var GroupRatio = map[string]float64{
"default": 1,
"vip": 1,
@@ -20,11 +22,15 @@ func GroupRatio2JSONString() string {
}
func UpdateGroupRatioByJSONString(jsonStr string) error {
groupRatioLock.Lock()
defer groupRatioLock.Unlock()
GroupRatio = make(map[string]float64)
return json.Unmarshal([]byte(jsonStr), &GroupRatio)
}
func GetGroupRatio(name string) float64 {
groupRatioLock.RLock()
defer groupRatioLock.RUnlock()
ratio, ok := GroupRatio[name]
if !ok {
logger.SysError("group ratio not found: " + name)
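
The new groupRatioLock matters because UpdateGroupRatioByJSONString replaces the whole map while request goroutines are reading it through GetGroupRatio; an unsynchronized concurrent map read and write crashes the process. A standalone sketch of the same pattern with hypothetical stand-in names (getGroupRatio and updateGroupRatio are not the package's functions); dropping the RWMutex makes go run -race flag it immediately:

package main

import (
	"fmt"
	"sync"
)

var (
	groupRatioLock sync.RWMutex
	groupRatio     = map[string]float64{"default": 1, "vip": 1}
)

// getGroupRatio takes the read lock, so concurrent request goroutines do not block each other.
func getGroupRatio(name string) float64 {
	groupRatioLock.RLock()
	defer groupRatioLock.RUnlock()
	return groupRatio[name]
}

// updateGroupRatio swaps the map under the write lock, like the JSON-driven update above.
func updateGroupRatio(m map[string]float64) {
	groupRatioLock.Lock()
	defer groupRatioLock.Unlock()
	groupRatio = m
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ { // readers: what every relay request does
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = getGroupRatio("vip")
		}()
	}
	wg.Add(1)
	go func() { // writer: what the option-update path does
		defer wg.Done()
		updateGroupRatio(map[string]float64{"default": 1, "vip": 2})
	}()
	wg.Wait()
	fmt.Println("vip ratio:", getGroupRatio("vip"))
}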

View File

@@ -4,6 +4,7 @@ import (
"encoding/json"
"fmt"
"strings"
"sync"
"github.com/songquanpeng/one-api/common/logger"
)
@@ -15,6 +16,8 @@ const (
RMB = USD / USD2RMB
)
var modelRatioLock sync.RWMutex
// ModelRatio
// https://platform.openai.com/docs/models/model-endpoint-compatibility
// https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Blfmc9dlf
@@ -24,6 +27,7 @@ const (
var ModelRatio = map[string]float64{
// https://openai.com/pricing
"gpt-4": 15,
"gpt-4.1": 15,
"gpt-4-0314": 15,
"gpt-4-0613": 15,
"gpt-4-32k": 30,
@@ -56,6 +60,8 @@ var ModelRatio = map[string]float64{
"o1-preview-2024-09-12": 7.5,
"o1-mini": 1.5, // $3.00 / 1M input tokens
"o1-mini-2024-09-12": 1.5,
"o3-mini": 1.5, // $3.00 / 1M input tokens
"o3-mini-2025-01-31": 1.5,
"davinci-002": 1, // $0.002 / 1K tokens
"babbage-002": 0.2, // $0.0004 / 1K tokens
"text-ada-001": 0.2,
@@ -66,6 +72,8 @@ var ModelRatio = map[string]float64{
"text-davinci-edit-001": 10,
"code-davinci-edit-001": 10,
"whisper-1": 15, // $0.006 / minute -> $0.006 / 150 words -> $0.006 / 200 tokens -> $0.03 / 1k tokens
"gpt-4o-mini-transcribe": 15, // $0.006 / minute -> $0.006 / 150 words -> $0.006 / 200 tokens -> $0.03 / 1k tokens
"gpt-4o-transcribe": 15, // $0.006 / minute -> $0.006 / 150 words -> $0.006 / 200 tokens -> $0.03 / 1k tokens
"tts-1": 7.5, // $0.015 / 1K characters
"tts-1-1106": 7.5,
"tts-1-hd": 15, // $0.030 / 1K characters
@@ -88,11 +96,11 @@ var ModelRatio = map[string]float64{
"claude-2.1": 8.0 / 1000 * USD,
"claude-3-haiku-20240307": 0.25 / 1000 * USD,
"claude-3-5-haiku-20241022": 1.0 / 1000 * USD,
"claude-3-5-haiku-latest": 1.0 / 1000 * USD,
"claude-3-5-haiku-latest": 1.0 / 1000 * USD,
"claude-3-sonnet-20240229": 3.0 / 1000 * USD,
"claude-3-5-sonnet-20240620": 3.0 / 1000 * USD,
"claude-3-5-sonnet-20241022": 3.0 / 1000 * USD,
"claude-3-5-sonnet-latest" : 3.0 / 1000 * USD,
"claude-3-5-sonnet-latest": 3.0 / 1000 * USD,
"claude-3-opus-20240229": 15.0 / 1000 * USD,
// https://cloud.baidu.com/doc/WENXINWORKSHOP/s/hlrk4akp7
"ERNIE-4.0-8K": 0.120 * RMB,
@@ -112,15 +120,24 @@ var ModelRatio = map[string]float64{
"bge-large-en": 0.002 * RMB,
"tao-8k": 0.002 * RMB,
// https://ai.google.dev/pricing
"gemini-pro": 1, // $0.00025 / 1k characters -> $0.001 / 1k tokens
"gemini-1.0-pro": 1,
"gemini-1.5-pro": 1,
"gemini-1.5-pro-001": 1,
"gemini-1.5-flash": 1,
"gemini-1.5-flash-001": 1,
"gemini-2.0-flash-exp": 1,
"gemini-2.0-flash-thinking-exp": 1,
"gemini-2.0-flash-thinking-exp-01-21": 1,
// https://cloud.google.com/vertex-ai/generative-ai/pricing
// "gemma-2-2b-it": 0,
// "gemma-2-9b-it": 0,
// "gemma-2-27b-it": 0,
"gemini-pro": 0.25 * MILLI_USD, // $0.00025 / 1k characters -> $0.001 / 1k tokens
"gemini-1.0-pro": 0.125 * MILLI_USD,
"gemini-1.5-pro": 1.25 * MILLI_USD,
"gemini-1.5-pro-001": 1.25 * MILLI_USD,
"gemini-1.5-pro-experimental": 1.25 * MILLI_USD,
"gemini-1.5-flash": 0.075 * MILLI_USD,
"gemini-1.5-flash-001": 0.075 * MILLI_USD,
"gemini-1.5-flash-8b": 0.0375 * MILLI_USD,
"gemini-2.0-flash-exp": 0.075 * MILLI_USD,
"gemini-2.0-flash": 0.15 * MILLI_USD,
"gemini-2.0-flash-001": 0.15 * MILLI_USD,
"gemini-2.0-flash-lite-preview-02-05": 0.075 * MILLI_USD,
"gemini-2.0-flash-thinking-exp-01-21": 0.075 * MILLI_USD,
"gemini-2.0-pro-exp-02-05": 1.25 * MILLI_USD,
"aqa": 1,
// https://open.bigmodel.cn/pricing
"glm-zero-preview": 0.01 * RMB,
@@ -147,91 +164,105 @@ var ModelRatio = map[string]float64{
"embedding-2": 0.0005 * RMB,
"embedding-3": 0.0005 * RMB,
// https://help.aliyun.com/zh/dashscope/developer-reference/tongyi-thousand-questions-metering-and-billing
"qwen-turbo": 1.4286, // ¥0.02 / 1k tokens
"qwen-turbo-latest": 1.4286,
"qwen-plus": 1.4286,
"qwen-plus-latest": 1.4286,
"qwen-max": 1.4286,
"qwen-max-latest": 1.4286,
"qwen-max-longcontext": 1.4286,
"qwen-vl-max": 1.4286,
"qwen-vl-max-latest": 1.4286,
"qwen-vl-plus": 1.4286,
"qwen-vl-plus-latest": 1.4286,
"qwen-vl-ocr": 1.4286,
"qwen-vl-ocr-latest": 1.4286,
"qwen-audio-turbo": 1.4286,
"qwen-math-plus": 1.4286,
"qwen-math-plus-latest": 1.4286,
"qwen-math-turbo": 1.4286,
"qwen-math-turbo-latest": 1.4286,
"qwen-coder-plus": 1.4286,
"qwen-coder-plus-latest": 1.4286,
"qwen-coder-turbo": 1.4286,
"qwen-coder-turbo-latest": 1.4286,
"qwq-32b-preview": 1.4286,
"qwen2.5-72b-instruct": 1.4286,
"qwen2.5-32b-instruct": 1.4286,
"qwen2.5-14b-instruct": 1.4286,
"qwen2.5-7b-instruct": 1.4286,
"qwen2.5-3b-instruct": 1.4286,
"qwen2.5-1.5b-instruct": 1.4286,
"qwen2.5-0.5b-instruct": 1.4286,
"qwen2-72b-instruct": 1.4286,
"qwen2-57b-a14b-instruct": 1.4286,
"qwen2-7b-instruct": 1.4286,
"qwen2-1.5b-instruct": 1.4286,
"qwen2-0.5b-instruct": 1.4286,
"qwen1.5-110b-chat": 1.4286,
"qwen1.5-72b-chat": 1.4286,
"qwen1.5-32b-chat": 1.4286,
"qwen1.5-14b-chat": 1.4286,
"qwen1.5-7b-chat": 1.4286,
"qwen1.5-1.8b-chat": 1.4286,
"qwen1.5-0.5b-chat": 1.4286,
"qwen-72b-chat": 1.4286,
"qwen-14b-chat": 1.4286,
"qwen-7b-chat": 1.4286,
"qwen-1.8b-chat": 1.4286,
"qwen-1.8b-longcontext-chat": 1.4286,
"qwen2-vl-7b-instruct": 1.4286,
"qwen2-vl-2b-instruct": 1.4286,
"qwen-vl-v1": 1.4286,
"qwen-vl-chat-v1": 1.4286,
"qwen2-audio-instruct": 1.4286,
"qwen-audio-chat": 1.4286,
"qwen2.5-math-72b-instruct": 1.4286,
"qwen2.5-math-7b-instruct": 1.4286,
"qwen2.5-math-1.5b-instruct": 1.4286,
"qwen2-math-72b-instruct": 1.4286,
"qwen2-math-7b-instruct": 1.4286,
"qwen2-math-1.5b-instruct": 1.4286,
"qwen2.5-coder-32b-instruct": 1.4286,
"qwen2.5-coder-14b-instruct": 1.4286,
"qwen2.5-coder-7b-instruct": 1.4286,
"qwen2.5-coder-3b-instruct": 1.4286,
"qwen2.5-coder-1.5b-instruct": 1.4286,
"qwen2.5-coder-0.5b-instruct": 1.4286,
"text-embedding-v1": 0.05, // ¥0.0007 / 1k tokens
"text-embedding-v3": 0.05,
"text-embedding-v2": 0.05,
"text-embedding-async-v2": 0.05,
"text-embedding-async-v1": 0.05,
"ali-stable-diffusion-xl": 8.00,
"ali-stable-diffusion-v1.5": 8.00,
"wanx-v1": 8.00,
"SparkDesk": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v1.1": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v2.1": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v3.1": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v3.1-128K": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v3.5": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v3.5-32K": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v4.0": 1.2858, // ¥0.018 / 1k tokens
"360GPT_S2_V9": 0.8572, // ¥0.012 / 1k tokens
"embedding-bert-512-v1": 0.0715, // ¥0.001 / 1k tokens
"embedding_s1_v1": 0.0715, // ¥0.001 / 1k tokens
"semantic_similarity_s1_v1": 0.0715, // ¥0.001 / 1k tokens
"qwen-turbo": 0.0003 * RMB,
"qwen-turbo-latest": 0.0003 * RMB,
"qwen-plus": 0.0008 * RMB,
"qwen-plus-latest": 0.0008 * RMB,
"qwen-max": 0.0024 * RMB,
"qwen-max-latest": 0.0024 * RMB,
"qwen-max-longcontext": 0.0005 * RMB,
"qwen-vl-max": 0.003 * RMB,
"qwen-vl-max-latest": 0.003 * RMB,
"qwen-vl-plus": 0.0015 * RMB,
"qwen-vl-plus-latest": 0.0015 * RMB,
"qwen-vl-ocr": 0.005 * RMB,
"qwen-vl-ocr-latest": 0.005 * RMB,
"qwen-audio-turbo": 1.4286,
"qwen-math-plus": 0.004 * RMB,
"qwen-math-plus-latest": 0.004 * RMB,
"qwen-math-turbo": 0.002 * RMB,
"qwen-math-turbo-latest": 0.002 * RMB,
"qwen-coder-plus": 0.0035 * RMB,
"qwen-coder-plus-latest": 0.0035 * RMB,
"qwen-coder-turbo": 0.002 * RMB,
"qwen-coder-turbo-latest": 0.002 * RMB,
"qwen-mt-plus": 0.015 * RMB,
"qwen-mt-turbo": 0.001 * RMB,
"qwq-32b-preview": 0.002 * RMB,
"qwen2.5-72b-instruct": 0.004 * RMB,
"qwen2.5-32b-instruct": 0.03 * RMB,
"qwen2.5-14b-instruct": 0.001 * RMB,
"qwen2.5-7b-instruct": 0.0005 * RMB,
"qwen2.5-3b-instruct": 0.006 * RMB,
"qwen2.5-1.5b-instruct": 0.0003 * RMB,
"qwen2.5-0.5b-instruct": 0.0003 * RMB,
"qwen2-72b-instruct": 0.004 * RMB,
"qwen2-57b-a14b-instruct": 0.0035 * RMB,
"qwen2-7b-instruct": 0.001 * RMB,
"qwen2-1.5b-instruct": 0.001 * RMB,
"qwen2-0.5b-instruct": 0.001 * RMB,
"qwen1.5-110b-chat": 0.007 * RMB,
"qwen1.5-72b-chat": 0.005 * RMB,
"qwen1.5-32b-chat": 0.0035 * RMB,
"qwen1.5-14b-chat": 0.002 * RMB,
"qwen1.5-7b-chat": 0.001 * RMB,
"qwen1.5-1.8b-chat": 0.001 * RMB,
"qwen1.5-0.5b-chat": 0.001 * RMB,
"qwen-72b-chat": 0.02 * RMB,
"qwen-14b-chat": 0.008 * RMB,
"qwen-7b-chat": 0.006 * RMB,
"qwen-1.8b-chat": 0.006 * RMB,
"qwen-1.8b-longcontext-chat": 0.006 * RMB,
"qvq-72b-preview": 0.012 * RMB,
"qwen2.5-vl-72b-instruct": 0.016 * RMB,
"qwen2.5-vl-7b-instruct": 0.002 * RMB,
"qwen2.5-vl-3b-instruct": 0.0012 * RMB,
"qwen2-vl-7b-instruct": 0.016 * RMB,
"qwen2-vl-2b-instruct": 0.002 * RMB,
"qwen-vl-v1": 0.002 * RMB,
"qwen-vl-chat-v1": 0.002 * RMB,
"qwen2-audio-instruct": 0.002 * RMB,
"qwen-audio-chat": 0.002 * RMB,
"qwen2.5-math-72b-instruct": 0.004 * RMB,
"qwen2.5-math-7b-instruct": 0.001 * RMB,
"qwen2.5-math-1.5b-instruct": 0.001 * RMB,
"qwen2-math-72b-instruct": 0.004 * RMB,
"qwen2-math-7b-instruct": 0.001 * RMB,
"qwen2-math-1.5b-instruct": 0.001 * RMB,
"qwen2.5-coder-32b-instruct": 0.002 * RMB,
"qwen2.5-coder-14b-instruct": 0.002 * RMB,
"qwen2.5-coder-7b-instruct": 0.001 * RMB,
"qwen2.5-coder-3b-instruct": 0.001 * RMB,
"qwen2.5-coder-1.5b-instruct": 0.001 * RMB,
"qwen2.5-coder-0.5b-instruct": 0.001 * RMB,
"text-embedding-v1": 0.0007 * RMB, // ¥0.0007 / 1k tokens
"text-embedding-v3": 0.0007 * RMB,
"text-embedding-v2": 0.0007 * RMB,
"text-embedding-async-v2": 0.0007 * RMB,
"text-embedding-async-v1": 0.0007 * RMB,
"ali-stable-diffusion-xl": 8.00,
"ali-stable-diffusion-v1.5": 8.00,
"wanx-v1": 8.00,
"deepseek-r1": 0.002 * RMB,
"deepseek-v3": 0.001 * RMB,
"deepseek-r1-distill-qwen-1.5b": 0.001 * RMB,
"deepseek-r1-distill-qwen-7b": 0.0005 * RMB,
"deepseek-r1-distill-qwen-14b": 0.001 * RMB,
"deepseek-r1-distill-qwen-32b": 0.002 * RMB,
"deepseek-r1-distill-llama-8b": 0.0005 * RMB,
"deepseek-r1-distill-llama-70b": 0.004 * RMB,
"SparkDesk": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v1.1": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v2.1": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v3.1": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v3.1-128K": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v3.5": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v3.5-32K": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v4.0": 1.2858, // ¥0.018 / 1k tokens
"360GPT_S2_V9": 0.8572, // ¥0.012 / 1k tokens
"embedding-bert-512-v1": 0.0715, // ¥0.001 / 1k tokens
"embedding_s1_v1": 0.0715, // ¥0.001 / 1k tokens
"semantic_similarity_s1_v1": 0.0715, // ¥0.001 / 1k tokens
// https://cloud.tencent.com/document/product/1729/97731#e0e6be58-60c8-469f-bdeb-6c264ce3b4d0
"hunyuan-turbo": 0.015 * RMB,
"hunyuan-large": 0.004 * RMB,
@@ -359,6 +390,238 @@ var ModelRatio = map[string]float64{
"mistralai/mistral-7b-instruct-v0.2": 0.050 * USD,
"mistralai/mistral-7b-v0.1": 0.050 * USD,
"mistralai/mixtral-8x7b-instruct-v0.1": 0.300 * USD,
//https://openrouter.ai/models
"01-ai/yi-large": 1.5,
"aetherwiing/mn-starcannon-12b": 0.6,
"ai21/jamba-1-5-large": 4.0,
"ai21/jamba-1-5-mini": 0.2,
"ai21/jamba-instruct": 0.35,
"aion-labs/aion-1.0": 6.0,
"aion-labs/aion-1.0-mini": 1.2,
"aion-labs/aion-rp-llama-3.1-8b": 0.1,
"allenai/llama-3.1-tulu-3-405b": 5.0,
"alpindale/goliath-120b": 4.6875,
"alpindale/magnum-72b": 1.125,
"amazon/nova-lite-v1": 0.12,
"amazon/nova-micro-v1": 0.07,
"amazon/nova-pro-v1": 1.6,
"anthracite-org/magnum-v2-72b": 1.5,
"anthracite-org/magnum-v4-72b": 1.125,
"anthropic/claude-2": 12.0,
"anthropic/claude-2.0": 12.0,
"anthropic/claude-2.0:beta": 12.0,
"anthropic/claude-2.1": 12.0,
"anthropic/claude-2.1:beta": 12.0,
"anthropic/claude-2:beta": 12.0,
"anthropic/claude-3-haiku": 0.625,
"anthropic/claude-3-haiku:beta": 0.625,
"anthropic/claude-3-opus": 37.5,
"anthropic/claude-3-opus:beta": 37.5,
"anthropic/claude-3-sonnet": 7.5,
"anthropic/claude-3-sonnet:beta": 7.5,
"anthropic/claude-3.5-haiku": 2.0,
"anthropic/claude-3.5-haiku-20241022": 2.0,
"anthropic/claude-3.5-haiku-20241022:beta": 2.0,
"anthropic/claude-3.5-haiku:beta": 2.0,
"anthropic/claude-3.5-sonnet": 7.5,
"anthropic/claude-3.5-sonnet-20240620": 7.5,
"anthropic/claude-3.5-sonnet-20240620:beta": 7.5,
"anthropic/claude-3.5-sonnet:beta": 7.5,
"cognitivecomputations/dolphin-mixtral-8x22b": 0.45,
"cognitivecomputations/dolphin-mixtral-8x7b": 0.25,
"cohere/command": 0.95,
"cohere/command-r": 0.7125,
"cohere/command-r-03-2024": 0.7125,
"cohere/command-r-08-2024": 0.285,
"cohere/command-r-plus": 7.125,
"cohere/command-r-plus-04-2024": 7.125,
"cohere/command-r-plus-08-2024": 4.75,
"cohere/command-r7b-12-2024": 0.075,
"databricks/dbrx-instruct": 0.6,
"deepseek/deepseek-chat": 0.445,
"deepseek/deepseek-chat-v2.5": 1.0,
"deepseek/deepseek-chat:free": 0.0,
"deepseek/deepseek-r1": 1.2,
"deepseek/deepseek-r1-distill-llama-70b": 0.345,
"deepseek/deepseek-r1-distill-llama-70b:free": 0.0,
"deepseek/deepseek-r1-distill-llama-8b": 0.02,
"deepseek/deepseek-r1-distill-qwen-1.5b": 0.09,
"deepseek/deepseek-r1-distill-qwen-14b": 0.075,
"deepseek/deepseek-r1-distill-qwen-32b": 0.09,
"deepseek/deepseek-r1:free": 0.0,
"eva-unit-01/eva-llama-3.33-70b": 3.0,
"eva-unit-01/eva-qwen-2.5-32b": 1.7,
"eva-unit-01/eva-qwen-2.5-72b": 3.0,
"google/gemini-2.0-flash-001": 0.2,
"google/gemini-2.0-flash-exp:free": 0.0,
"google/gemini-2.0-flash-lite-preview-02-05:free": 0.0,
"google/gemini-2.0-flash-thinking-exp-1219:free": 0.0,
"google/gemini-2.0-flash-thinking-exp:free": 0.0,
"google/gemini-2.0-pro-exp-02-05:free": 0.0,
"google/gemini-exp-1206:free": 0.0,
"google/gemini-flash-1.5": 0.15,
"google/gemini-flash-1.5-8b": 0.075,
"google/gemini-flash-1.5-8b-exp": 0.0,
"google/gemini-pro": 0.75,
"google/gemini-pro-1.5": 2.5,
"google/gemini-pro-vision": 0.75,
"google/gemma-2-27b-it": 0.135,
"google/gemma-2-9b-it": 0.03,
"google/gemma-2-9b-it:free": 0.0,
"google/gemma-7b-it": 0.075,
"google/learnlm-1.5-pro-experimental:free": 0.0,
"google/palm-2-chat-bison": 1.0,
"google/palm-2-chat-bison-32k": 1.0,
"google/palm-2-codechat-bison": 1.0,
"google/palm-2-codechat-bison-32k": 1.0,
"gryphe/mythomax-l2-13b": 0.0325,
"gryphe/mythomax-l2-13b:free": 0.0,
"huggingfaceh4/zephyr-7b-beta:free": 0.0,
"infermatic/mn-inferor-12b": 0.6,
"inflection/inflection-3-pi": 5.0,
"inflection/inflection-3-productivity": 5.0,
"jondurbin/airoboros-l2-70b": 0.25,
"liquid/lfm-3b": 0.01,
"liquid/lfm-40b": 0.075,
"liquid/lfm-7b": 0.005,
"mancer/weaver": 1.125,
"meta-llama/llama-2-13b-chat": 0.11,
"meta-llama/llama-2-70b-chat": 0.45,
"meta-llama/llama-3-70b-instruct": 0.2,
"meta-llama/llama-3-8b-instruct": 0.03,
"meta-llama/llama-3-8b-instruct:free": 0.0,
"meta-llama/llama-3.1-405b": 1.0,
"meta-llama/llama-3.1-405b-instruct": 0.4,
"meta-llama/llama-3.1-70b-instruct": 0.15,
"meta-llama/llama-3.1-8b-instruct": 0.025,
"meta-llama/llama-3.2-11b-vision-instruct": 0.0275,
"meta-llama/llama-3.2-11b-vision-instruct:free": 0.0,
"meta-llama/llama-3.2-1b-instruct": 0.005,
"meta-llama/llama-3.2-3b-instruct": 0.0125,
"meta-llama/llama-3.2-90b-vision-instruct": 0.8,
"meta-llama/llama-3.3-70b-instruct": 0.15,
"meta-llama/llama-3.3-70b-instruct:free": 0.0,
"meta-llama/llama-guard-2-8b": 0.1,
"microsoft/phi-3-medium-128k-instruct": 0.5,
"microsoft/phi-3-medium-128k-instruct:free": 0.0,
"microsoft/phi-3-mini-128k-instruct": 0.05,
"microsoft/phi-3-mini-128k-instruct:free": 0.0,
"microsoft/phi-3.5-mini-128k-instruct": 0.05,
"microsoft/phi-4": 0.07,
"microsoft/wizardlm-2-7b": 0.035,
"microsoft/wizardlm-2-8x22b": 0.25,
"minimax/minimax-01": 0.55,
"mistralai/codestral-2501": 0.45,
"mistralai/codestral-mamba": 0.125,
"mistralai/ministral-3b": 0.02,
"mistralai/ministral-8b": 0.05,
"mistralai/mistral-7b-instruct": 0.0275,
"mistralai/mistral-7b-instruct-v0.1": 0.1,
"mistralai/mistral-7b-instruct-v0.3": 0.0275,
"mistralai/mistral-7b-instruct:free": 0.0,
"mistralai/mistral-large": 3.0,
"mistralai/mistral-large-2407": 3.0,
"mistralai/mistral-large-2411": 3.0,
"mistralai/mistral-medium": 4.05,
"mistralai/mistral-nemo": 0.04,
"mistralai/mistral-nemo:free": 0.0,
"mistralai/mistral-small": 0.3,
"mistralai/mistral-small-24b-instruct-2501": 0.07,
"mistralai/mistral-small-24b-instruct-2501:free": 0.0,
"mistralai/mistral-tiny": 0.125,
"mistralai/mixtral-8x22b-instruct": 0.45,
"mistralai/mixtral-8x7b": 0.3,
"mistralai/mixtral-8x7b-instruct": 0.12,
"mistralai/pixtral-12b": 0.05,
"mistralai/pixtral-large-2411": 3.0,
"neversleep/llama-3-lumimaid-70b": 2.25,
"neversleep/llama-3-lumimaid-8b": 0.5625,
"neversleep/llama-3-lumimaid-8b:extended": 0.5625,
"neversleep/llama-3.1-lumimaid-70b": 2.25,
"neversleep/llama-3.1-lumimaid-8b": 0.5625,
"neversleep/noromaid-20b": 1.125,
"nothingiisreal/mn-celeste-12b": 0.6,
"nousresearch/hermes-2-pro-llama-3-8b": 0.02,
"nousresearch/hermes-3-llama-3.1-405b": 0.4,
"nousresearch/hermes-3-llama-3.1-70b": 0.15,
"nousresearch/nous-hermes-2-mixtral-8x7b-dpo": 0.3,
"nousresearch/nous-hermes-llama2-13b": 0.085,
"nvidia/llama-3.1-nemotron-70b-instruct": 0.15,
"nvidia/llama-3.1-nemotron-70b-instruct:free": 0.0,
"openai/chatgpt-4o-latest": 7.5,
"openai/gpt-3.5-turbo": 0.75,
"openai/gpt-3.5-turbo-0125": 0.75,
"openai/gpt-3.5-turbo-0613": 1.0,
"openai/gpt-3.5-turbo-1106": 1.0,
"openai/gpt-3.5-turbo-16k": 2.0,
"openai/gpt-3.5-turbo-instruct": 1.0,
"openai/gpt-4": 30.0,
"openai/gpt-4-0314": 30.0,
"openai/gpt-4-1106-preview": 15.0,
"openai/gpt-4-32k": 60.0,
"openai/gpt-4-32k-0314": 60.0,
"openai/gpt-4-turbo": 15.0,
"openai/gpt-4-turbo-preview": 15.0,
"openai/gpt-4o": 5.0,
"openai/gpt-4o-2024-05-13": 7.5,
"openai/gpt-4o-2024-08-06": 5.0,
"openai/gpt-4o-2024-11-20": 5.0,
"openai/gpt-4o-mini": 0.3,
"openai/gpt-4o-mini-2024-07-18": 0.3,
"openai/gpt-4o:extended": 9.0,
"openai/o1": 30.0,
"openai/o1-mini": 2.2,
"openai/o1-mini-2024-09-12": 2.2,
"openai/o1-preview": 30.0,
"openai/o1-preview-2024-09-12": 30.0,
"openai/o3-mini": 2.2,
"openai/o3-mini-high": 2.2,
"openchat/openchat-7b": 0.0275,
"openchat/openchat-7b:free": 0.0,
"openrouter/auto": -500000.0,
"perplexity/llama-3.1-sonar-huge-128k-online": 2.5,
"perplexity/llama-3.1-sonar-large-128k-chat": 0.5,
"perplexity/llama-3.1-sonar-large-128k-online": 0.5,
"perplexity/llama-3.1-sonar-small-128k-chat": 0.1,
"perplexity/llama-3.1-sonar-small-128k-online": 0.1,
"perplexity/sonar": 0.5,
"perplexity/sonar-reasoning": 2.5,
"pygmalionai/mythalion-13b": 0.6,
"qwen/qvq-72b-preview": 0.25,
"qwen/qwen-2-72b-instruct": 0.45,
"qwen/qwen-2-7b-instruct": 0.027,
"qwen/qwen-2-7b-instruct:free": 0.0,
"qwen/qwen-2-vl-72b-instruct": 0.2,
"qwen/qwen-2-vl-7b-instruct": 0.05,
"qwen/qwen-2.5-72b-instruct": 0.2,
"qwen/qwen-2.5-7b-instruct": 0.025,
"qwen/qwen-2.5-coder-32b-instruct": 0.08,
"qwen/qwen-max": 3.2,
"qwen/qwen-plus": 0.6,
"qwen/qwen-turbo": 0.1,
"qwen/qwen-vl-plus:free": 0.0,
"qwen/qwen2.5-vl-72b-instruct:free": 0.0,
"qwen/qwq-32b-preview": 0.09,
"raifle/sorcererlm-8x22b": 2.25,
"sao10k/fimbulvetr-11b-v2": 0.6,
"sao10k/l3-euryale-70b": 0.4,
"sao10k/l3-lunaris-8b": 0.03,
"sao10k/l3.1-70b-hanami-x1": 1.5,
"sao10k/l3.1-euryale-70b": 0.4,
"sao10k/l3.3-euryale-70b": 0.4,
"sophosympatheia/midnight-rose-70b": 0.4,
"sophosympatheia/rogue-rose-103b-v0.2:free": 0.0,
"teknium/openhermes-2.5-mistral-7b": 0.085,
"thedrummer/rocinante-12b": 0.25,
"thedrummer/unslopnemo-12b": 0.25,
"undi95/remm-slerp-l2-13b": 0.6,
"undi95/toppy-m-7b": 0.035,
"undi95/toppy-m-7b:free": 0.0,
"x-ai/grok-2-1212": 5.0,
"x-ai/grok-2-vision-1212": 5.0,
"x-ai/grok-beta": 7.5,
"x-ai/grok-vision-beta": 7.5,
"xwin-lm/xwin-lm-70b": 1.875,
}
var CompletionRatio = map[string]float64{
@@ -366,7 +629,9 @@ var CompletionRatio = map[string]float64{
"llama3-8b-8192(33)": 0.0006 / 0.0003,
"llama3-70b-8192(33)": 0.0035 / 0.00265,
// whisper
"whisper-1": 0, // only count input tokens
"whisper-1": 0, // only count input tokens
"gpt-4o-mini-transcribe": 0,
"gpt-4o-transcribe": 0,
// deepseek
"deepseek-chat": 0.28 / 0.14,
"deepseek-reasoner": 2.19 / 0.55,
@@ -417,11 +682,15 @@ func ModelRatio2JSONString() string {
}
func UpdateModelRatioByJSONString(jsonStr string) error {
modelRatioLock.Lock()
defer modelRatioLock.Unlock()
ModelRatio = make(map[string]float64)
return json.Unmarshal([]byte(jsonStr), &ModelRatio)
}
func GetModelRatio(name string, channelType int) float64 {
modelRatioLock.RLock()
defer modelRatioLock.RUnlock()
if strings.HasPrefix(name, "qwen-") && strings.HasSuffix(name, "-internet") {
name = strings.TrimSuffix(name, "-internet")
}
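
Besides taking the read lock, GetModelRatio still normalizes the qwen search variants before the lookup, so a "-internet" model bills at its base model's ratio. A standalone sketch of just that step (normalizeQwenName is a hypothetical stand-in, not the package's API):

package main

import (
	"fmt"
	"strings"
)

// normalizeQwenName mirrors the prefix/suffix check in GetModelRatio above.
func normalizeQwenName(name string) string {
	if strings.HasPrefix(name, "qwen-") && strings.HasSuffix(name, "-internet") {
		return strings.TrimSuffix(name, "-internet")
	}
	return name
}

func main() {
	fmt.Println(normalizeQwenName("qwen-turbo-internet")) // qwen-turbo
	fmt.Println(normalizeQwenName("qwen-max"))            // unchanged
}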

View File

@@ -48,5 +48,11 @@ const (
SiliconFlow
XAI
Replicate
BaiduV2
XunfeiV2
AliBailian
OpenAICompatible
GeminiOpenAICompatible
CozeV3
Dummy
)

View File

@@ -23,12 +23,16 @@ func ToAPIType(channelType int) int {
apiType = apitype.Tencent
case Gemini:
apiType = apitype.Gemini
case GeminiOpenAICompatible:
apiType = apitype.Gemini
case Ollama:
apiType = apitype.Ollama
case AwsClaude:
apiType = apitype.AwsClaude
case Coze:
apiType = apitype.Coze
case CozeV3:
apiType = apitype.CozeV3
case Cohere:
apiType = apitype.Cohere
case Cloudflare:

View File

@@ -48,6 +48,13 @@ var ChannelBaseURLs = []string{
"https://api.siliconflow.cn", // 44
"https://api.x.ai", // 45
"https://api.replicate.com/v1/models/", // 46
"https://qianfan.baidubce.com", // 47
"https://spark-api-open.xf-yun.com", // 48
"https://dashscope.aliyuncs.com", // 49
"", // 50
"https://generativelanguage.googleapis.com/v1beta/openai/", // 51
"https://api.coze.cn", // 52
}
func init() {
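
ChannelBaseURLs is indexed by channel type, so the six new entries have to line up with the constants 47 through 52 added above; slot 50 stays empty because the OpenAI-compatible channel takes its base URL from the user. The body of init() is cut off in this diff; a guard along the following lines would catch a mismatch at startup (a sketch only, assuming the slice and the Dummy constant live in the same package, not necessarily the actual init body):

func init() {
	// Every channel type below Dummy needs a slot here, even if it is "".
	if len(ChannelBaseURLs) != Dummy {
		panic("channeltype: ChannelBaseURLs length does not match the number of channel types")
	}
}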

View File

@@ -8,6 +8,7 @@ import (
"errors"
"fmt"
"io"
"mime/multipart"
"net/http"
"strings"
@@ -30,8 +31,7 @@ import (
func RelayAudioHelper(c *gin.Context, relayMode int) *relaymodel.ErrorWithStatusCode {
ctx := c.Request.Context()
meta := meta.GetByContext(c)
audioModel := "whisper-1"
audioModel := "gpt-4o-transcribe"
tokenId := c.GetInt(ctxkey.TokenId)
channelType := c.GetInt(ctxkey.Channel)
channelId := c.GetInt(ctxkey.ChannelId)
@@ -124,12 +124,13 @@ func RelayAudioHelper(c *gin.Context, relayMode int) *relaymodel.ErrorWithStatus
fullRequestURL := openai.GetFullRequestURL(baseURL, requestURL, channelType)
if channelType == channeltype.Azure {
apiVersion := meta.Config.APIVersion
deploymentName := c.GetString(ctxkey.ChannelName)
if relayMode == relaymode.AudioTranscription {
// https://learn.microsoft.com/en-us/azure/ai-services/openai/whisper-quickstart?tabs=command-line#rest-api
fullRequestURL = fmt.Sprintf("%s/openai/deployments/%s/audio/transcriptions?api-version=%s", baseURL, audioModel, apiVersion)
fullRequestURL = fmt.Sprintf("%s/openai/deployments/%s/audio/transcriptions?api-version=%s", baseURL, deploymentName, apiVersion)
} else if relayMode == relaymode.AudioSpeech {
// https://learn.microsoft.com/en-us/azure/ai-services/openai/text-to-speech-quickstart?tabs=command-line#rest-api
fullRequestURL = fmt.Sprintf("%s/openai/deployments/%s/audio/speech?api-version=%s", baseURL, audioModel, apiVersion)
fullRequestURL = fmt.Sprintf("%s/openai/deployments/%s/audio/speech?api-version=%s", baseURL, deploymentName, apiVersion)
}
}
@@ -138,8 +139,73 @@ func RelayAudioHelper(c *gin.Context, relayMode int) *relaymodel.ErrorWithStatus
if err != nil {
return openai.ErrorWrapper(err, "new_request_body_failed", http.StatusInternalServerError)
}
c.Request.Body = io.NopCloser(bytes.NewBuffer(requestBody.Bytes()))
responseFormat := c.DefaultPostForm("response_format", "json")
// 处理表单数据
contentType := c.Request.Header.Get("Content-Type")
responseFormat := "json"
var contentTypeWithBoundary string
if strings.Contains(contentType, "multipart/form-data") {
originalBody := requestBody.Bytes()
c.Request.Body = io.NopCloser(bytes.NewBuffer(originalBody))
err = c.Request.ParseMultipartForm(32 << 20) // 32MB 最大内存
if err != nil {
return openai.ErrorWrapper(err, "parse_multipart_form_failed", http.StatusInternalServerError)
}
// 获取响应格式
if format := c.Request.FormValue("response_format"); format != "" {
responseFormat = format
}
requestBody = &bytes.Buffer{}
writer := multipart.NewWriter(requestBody)
// 复制表单字段
for key, values := range c.Request.MultipartForm.Value {
for _, value := range values {
err = writer.WriteField(key, value)
if err != nil {
return openai.ErrorWrapper(err, "write_field_failed", http.StatusInternalServerError)
}
}
}
// 复制文件
for key, fileHeaders := range c.Request.MultipartForm.File {
for _, fileHeader := range fileHeaders {
file, err := fileHeader.Open()
if err != nil {
return openai.ErrorWrapper(err, "open_file_failed", http.StatusInternalServerError)
}
part, err := writer.CreateFormFile(key, fileHeader.Filename)
if err != nil {
file.Close()
return openai.ErrorWrapper(err, "create_form_file_failed", http.StatusInternalServerError)
}
_, err = io.Copy(part, file)
file.Close()
if err != nil {
return openai.ErrorWrapper(err, "copy_file_failed", http.StatusInternalServerError)
}
}
}
// 完成multipart写入
err = writer.Close()
if err != nil {
return openai.ErrorWrapper(err, "close_writer_failed", http.StatusInternalServerError)
}
// 更新Content-Type
contentTypeWithBoundary = writer.FormDataContentType()
c.Request.Header.Set("Content-Type", contentTypeWithBoundary)
} else {
// 对于非表单请求,直接重置请求体
c.Request.Body = io.NopCloser(bytes.NewBuffer(requestBody.Bytes()))
}
req, err := http.NewRequest(c.Request.Method, fullRequestURL, requestBody)
if err != nil {
@@ -151,11 +217,26 @@ func RelayAudioHelper(c *gin.Context, relayMode int) *relaymodel.ErrorWithStatus
apiKey := c.Request.Header.Get("Authorization")
apiKey = strings.TrimPrefix(apiKey, "Bearer ")
req.Header.Set("api-key", apiKey)
req.ContentLength = c.Request.ContentLength
// 确保请求体大小与Content-Length一致
req.ContentLength = int64(requestBody.Len())
} else {
req.Header.Set("Authorization", c.Request.Header.Get("Authorization"))
// 确保请求体大小与Content-Length一致
req.ContentLength = int64(requestBody.Len())
}
// 确保Content-Type正确传递
if strings.Contains(contentType, "multipart/form-data") && c.Request.MultipartForm != nil {
// 对于multipart请求使用我们重建时生成的Content-Type
// 注意此处必须使用writer生成的boundary
if contentTypeWithBoundary != "" {
req.Header.Set("Content-Type", contentTypeWithBoundary)
} else {
req.Header.Set("Content-Type", c.Request.Header.Get("Content-Type"))
}
} else {
req.Header.Set("Content-Type", c.Request.Header.Get("Content-Type"))
}
req.Header.Set("Content-Type", c.Request.Header.Get("Content-Type"))
req.Header.Set("Accept", c.Request.Header.Get("Accept"))
resp, err := client.HTTPClient.Do(req)
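
The block above re-parses the incoming multipart form, copies every field and file into a fresh body, and then forwards the writer's own Content-Type (which carries the new boundary) together with the rebuilt body's length; reusing the original header would reference a boundary that no longer exists in the body. A standalone sketch of that rebuild step (rebuildMultipart and forward are hypothetical helpers, not one-api functions):

package sketch

import (
	"bytes"
	"io"
	"mime/multipart"
	"net/http"
)

// rebuildMultipart copies an already-parsed form into a new body and returns it
// together with the Content-Type (including the fresh boundary) to send upstream.
func rebuildMultipart(form *multipart.Form) (*bytes.Buffer, string, error) {
	body := &bytes.Buffer{}
	writer := multipart.NewWriter(body)
	for key, values := range form.Value {
		for _, value := range values {
			if err := writer.WriteField(key, value); err != nil {
				return nil, "", err
			}
		}
	}
	for key, headers := range form.File {
		for _, fh := range headers {
			src, err := fh.Open()
			if err != nil {
				return nil, "", err
			}
			dst, err := writer.CreateFormFile(key, fh.Filename)
			if err != nil {
				src.Close()
				return nil, "", err
			}
			_, err = io.Copy(dst, src)
			src.Close()
			if err != nil {
				return nil, "", err
			}
		}
	}
	if err := writer.Close(); err != nil {
		return nil, "", err
	}
	return body, writer.FormDataContentType(), nil
}

// forward shows the two fields that must come from the rebuilt body, not the inbound request.
func forward(url string, form *multipart.Form) (*http.Request, error) {
	body, contentType, err := rebuildMultipart(form)
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost, url, body)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", contentType) // boundary from the new writer
	req.ContentLength = int64(body.Len())       // match the rebuilt body
	return req, nil
}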

View File

@@ -38,7 +38,7 @@ func RelayTextHelper(c *gin.Context) *model.ErrorWithStatusCode {
textRequest.Model, _ = getMappedModelName(textRequest.Model, meta.ModelMapping)
meta.ActualModelName = textRequest.Model
// set system prompt if not empty
systemPromptReset := setSystemPrompt(ctx, textRequest, meta.SystemPrompt)
systemPromptReset := setSystemPrompt(ctx, textRequest, meta.ForcedSystemPrompt)
// get model ratio & group ratio
modelRatio := billingratio.GetModelRatio(textRequest.Model, meta.ChannelType)
groupRatio := billingratio.GetGroupRatio(meta.Group)
@@ -88,7 +88,11 @@ func RelayTextHelper(c *gin.Context) *model.ErrorWithStatusCode {
}
func getRequestBody(c *gin.Context, meta *meta.Meta, textRequest *model.GeneralOpenAIRequest, adaptor adaptor.Adaptor) (io.Reader, error) {
if !config.EnforceIncludeUsage && meta.APIType == apitype.OpenAI && meta.OriginModelName == meta.ActualModelName && meta.ChannelType != channeltype.Baichuan {
if !config.EnforceIncludeUsage &&
meta.APIType == apitype.OpenAI &&
meta.OriginModelName == meta.ActualModelName &&
meta.ChannelType != channeltype.Baichuan &&
meta.ForcedSystemPrompt == "" {
// no need to convert request for openai
return c.Request.Body, nil
}

View File

@@ -30,29 +30,29 @@ type Meta struct {
// OriginModelName is the model name from the raw user request
OriginModelName string
// ActualModelName is the model name after mapping
ActualModelName string
RequestURLPath string
PromptTokens int // only for DoResponse
SystemPrompt string
StartTime time.Time
ActualModelName string
RequestURLPath string
PromptTokens int // only for DoResponse
ForcedSystemPrompt string
StartTime time.Time
}
func GetByContext(c *gin.Context) *Meta {
meta := Meta{
Mode: relaymode.GetByPath(c.Request.URL.Path),
ChannelType: c.GetInt(ctxkey.Channel),
ChannelId: c.GetInt(ctxkey.ChannelId),
TokenId: c.GetInt(ctxkey.TokenId),
TokenName: c.GetString(ctxkey.TokenName),
UserId: c.GetInt(ctxkey.Id),
Group: c.GetString(ctxkey.Group),
ModelMapping: c.GetStringMapString(ctxkey.ModelMapping),
OriginModelName: c.GetString(ctxkey.RequestModel),
BaseURL: c.GetString(ctxkey.BaseURL),
APIKey: strings.TrimPrefix(c.Request.Header.Get("Authorization"), "Bearer "),
RequestURLPath: c.Request.URL.String(),
SystemPrompt: c.GetString(ctxkey.SystemPrompt),
StartTime: time.Now(),
Mode: relaymode.GetByPath(c.Request.URL.Path),
ChannelType: c.GetInt(ctxkey.Channel),
ChannelId: c.GetInt(ctxkey.ChannelId),
TokenId: c.GetInt(ctxkey.TokenId),
TokenName: c.GetString(ctxkey.TokenName),
UserId: c.GetInt(ctxkey.Id),
Group: c.GetString(ctxkey.Group),
ModelMapping: c.GetStringMapString(ctxkey.ModelMapping),
OriginModelName: c.GetString(ctxkey.RequestModel),
BaseURL: c.GetString(ctxkey.BaseURL),
APIKey: strings.TrimPrefix(c.Request.Header.Get("Authorization"), "Bearer "),
RequestURLPath: c.Request.URL.String(),
ForcedSystemPrompt: c.GetString(ctxkey.SystemPrompt),
StartTime: time.Now(),
}
cfg, ok := c.Get(ctxkey.Config)
if ok {

View File

@@ -4,4 +4,5 @@ const (
ContentTypeText = "text"
ContentTypeImageURL = "image_url"
ContentTypeInputAudio = "input_audio"
ContentTypeInputFile = "file"
)

View File

@@ -26,6 +26,7 @@ type GeneralOpenAIRequest struct {
Messages []Message `json:"messages,omitempty"`
Model string `json:"model,omitempty"`
Store *bool `json:"store,omitempty"`
ReasoningEffort *string `json:"reasoning_effort,omitempty"`
Metadata any `json:"metadata,omitempty"`
FrequencyPenalty *float64 `json:"frequency_penalty,omitempty"`
LogitBias any `json:"logit_bias,omitempty"`

View File

@@ -1,11 +1,14 @@
package model
import "encoding/json"
type Message struct {
Role string `json:"role,omitempty"`
Content any `json:"content,omitempty"`
Name *string `json:"name,omitempty"`
ToolCalls []Tool `json:"tool_calls,omitempty"`
ToolCallId string `json:"tool_call_id,omitempty"`
Role string `json:"role,omitempty"`
Content any `json:"content,omitempty"`
ReasoningContent any `json:"reasoning_content,omitempty"`
Name *string `json:"name,omitempty"`
ToolCalls []Tool `json:"tool_calls,omitempty"`
ToolCallId string `json:"tool_call_id,omitempty"`
}
func (m Message) IsStringContent() bool {
@@ -37,6 +40,53 @@ func (m Message) StringContent() string {
return ""
}
func (m Message) CozeV3StringContent() string {
content, ok := m.Content.(string)
if ok {
return content
}
contentList, ok := m.Content.([]any)
if ok {
contents := make([]map[string]any, 0)
var contentStr string
for _, contentItem := range contentList {
contentMap, ok := contentItem.(map[string]any)
if !ok {
continue
}
switch contentMap["type"] {
case "text":
if subStr, ok := contentMap["text"].(string); ok {
contents = append(contents, map[string]any{
"type": "text",
"text": subStr,
})
}
case "image_url":
if subStr, ok := contentMap["image_url"].(string); ok {
contents = append(contents, map[string]any{
"type": "image",
"file_url": subStr,
})
}
case "file":
if subStr, ok := contentMap["image_url"].(string); ok {
contents = append(contents, map[string]any{
"type": "file",
"file_url": subStr,
})
}
}
}
if len(contents) > 0 {
b, _ := json.Marshal(contents)
return string(b)
}
return contentStr
}
return ""
}
func (m Message) ParseContent() []MessageContent {
var contentList []MessageContent
content, ok := m.Content.(string)
@@ -71,6 +121,15 @@ func (m Message) ParseContent() []MessageContent {
},
})
}
case ContentTypeInputFile:
if subObj, ok := contentMap["file"].(map[string]any); ok {
contentList = append(contentList, MessageContent{
Type: ContentTypeInputFile,
File: &File{
FileData: subObj["file_data"].(string),
},
})
}
}
}
return contentList
@@ -87,4 +146,10 @@ type MessageContent struct {
Type string `json:"type,omitempty"`
Text string `json:"text"`
ImageURL *ImageURL `json:"image_url,omitempty"`
File *File `json:"file,omitempty"`
}
type File struct {
FileData string `json:"file_data,omitempty"`
FileName string `json:"filename,omitempty"`
}

View File

@@ -4,6 +4,14 @@ type Usage struct {
PromptTokens int `json:"prompt_tokens"`
CompletionTokens int `json:"completion_tokens"`
TotalTokens int `json:"total_tokens"`
CompletionTokensDetails *CompletionTokensDetails `json:"completion_tokens_details,omitempty"`
}
type CompletionTokensDetails struct {
ReasoningTokens int `json:"reasoning_tokens"`
AcceptedPredictionTokens int `json:"accepted_prediction_tokens"`
RejectedPredictionTokens int `json:"rejected_prediction_tokens"`
}
type Error struct {
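
With CompletionTokensDetails added, the reasoning token counts that OpenAI-style upstreams report in usage are no longer dropped during unmarshalling. A standalone sketch that redeclares the two structs locally just to show the round trip:

package main

import (
	"encoding/json"
	"fmt"
)

// Local copies of the structs above, only for this sketch.
type CompletionTokensDetails struct {
	ReasoningTokens          int `json:"reasoning_tokens"`
	AcceptedPredictionTokens int `json:"accepted_prediction_tokens"`
	RejectedPredictionTokens int `json:"rejected_prediction_tokens"`
}

type Usage struct {
	PromptTokens            int                      `json:"prompt_tokens"`
	CompletionTokens        int                      `json:"completion_tokens"`
	TotalTokens             int                      `json:"total_tokens"`
	CompletionTokensDetails *CompletionTokensDetails `json:"completion_tokens_details,omitempty"`
}

func main() {
	raw := `{"prompt_tokens":10,"completion_tokens":120,"total_tokens":130,"completion_tokens_details":{"reasoning_tokens":90,"accepted_prediction_tokens":0,"rejected_prediction_tokens":0}}`
	var u Usage
	if err := json.Unmarshal([]byte(raw), &u); err != nil {
		panic(err)
	}
	fmt.Println("reasoning tokens:", u.CompletionTokensDetails.ReasoningTokens) // 90
}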

View File

@@ -7,7 +7,7 @@ export const CHANNEL_OPTIONS = [
{ key: 24, text: 'Google Gemini', value: 24, color: 'orange' },
{ key: 28, text: 'Mistral AI', value: 28, color: 'orange' },
{ key: 41, text: 'Novita', value: 41, color: 'purple' },
{ key: 40, text: '字节跳动豆包', value: 40, color: 'blue' },
{key: 40, text: '字节火山引擎', value: 40, color: 'blue'},
{ key: 15, text: '百度文心千帆', value: 15, color: 'blue' },
{ key: 17, text: '阿里通义千问', value: 17, color: 'orange' },
{ key: 18, text: '讯飞星火认知', value: 18, color: 'blue' },
@@ -22,6 +22,7 @@ export const CHANNEL_OPTIONS = [
{ key: 31, text: '零一万物', value: 31, color: 'green' },
{ key: 32, text: '阶跃星辰', value: 32, color: 'blue' },
{ key: 34, text: 'Coze', value: 34, color: 'blue' },
{ key: 52, text: 'CozeV3', value: 52, color: 'blue' },
{ key: 35, text: 'Cohere', value: 35, color: 'blue' },
{ key: 36, text: 'DeepSeek', value: 36, color: 'black' },
{ key: 37, text: 'Cloudflare', value: 37, color: 'orange' },
@@ -35,7 +36,7 @@ export const CHANNEL_OPTIONS = [
{ key: 8, text: '自定义渠道', value: 8, color: 'pink' },
{ key: 22, text: '知识库FastGPT', value: 22, color: 'blue' },
{ key: 21, text: '知识库AI Proxy', value: 21, color: 'purple' },
{ key: 20, text: '代理:OpenRouter', value: 20, color: 'black' },
{key: 20, text: 'OpenRouter', value: 20, color: 'black'},
{ key: 2, text: '代理API2D', value: 2, color: 'blue' },
{ key: 5, text: '代理OpenAI-SB', value: 5, color: 'brown' },
{ key: 7, text: '代理OhMyGPT', value: 7, color: 'purple' },

View File

@@ -49,7 +49,7 @@ export const CHANNEL_OPTIONS = {
},
40: {
key: 40,
text: '字节跳动豆包',
text: '字节火山引擎',
value: 40,
color: 'primary'
},
@@ -137,6 +137,12 @@ export const CHANNEL_OPTIONS = {
value: 34,
color: 'primary'
},
52: {
key: 52,
text: 'CozeV3',
value: 52,
color: 'primary'
},
35: {
key: 35,
text: 'Cohere',
@@ -185,7 +191,7 @@ export const CHANNEL_OPTIONS = {
value: 45,
color: 'primary'
},
45: {
46: {
key: 46,
text: 'Replicate',
value: 46,
@@ -217,7 +223,7 @@ export const CHANNEL_OPTIONS = {
},
20: {
key: 20,
text: '代理:OpenRouter',
text: 'OpenRouter',
value: 20,
color: 'success'
},

View File

@@ -206,6 +206,20 @@ const typeConfig = {
},
modelGroup: 'Coze'
},
52: {
inputLabel: {
config: {
user_id: 'User ID'
}
},
prompt: {
models: '对于 CozeV3 而言,模型名称即 Bot ID，你可以添加一个前缀 `bot-`,例如:`bot-123456`',
config: {
user_id: '生成该密钥的用户 ID'
}
},
modelGroup: 'CozeV3'
},
42: {
inputLabel: {
key: '',

View File

@@ -1,17 +1,7 @@
import React, { useEffect, useState } from 'react';
import { useTranslation } from 'react-i18next';
import {
Button,
Dropdown,
Form,
Input,
Label,
Message,
Pagination,
Popup,
Table,
} from 'semantic-ui-react';
import { Link } from 'react-router-dom';
import React, {useEffect, useState} from 'react';
import {useTranslation} from 'react-i18next';
import {Button, Dropdown, Form, Input, Label, Message, Pagination, Popup, Table,} from 'semantic-ui-react';
import {Link} from 'react-router-dom';
import {
API,
loadChannelModels,
@@ -23,8 +13,8 @@ import {
timestamp2string,
} from '../helpers';
import { CHANNEL_OPTIONS, ITEMS_PER_PAGE } from '../constants';
import { renderGroup, renderNumber } from '../helpers/render';
import {CHANNEL_OPTIONS, ITEMS_PER_PAGE} from '../constants';
import {renderGroup, renderNumber} from '../helpers/render';
function renderTimestamp(timestamp) {
return <>{timestamp2string(timestamp)}</>;
@@ -54,6 +44,9 @@ function renderType(type, t) {
function renderBalance(type, balance, t) {
switch (type) {
case 1: // OpenAI
if (balance === 0) {
return <span>{t('channel.table.balance_not_supported')}</span>;
}
return <span>${balance.toFixed(2)}</span>;
case 4: // CloseAI
return <span>¥{balance.toFixed(2)}</span>;
@@ -67,6 +60,8 @@ function renderBalance(type, balance, t) {
return <span>¥{balance.toFixed(2)}</span>;
case 13: // AIGC2D
return <span>{renderNumber(balance)}</span>;
case 20: // OpenRouter
return <span>${balance.toFixed(2)}</span>;
case 36: // DeepSeek
return <span>¥{balance.toFixed(2)}</span>;
case 44: // SiliconFlow
@@ -93,30 +88,32 @@ const ChannelsTable = () => {
const [showPrompt, setShowPrompt] = useState(shouldShowPrompt(promptID));
const [showDetail, setShowDetail] = useState(isShowDetail());
const processChannelData = (channel) => {
if (channel.models === '') {
channel.models = [];
channel.test_model = '';
} else {
channel.models = channel.models.split(',');
if (channel.models.length > 0) {
channel.test_model = channel.models[0];
}
channel.model_options = channel.models.map((model) => {
return {
key: model,
text: model,
value: model,
};
});
console.log('channel', channel);
}
return channel;
};
const loadChannels = async (startIdx) => {
const res = await API.get(`/api/channel/?p=${startIdx}`);
const { success, message, data } = res.data;
if (success) {
let localChannels = data.map((channel) => {
if (channel.models === '') {
channel.models = [];
channel.test_model = '';
} else {
channel.models = channel.models.split(',');
if (channel.models.length > 0) {
channel.test_model = channel.models[0];
}
channel.model_options = channel.models.map((model) => {
return {
key: model,
text: model,
value: model,
};
});
console.log('channel', channel);
}
return channel;
});
let localChannels = data.map(processChannelData);
if (startIdx === 0) {
setChannels(localChannels);
} else {
@@ -301,7 +298,8 @@ const ChannelsTable = () => {
const res = await API.get(`/api/channel/search?keyword=${searchKeyword}`);
const { success, message, data } = res.data;
if (success) {
setChannels(data);
let localChannels = data.map(processChannelData);
setChannels(localChannels);
setActivePage(1);
} else {
showError(message);
@@ -495,7 +493,6 @@ const ChannelsTable = () => {
onClick={() => {
sortChannel('balance');
}}
hidden={!showDetail}
>
{t('channel.table.balance')}
</Table.HeaderCell>
@@ -504,6 +501,7 @@ const ChannelsTable = () => {
onClick={() => {
sortChannel('priority');
}}
hidden={!showDetail}
>
{t('channel.table.priority')}
</Table.HeaderCell>
@@ -543,7 +541,7 @@ const ChannelsTable = () => {
basic
/>
</Table.Cell>
<Table.Cell hidden={!showDetail}>
<Table.Cell>
<Popup
trigger={
<span
@@ -559,7 +557,7 @@ const ChannelsTable = () => {
basic
/>
</Table.Cell>
<Table.Cell>
<Table.Cell hidden={!showDetail}>
<Popup
trigger={
<Input
@@ -593,7 +591,15 @@ const ChannelsTable = () => {
/>
</Table.Cell>
<Table.Cell>
<div>
<div
style={{
display: 'flex',
alignItems: 'center',
flexWrap: 'wrap',
gap: '2px',
rowGap: '6px',
}}
>
<Button
size={'tiny'}
positive

View File

@@ -1,48 +1,109 @@
export const CHANNEL_OPTIONS = [
{ key: 1, text: 'OpenAI', value: 1, color: 'green' },
{ key: 14, text: 'Anthropic Claude', value: 14, color: 'black' },
{ key: 33, text: 'AWS', value: 33, color: 'black' },
{ key: 3, text: 'Azure OpenAI', value: 3, color: 'olive' },
{ key: 11, text: 'Google PaLM2', value: 11, color: 'orange' },
{ key: 24, text: 'Google Gemini', value: 24, color: 'orange' },
{ key: 28, text: 'Mistral AI', value: 28, color: 'orange' },
{ key: 41, text: 'Novita', value: 41, color: 'purple' },
{ key: 40, text: '字节跳动豆包', value: 40, color: 'blue' },
{ key: 15, text: '百度文心千帆', value: 15, color: 'blue' },
{ key: 17, text: '阿里通义千问', value: 17, color: 'orange' },
{ key: 18, text: '讯飞星火认知', value: 18, color: 'blue' },
{ key: 16, text: '智谱 ChatGLM', value: 16, color: 'violet' },
{ key: 19, text: '360 智脑', value: 19, color: 'blue' },
{ key: 25, text: 'Moonshot AI', value: 25, color: 'black' },
{ key: 23, text: '腾讯混元', value: 23, color: 'teal' },
{ key: 26, text: '百川大模型', value: 26, color: 'orange' },
{ key: 27, text: 'MiniMax', value: 27, color: 'red' },
{ key: 29, text: 'Groq', value: 29, color: 'orange' },
{ key: 30, text: 'Ollama', value: 30, color: 'black' },
{ key: 31, text: '零一万物', value: 31, color: 'green' },
{ key: 32, text: '阶跃星辰', value: 32, color: 'blue' },
{ key: 34, text: 'Coze', value: 34, color: 'blue' },
{ key: 35, text: 'Cohere', value: 35, color: 'blue' },
{ key: 36, text: 'DeepSeek', value: 36, color: 'black' },
{ key: 37, text: 'Cloudflare', value: 37, color: 'orange' },
{ key: 38, text: 'DeepL', value: 38, color: 'black' },
{ key: 39, text: 'together.ai', value: 39, color: 'blue' },
{ key: 42, text: 'VertexAI', value: 42, color: 'blue' },
{ key: 43, text: 'Proxy', value: 43, color: 'blue' },
{ key: 44, text: 'SiliconFlow', value: 44, color: 'blue' },
{ key: 45, text: 'xAI', value: 45, color: 'blue' },
{ key: 46, text: 'Replicate', value: 46, color: 'blue' },
{ key: 8, text: '自定义渠道', value: 8, color: 'pink' },
{ key: 22, text: '知识库FastGPT', value: 22, color: 'blue' },
{ key: 21, text: '知识库AI Proxy', value: 21, color: 'purple' },
{ key: 20, text: '代理OpenRouter', value: 20, color: 'black' },
{ key: 2, text: '代理API2D', value: 2, color: 'blue' },
{ key: 5, text: '代理OpenAI-SB', value: 5, color: 'brown' },
{ key: 7, text: '代理OhMyGPT', value: 7, color: 'purple' },
{ key: 10, text: '代理AI Proxy', value: 10, color: 'purple' },
{ key: 4, text: '代理CloseAI', value: 4, color: 'teal' },
{ key: 6, text: '代理OpenAI Max', value: 6, color: 'violet' },
{ key: 9, text: '代理AI.LS', value: 9, color: 'yellow' },
{ key: 12, text: '代理API2GPT', value: 12, color: 'blue' },
{ key: 13, text: '代理AIGC2D', value: 13, color: 'purple' }
{key: 1, text: 'OpenAI', value: 1, color: 'green'},
{
key: 50,
text: 'OpenAI 兼容',
value: 50,
color: 'olive',
description: 'OpenAI 兼容渠道,支持设置 Base URL',
},
{key: 14, text: 'Anthropic', value: 14, color: 'black'},
{key: 33, text: 'AWS', value: 33, color: 'black'},
{key: 3, text: 'Azure', value: 3, color: 'olive'},
{key: 11, text: 'PaLM2', value: 11, color: 'orange'},
{key: 24, text: 'Gemini', value: 24, color: 'orange'},
{
key: 51,
text: 'Gemini (OpenAI)',
value: 51,
color: 'orange',
description: 'Gemini OpenAI 兼容格式',
},
{key: 28, text: 'Mistral AI', value: 28, color: 'orange'},
{key: 41, text: 'Novita', value: 41, color: 'purple'},
{
key: 40,
text: '字节火山引擎',
value: 40,
color: 'blue',
description: '原字节跳动豆包',
},
{
key: 15,
text: '百度文心千帆',
value: 15,
color: 'blue',
tip: '请前往<a href="https://console.bce.baidu.com/qianfan/ais/console/applicationConsole/application/v1" target="_blank">此处</a>获取 AK（API Key）以及 SK（Secret Key），注意：V2 版本接口请使用 <strong>百度文心千帆 V2 </strong>渠道类型',
},
{
key: 47,
text: '百度文心千帆 V2',
value: 47,
color: 'blue',
tip: '请前往<a href="https://console.bce.baidu.com/iam/#/iam/apikey/list" target="_blank">此处</a>获取 API Key，注意：本渠道仅支持<a target="_blank" href="https://cloud.baidu.com/doc/WENXINWORKSHOP/s/em4tsqo3v">推理服务 V2</a>相关模型',
},
{
key: 17,
text: '阿里通义千问',
value: 17,
color: 'orange',
tip: '如需使用阿里云百炼,请使用<strong>阿里云百炼</strong>渠道',
},
{key: 49, text: '阿里云百炼', value: 49, color: 'orange'},
{
key: 18,
text: '讯飞星火认知',
value: 18,
color: 'blue',
tip: '本渠道基于讯飞 WebSocket 版本 API，如需 HTTP 版本,请使用<strong>讯飞星火认知 V2</strong>渠道',
},
{
key: 48,
text: '讯飞星火认知 V2',
value: 48,
color: 'blue',
tip: 'HTTP 版本的讯飞接口,前往<a href="https://console.xfyun.cn/services/cbm" target="_blank">此处</a>获取 HTTP 服务接口认证密钥',
},
{key: 16, text: '智谱 ChatGLM', value: 16, color: 'violet'},
{key: 19, text: '360 智脑', value: 19, color: 'blue'},
{key: 25, text: 'Moonshot AI', value: 25, color: 'black'},
{key: 23, text: '腾讯混元', value: 23, color: 'teal'},
{key: 26, text: '百川大模型', value: 26, color: 'orange'},
{key: 27, text: 'MiniMax', value: 27, color: 'red'},
{key: 29, text: 'Groq', value: 29, color: 'orange'},
{key: 30, text: 'Ollama', value: 30, color: 'black'},
{key: 31, text: '零一万物', value: 31, color: 'green'},
{key: 32, text: '阶跃星辰', value: 32, color: 'blue'},
{key: 34, text: 'Coze', value: 34, color: 'blue'},
{key: 52, text: 'CozeV3', value: 52, color: 'blue'},
{key: 35, text: 'Cohere', value: 35, color: 'blue'},
{key: 36, text: 'DeepSeek', value: 36, color: 'black'},
{key: 37, text: 'Cloudflare', value: 37, color: 'orange'},
{key: 38, text: 'DeepL', value: 38, color: 'black'},
{key: 39, text: 'together.ai', value: 39, color: 'blue'},
{key: 42, text: 'VertexAI', value: 42, color: 'blue'},
{key: 43, text: 'Proxy', value: 43, color: 'blue'},
{key: 44, text: 'SiliconFlow', value: 44, color: 'blue'},
{key: 45, text: 'xAI', value: 45, color: 'blue'},
{key: 46, text: 'Replicate', value: 46, color: 'blue'},
{
key: 8,
text: '自定义渠道',
value: 8,
color: 'pink',
tip: '不推荐使用,请使用 <strong>OpenAI 兼容</strong>渠道类型。注意,这里所需要填入的代理地址仅会在实际请求时替换域名部分,如果你想填入 OpenAI SDK 中所要求的 Base URL，请使用 OpenAI 兼容渠道类型',
description: '不推荐使用,请使用 OpenAI 兼容渠道类型',
},
{key: 22, text: '知识库FastGPT', value: 22, color: 'blue'},
{key: 21, text: '知识库AI Proxy', value: 21, color: 'purple'},
{key: 20, text: 'OpenRouter', value: 20, color: 'black'},
{key: 2, text: '代理API2D', value: 2, color: 'blue'},
{key: 5, text: '代理OpenAI-SB', value: 5, color: 'brown'},
{key: 7, text: '代理OhMyGPT', value: 7, color: 'purple'},
{key: 10, text: '代理AI Proxy', value: 10, color: 'purple'},
{key: 4, text: '代理CloseAI', value: 4, color: 'teal'},
{key: 6, text: '代理OpenAI Max', value: 6, color: 'violet'},
{key: 9, text: '代理AI.LS', value: 9, color: 'yellow'},
{key: 12, text: '代理API2GPT', value: 12, color: 'blue'},
{key: 13, text: '代理AIGC2D', value: 13, color: 'purple'},
];

View File

@@ -0,0 +1,13 @@
import {CHANNEL_OPTIONS} from '../constants';
let channelMap = undefined;
export function getChannelOption(channelId) {
if (channelMap === undefined) {
channelMap = {};
CHANNEL_OPTIONS.forEach((option) => {
channelMap[option.key] = option;
});
}
return channelMap[channelId];
}

View File

@@ -1,5 +1,6 @@
import { Label } from 'semantic-ui-react';
import { useTranslation } from 'react-i18next';
import { Label, Message } from 'semantic-ui-react';
import { getChannelOption } from './helper';
import React from 'react';
export function renderText(text, limit) {
if (text.length > limit) {
@@ -15,7 +16,15 @@ export function renderGroup(group) {
let groups = group.split(',');
groups.sort();
return (
<>
<div
style={{
display: 'flex',
alignItems: 'center',
flexWrap: 'wrap',
gap: '2px',
rowGap: '6px',
}}
>
{groups.map((group) => {
if (group === 'vip' || group === 'pro') {
return <Label color='yellow'>{group}</Label>;
@@ -24,7 +33,7 @@ export function renderGroup(group) {
}
return <Label>{group}</Label>;
})}
</>
</div>
);
}
@@ -98,3 +107,15 @@ export function renderColorLabel(text) {
</Label>
);
}
export function renderChannelTip(channelId) {
let channel = getChannelOption(channelId);
if (channel === undefined || channel.tip === undefined) {
return <></>;
}
return (
<Message>
<div dangerouslySetInnerHTML={{ __html: channel.tip }}></div>
</Message>
);
}

View File

@@ -1,7 +1,7 @@
import { toast } from 'react-toastify';
import { toastConstants } from '../constants';
import {toast} from 'react-toastify';
import {toastConstants} from '../constants';
import React from 'react';
import { API } from './api';
import {API} from './api';
const HTMLToastContent = ({ htmlContent }) => {
return <div dangerouslySetInnerHTML={{ __html: htmlContent }} />;
@@ -74,6 +74,7 @@ if (isMobile()) {
}
export function showError(error) {
if (!error) return;
console.error(error);
if (error.message) {
if (error.name === 'AxiosError') {
@@ -158,17 +159,7 @@ export function timestamp2string(timestamp) {
second = '0' + second;
}
return (
year +
'-' +
month +
'-' +
day +
' ' +
hour +
':' +
minute +
':' +
second
year + '-' + month + '-' + day + ' ' + hour + ':' + minute + ':' + second
);
}
@@ -193,7 +184,6 @@ export const verifyJSON = (str) => {
export function shouldShowPrompt(id) {
let prompt = localStorage.getItem(`prompt-${id}`);
return !prompt;
}
export function setPromptShown(id) {
@@ -224,4 +214,4 @@ export function getChannelModels(type) {
return channelModels[type];
}
return [];
}
}

View File

@@ -104,8 +104,10 @@
"model_mapping_placeholder": "Optional, used to modify model names in request body. A JSON string where keys are request model names and values are target model names",
"system_prompt": "System Prompt",
"system_prompt_placeholder": "Optional, used to force set system prompt. Use with custom model & model mapping. First create a unique custom model name above, then map it to a natively supported model",
"base_url": "Proxy",
"base_url_placeholder": "Optional, used for API calls through proxy. Enter proxy address in format: https://domain.com",
"proxy_url": "Proxy",
"proxy_url_placeholder": "This is optional and used for API calls via a proxy. Please enter the proxy URL, formatted as: https://domain.com",
"base_url": "Base URL",
"base_url_placeholder": "The Base URL required by the OpenAPI SDK",
"key": "Key",
"key_placeholder": "Please enter key",
"batch": "Batch Create",

View File

@@ -104,8 +104,10 @@
"model_mapping_placeholder": "此项可选,用于修改请求体中的模型名称,为一个 JSON 字符串,键为请求中模型名称,值为要替换的模型名称",
"system_prompt": "系统提示词",
"system_prompt_placeholder": "此项可选,用于强制设置给定的系统提示词,请配合自定义模型 & 模型重定向使用,首先创建一个唯一的自定义模型名称并在上面填入,之后将该自定义模型重定向映射到该渠道一个原生支持的模型",
"base_url": "代理",
"base_url_placeholder": "此项可选,用于通过代理站来进行 API 调用请输入代理站地址格式为https://domain.com",
"proxy_url": "代理",
"proxy_url_placeholder": "此项可选,用于通过代理站来进行 API 调用请输入代理站地址格式为https://domain.com。注意,这里所需要填入的代理地址仅会在实际请求时替换域名部分,如果你想填入 OpenAI SDK 中所要求的 Base URL请使用 OpenAI 兼容渠道类型",
"base_url": "Base URL",
"base_url_placeholder": "OpenAPI SDK 中所要求的 Base URL",
"key": "密钥",
"key_placeholder": "请输入密钥",
"batch": "批量创建",

View File

@@ -1,25 +1,10 @@
import React, { useEffect, useState } from 'react';
import { useTranslation } from 'react-i18next';
import {
Button,
Form,
Header,
Input,
Message,
Segment,
Card,
} from 'semantic-ui-react';
import { useNavigate, useParams } from 'react-router-dom';
import {
API,
copy,
getChannelModels,
showError,
showInfo,
showSuccess,
verifyJSON,
} from '../../helpers';
import { CHANNEL_OPTIONS } from '../../constants';
import React, {useEffect, useState} from 'react';
import {useTranslation} from 'react-i18next';
import {Button, Card, Form, Input, Message} from 'semantic-ui-react';
import {useNavigate, useParams} from 'react-router-dom';
import {API, copy, getChannelModels, showError, showInfo, showSuccess, verifyJSON,} from '../../helpers';
import {CHANNEL_OPTIONS} from '../../constants';
import {renderChannelTip} from '../../helpers/render';
const MODEL_MAPPING_EXAMPLE = {
'gpt-3.5-turbo-0301': 'gpt-3.5-turbo',
@@ -310,6 +295,7 @@ const EditChannel = () => {
options={groupOptions}
/>
</Form.Field>
{renderChannelTip(inputs.type)}
{/* Azure OpenAI specific fields */}
{inputs.type === 3 && (
@@ -353,6 +339,20 @@ const EditChannel = () => {
{inputs.type === 8 && (
<Form.Field>
<Form.Input
required
label={t('channel.edit.proxy_url')}
name='base_url'
placeholder={t('channel.edit.proxy_url_placeholder')}
onChange={handleInputChange}
value={inputs.base_url}
autoComplete='new-password'
/>
</Form.Field>
)}
{inputs.type === 50 && (
<Form.Field>
<Form.Input
required
label={t('channel.edit.base_url')}
name='base_url'
placeholder={t('channel.edit.base_url_placeholder')}
@@ -651,12 +651,13 @@ const EditChannel = () => {
{inputs.type !== 3 &&
inputs.type !== 33 &&
inputs.type !== 8 &&
inputs.type !== 50 &&
inputs.type !== 22 && (
<Form.Field>
<Form.Input
label={t('channel.edit.base_url')}
label={t('channel.edit.proxy_url')}
name='base_url'
placeholder={t('channel.edit.base_url_placeholder')}
placeholder={t('channel.edit.proxy_url_placeholder')}
onChange={handleInputChange}
value={inputs.base_url}
autoComplete='new-password'

View File

@@ -122,11 +122,11 @@ const Dashboard = () => {
? new Date(Math.min(...dates.map((d) => new Date(d))))
: new Date();
// 确保至少显示5天的数据
const fiveDaysAgo = new Date();
fiveDaysAgo.setDate(fiveDaysAgo.getDate() - 4); // -4是因为包含今天
if (minDate > fiveDaysAgo) {
minDate = fiveDaysAgo;
// 确保至少显示7天的数据
const sevenDaysAgo = new Date();
sevenDaysAgo.setDate(sevenDaysAgo.getDate() - 6); // -6是因为包含今天
if (minDate > sevenDaysAgo) {
minDate = sevenDaysAgo;
}
// 生成所有日期
@@ -164,11 +164,11 @@ const Dashboard = () => {
? new Date(Math.min(...dates.map((d) => new Date(d))))
: new Date();
// 确保至少显示5天的数据
const fiveDaysAgo = new Date();
fiveDaysAgo.setDate(fiveDaysAgo.getDate() - 4); // -4是因为包含今天
if (minDate > fiveDaysAgo) {
minDate = fiveDaysAgo;
// 确保至少显示7天的数据
const sevenDaysAgo = new Date();
sevenDaysAgo.setDate(sevenDaysAgo.getDate() - 6); // -6是因为包含今天
if (minDate > sevenDaysAgo) {
minDate = sevenDaysAgo;
}
// 生成所有日期
@@ -242,7 +242,7 @@ const Dashboard = () => {
<Card.Content>
<Card.Header>
{t('dashboard.charts.requests.title')}
{/* <span className='stat-value'>{summaryData.todayRequests}</span> */}
{/* <span className='stat-value'>{summaryData.todayRequests}</span> */}
</Card.Header>
<div className='chart-container'>
<ResponsiveContainer
@@ -271,7 +271,9 @@ const Dashboard = () => {
t('dashboard.charts.requests.tooltip'),
]}
labelFormatter={(label) =>
`${t('dashboard.statistics.tooltip.date')}: ${formatDate(label)}`
`${t(
'dashboard.statistics.tooltip.date'
)}: ${formatDate(label)}`
}
/>
<Line
@@ -294,7 +296,7 @@ const Dashboard = () => {
<Card.Content>
<Card.Header>
{t('dashboard.charts.quota.title')}
{/* <span className='stat-value'>
{/* <span className='stat-value'>
${summaryData.todayQuota.toFixed(3)}
</span> */}
</Card.Header>
@@ -321,11 +323,13 @@ const Dashboard = () => {
boxShadow: '0 2px 8px rgba(0,0,0,0.1)',
}}
formatter={(value) => [
value,
value.toFixed(6),
t('dashboard.charts.quota.tooltip'),
]}
labelFormatter={(label) =>
`${t('dashboard.statistics.tooltip.date')}: ${formatDate(label)}`
`${t(
'dashboard.statistics.tooltip.date'
)}: ${formatDate(label)}`
}
/>
<Line
@@ -348,7 +352,7 @@ const Dashboard = () => {
<Card.Content>
<Card.Header>
{t('dashboard.charts.tokens.title')}
{/* <span className='stat-value'>{summaryData.todayTokens}</span> */}
{/* <span className='stat-value'>{summaryData.todayTokens}</span> */}
</Card.Header>
<div className='chart-container'>
<ResponsiveContainer
@@ -377,7 +381,9 @@ const Dashboard = () => {
t('dashboard.charts.tokens.tooltip'),
]}
labelFormatter={(label) =>
`${t('dashboard.statistics.tooltip.date')}: ${formatDate(label)}`
`${t(
'dashboard.statistics.tooltip.date'
)}: ${formatDate(label)}`
}
/>
<Line
@@ -422,7 +428,9 @@ const Dashboard = () => {
boxShadow: '0 2px 8px rgba(0,0,0,0.1)',
}}
labelFormatter={(label) =>
`${t('dashboard.statistics.tooltip.date')}: ${formatDate(label)}`
`${t('dashboard.statistics.tooltip.date')}: ${formatDate(
label
)}`
}
/>
<Legend