Mirror of https://github.com/songquanpeng/one-api.git (synced 2026-02-19 04:14:24 +08:00)

Compare commits: 2df135a118...v0.6.9 (22 commits)
Commits in this comparison:

- 36c8f4f15c
- 45b51ea0ee
- 7c8628bd95
- 6ab87f8a08
- 833fa7ad6f
- 6eb0770a89
- 92cd46d64f
- 2b2dc2c733
- a3d7df7f89
- c368232f50
- cbfc983dc3
- 8ec092ba44
- b0b88a79ff
- 7e51b04221
- f75a17f8eb
- 6f13a3bb3c
- f092eed1db
- 629378691b
- 3716e1b0e6
- a4d6e7a886
- cb772e5d06
- e32cb0b844
@@ -1 +0,0 @@
-{"ServicePort":"80","SecurityPorts":[80],"RepoName":"hanans426/one-api","Arch":"EcsSingle","RunCommand":"echo \"start run command\"\necho \"${AdminPassword}\"\n","SourceCodePath":".","CommandTimeout":3600,"ServiceType":"private","AllowedRegions":["cn-hangzhou","cn-shanghai","cn-beijing"],"ArtifactSourceType":"SourceCode","ImageId":"centos_7_9_x64_20G_alibase_20230613.vhd","RegionId":"cn-hangzhou","CustomParameters":[{"NoEcho":true,"Type":"String","AssociationProperty":"ALIYUN::ECS::Instance::Password","Label":"管理员密码","Name":"AdminPassword"}]}
@@ -1,21 +0,0 @@
-# Repository structure
-
-## Documentation directory layout:
-```
-.
-├── README.md - README
-├── docs - service documentation files
-│   └── index.md
-├── resources - service resource files
-│   ├── icons
-│   │   └── service_logo.png - service logo
-│   └── artifact_resources - deployment artifact resources
-├── ros_templates - service ROS template directory; multiple templates are supported
-│   └── template.yaml - ROS template; the ROS template engine creates all resources from this template
-├── config.yaml - service configuration file; during the build the ComputeNest CLI (computenest-cli) builds the service from this file
-├── preset_parameters.yaml - (managed edition only) provider preset parameters such as VpcId and VSwitchId; this ROS template content is rendered as a form for the provider to fill in
-```
-
-## Other
-For ROS templates, see [Resource Orchestration Service](https://help.aliyun.com/zh/ros).
-For computenest-cli, see [computenest-cli](https://pypi.org/project/computenest-cli/).
@@ -1,28 +0,0 @@
-Service:
-  RegionId: cn-hangzhou
-  DeployType: ros
-  DeployMetadata:
-    SupplierDeployMetadata:
-      FileArtifactRelation:
-        '{{ computenest::file::hanans426_one-api }}':
-          ArtifactId: ${Artifact.Artifact_1.ArtifactId}
-          ArtifactVersion: ${Artifact.Artifact_1.ArtifactVersion}
-    TemplateConfigs:
-      - Name: 单机版
-        Url: 'ros_templates/template.yaml'
-        AllowedRegions:
-          - cn-hangzhou
-          - cn-shanghai
-          - cn-beijing
-  ServiceType: private
-  ServiceInfo:
-    Locale: zh-CN
-    ShortDescription: demo
-    Image: 'resources/icons/service_logo.png'
-Artifact:
-  Artifact_1:
-    ArtifactType: File
-    ArtifactName: hanans426_one-api
-    ArtifactProperty:
-      RegionId: cn-hangzhou
-      Url: 'resources/artifact_resources/file/hanans426_one-api.tar.gz'
Binary file not shown. (Before: 280 KiB)
@@ -1,70 +0,0 @@
-# Service Template README
-
-## Service Description
-
-**Briefly describe what the service does and what it is for.**
-For example:
-_(Service description, e.g. "WordPress is a free, open-source CMS suitable for creating and managing all kinds of websites.")_
-
-_(Quick-start link or documentation for the service, if any)_
-
-## Service Architecture
-
-The service built from this template is deployed on a single ECS instance.
-
-<img src="architecture_ecs_single.png" width="600" height="400" align="bottom"/>
-
-## Billing
-Building a service from this template incurs no cost.
-When users deploy the resulting service, resource costs mainly involve:
-- the selected ECS instance type
-- disk capacity
-- public bandwidth
-
-Billing methods include:
-- pay-as-you-go (hourly)
-- subscription (monthly/yearly)
-
-The estimated cost is shown in real time before deployment.
-
-## Required RAM Permissions
-
-The service built from this template needs to access and create resources such as ECS and VPC. If a RAM user creates the service instance, the corresponding permissions must be granted to that RAM user beforehand. For details, see [Grant permissions to RAM users](https://help.aliyun.com/document_detail/121945.html). The required permissions are listed below:
-
-| Policy name | Notes |
-|-------------------------------------|-------------------------------|
-| AliyunECSFullAccess | Permission to manage Elastic Compute Service (ECS) |
-| AliyunVPCFullAccess | Permission to manage Virtual Private Cloud (VPC) |
-| AliyunROSFullAccess | Permission to manage Resource Orchestration Service (ROS) |
-| AliyunComputeNestUserFullAccess | User-side permission for ComputeNest |
-| AliyunComputeNestSupplierFullAccess | Supplier-side permission for ComputeNest |
-
-## Service Instance Billing
-
-**Describe in detail how service instances are billed.**
-_(Describe the cost components, e.g. the selected vCPU and memory specification, system disk type and capacity, etc.)_
-
-_(List the billing methods, e.g. pay-as-you-go or subscription)_
-
-## Service Instance Deployment
-
-### Deployment Parameters
-
-| Parameter group | Parameter | Description |
-|---------------------------------|--------|-------------------------------------------------------------------------|
-| Service instance | Service instance name | At most 64 characters; must start with a letter and may contain digits, letters, hyphens (-) and underscores (_). |
-| | Region | Region in which the service instance is deployed. |
-| | Billing type | Billing type of the resources: pay-as-you-go or subscription. |
-| ECS instance configuration | Instance type | ECS instance specification. |
-| | Instance password | 8-30 characters; must contain three of: uppercase letters, lowercase letters, digits, and special characters from ()`~!@#$%^&*-+=\|{}[]:;'<>,.?/. |
-| Network configuration | Zone | Zone in which the ECS instance resides. |
-
-### Deployment Steps
-
-**Briefly describe how to deploy a service instance step by step.**
-
-1. _(Description of step 1 and related links or images, if any)_
-2. _(Description of step 2 and related links or images, if any)_
-...
-
-[Deployment link](deployment-link-address)
Binary file not shown.
Binary file not shown. (Before: 2.7 KiB)
@@ -1,218 +0,0 @@
-ROSTemplateFormatVersion: '2015-09-01'
-Description:
-  en: Source Code Service Ros Template
-  zh-cn: 源代码服务模板
-Parameters:
-  PayType:
-    Type: String
-    Label:
-      en: ECS Instance Charge Type
-      zh-cn: 付费类型
-    Default: PostPaid
-    AllowedValues:
-      - PostPaid
-      - PrePaid
-    AssociationProperty: ChargeType
-    AssociationPropertyMetadata:
-      LocaleKey: InstanceChargeType
-  PayPeriodUnit:
-    Type: String
-    Label:
-      en: Pay Period Unit
-      zh-cn: 购买资源时长周期
-    Default: Month
-    AllowedValues:
-      - Month
-      - Year
-    AssociationProperty: PayPeriodUnit
-    AssociationPropertyMetadata:
-      Visible:
-        Condition:
-          Fn::Not:
-            Fn::Equals:
-              - ${PayType}
-              - PostPaid
-  PayPeriod:
-    Type: Number
-    Label:
-      en: Period
-      zh-cn: 购买资源时长
-    Default: 1
-    AllowedValues:
-      - 1
-      - 2
-      - 3
-      - 4
-      - 5
-      - 6
-      - 7
-      - 8
-      - 9
-    AssociationProperty: PayPeriod
-    AssociationPropertyMetadata:
-      Visible:
-        Condition:
-          Fn::Not:
-            Fn::Equals:
-              - ${PayType}
-              - PostPaid
-  EcsInstanceType:
-    Type: String
-    Label:
-      en: Instance Type
-      zh-cn: 实例类型
-    AssociationProperty: ALIYUN::ECS::Instance::InstanceType
-    AssociationPropertyMetadata:
-      InstanceChargeType: ${PayType}
-      Constraints:
-        InstanceTypeFamily:
-          - ecs.u1
-          - ecs.e
-  InstancePassword:
-    NoEcho: true
-    Type: String
-    Description:
-      en: Server login password, Length 8-30, must contain three(Capital letters, lowercase letters, numbers, ()`~!@#$%^&*_-+=|{}[]:;'<>,.?/ Special symbol in)
-      zh-cn: 服务器登录密码,长度8-30,必须包含三项(大写字母、小写字母、数字、 ()`~!@#$%^&*_-+=|{}[]:;'<>,.?/ 中的特殊符号)
-    AllowedPattern: '^[a-zA-Z0-9-\(\)\`\~\!\@\#\$\%\^\&\*\_\-\+\=\|\{\}\[\]\:\;\<\>\,\.\?\/]*$'
-    Label:
-      en: Instance Password
-      zh-cn: 实例密码
-    ConstraintDescription:
-      en: Length 8-30, must contain three(Capital letters, lowercase letters, numbers, ()`~!@#$%^&*_-+=|{}[]:;'<>,.?/ Special symbol in)
-      zh-cn: 长度8-30,必须包含三项(大写字母、小写字母、数字、 ()`~!@#$%^&*_-+=|{}[]:;'<>,.?/ 中的特殊符号)
-    MinLength: 8
-    MaxLength: 30
-    AssociationProperty: ALIYUN::ECS::Instance::Password
-  ZoneId:
-    Type: String
-    Label:
-      en: Zone ID
-      zh-cn: 可用区ID
-    AssociationProperty: ALIYUN::ECS::Instance::ZoneId
-  VpcId:
-    Type: String
-    Label:
-      en: VPC ID
-      zh-cn: 专有网络VPC实例ID
-    AssociationProperty: 'ALIYUN::ECS::VPC::VPCId'
-  VSwitchId:
-    Type: String
-    Label:
-      en: VSwitch ID
-      zh-cn: 交换机实例ID
-    Default: ''
-    AssociationProperty: 'ALIYUN::ECS::VSwitch::VSwitchId'
-    AssociationPropertyMetadata:
-      VpcId: VpcId
-      ZoneId: ZoneId
-  AdminPassword:
-    Type: String
-    AssociationProperty: ALIYUN::ECS::Instance::Password
-    Label: 管理员密码
-    NoEcho: True
-Resources:
-  SecurityGroup:
-    Type: ALIYUN::ECS::SecurityGroup
-    Properties:
-      SecurityGroupName:
-        Ref: ALIYUN::StackName
-      VpcId:
-        Ref: VpcId
-      SecurityGroupIngress:
-        - PortRange: 80/80
-          Priority: 1
-          SourceCidrIp: 0.0.0.0/0
-          IpProtocol: tcp
-          NicType: internet
-  InstanceGroup:
-    Type: ALIYUN::ECS::InstanceGroup
-    Properties:
-      # 付费类型
-      InstanceChargeType:
-        Ref: PayType
-      PeriodUnit:
-        Ref: PayPeriodUnit
-      Period:
-        Ref: PayPeriod
-      VpcId:
-        Ref: VpcId
-      VSwitchId:
-        Ref: VSwitchId
-      ZoneId:
-        Ref: ZoneId
-      SecurityGroupId:
-        Ref: SecurityGroup
-      ImageId: centos_7_9_x64_20G_alibase_20230613.vhd
-      Password:
-        Ref: InstancePassword
-      InstanceType:
-        Ref: EcsInstanceType
-      SystemDiskCategory: cloud_essd
-      SystemDiskSize: 200
-      InternetMaxBandwidthOut: 5
-      IoOptimized: optimized
-      MaxAmount: 1
-  RunInstallCommand:
-    Type: ALIYUN::ECS::RunCommand
-    Properties:
-      InstanceIds:
-        Fn::GetAtt:
-          - InstanceGroup
-          - InstanceIds
-      Type: RunShellScript
-      Sync: true
-      Timeout: 3600
-      CommandContent:
-        Fn::Sub:
-          - |
-            #!/bin/bash
-            # 源代码通过computenest-cli被打包为tar.gz包,并发布为部署物
-            wget '{{ computenest::file::hanans426_one-api }}' -O hanans426_one-api.tar.gz > /var/log/download.log
-            tar -zxvf hanans426_one-api.tar.gz && cd "$(tar -tzf hanans426_one-api.tar.gz | head -1 | awk -F'/' '{print $1}')"
-            echo "start run command"
-            echo "${AdminPassword}"
-
-            ARGUS_VERSION=3.5.7 /bin/bash -c "$(curl -sS https://cms-agent-${RegionId}.oss-${RegionId}-internal.aliyuncs.com/Argus/agent_install_ecs-1.7.sh)" >> /root/install_cms_agent.log 2>&1
-          - RegionId:
-              Ref: ALIYUN::Region
-Outputs:
-  ServerAddress:
-    Description:
-      en: ServerAddress.
-      zh-cn: 访问页面。
-    Value:
-      Fn::Sub:
-        - 'http://${ServerAddress}:80'
-        - ServerAddress:
-            Fn::Select:
-              - 0
-              - Fn::GetAtt:
-                  - InstanceGroup
-                  - PublicIps
-Metadata:
-  ALIYUN::ROS::Interface:
-    ParameterGroups:
-      - Parameters:
-          - PayType
-          - PayPeriodUnit
-          - PayPeriod
-        Label:
-          default: 付费类型配置
-      - Parameters:
-          - EcsInstanceType
-          - InstancePassword
-        Label:
-          default: 资源配置
-      - Parameters:
-          - ZoneId
-          - VpcId
-          - VSwitchId
-        Label:
-          default: 可用区配置
-      - Parameters:
-          - AdminPassword
-        Label:
-          en: Software Configuration
-          zh-cn: 软件配置
README.en.md (11 changed lines)
@@ -204,17 +204,6 @@ If you encounter a blank page after deployment, refer to [#97](https://github.co
 </div>
 </details>
 
-<details>
-<summary><strong>Deployment on Aliyun</strong></summary>
-<div>
-
-> Aliyun supports one-click deployment to a dedicated VPC.
-
-Aliyun supports fast deployment, [Deployment Link](https://computenest.console.aliyun.com/service/instance/create/default?type=user&ServiceName=One%20API%20%E7%A4%BE%E5%8C%BA%E7%89%88)
-
-</div>
-</details>
-
 ## Configuration
 The system is ready to use out of the box.
 
README.md (17 changed lines)
@@ -90,6 +90,7 @@ _✨ Access all large models through the standard OpenAI API format, out of the box
 + [x] [together.ai](https://www.together.ai/)
 + [x] [novita.ai](https://www.novita.ai/)
 + [x] [硅基流动 SiliconCloud](https://siliconflow.cn/siliconcloud)
++ [x] [xAI](https://x.ai/)
 2. Supports configuring mirror sites and many [third-party proxy services](https://iamazing.cn/page/openai-api-third-party-services).
 3. Supports accessing multiple channels through **load balancing**.
 4. Supports **stream mode**, enabling a typewriter effect via streaming output.
@@ -114,8 +115,8 @@ _✨ Access all large models through the standard OpenAI API format, out of the box
 21. Supports Cloudflare Turnstile user verification.
 22. Supports user management and **multiple login/registration methods**:
     + Email login/registration (with a registration email whitelist) and password reset via email.
-    + Login via Feishu authorization.
+    + [Feishu OAuth login](https://open.feishu.cn/document/uAjLw4CM/ukTMukTMukTM/reference/authen-v1/authorize/get) ([implementation notes for One API are available here](https://iamazing.cn/page/feishu-oauth-login)).
-    + [GitHub OAuth](https://github.com/settings/applications/new).
+    + [GitHub OAuth login](https://github.com/settings/applications/new).
     + WeChat Official Account authorization (requires an additional [WeChat Server](https://github.com/songquanpeng/wechat-server) deployment).
 23. Supports theme switching via the `THEME` environment variable, default `default`; PRs adding more themes are welcome, see [here](./web/README.md).
 24. With [Message Pusher](https://github.com/songquanpeng/message-pusher), alert messages can be pushed to many apps.
@@ -302,17 +303,6 @@ Render can deploy the docker image directly without forking the repository: https://dashbo
 </div>
 </details>
 
-<details>
-<summary><strong>Deploy to Alibaba Cloud</strong></summary>
-<div>
-
-> Alibaba Cloud supports one-click deployment into a dedicated VPC.
-
-Alibaba Cloud supports quick one-click deployment: [deployment link](https://computenest.console.aliyun.com/service/instance/create/default?type=user&ServiceName=One%20API%20%E7%A4%BE%E5%8C%BA%E7%89%88)
-
-</div>
-</details>
-
 ## Configuration
 The system works out of the box.
 
@@ -410,6 +400,7 @@ graph LR
 26. `METRIC_SUCCESS_RATE_THRESHOLD`: request success-rate threshold, default `0.8`.
 27. `INITIAL_ROOT_TOKEN`: if set, a root user token with this value is created automatically when the system starts for the first time.
 28. `INITIAL_ROOT_ACCESS_TOKEN`: if set, a system management (access) token with this value is created for the root user automatically on first startup.
+29. `ENFORCE_INCLUDE_USAGE`: whether to force `usage` to be returned in stream mode; disabled by default; allowed values are `true` and `false`.
 
 ### Command-line arguments
 1. `--port <port_number>`: the port the server listens on, default `3000`.
@@ -160,3 +160,5 @@ var OnlyOneLogFile = env.Bool("ONLY_ONE_LOG_FILE", false)
 var RelayProxy = env.String("RELAY_PROXY", "")
 var UserContentRequestProxy = env.String("USER_CONTENT_REQUEST_PROXY", "")
 var UserContentRequestTimeout = env.Int("USER_CONTENT_REQUEST_TIMEOUT", 30)
+
+var EnforceIncludeUsage = env.Bool("ENFORCE_INCLUDE_USAGE", false)
@@ -20,4 +20,5 @@ const (
     BaseURL = "base_url"
     AvailableModels = "available_models"
     KeyRequestBody = "key_request_body"
+    SystemPrompt = "system_prompt"
 )
@@ -137,3 +137,23 @@ func String2Int(str string) int {
     }
     return num
 }
+
+func Float64PtrMax(p *float64, maxValue float64) *float64 {
+    if p == nil {
+        return nil
+    }
+    if *p > maxValue {
+        return &maxValue
+    }
+    return p
+}
+
+func Float64PtrMin(p *float64, minValue float64) *float64 {
+    if p == nil {
+        return nil
+    }
+    if *p < minValue {
+        return &minValue
+    }
+    return p
+}
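The two helpers above clamp optional float parameters while preserving the "not set" (nil) case. A minimal sketch of how they behave, e.g. when keeping top_p inside a provider's accepted range; the surrounding main function is illustrative, not project code:

```go
package main

import "fmt"

// Float64PtrMax/Float64PtrMin as introduced in the hunk above.
func Float64PtrMax(p *float64, maxValue float64) *float64 {
	if p == nil {
		return nil
	}
	if *p > maxValue {
		return &maxValue
	}
	return p
}

func Float64PtrMin(p *float64, minValue float64) *float64 {
	if p == nil {
		return nil
	}
	if *p < minValue {
		return &minValue
	}
	return p
}

func main() {
	topP := 1.5
	// Clamp an explicit value into (0, 1).
	clamped := Float64PtrMin(Float64PtrMax(&topP, 0.9999), 0.0001)
	fmt.Println(*clamped) // 0.9999

	var unset *float64
	// A parameter the client never sent stays unset instead of becoming 0.
	fmt.Println(Float64PtrMax(unset, 0.9999) == nil) // true
}
```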
@@ -40,7 +40,7 @@ func getLarkUserInfoByCode(code string) (*LarkUser, error) {
     if err != nil {
         return nil, err
     }
-    req, err := http.NewRequest("POST", "https://passport.feishu.cn/suite/passport/oauth/token", bytes.NewBuffer(jsonData))
+    req, err := http.NewRequest("POST", "https://open.feishu.cn/open-apis/authen/v2/oauth/token", bytes.NewBuffer(jsonData))
     if err != nil {
         return nil, err
     }
@@ -76,9 +76,9 @@ func testChannel(channel *model.Channel, request *relaymodel.GeneralOpenAIReques
     if len(modelNames) > 0 {
         modelName = modelNames[0]
     }
+    }
     if modelMap != nil && modelMap[modelName] != "" {
         modelName = modelMap[modelName]
     }
-    }
     meta.OriginModelName, meta.ActualModelName = request.Model, modelName
     request.Model = modelName
@@ -61,6 +61,9 @@ func SetupContextForSelectedChannel(c *gin.Context, channel *model.Channel, mode
     c.Set(ctxkey.Channel, channel.Type)
     c.Set(ctxkey.ChannelId, channel.Id)
     c.Set(ctxkey.ChannelName, channel.Name)
+    if channel.SystemPrompt != nil && *channel.SystemPrompt != "" {
+        c.Set(ctxkey.SystemPrompt, *channel.SystemPrompt)
+    }
     c.Set(ctxkey.ModelMapping, channel.GetModelMapping())
     c.Set(ctxkey.OriginalModel, modelName) // for retry
     c.Request.Header.Set("Authorization", fmt.Sprintf("Bearer %s", channel.Key))
middleware/gzip.go (new file, 27 lines)
@@ -0,0 +1,27 @@
+package middleware
+
+import (
+    "compress/gzip"
+    "github.com/gin-gonic/gin"
+    "io"
+    "net/http"
+)
+
+func GzipDecodeMiddleware() gin.HandlerFunc {
+    return func(c *gin.Context) {
+        if c.GetHeader("Content-Encoding") == "gzip" {
+            gzipReader, err := gzip.NewReader(c.Request.Body)
+            if err != nil {
+                c.AbortWithStatus(http.StatusBadRequest)
+                return
+            }
+            defer gzipReader.Close()
+
+            // Replace the request body with the decompressed data
+            c.Request.Body = io.NopCloser(gzipReader)
+        }
+
+        // Continue processing the request
+        c.Next()
+    }
+}
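A minimal sketch of wiring this middleware into a gin engine so gzip-compressed request bodies are transparently decompressed before JSON binding; the route and handler below are illustrative placeholders, not the project's actual router setup:

```go
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
	"github.com/songquanpeng/one-api/middleware"
)

func main() {
	r := gin.Default()
	// Decompress bodies sent with Content-Encoding: gzip before any handler runs.
	r.Use(middleware.GzipDecodeMiddleware())
	r.POST("/v1/chat/completions", func(c *gin.Context) {
		var req map[string]any
		if err := c.ShouldBindJSON(&req); err != nil {
			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
			return
		}
		c.JSON(http.StatusOK, gin.H{"received_keys": len(req)})
	})
	_ = r.Run(":3000")
}
```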
@@ -37,6 +37,7 @@ type Channel struct {
     ModelMapping *string `json:"model_mapping" gorm:"type:varchar(1024);default:''"`
     Priority *int64 `json:"priority" gorm:"bigint;default:0"`
     Config string `json:"config"`
+    SystemPrompt *string `json:"system_prompt" gorm:"type:text"`
 }
 
 type ChannelConfig struct {
@@ -36,9 +36,7 @@ func ConvertRequest(request model.GeneralOpenAIRequest) *ChatRequest {
         enableSearch = true
         aliModel = strings.TrimSuffix(aliModel, EnableSearchModelSuffix)
     }
-    if request.TopP >= 1 {
-        request.TopP = 0.9999
-    }
+    request.TopP = helper.Float64PtrMax(request.TopP, 0.9999)
     return &ChatRequest{
         Model: aliModel,
         Input: Input{
@@ -16,13 +16,13 @@ type Input struct {
 }
 
 type Parameters struct {
-    TopP float64 `json:"top_p,omitempty"`
+    TopP *float64 `json:"top_p,omitempty"`
     TopK int `json:"top_k,omitempty"`
     Seed uint64 `json:"seed,omitempty"`
     EnableSearch bool `json:"enable_search,omitempty"`
     IncrementalOutput bool `json:"incremental_output,omitempty"`
     MaxTokens int `json:"max_tokens,omitempty"`
-    Temperature float64 `json:"temperature,omitempty"`
+    Temperature *float64 `json:"temperature,omitempty"`
     ResultFormat string `json:"result_format,omitempty"`
     Tools []model.Tool `json:"tools,omitempty"`
 }
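Many request structs in this comparison switch sampling fields from float64 to *float64. The pointer distinguishes "parameter not supplied" from "parameter explicitly 0": with a plain float64 and `omitempty`, an explicit 0 is silently dropped from the JSON, while with a pointer only nil is dropped. A self-contained illustration with invented types (not the project's structs):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type byValue struct {
	Temperature float64 `json:"temperature,omitempty"`
}

type byPointer struct {
	Temperature *float64 `json:"temperature,omitempty"`
}

func main() {
	zero := 0.0

	v, _ := json.Marshal(byValue{Temperature: 0})        // explicit 0 is lost
	p, _ := json.Marshal(byPointer{Temperature: &zero})  // explicit 0 survives
	n, _ := json.Marshal(byPointer{})                    // unset is still omitted

	fmt.Println(string(v)) // {}
	fmt.Println(string(p)) // {"temperature":0}
	fmt.Println(string(n)) // {}
}
```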
@@ -3,7 +3,11 @@ package anthropic
 var ModelList = []string{
     "claude-instant-1.2", "claude-2.0", "claude-2.1",
     "claude-3-haiku-20240307",
+    "claude-3-5-haiku-20241022",
     "claude-3-sonnet-20240229",
     "claude-3-opus-20240229",
     "claude-3-5-sonnet-20240620",
+    "claude-3-5-sonnet-20241022",
+    "claude-3-5-sonnet-latest",
+    "claude-3-5-haiku-20241022",
 }
@@ -48,8 +48,8 @@ type Request struct {
     MaxTokens int `json:"max_tokens,omitempty"`
     StopSequences []string `json:"stop_sequences,omitempty"`
     Stream bool `json:"stream,omitempty"`
-    Temperature float64 `json:"temperature,omitempty"`
-    TopP float64 `json:"top_p,omitempty"`
+    Temperature *float64 `json:"temperature,omitempty"`
+    TopP *float64 `json:"top_p,omitempty"`
     TopK int `json:"top_k,omitempty"`
     Tools []Tool `json:"tools,omitempty"`
     ToolChoice any `json:"tool_choice,omitempty"`
@@ -29,10 +29,13 @@ var AwsModelIDMap = map[string]string{
     "claude-instant-1.2": "anthropic.claude-instant-v1",
     "claude-2.0": "anthropic.claude-v2",
     "claude-2.1": "anthropic.claude-v2:1",
-    "claude-3-sonnet-20240229": "anthropic.claude-3-sonnet-20240229-v1:0",
-    "claude-3-5-sonnet-20240620": "anthropic.claude-3-5-sonnet-20240620-v1:0",
-    "claude-3-opus-20240229": "anthropic.claude-3-opus-20240229-v1:0",
     "claude-3-haiku-20240307": "anthropic.claude-3-haiku-20240307-v1:0",
+    "claude-3-sonnet-20240229": "anthropic.claude-3-sonnet-20240229-v1:0",
+    "claude-3-opus-20240229": "anthropic.claude-3-opus-20240229-v1:0",
+    "claude-3-5-sonnet-20240620": "anthropic.claude-3-5-sonnet-20240620-v1:0",
+    "claude-3-5-sonnet-20241022": "anthropic.claude-3-5-sonnet-20241022-v2:0",
+    "claude-3-5-sonnet-latest": "anthropic.claude-3-5-sonnet-20241022-v2:0",
+    "claude-3-5-haiku-20241022": "anthropic.claude-3-5-haiku-20241022-v1:0",
 }
 
 func awsModelID(requestModel string) (string, error) {
@@ -11,8 +11,8 @@ type Request struct {
     Messages []anthropic.Message `json:"messages"`
     System string `json:"system,omitempty"`
     MaxTokens int `json:"max_tokens,omitempty"`
-    Temperature float64 `json:"temperature,omitempty"`
-    TopP float64 `json:"top_p,omitempty"`
+    Temperature *float64 `json:"temperature,omitempty"`
+    TopP *float64 `json:"top_p,omitempty"`
     TopK int `json:"top_k,omitempty"`
     StopSequences []string `json:"stop_sequences,omitempty"`
     Tools []anthropic.Tool `json:"tools,omitempty"`
@@ -4,10 +4,10 @@ package aws
 //
 // https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-meta.html
 type Request struct {
     Prompt string `json:"prompt"`
     MaxGenLen int `json:"max_gen_len,omitempty"`
-    Temperature float64 `json:"temperature,omitempty"`
-    TopP float64 `json:"top_p,omitempty"`
+    Temperature *float64 `json:"temperature,omitempty"`
+    TopP *float64 `json:"top_p,omitempty"`
 }
 
 // Response is the response from AWS Llama3
@@ -35,9 +35,9 @@ type Message struct {
 
 type ChatRequest struct {
     Messages []Message `json:"messages"`
-    Temperature float64 `json:"temperature,omitempty"`
-    TopP float64 `json:"top_p,omitempty"`
-    PenaltyScore float64 `json:"penalty_score,omitempty"`
+    Temperature *float64 `json:"temperature,omitempty"`
+    TopP *float64 `json:"top_p,omitempty"`
+    PenaltyScore *float64 `json:"penalty_score,omitempty"`
     Stream bool `json:"stream,omitempty"`
     System string `json:"system,omitempty"`
     DisableSearch bool `json:"disable_search,omitempty"`
@@ -9,5 +9,5 @@ type Request struct {
     Prompt string `json:"prompt,omitempty"`
     Raw bool `json:"raw,omitempty"`
     Stream bool `json:"stream,omitempty"`
-    Temperature float64 `json:"temperature,omitempty"`
+    Temperature *float64 `json:"temperature,omitempty"`
 }
@@ -43,7 +43,7 @@ func ConvertRequest(textRequest model.GeneralOpenAIRequest) *Request {
         K: textRequest.TopK,
         Stream: textRequest.Stream,
         FrequencyPenalty: textRequest.FrequencyPenalty,
-        PresencePenalty: textRequest.FrequencyPenalty,
+        PresencePenalty: textRequest.PresencePenalty,
         Seed: int(textRequest.Seed),
     }
     if cohereRequest.Model == "" {
@@ -10,15 +10,15 @@ type Request struct {
     PromptTruncation string `json:"prompt_truncation,omitempty"` // 默认值为"AUTO"
     Connectors []Connector `json:"connectors,omitempty"`
     Documents []Document `json:"documents,omitempty"`
-    Temperature float64 `json:"temperature,omitempty"` // 默认值为0.3
+    Temperature *float64 `json:"temperature,omitempty"` // 默认值为0.3
     MaxTokens int `json:"max_tokens,omitempty"`
     MaxInputTokens int `json:"max_input_tokens,omitempty"`
     K int `json:"k,omitempty"` // 默认值为0
-    P float64 `json:"p,omitempty"` // 默认值为0.75
+    P *float64 `json:"p,omitempty"` // 默认值为0.75
     Seed int `json:"seed,omitempty"`
     StopSequences []string `json:"stop_sequences,omitempty"`
-    FrequencyPenalty float64 `json:"frequency_penalty,omitempty"` // 默认值为0.0
-    PresencePenalty float64 `json:"presence_penalty,omitempty"` // 默认值为0.0
+    FrequencyPenalty *float64 `json:"frequency_penalty,omitempty"` // 默认值为0.0
+    PresencePenalty *float64 `json:"presence_penalty,omitempty"` // 默认值为0.0
     Tools []Tool `json:"tools,omitempty"`
     ToolResults []ToolResult `json:"tool_results,omitempty"`
 }
@@ -4,11 +4,12 @@ import (
     "bufio"
     "encoding/json"
     "fmt"
-    "github.com/songquanpeng/one-api/common/render"
     "io"
     "net/http"
     "strings"
 
+    "github.com/songquanpeng/one-api/common/render"
+
     "github.com/songquanpeng/one-api/common"
     "github.com/songquanpeng/one-api/common/config"
     "github.com/songquanpeng/one-api/common/helper"
@@ -28,6 +29,11 @@ const (
     VisionMaxImageNum = 16
 )
 
+var mimeTypeMap = map[string]string{
+    "json_object": "application/json",
+    "text": "text/plain",
+}
+
 // Setting safety to the lowest possible values since Gemini is already powerless enough
 func ConvertRequest(textRequest model.GeneralOpenAIRequest) *ChatRequest {
     geminiRequest := ChatRequest{
@@ -56,6 +62,15 @@ func ConvertRequest(textRequest model.GeneralOpenAIRequest) *ChatRequest {
             MaxOutputTokens: textRequest.MaxTokens,
         },
     }
+    if textRequest.ResponseFormat != nil {
+        if mimeType, ok := mimeTypeMap[textRequest.ResponseFormat.Type]; ok {
+            geminiRequest.GenerationConfig.ResponseMimeType = mimeType
+        }
+        if textRequest.ResponseFormat.JsonSchema != nil {
+            geminiRequest.GenerationConfig.ResponseSchema = textRequest.ResponseFormat.JsonSchema.Schema
+            geminiRequest.GenerationConfig.ResponseMimeType = mimeTypeMap["json_object"]
+        }
+    }
     if textRequest.Tools != nil {
         functions := make([]model.Function, 0, len(textRequest.Tools))
         for _, tool := range textRequest.Tools {
@@ -65,10 +65,12 @@ type ChatTools struct {
 }
 
 type ChatGenerationConfig struct {
-    Temperature float64 `json:"temperature,omitempty"`
-    TopP float64 `json:"topP,omitempty"`
+    ResponseMimeType string `json:"responseMimeType,omitempty"`
+    ResponseSchema any `json:"responseSchema,omitempty"`
+    Temperature *float64 `json:"temperature,omitempty"`
+    TopP *float64 `json:"topP,omitempty"`
     TopK float64 `json:"topK,omitempty"`
     MaxOutputTokens int `json:"maxOutputTokens,omitempty"`
     CandidateCount int `json:"candidateCount,omitempty"`
     StopSequences []string `json:"stopSequences,omitempty"`
 }
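The new ResponseMimeType and ResponseSchema fields let an OpenAI-style response_format hint ride through to Gemini's generationConfig. A self-contained sketch of the mapping with deliberately simplified local types (the real structs live in the gemini adaptor package):

```go
package main

import (
	"encoding/json"
	"fmt"
)

var mimeTypeMap = map[string]string{
	"json_object": "application/json",
	"text":        "text/plain",
}

type ResponseFormat struct {
	Type string `json:"type"`
}

type GenerationConfig struct {
	ResponseMimeType string `json:"responseMimeType,omitempty"`
	ResponseSchema   any    `json:"responseSchema,omitempty"`
}

func main() {
	rf := &ResponseFormat{Type: "json_object"}
	cfg := GenerationConfig{}
	// Translate the OpenAI-style format name into a Gemini MIME type.
	if mime, ok := mimeTypeMap[rf.Type]; ok {
		cfg.ResponseMimeType = mime
	}
	out, _ := json.Marshal(cfg)
	fmt.Println(string(out)) // {"responseMimeType":"application/json"}
}
```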
@@ -4,14 +4,24 @@ package groq
 
 var ModelList = []string{
     "gemma-7b-it",
-    "mixtral-8x7b-32768",
-    "llama3-8b-8192",
-    "llama3-70b-8192",
     "gemma2-9b-it",
-    "llama-3.1-405b-reasoning",
     "llama-3.1-70b-versatile",
     "llama-3.1-8b-instant",
+    "llama-3.2-11b-text-preview",
+    "llama-3.2-11b-vision-preview",
+    "llama-3.2-1b-preview",
+    "llama-3.2-3b-preview",
+    "llama-3.2-11b-vision-preview",
+    "llama-3.2-90b-text-preview",
+    "llama-3.2-90b-vision-preview",
+    "llama-guard-3-8b",
+    "llama3-70b-8192",
+    "llama3-8b-8192",
     "llama3-groq-70b-8192-tool-use-preview",
     "llama3-groq-8b-8192-tool-use-preview",
+    "llava-v1.5-7b-4096-preview",
+    "mixtral-8x7b-32768",
+    "distil-whisper-large-v3-en",
     "whisper-large-v3",
+    "whisper-large-v3-turbo",
 }
@@ -1,14 +1,14 @@
 package ollama
 
 type Options struct {
     Seed int `json:"seed,omitempty"`
-    Temperature float64 `json:"temperature,omitempty"`
+    Temperature *float64 `json:"temperature,omitempty"`
     TopK int `json:"top_k,omitempty"`
-    TopP float64 `json:"top_p,omitempty"`
-    FrequencyPenalty float64 `json:"frequency_penalty,omitempty"`
-    PresencePenalty float64 `json:"presence_penalty,omitempty"`
+    TopP *float64 `json:"top_p,omitempty"`
+    FrequencyPenalty *float64 `json:"frequency_penalty,omitempty"`
+    PresencePenalty *float64 `json:"presence_penalty,omitempty"`
     NumPredict int `json:"num_predict,omitempty"`
     NumCtx int `json:"num_ctx,omitempty"`
 }
 
 type Message struct {
@@ -75,6 +75,13 @@ func (a *Adaptor) ConvertRequest(c *gin.Context, relayMode int, request *model.G
     if request == nil {
         return nil, errors.New("request is nil")
     }
+    if request.Stream {
+        // always return usage in stream mode
+        if request.StreamOptions == nil {
+            request.StreamOptions = &model.StreamOptions{}
+        }
+        request.StreamOptions.IncludeUsage = true
+    }
     return request, nil
 }
 
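The hunk above forces usage reporting for streamed OpenAI-compatible requests. A standalone sketch of the same defaulting logic, using simplified local types rather than the relay's own model package:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type StreamOptions struct {
	IncludeUsage bool `json:"include_usage"`
}

type Request struct {
	Model         string         `json:"model"`
	Stream        bool           `json:"stream,omitempty"`
	StreamOptions *StreamOptions `json:"stream_options,omitempty"`
}

// ensureUsage mirrors the adaptor change: when streaming, ask the upstream
// to append a usage chunk at the end of the stream.
func ensureUsage(req *Request) {
	if req.Stream {
		if req.StreamOptions == nil {
			req.StreamOptions = &StreamOptions{}
		}
		req.StreamOptions.IncludeUsage = true
	}
}

func main() {
	req := &Request{Model: "gpt-4o-mini", Stream: true}
	ensureUsage(req)
	body, _ := json.Marshal(req)
	fmt.Println(string(body)) // {"model":"gpt-4o-mini","stream":true,"stream_options":{"include_usage":true}}
}
```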
@@ -11,9 +11,10 @@ import (
     "github.com/songquanpeng/one-api/relay/adaptor/mistral"
     "github.com/songquanpeng/one-api/relay/adaptor/moonshot"
     "github.com/songquanpeng/one-api/relay/adaptor/novita"
+    "github.com/songquanpeng/one-api/relay/adaptor/siliconflow"
     "github.com/songquanpeng/one-api/relay/adaptor/stepfun"
     "github.com/songquanpeng/one-api/relay/adaptor/togetherai"
-    "github.com/songquanpeng/one-api/relay/adaptor/siliconflow"
+    "github.com/songquanpeng/one-api/relay/adaptor/xai"
    "github.com/songquanpeng/one-api/relay/channeltype"
 )
 
@@ -32,6 +33,7 @@ var CompatibleChannels = []int{
     channeltype.TogetherAI,
     channeltype.Novita,
     channeltype.SiliconFlow,
+    channeltype.XAI,
 }
 
 func GetCompatibleChannelMeta(channelType int) (string, []string) {
@@ -64,6 +66,8 @@ func GetCompatibleChannelMeta(channelType int) (string, []string) {
         return "novita", novita.ModelList
     case channeltype.SiliconFlow:
         return "siliconflow", siliconflow.ModelList
+    case channeltype.XAI:
+        return "xai", xai.ModelList
     default:
         return "openai", ModelList
     }
@@ -19,11 +19,11 @@ type Prompt struct {
 }
 
 type ChatRequest struct {
     Prompt Prompt `json:"prompt"`
-    Temperature float64 `json:"temperature,omitempty"`
+    Temperature *float64 `json:"temperature,omitempty"`
     CandidateCount int `json:"candidateCount,omitempty"`
-    TopP float64 `json:"topP,omitempty"`
+    TopP *float64 `json:"topP,omitempty"`
     TopK int `json:"topK,omitempty"`
 }
 
 type Error struct {
@@ -39,8 +39,8 @@ func ConvertRequest(request model.GeneralOpenAIRequest) *ChatRequest {
         Model: &request.Model,
         Stream: &request.Stream,
         Messages: messages,
-        TopP: &request.TopP,
-        Temperature: &request.Temperature,
+        TopP: request.TopP,
+        Temperature: request.Temperature,
     }
 }
 
@@ -13,7 +13,12 @@ import (
 )
 
 var ModelList = []string{
-    "claude-3-haiku@20240307", "claude-3-opus@20240229", "claude-3-5-sonnet@20240620", "claude-3-sonnet@20240229",
+    "claude-3-haiku@20240307",
+    "claude-3-sonnet@20240229",
+    "claude-3-opus@20240229",
+    "claude-3-5-sonnet@20240620",
+    "claude-3-5-sonnet-v2@20241022",
+    "claude-3-5-haiku@20241022",
 }
 
 const anthropicVersion = "vertex-2023-10-16"
@@ -11,8 +11,8 @@ type Request struct {
     MaxTokens int `json:"max_tokens,omitempty"`
     StopSequences []string `json:"stop_sequences,omitempty"`
     Stream bool `json:"stream,omitempty"`
-    Temperature float64 `json:"temperature,omitempty"`
-    TopP float64 `json:"top_p,omitempty"`
+    Temperature *float64 `json:"temperature,omitempty"`
+    TopP *float64 `json:"top_p,omitempty"`
     TopK int `json:"top_k,omitempty"`
     Tools []anthropic.Tool `json:"tools,omitempty"`
     ToolChoice any `json:"tool_choice,omitempty"`
@@ -15,7 +15,7 @@ import (
 )
 
 var ModelList = []string{
-    "gemini-1.5-pro-001", "gemini-1.5-flash-001", "gemini-pro", "gemini-pro-vision",
+    "gemini-1.5-pro-001", "gemini-1.5-flash-001", "gemini-pro", "gemini-pro-vision", "gemini-1.5-pro-002", "gemini-1.5-flash-002",
 }
 
 type Adaptor struct {
relay/adaptor/xai/constants.go (new file, 5 lines)
@@ -0,0 +1,5 @@
+package xai
+
+var ModelList = []string{
+    "grok-beta",
+}
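With the xAI channel type registered and grok-beta priced further below, a request for that model goes through the same OpenAI-compatible endpoint one-api already exposes. A hedged sketch of a client call against a locally running instance; the host, port, and token are placeholders:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	body := []byte(`{"model":"grok-beta","messages":[{"role":"user","content":"Hello"}]}`)
	req, err := http.NewRequest("POST", "http://localhost:3000/v1/chat/completions", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer sk-your-one-api-token") // placeholder token

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(out))
}
```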
@@ -7,5 +7,6 @@ var ModelList = []string{
     "SparkDesk-v3.1",
     "SparkDesk-v3.1-128K",
     "SparkDesk-v3.5",
+    "SparkDesk-v3.5-32K",
     "SparkDesk-v4.0",
 }
@@ -283,7 +283,7 @@ func parseAPIVersionByModelName(modelName string) string {
 func apiVersion2domain(apiVersion string) string {
     switch apiVersion {
     case "v1.1":
-        return "general"
+        return "lite"
     case "v2.1":
         return "generalv2"
     case "v3.1":
@@ -292,6 +292,8 @@ func apiVersion2domain(apiVersion string) string {
         return "pro-128k"
     case "v3.5":
         return "generalv3.5"
+    case "v3.5-32K":
+        return "max-32k"
     case "v4.0":
         return "4.0Ultra"
     }
@@ -303,7 +305,10 @@ func getXunfeiAuthUrl(apiVersion string, apiKey string, apiSecret string) (strin
     domain := apiVersion2domain(apiVersion)
     switch apiVersion {
     case "v3.1-128K":
-        authUrl = buildXunfeiAuthUrl(fmt.Sprintf("wss://spark-api.xf-yun.com/%s/pro-128k", apiVersion), apiKey, apiSecret)
+        authUrl = buildXunfeiAuthUrl(fmt.Sprintf("wss://spark-api.xf-yun.com/chat/pro-128k"), apiKey, apiSecret)
+        break
+    case "v3.5-32K":
+        authUrl = buildXunfeiAuthUrl(fmt.Sprintf("wss://spark-api.xf-yun.com/chat/max-32k"), apiKey, apiSecret)
         break
     default:
         authUrl = buildXunfeiAuthUrl(fmt.Sprintf("wss://spark-api.xf-yun.com/%s/chat", apiVersion), apiKey, apiSecret)
@@ -19,11 +19,11 @@ type ChatRequest struct {
     } `json:"header"`
     Parameter struct {
         Chat struct {
             Domain string `json:"domain,omitempty"`
-            Temperature float64 `json:"temperature,omitempty"`
+            Temperature *float64 `json:"temperature,omitempty"`
             TopK int `json:"top_k,omitempty"`
             MaxTokens int `json:"max_tokens,omitempty"`
             Auditing bool `json:"auditing,omitempty"`
         } `json:"chat"`
     } `json:"parameter"`
     Payload struct {
@@ -4,13 +4,13 @@ import (
     "errors"
     "fmt"
     "github.com/gin-gonic/gin"
+    "github.com/songquanpeng/one-api/common/helper"
     "github.com/songquanpeng/one-api/relay/adaptor"
     "github.com/songquanpeng/one-api/relay/adaptor/openai"
     "github.com/songquanpeng/one-api/relay/meta"
     "github.com/songquanpeng/one-api/relay/model"
     "github.com/songquanpeng/one-api/relay/relaymode"
     "io"
-    "math"
     "net/http"
     "strings"
 )
@@ -65,13 +65,13 @@ func (a *Adaptor) ConvertRequest(c *gin.Context, relayMode int, request *model.G
         baiduEmbeddingRequest, err := ConvertEmbeddingRequest(*request)
         return baiduEmbeddingRequest, err
     default:
-        // TopP (0.0, 1.0)
-        request.TopP = math.Min(0.99, request.TopP)
-        request.TopP = math.Max(0.01, request.TopP)
+        // TopP [0.0, 1.0]
+        request.TopP = helper.Float64PtrMax(request.TopP, 1)
+        request.TopP = helper.Float64PtrMin(request.TopP, 0)
 
-        // Temperature (0.0, 1.0)
-        request.Temperature = math.Min(0.99, request.Temperature)
-        request.Temperature = math.Max(0.01, request.Temperature)
+        // Temperature [0.0, 1.0]
+        request.Temperature = helper.Float64PtrMax(request.Temperature, 1)
+        request.Temperature = helper.Float64PtrMin(request.Temperature, 0)
         a.SetVersionByModeName(request.Model)
         if a.APIVersion == "v4" {
             return request, nil
@@ -12,8 +12,8 @@ type Message struct {
 
 type Request struct {
     Prompt []Message `json:"prompt"`
-    Temperature float64 `json:"temperature,omitempty"`
-    TopP float64 `json:"top_p,omitempty"`
+    Temperature *float64 `json:"temperature,omitempty"`
+    TopP *float64 `json:"top_p,omitempty"`
     RequestId string `json:"request_id,omitempty"`
     Incremental bool `json:"incremental,omitempty"`
 }
@@ -79,8 +79,10 @@ var ModelRatio = map[string]float64{
     "claude-2.0": 8.0 / 1000 * USD,
     "claude-2.1": 8.0 / 1000 * USD,
     "claude-3-haiku-20240307": 0.25 / 1000 * USD,
+    "claude-3-5-haiku-20241022": 1.0 / 1000 * USD,
     "claude-3-sonnet-20240229": 3.0 / 1000 * USD,
     "claude-3-5-sonnet-20240620": 3.0 / 1000 * USD,
+    "claude-3-5-sonnet-20241022": 3.0 / 1000 * USD,
     "claude-3-opus-20240229": 15.0 / 1000 * USD,
     // https://cloud.baidu.com/doc/WENXINWORKSHOP/s/hlrk4akp7
     "ERNIE-4.0-8K": 0.120 * RMB,
@@ -130,6 +132,7 @@ var ModelRatio = map[string]float64{
     "SparkDesk-v3.1": 1.2858, // ¥0.018 / 1k tokens
     "SparkDesk-v3.1-128K": 1.2858, // ¥0.018 / 1k tokens
     "SparkDesk-v3.5": 1.2858, // ¥0.018 / 1k tokens
+    "SparkDesk-v3.5-32K": 1.2858, // ¥0.018 / 1k tokens
     "SparkDesk-v4.0": 1.2858, // ¥0.018 / 1k tokens
     "360GPT_S2_V9": 0.8572, // ¥0.012 / 1k tokens
     "embedding-bert-512-v1": 0.0715, // ¥0.001 / 1k tokens
@@ -161,15 +164,21 @@ var ModelRatio = map[string]float64{
     "mistral-embed": 0.1 / 1000 * USD,
     // https://wow.groq.com/#:~:text=inquiries%C2%A0here.-,Model,-Current%20Speed
     "gemma-7b-it": 0.07 / 1000000 * USD,
-    "mixtral-8x7b-32768": 0.24 / 1000000 * USD,
-    "llama3-8b-8192": 0.05 / 1000000 * USD,
-    "llama3-70b-8192": 0.59 / 1000000 * USD,
     "gemma2-9b-it": 0.20 / 1000000 * USD,
-    "llama-3.1-405b-reasoning": 0.89 / 1000000 * USD,
     "llama-3.1-70b-versatile": 0.59 / 1000000 * USD,
     "llama-3.1-8b-instant": 0.05 / 1000000 * USD,
+    "llama-3.2-11b-text-preview": 0.05 / 1000000 * USD,
+    "llama-3.2-11b-vision-preview": 0.05 / 1000000 * USD,
+    "llama-3.2-1b-preview": 0.05 / 1000000 * USD,
+    "llama-3.2-3b-preview": 0.05 / 1000000 * USD,
+    "llama-3.2-90b-text-preview": 0.59 / 1000000 * USD,
+    "llama-guard-3-8b": 0.05 / 1000000 * USD,
+    "llama3-70b-8192": 0.59 / 1000000 * USD,
+    "llama3-8b-8192": 0.05 / 1000000 * USD,
     "llama3-groq-70b-8192-tool-use-preview": 0.89 / 1000000 * USD,
     "llama3-groq-8b-8192-tool-use-preview": 0.19 / 1000000 * USD,
+    "mixtral-8x7b-32768": 0.24 / 1000000 * USD,
 
     // https://platform.lingyiwanwu.com/docs#-计费单元
     "yi-34b-chat-0205": 2.5 / 1000 * RMB,
     "yi-34b-chat-200k": 12.0 / 1000 * RMB,
@@ -200,6 +209,8 @@ var ModelRatio = map[string]float64{
     "deepl-zh": 25.0 / 1000 * USD,
     "deepl-en": 25.0 / 1000 * USD,
     "deepl-ja": 25.0 / 1000 * USD,
+    // https://console.x.ai/
+    "grok-beta": 5.0 / 1000 * USD,
 }
 
 var CompletionRatio = map[string]float64{
@@ -364,6 +375,8 @@ func GetCompletionRatio(name string, channelType int) float64 {
         return 3
     case "command-r-plus":
         return 5
+    case "grok-beta":
+        return 3
     }
     return 1
 }
@@ -46,5 +46,6 @@ const (
     VertextAI
     Proxy
     SiliconFlow
+    XAI
     Dummy
 )
@@ -45,7 +45,8 @@ var ChannelBaseURLs = []string{
     "https://api.novita.ai/v3/openai", // 41
     "", // 42
     "", // 43
     "https://api.siliconflow.cn", // 44
+    "https://api.x.ai", // 45
 }
 
 func init() {
@@ -1,5 +1,6 @@
 package role
 
 const (
+    System = "system"
     Assistant = "assistant"
 )
@@ -4,6 +4,7 @@ import (
|
|||||||
"context"
|
"context"
|
||||||
"errors"
|
"errors"
|
||||||
"fmt"
|
"fmt"
|
||||||
|
"github.com/songquanpeng/one-api/relay/constant/role"
|
||||||
"math"
|
"math"
|
||||||
"net/http"
|
"net/http"
|
||||||
"strings"
|
"strings"
|
||||||
@@ -90,7 +91,7 @@ func preConsumeQuota(ctx context.Context, textRequest *relaymodel.GeneralOpenAIR
|
|||||||
return preConsumedQuota, nil
|
return preConsumedQuota, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
func postConsumeQuota(ctx context.Context, usage *relaymodel.Usage, meta *meta.Meta, textRequest *relaymodel.GeneralOpenAIRequest, ratio float64, preConsumedQuota int64, modelRatio float64, groupRatio float64) {
|
func postConsumeQuota(ctx context.Context, usage *relaymodel.Usage, meta *meta.Meta, textRequest *relaymodel.GeneralOpenAIRequest, ratio float64, preConsumedQuota int64, modelRatio float64, groupRatio float64, systemPromptReset bool) {
|
||||||
if usage == nil {
|
if usage == nil {
|
||||||
logger.Error(ctx, "usage is nil, which is unexpected")
|
logger.Error(ctx, "usage is nil, which is unexpected")
|
||||||
return
|
return
|
||||||
@@ -118,7 +119,11 @@ func postConsumeQuota(ctx context.Context, usage *relaymodel.Usage, meta *meta.M
|
|||||||
if err != nil {
|
if err != nil {
|
||||||
logger.Error(ctx, "error update user quota cache: "+err.Error())
|
logger.Error(ctx, "error update user quota cache: "+err.Error())
|
||||||
}
|
}
|
||||||
logContent := fmt.Sprintf("模型倍率 %.2f,分组倍率 %.2f,补全倍率 %.2f", modelRatio, groupRatio, completionRatio)
|
var extraLog string
|
||||||
|
if systemPromptReset {
|
||||||
|
extraLog = " (注意系统提示词已被重置)"
|
||||||
|
}
|
||||||
|
logContent := fmt.Sprintf("模型倍率 %.2f,分组倍率 %.2f,补全倍率 %.2f%s", modelRatio, groupRatio, completionRatio, extraLog)
|
||||||
model.RecordConsumeLog(ctx, meta.UserId, meta.ChannelId, promptTokens, completionTokens, textRequest.Model, meta.TokenName, quota, logContent)
|
model.RecordConsumeLog(ctx, meta.UserId, meta.ChannelId, promptTokens, completionTokens, textRequest.Model, meta.TokenName, quota, logContent)
|
||||||
model.UpdateUserUsedQuotaAndRequestCount(meta.UserId, quota)
|
model.UpdateUserUsedQuotaAndRequestCount(meta.UserId, quota)
|
||||||
model.UpdateChannelUsedQuota(meta.ChannelId, quota)
|
model.UpdateChannelUsedQuota(meta.ChannelId, quota)
|
||||||
@@ -154,3 +159,23 @@ func isErrorHappened(meta *meta.Meta, resp *http.Response) bool {
 	}
 	return false
 }
+
+func setSystemPrompt(ctx context.Context, request *relaymodel.GeneralOpenAIRequest, prompt string) (reset bool) {
+	if prompt == "" {
+		return false
+	}
+	if len(request.Messages) == 0 {
+		return false
+	}
+	if request.Messages[0].Role == role.System {
+		request.Messages[0].Content = prompt
+		logger.Infof(ctx, "rewrite system prompt")
+		return true
+	}
+	request.Messages = append([]relaymodel.Message{{
+		Role:    role.System,
+		Content: prompt,
+	}}, request.Messages...)
+	logger.Infof(ctx, "add system prompt")
+	return true
+}
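
`setSystemPrompt` either rewrites an existing leading system message or prepends a new one, and reports back whether it touched the request. A self-contained sketch of that behaviour using simplified stand-in types (not the real `relaymodel` package):

```go
package main

import "fmt"

// Simplified stand-ins for relaymodel.Message and the role constants,
// just to illustrate the overwrite-vs-prepend behaviour of setSystemPrompt.
type message struct {
	Role    string
	Content any
}

const roleSystem = "system"

func setSystemPrompt(messages []message, prompt string) ([]message, bool) {
	if prompt == "" || len(messages) == 0 {
		return messages, false
	}
	if messages[0].Role == roleSystem {
		messages[0].Content = prompt // rewrite the existing system prompt
		return messages, true
	}
	// otherwise prepend a new system message
	return append([]message{{Role: roleSystem, Content: prompt}}, messages...), true
}

func main() {
	withSystem := []message{{Role: roleSystem, Content: "old"}, {Role: "user", Content: "hi"}}
	withoutSystem := []message{{Role: "user", Content: "hi"}}

	a, _ := setSystemPrompt(withSystem, "channel prompt")
	b, _ := setSystemPrompt(withoutSystem, "channel prompt")
	fmt.Println(a) // leading system message content replaced
	fmt.Println(b) // system message prepended
}
```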
@@ -4,6 +4,7 @@ import (
 	"bytes"
 	"encoding/json"
 	"fmt"
+	"github.com/songquanpeng/one-api/common/config"
 	"io"
 	"net/http"

@@ -35,6 +36,8 @@ func RelayTextHelper(c *gin.Context) *model.ErrorWithStatusCode {
 	meta.OriginModelName = textRequest.Model
 	textRequest.Model, _ = getMappedModelName(textRequest.Model, meta.ModelMapping)
 	meta.ActualModelName = textRequest.Model
+	// set system prompt if not empty
+	systemPromptReset := setSystemPrompt(ctx, textRequest, meta.SystemPrompt)
 	// get model ratio & group ratio
 	modelRatio := billingratio.GetModelRatio(textRequest.Model, meta.ChannelType)
 	groupRatio := billingratio.GetGroupRatio(meta.Group)

@@ -79,12 +82,12 @@ func RelayTextHelper(c *gin.Context) *model.ErrorWithStatusCode {
 		return respErr
 	}
 	// post-consume quota
-	go postConsumeQuota(ctx, usage, meta, textRequest, ratio, preConsumedQuota, modelRatio, groupRatio)
+	go postConsumeQuota(ctx, usage, meta, textRequest, ratio, preConsumedQuota, modelRatio, groupRatio, systemPromptReset)
 	return nil
 }

 func getRequestBody(c *gin.Context, meta *meta.Meta, textRequest *model.GeneralOpenAIRequest, adaptor adaptor.Adaptor) (io.Reader, error) {
-	if meta.APIType == apitype.OpenAI && meta.OriginModelName == meta.ActualModelName && meta.ChannelType != channeltype.Baichuan {
+	if !config.EnforceIncludeUsage && meta.APIType == apitype.OpenAI && meta.OriginModelName == meta.ActualModelName && meta.ChannelType != channeltype.Baichuan {
 		// no need to convert request for openai
 		return c.Request.Body, nil
 	}
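
With the new `!config.EnforceIncludeUsage` guard, an unmodified OpenAI-to-OpenAI request is no longer passed through verbatim when usage enforcement is on; it goes through the adaptor's request conversion instead, which appears to be what lets the relay inject usage reporting into streaming calls. A rough sketch of that idea under that assumption (stand-in types, not the project's actual adaptor code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed stand-ins for the request model, enough to show the idea.
type streamOptions struct {
	IncludeUsage bool `json:"include_usage,omitempty"`
}

type request struct {
	Model         string         `json:"model"`
	Stream        bool           `json:"stream,omitempty"`
	StreamOptions *streamOptions `json:"stream_options,omitempty"`
}

var enforceIncludeUsage = true // stands in for config.EnforceIncludeUsage

// convert is a hypothetical conversion step: when usage reporting is enforced
// and the request is streaming, make sure stream_options.include_usage is set.
func convert(req *request) {
	if enforceIncludeUsage && req.Stream && req.StreamOptions == nil {
		req.StreamOptions = &streamOptions{IncludeUsage: true}
	}
}

func main() {
	req := &request{Model: "gpt-4o-mini", Stream: true}
	convert(req)
	body, _ := json.Marshal(req)
	fmt.Println(string(body)) // stream_options.include_usage is now present
}
```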
@@ -30,6 +30,7 @@ type Meta struct {
 	ActualModelName string
 	RequestURLPath  string
 	PromptTokens    int // only for DoResponse
+	SystemPrompt    string
 }

 func GetByContext(c *gin.Context) *Meta {

@@ -46,6 +47,7 @@ func GetByContext(c *gin.Context) *Meta {
 		BaseURL:        c.GetString(ctxkey.BaseURL),
 		APIKey:         strings.TrimPrefix(c.Request.Header.Get("Authorization"), "Bearer "),
 		RequestURLPath: c.Request.URL.String(),
+		SystemPrompt:   c.GetString(ctxkey.SystemPrompt),
 	}
 	cfg, ok := c.Get(ctxkey.Config)
 	if ok {
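
`Meta.SystemPrompt` is only read here via `c.GetString(ctxkey.SystemPrompt)`; the code that stores the channel's configured prompt into the gin context is not part of this excerpt. A hypothetical sketch of that hand-off, where the key name and middleware are stand-ins rather than the real `ctxkey`/middleware code:

```go
package main

import (
	"fmt"

	"github.com/gin-gonic/gin"
)

// Stand-in for ctxkey.SystemPrompt; illustration only.
const keySystemPrompt = "system_prompt"

// channelMiddleware mimics an upstream middleware that stores the channel's
// configured system prompt in the request context.
func channelMiddleware(systemPrompt string) gin.HandlerFunc {
	return func(c *gin.Context) {
		c.Set(keySystemPrompt, systemPrompt)
		c.Next()
	}
}

func main() {
	r := gin.New()
	r.Use(channelMiddleware("You are a helpful assistant."))
	r.GET("/demo", func(c *gin.Context) {
		// Equivalent read side to SystemPrompt: c.GetString(ctxkey.SystemPrompt)
		fmt.Println(c.GetString(keySystemPrompt))
		c.Status(200)
	})
	_ = r // r.Run() omitted; this only wires the handlers for illustration
}
```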
@@ -1,6 +1,7 @@
 package model

 const (
 	ContentTypeText       = "text"
 	ContentTypeImageURL   = "image_url"
+	ContentTypeInputAudio = "input_audio"
 )
@@ -12,32 +12,59 @@ type JSONSchema struct {
 	Strict *bool `json:"strict,omitempty"`
 }

+type Audio struct {
+	Voice  string `json:"voice,omitempty"`
+	Format string `json:"format,omitempty"`
+}
+
+type StreamOptions struct {
+	IncludeUsage bool `json:"include_usage,omitempty"`
+}
+
 type GeneralOpenAIRequest struct {
-	Messages []Message `json:"messages,omitempty"`
-	Model string `json:"model,omitempty"`
-	FrequencyPenalty float64 `json:"frequency_penalty,omitempty"`
-	MaxTokens int `json:"max_tokens,omitempty"`
-	N int `json:"n,omitempty"`
-	PresencePenalty float64 `json:"presence_penalty,omitempty"`
-	ResponseFormat *ResponseFormat `json:"response_format,omitempty"`
-	Seed float64 `json:"seed,omitempty"`
-	Stop any `json:"stop,omitempty"`
-	Stream bool `json:"stream,omitempty"`
-	Temperature float64 `json:"temperature,omitempty"`
-	TopP float64 `json:"top_p,omitempty"`
-	TopK int `json:"top_k,omitempty"`
-	Tools []Tool `json:"tools,omitempty"`
-	ToolChoice any `json:"tool_choice,omitempty"`
-	FunctionCall any `json:"function_call,omitempty"`
-	Functions any `json:"functions,omitempty"`
-	User string `json:"user,omitempty"`
-	Prompt any `json:"prompt,omitempty"`
-	Input any `json:"input,omitempty"`
-	EncodingFormat string `json:"encoding_format,omitempty"`
-	Dimensions int `json:"dimensions,omitempty"`
-	Instruction string `json:"instruction,omitempty"`
-	Size string `json:"size,omitempty"`
-	NumCtx int `json:"num_ctx,omitempty"`
+	// https://platform.openai.com/docs/api-reference/chat/create
+	Messages []Message `json:"messages,omitempty"`
+	Model string `json:"model,omitempty"`
+	Store *bool `json:"store,omitempty"`
+	Metadata any `json:"metadata,omitempty"`
+	FrequencyPenalty *float64 `json:"frequency_penalty,omitempty"`
+	LogitBias any `json:"logit_bias,omitempty"`
+	Logprobs *bool `json:"logprobs,omitempty"`
+	TopLogprobs *int `json:"top_logprobs,omitempty"`
+	MaxTokens int `json:"max_tokens,omitempty"`
+	MaxCompletionTokens *int `json:"max_completion_tokens,omitempty"`
+	N int `json:"n,omitempty"`
+	Modalities []string `json:"modalities,omitempty"`
+	Prediction any `json:"prediction,omitempty"`
+	Audio *Audio `json:"audio,omitempty"`
+	PresencePenalty *float64 `json:"presence_penalty,omitempty"`
+	ResponseFormat *ResponseFormat `json:"response_format,omitempty"`
+	Seed float64 `json:"seed,omitempty"`
+	ServiceTier *string `json:"service_tier,omitempty"`
+	Stop any `json:"stop,omitempty"`
+	Stream bool `json:"stream,omitempty"`
+	StreamOptions *StreamOptions `json:"stream_options,omitempty"`
+	Temperature *float64 `json:"temperature,omitempty"`
+	TopP *float64 `json:"top_p,omitempty"`
+	TopK int `json:"top_k,omitempty"`
+	Tools []Tool `json:"tools,omitempty"`
+	ToolChoice any `json:"tool_choice,omitempty"`
+	ParallelTooCalls *bool `json:"parallel_tool_calls,omitempty"`
+	User string `json:"user,omitempty"`
+	FunctionCall any `json:"function_call,omitempty"`
+	Functions any `json:"functions,omitempty"`
+	// https://platform.openai.com/docs/api-reference/embeddings/create
+	Input any `json:"input,omitempty"`
+	EncodingFormat string `json:"encoding_format,omitempty"`
+	Dimensions int `json:"dimensions,omitempty"`
+	// https://platform.openai.com/docs/api-reference/images/create
+	Prompt any `json:"prompt,omitempty"`
+	Quality *string `json:"quality,omitempty"`
+	Size string `json:"size,omitempty"`
+	Style *string `json:"style,omitempty"`
+	// Others
+	Instruction string `json:"instruction,omitempty"`
+	NumCtx int `json:"num_ctx,omitempty"`
 }

 func (r GeneralOpenAIRequest) ParseInput() []string {
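
The reworked `GeneralOpenAIRequest` mostly adds newer Chat Completions fields (`store`, `metadata`, `max_completion_tokens`, `modalities`, `audio`, `stream_options`, `parallel_tool_calls`, …) and turns several sampling parameters into pointers so that an unset value is omitted rather than serialized as 0. A trimmed, self-contained sketch of that omit-when-nil behaviour (stand-in struct, not the full relay model):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type streamOptions struct {
	IncludeUsage bool `json:"include_usage,omitempty"`
}

// chatRequest mirrors a handful of the new fields; pointer types let the relay
// distinguish "not set" (field omitted) from an explicit zero value.
type chatRequest struct {
	Model               string         `json:"model,omitempty"`
	MaxCompletionTokens *int           `json:"max_completion_tokens,omitempty"`
	Temperature         *float64       `json:"temperature,omitempty"`
	Stream              bool           `json:"stream,omitempty"`
	StreamOptions       *streamOptions `json:"stream_options,omitempty"`
}

func main() {
	// Temperature is left nil, so it is omitted entirely instead of being sent as 0.
	maxTokens := 256
	req := chatRequest{
		Model:               "gpt-4o-mini",
		MaxCompletionTokens: &maxTokens,
		Stream:              true,
		StreamOptions:       &streamOptions{IncludeUsage: true},
	}
	out, _ := json.Marshal(req)
	fmt.Println(string(out))
}
```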
@@ -9,6 +9,7 @@ import (

 func SetRelayRouter(router *gin.Engine) {
 	router.Use(middleware.CORS())
+	router.Use(middleware.GzipDecodeMiddleware())
 	// https://platform.openai.com/docs/api-reference/introduction
 	modelsRouter := router.Group("/v1/models")
 	modelsRouter.Use(middleware.TokenAuth())
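
`GzipDecodeMiddleware` is registered on the relay router so clients can send gzip-compressed request bodies; its implementation is not shown in this excerpt. A rough sketch of what such a gin middleware typically looks like (an assumption, not the project's actual code):

```go
package main

import (
	"compress/gzip"

	"github.com/gin-gonic/gin"
)

// gzipDecode transparently unwraps gzip-compressed request bodies so that
// downstream handlers read plain JSON. Sketch only; the real
// middleware.GzipDecodeMiddleware may differ.
func gzipDecode() gin.HandlerFunc {
	return func(c *gin.Context) {
		if c.GetHeader("Content-Encoding") == "gzip" {
			reader, err := gzip.NewReader(c.Request.Body)
			if err != nil {
				c.AbortWithStatus(400)
				return
			}
			defer reader.Close()
			c.Request.Body = reader
			c.Request.Header.Del("Content-Encoding")
		}
		c.Next()
	}
}

func main() {
	r := gin.New()
	r.Use(gzipDecode())
	_ = r // handlers and Run() omitted; illustration only
}
```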
@@ -395,7 +395,7 @@ const TokensTable = () => {
         url = mjLink + `/#/?settings={"key":"sk-${key}","url":"${serverAddress}"}`;
         break;
       case 'lobechat':
-        url = chatLink + `/?settings={"keyVaults":{"openai":{"apiKey":"sk-${key}","baseURL":"${serverAddress}"/v1"}}}`;
+        url = chatLink + `/?settings={"keyVaults":{"openai":{"apiKey":"sk-${key}","baseURL":"${serverAddress}/v1"}}}`;
         break;
       default:
         if (!chatLink) {
@@ -30,6 +30,7 @@ export const CHANNEL_OPTIONS = [
   { key: 42, text: 'VertexAI', value: 42, color: 'blue' },
   { key: 43, text: 'Proxy', value: 43, color: 'blue' },
   { key: 44, text: 'SiliconFlow', value: 44, color: 'blue' },
+  { key: 45, text: 'xAI', value: 45, color: 'blue' },
   { key: 8, text: '自定义渠道', value: 8, color: 'pink' },
   { key: 22, text: '知识库:FastGPT', value: 22, color: 'blue' },
   { key: 21, text: '知识库:AI Proxy', value: 21, color: 'purple' },
@@ -43,6 +43,7 @@ const EditChannel = (props) => {
     base_url: '',
     other: '',
     model_mapping: '',
+    system_prompt: '',
     models: [],
     auto_ban: 1,
     groups: ['default']
@@ -63,7 +64,7 @@ const EditChannel = (props) => {
     let localModels = [];
     switch (value) {
       case 14:
-        localModels = ["claude-instant-1.2", "claude-2", "claude-2.0", "claude-2.1", "claude-3-opus-20240229", "claude-3-sonnet-20240229", "claude-3-haiku-20240307", "claude-3-5-sonnet-20240620"];
+        localModels = ["claude-instant-1.2", "claude-2", "claude-2.0", "claude-2.1", "claude-3-opus-20240229", "claude-3-sonnet-20240229", "claude-3-haiku-20240307", "claude-3-5-haiku-20241022", "claude-3-5-sonnet-20240620", "claude-3-5-sonnet-20241022"];
         break;
       case 11:
         localModels = ['PaLM-2'];
@@ -78,7 +79,7 @@ const EditChannel = (props) => {
         localModels = ['chatglm_pro', 'chatglm_std', 'chatglm_lite'];
         break;
       case 18:
-        localModels = ['SparkDesk', 'SparkDesk-v1.1', 'SparkDesk-v2.1', 'SparkDesk-v3.1', 'SparkDesk-v3.1-128K', 'SparkDesk-v3.5', 'SparkDesk-v4.0'];
+        localModels = ['SparkDesk', 'SparkDesk-v1.1', 'SparkDesk-v2.1', 'SparkDesk-v3.1', 'SparkDesk-v3.1-128K', 'SparkDesk-v3.5', 'SparkDesk-v3.5-32K', 'SparkDesk-v4.0'];
         break;
       case 19:
         localModels = ['360GPT_S2_V9', 'embedding-bert-512-v1', 'embedding_s1_v1', 'semantic_similarity_s1_v1'];
@@ -304,163 +305,163 @@ const EditChannel = (props) => {
       width={isMobile() ? '100%' : 600}
     >
       <Spin spinning={loading}>
-        <div style={{marginTop: 10}}>
+        <div style={{ marginTop: 10 }}>
           <Typography.Text strong>类型:</Typography.Text>
         </div>
         <Select
           name='type'
           required
           optionList={CHANNEL_OPTIONS}
           value={inputs.type}
           onChange={value => handleInputChange('type', value)}
-          style={{width: '50%'}}
+          style={{ width: '50%' }}
         />
         {
           inputs.type === 3 && (
             <>
-              <div style={{marginTop: 10}}>
+              <div style={{ marginTop: 10 }}>
                 <Banner type={"warning"} description={
                   <>
                     注意,<strong>模型部署名称必须和模型名称保持一致</strong>,因为 One API 会把请求体中的
                     model
                     参数替换为你的部署名称(模型名称中的点会被剔除),<a target='_blank'
                       href='https://github.com/songquanpeng/one-api/issues/133?notification_referrer_id=NT_kwDOAmJSYrM2NjIwMzI3NDgyOjM5OTk4MDUw#issuecomment-1571602271'>图片演示</a>。
                   </>
                 }>
                 </Banner>
               </div>
-              <div style={{marginTop: 10}}>
+              <div style={{ marginTop: 10 }}>
                 <Typography.Text strong>AZURE_OPENAI_ENDPOINT:</Typography.Text>
               </div>
               <Input
                 label='AZURE_OPENAI_ENDPOINT'
                 name='azure_base_url'
                 placeholder={'请输入 AZURE_OPENAI_ENDPOINT,例如:https://docs-test-001.openai.azure.com'}
                 onChange={value => {
                   handleInputChange('base_url', value)
                 }}
                 value={inputs.base_url}
                 autoComplete='new-password'
               />
-              <div style={{marginTop: 10}}>
+              <div style={{ marginTop: 10 }}>
                 <Typography.Text strong>默认 API 版本:</Typography.Text>
               </div>
               <Input
                 label='默认 API 版本'
                 name='azure_other'
                 placeholder={'请输入默认 API 版本,例如:2024-03-01-preview,该配置可以被实际的请求查询参数所覆盖'}
                 onChange={value => {
                   handleInputChange('other', value)
                 }}
                 value={inputs.other}
                 autoComplete='new-password'
               />
             </>
           )
         }
         {
           inputs.type === 8 && (
             <>
-              <div style={{marginTop: 10}}>
+              <div style={{ marginTop: 10 }}>
                 <Typography.Text strong>Base URL:</Typography.Text>
               </div>
               <Input
                 name='base_url'
                 placeholder={'请输入自定义渠道的 Base URL'}
                 onChange={value => {
                   handleInputChange('base_url', value)
                 }}
                 value={inputs.base_url}
                 autoComplete='new-password'
               />
             </>
           )
         }
-        <div style={{marginTop: 10}}>
+        <div style={{ marginTop: 10 }}>
           <Typography.Text strong>名称:</Typography.Text>
         </div>
         <Input
           required
           name='name'
           placeholder={'请为渠道命名'}
           onChange={value => {
             handleInputChange('name', value)
           }}
           value={inputs.name}
           autoComplete='new-password'
         />
-        <div style={{marginTop: 10}}>
+        <div style={{ marginTop: 10 }}>
           <Typography.Text strong>分组:</Typography.Text>
         </div>
         <Select
           placeholder={'请选择可以使用该渠道的分组'}
           name='groups'
           required
           multiple
           selection
           allowAdditions
           additionLabel={'请在系统设置页面编辑分组倍率以添加新的分组:'}
           onChange={value => {
             handleInputChange('groups', value)
           }}
           value={inputs.groups}
           autoComplete='new-password'
           optionList={groupOptions}
         />
         {
           inputs.type === 18 && (
             <>
-              <div style={{marginTop: 10}}>
+              <div style={{ marginTop: 10 }}>
                 <Typography.Text strong>模型版本:</Typography.Text>
               </div>
               <Input
                 name='other'
                 placeholder={'请输入星火大模型版本,注意是接口地址中的版本号,例如:v2.1'}
                 onChange={value => {
                   handleInputChange('other', value)
                 }}
                 value={inputs.other}
                 autoComplete='new-password'
               />
             </>
           )
         }
         {
           inputs.type === 21 && (
             <>
-              <div style={{marginTop: 10}}>
+              <div style={{ marginTop: 10 }}>
                 <Typography.Text strong>知识库 ID:</Typography.Text>
               </div>
               <Input
                 label='知识库 ID'
                 name='other'
                 placeholder={'请输入知识库 ID,例如:123456'}
                 onChange={value => {
                   handleInputChange('other', value)
                 }}
                 value={inputs.other}
                 autoComplete='new-password'
               />
             </>
           )
         }
-        <div style={{marginTop: 10}}>
+        <div style={{ marginTop: 10 }}>
           <Typography.Text strong>模型:</Typography.Text>
         </div>
         <Select
           placeholder={'请选择该渠道所支持的模型'}
           name='models'
           required
           multiple
           selection
           onChange={value => {
             handleInputChange('models', value)
           }}
           value={inputs.models}
           autoComplete='new-password'
           optionList={modelOptions}
         />
-        <div style={{lineHeight: '40px', marginBottom: '12px'}}>
+        <div style={{ lineHeight: '40px', marginBottom: '12px' }}>
           <Space>
             <Button type='primary' onClick={() => {
               handleInputChange('models', basicModels);
@@ -473,28 +474,41 @@ const EditChannel = (props) => {
             }}>清除所有模型</Button>
           </Space>
           <Input
             addonAfter={
               <Button type='primary' onClick={addCustomModel}>填入</Button>
             }
             placeholder='输入自定义模型名称'
             value={customModel}
             onChange={(value) => {
               setCustomModel(value.trim());
             }}
           />
         </div>
-        <div style={{marginTop: 10}}>
+        <div style={{ marginTop: 10 }}>
           <Typography.Text strong>模型重定向:</Typography.Text>
         </div>
         <TextArea
           placeholder={`此项可选,用于修改请求体中的模型名称,为一个 JSON 字符串,键为请求中模型名称,值为要替换的模型名称,例如:\n${JSON.stringify(MODEL_MAPPING_EXAMPLE, null, 2)}`}
           name='model_mapping'
           onChange={value => {
             handleInputChange('model_mapping', value)
           }}
           autosize
           value={inputs.model_mapping}
           autoComplete='new-password'
+        />
+        <div style={{ marginTop: 10 }}>
+          <Typography.Text strong>系统提示词:</Typography.Text>
+        </div>
+        <TextArea
+          placeholder={`此项可选,用于强制设置给定的系统提示词,请配合自定义模型 & 模型重定向使用,首先创建一个唯一的自定义模型名称并在上面填入,之后将该自定义模型重定向映射到该渠道一个原生支持的模型`}
+          name='system_prompt'
+          onChange={value => {
+            handleInputChange('system_prompt', value)
+          }}
+          autosize
+          value={inputs.system_prompt}
+          autoComplete='new-password'
         />
         <Typography.Text style={{
           color: 'rgba(var(--semi-blue-5), 1)',
@@ -507,116 +521,116 @@ const EditChannel = (props) => {
           }>
           填入模板
         </Typography.Text>
-        <div style={{marginTop: 10}}>
+        <div style={{ marginTop: 10 }}>
           <Typography.Text strong>密钥:</Typography.Text>
         </div>
         {
           batch ?
             <TextArea
               label='密钥'
               name='key'
               required
               placeholder={'请输入密钥,一行一个'}
               onChange={value => {
                 handleInputChange('key', value)
               }}
               value={inputs.key}
-              style={{minHeight: 150, fontFamily: 'JetBrains Mono, Consolas'}}
+              style={{ minHeight: 150, fontFamily: 'JetBrains Mono, Consolas' }}
               autoComplete='new-password'
             />
             :
             <Input
               label='密钥'
               name='key'
               required
               placeholder={type2secretPrompt(inputs.type)}
               onChange={value => {
                 handleInputChange('key', value)
               }}
               value={inputs.key}
               autoComplete='new-password'
             />
         }
-        <div style={{marginTop: 10}}>
+        <div style={{ marginTop: 10 }}>
           <Typography.Text strong>组织:</Typography.Text>
         </div>
         <Input
           label='组织,可选,不填则为默认组织'
           name='openai_organization'
           placeholder='请输入组织org-xxx'
           onChange={value => {
             handleInputChange('openai_organization', value)
           }}
           value={inputs.openai_organization}
         />
-        <div style={{marginTop: 10, display: 'flex'}}>
+        <div style={{ marginTop: 10, display: 'flex' }}>
           <Space>
             <Checkbox
               name='auto_ban'
               checked={autoBan}
               onChange={
                 () => {
                   setAutoBan(!autoBan);
                 }
               }
               // onChange={handleInputChange}
             />
             <Typography.Text
               strong>是否自动禁用(仅当自动禁用开启时有效),关闭后不会自动禁用该渠道:</Typography.Text>
           </Space>
         </div>

         {
           !isEdit && (
-            <div style={{marginTop: 10, display: 'flex'}}>
+            <div style={{ marginTop: 10, display: 'flex' }}>
               <Space>
                 <Checkbox
                   checked={batch}
                   label='批量创建'
                   name='batch'
                   onChange={() => setBatch(!batch)}
                 />
                 <Typography.Text strong>批量创建</Typography.Text>
               </Space>
-            </div>
-          )
-        }
-        {
-          inputs.type !== 3 && inputs.type !== 8 && inputs.type !== 22 && (
-            <>
-              <div style={{marginTop: 10}}>
-                <Typography.Text strong>代理:</Typography.Text>
-              </div>
-              <Input
-                label='代理'
-                name='base_url'
-                placeholder={'此项可选,用于通过代理站来进行 API 调用'}
-                onChange={value => {
-                  handleInputChange('base_url', value)
-                }}
-                value={inputs.base_url}
-                autoComplete='new-password'
-              />
-            </>
-          )
-        }
-        {
-          inputs.type === 22 && (
-            <>
-              <div style={{marginTop: 10}}>
-                <Typography.Text strong>私有部署地址:</Typography.Text>
-              </div>
-              <Input
-                name='base_url'
-                placeholder={'请输入私有部署地址,格式为:https://fastgpt.run/api/openapi'}
-                onChange={value => {
-                  handleInputChange('base_url', value)
-                }}
-                value={inputs.base_url}
-                autoComplete='new-password'
-              />
-            </>
-          )
+            </div>
+          )
+        }
+        {
+          inputs.type !== 3 && inputs.type !== 8 && inputs.type !== 22 && (
+            <>
+              <div style={{ marginTop: 10 }}>
+                <Typography.Text strong>代理:</Typography.Text>
+              </div>
+              <Input
+                label='代理'
+                name='base_url'
+                placeholder={'此项可选,用于通过代理站来进行 API 调用'}
+                onChange={value => {
+                  handleInputChange('base_url', value)
+                }}
+                value={inputs.base_url}
+                autoComplete='new-password'
+              />
+            </>
+          )
+        }
+        {
+          inputs.type === 22 && (
+            <>
+              <div style={{ marginTop: 10 }}>
+                <Typography.Text strong>私有部署地址:</Typography.Text>
+              </div>
+              <Input
+                name='base_url'
+                placeholder={'请输入私有部署地址,格式为:https://fastgpt.run/api/openapi'}
+                onChange={value => {
+                  handleInputChange('base_url', value)
+                }}
+                value={inputs.base_url}
+                autoComplete='new-password'
+              />
+            </>
+          )
         }

       </Spin>
@@ -179,6 +179,12 @@ export const CHANNEL_OPTIONS = {
     value: 44,
     color: 'primary'
   },
+  45: {
+    key: 45,
+    text: 'xAI',
+    value: 45,
+    color: 'primary'
+  },
   41: {
     key: 41,
     text: 'Novita',
@@ -95,7 +95,7 @@ export async function onLarkOAuthClicked(lark_client_id) {
   const state = await getOAuthState();
   if (!state) return;
   let redirect_uri = `${window.location.origin}/oauth/lark`;
-  window.open(`https://open.feishu.cn/open-apis/authen/v1/index?redirect_uri=${redirect_uri}&app_id=${lark_client_id}&state=${state}`);
+  window.open(`https://accounts.feishu.cn/open-apis/authen/v1/authorize?redirect_uri=${redirect_uri}&client_id=${lark_client_id}&state=${state}`);
 }

 export async function onOidcClicked(auth_url, client_id, openInNewTab = false) {
@@ -595,6 +595,28 @@ const EditModal = ({ open, channelId, onCancel, onOk }) => {
               <FormHelperText id="helper-tex-channel-model_mapping-label"> {inputPrompt.model_mapping} </FormHelperText>
             )}
           </FormControl>
+          <FormControl fullWidth error={Boolean(touched.system_prompt && errors.system_prompt)} sx={{ ...theme.typography.otherInput }}>
+            {/* <InputLabel htmlFor="channel-model_mapping-label">{inputLabel.model_mapping}</InputLabel> */}
+            <TextField
+              multiline
+              id="channel-system_prompt-label"
+              label={inputLabel.system_prompt}
+              value={values.system_prompt}
+              name="system_prompt"
+              onBlur={handleBlur}
+              onChange={handleChange}
+              aria-describedby="helper-text-channel-system_prompt-label"
+              minRows={5}
+              placeholder={inputPrompt.system_prompt}
+            />
+            {touched.system_prompt && errors.system_prompt ? (
+              <FormHelperText error id="helper-tex-channel-system_prompt-label">
+                {errors.system_prompt}
+              </FormHelperText>
+            ) : (
+              <FormHelperText id="helper-tex-channel-system_prompt-label"> {inputPrompt.system_prompt} </FormHelperText>
+            )}
+          </FormControl>
           <DialogActions>
             <Button onClick={onCancel}>取消</Button>
             <Button disableElevation disabled={isSubmitting} type="submit" variant="contained" color="primary">
@@ -18,6 +18,7 @@ const defaultConfig = {
     other: '其他参数',
     models: '模型',
     model_mapping: '模型映射关系',
+    system_prompt: '系统提示词',
     groups: '用户组',
     config: null
   },

@@ -30,6 +31,7 @@ const defaultConfig = {
     models: '请选择该渠道所支持的模型',
     model_mapping:
       '请输入要修改的模型映射关系,格式为:api请求模型ID:实际转发给渠道的模型ID,使用JSON数组表示,例如:{"gpt-3.5": "gpt-35"}',
+    system_prompt:"此项可选,用于强制设置给定的系统提示词,请配合自定义模型 & 模型重定向使用,首先创建一个唯一的自定义模型名称并在上面填入,之后将该自定义模型重定向映射到该渠道一个原生支持的模型此项可选,用于强制设置给定的系统提示词,请配合自定义模型 & 模型重定向使用,首先创建一个唯一的自定义模型名称并在上面填入,之后将该自定义模型重定向映射到该渠道一个原生支持的模型",
     groups: '请选择该渠道所支持的用户组',
     config: null
   },

@@ -91,7 +93,7 @@ const typeConfig = {
       other: '版本号'
     },
     input: {
-      models: ['SparkDesk', 'SparkDesk-v1.1', 'SparkDesk-v2.1', 'SparkDesk-v3.1', 'SparkDesk-v3.1-128K', 'SparkDesk-v3.5', 'SparkDesk-v4.0']
+      models: ['SparkDesk', 'SparkDesk-v1.1', 'SparkDesk-v2.1', 'SparkDesk-v3.1', 'SparkDesk-v3.1-128K', 'SparkDesk-v3.5', 'SparkDesk-v3.5-32K', 'SparkDesk-v4.0']
     },
     prompt: {
       key: '按照如下格式输入:APPID|APISecret|APIKey',

@@ -223,6 +225,9 @@ const typeConfig = {
     },
     modelGroup: 'anthropic'
   },
+  45: {
+    modelGroup: 'xai'
+  },
 };

 export { defaultConfig, typeConfig };
@@ -33,7 +33,7 @@ const COPY_OPTIONS = [
   },
   { key: 'ama', text: 'BotGem', url: 'ama://set-api-key?server={serverAddress}&key=sk-{key}', encode: true },
   { key: 'opencat', text: 'OpenCat', url: 'opencat://team/join?domain={serverAddress}&token=sk-{key}', encode: true },
-  { key: 'lobechat', text: 'LobeChat', url: 'https://lobehub.com/?settings={"keyVaults":{"openai":{"apiKey":"user-key","baseURL":"https://your-proxy.com/v1"}}}', encode: true }
+  { key: 'lobechat', text: 'LobeChat', url: 'https://lobehub.com/?settings={"keyVaults":{"openai":{"apiKey":"sk-{key}","baseURL":"{serverAddress}"}}}', encode: true }
 ];

 function replacePlaceholders(text, key, serverAddress) {
web/build.sh — 0 changes (Normal file → Executable file)
@@ -59,6 +59,12 @@ function renderBalance(type, balance) {
   }
 }

+function isShowDetail() {
+  return localStorage.getItem("show_detail") === "true";
+}
+
+const promptID = "detail"
+
 const ChannelsTable = () => {
   const [channels, setChannels] = useState([]);
   const [loading, setLoading] = useState(true);

@@ -66,7 +72,8 @@ const ChannelsTable = () => {
   const [searchKeyword, setSearchKeyword] = useState('');
   const [searching, setSearching] = useState(false);
   const [updatingBalance, setUpdatingBalance] = useState(false);
-  const [showPrompt, setShowPrompt] = useState(shouldShowPrompt("channel-test"));
+  const [showPrompt, setShowPrompt] = useState(shouldShowPrompt(promptID));
+  const [showDetail, setShowDetail] = useState(isShowDetail());

   const loadChannels = async (startIdx) => {
     const res = await API.get(`/api/channel/?p=${startIdx}`);

@@ -120,6 +127,11 @@ const ChannelsTable = () => {
     await loadChannels(activePage - 1);
   };

+  const toggleShowDetail = () => {
+    setShowDetail(!showDetail);
+    localStorage.setItem("show_detail", (!showDetail).toString());
+  }
+
   useEffect(() => {
     loadChannels(0)
       .then()

@@ -364,11 +376,13 @@ const ChannelsTable = () => {
         showPrompt && (
           <Message onDismiss={() => {
             setShowPrompt(false);
-            setPromptShown("channel-test");
+            setPromptShown(promptID);
           }}>
             OpenAI 渠道已经不再支持通过 key 获取余额,因此余额显示为 0。对于支持的渠道类型,请点击余额进行刷新。
             <br/>
             渠道测试仅支持 chat 模型,优先使用 gpt-3.5-turbo,如果该模型不可用则使用你所配置的模型列表中的第一个模型。
+            <br/>
+            点击下方详情按钮可以显示余额以及设置额外的测试模型。
           </Message>
         )
       }

@@ -428,6 +442,7 @@ const ChannelsTable = () => {
             onClick={() => {
               sortChannel('balance');
             }}
+            hidden={!showDetail}
           >
             余额
           </Table.HeaderCell>

@@ -439,7 +454,7 @@ const ChannelsTable = () => {
           >
             优先级
           </Table.HeaderCell>
-          <Table.HeaderCell>测试模型</Table.HeaderCell>
+          <Table.HeaderCell hidden={!showDetail}>测试模型</Table.HeaderCell>
           <Table.HeaderCell>操作</Table.HeaderCell>
         </Table.Row>
       </Table.Header>

@@ -467,7 +482,7 @@ const ChannelsTable = () => {
                   basic
                 />
               </Table.Cell>
-              <Table.Cell>
+              <Table.Cell hidden={!showDetail}>
                 <Popup
                   trigger={<span onClick={() => {
                     updateChannelBalance(channel.id, channel.name, idx);

@@ -494,7 +509,7 @@ const ChannelsTable = () => {
                   basic
                 />
               </Table.Cell>
-              <Table.Cell>
+              <Table.Cell hidden={!showDetail}>
                 <Dropdown
                   placeholder='请选择测试模型'
                   selection

@@ -573,7 +588,7 @@ const ChannelsTable = () => {

       <Table.Footer>
         <Table.Row>
-          <Table.HeaderCell colSpan='9'>
+          <Table.HeaderCell colSpan={showDetail ? "10" : "8"}>
             <Button size='small' as={Link} to='/channel/add' loading={loading}>
               添加新的渠道
             </Button>

@@ -611,6 +626,7 @@ const ChannelsTable = () => {
               }
             />
             <Button size='small' onClick={refresh} loading={loading}>刷新</Button>
+            <Button size='small' onClick={toggleShowDetail}>{showDetail ? "隐藏详情" : "详情"}</Button>
           </Table.HeaderCell>
         </Table.Row>
       </Table.Footer>
@@ -117,7 +117,7 @@ const TokensTable = () => {
         url = nextUrl;
         break;
       case 'lobechat':
-        url = nextLink + `/?settings={"keyVaults":{"openai":{"apiKey":"sk-${key}","baseURL":"${serverAddress}"/v1"}}}`;
+        url = nextLink + `/?settings={"keyVaults":{"openai":{"apiKey":"sk-${key}","baseURL":"${serverAddress}/v1"}}}`;
         break;
       default:
         url = `sk-${key}`;

@@ -160,7 +160,7 @@ const TokensTable = () => {
         break;

       case 'lobechat':
-        url = chatLink + `/?settings={"keyVaults":{"openai":{"apiKey":"sk-${key}","baseURL":"${serverAddress}"/v1"}}}`;
+        url = chatLink + `/?settings={"keyVaults":{"openai":{"apiKey":"sk-${key}","baseURL":"${serverAddress}/v1"}}}`;
         break;

       default:
@@ -30,6 +30,7 @@ export const CHANNEL_OPTIONS = [
   { key: 42, text: 'VertexAI', value: 42, color: 'blue' },
   { key: 43, text: 'Proxy', value: 43, color: 'blue' },
   { key: 44, text: 'SiliconFlow', value: 44, color: 'blue' },
+  { key: 45, text: 'xAI', value: 45, color: 'blue' },
   { key: 8, text: '自定义渠道', value: 8, color: 'pink' },
   { key: 22, text: '知识库:FastGPT', value: 22, color: 'blue' },
   { key: 21, text: '知识库:AI Proxy', value: 21, color: 'purple' },
@@ -43,6 +43,7 @@ const EditChannel = () => {
     base_url: '',
     other: '',
     model_mapping: '',
+    system_prompt: '',
     models: [],
     groups: ['default']
   };

@@ -425,7 +426,7 @@ const EditChannel = () => {
           )
         }
         {
-          inputs.type !== 43 && (
+          inputs.type !== 43 && (<>
             <Form.Field>
               <Form.TextArea
                 label='模型重定向'

@@ -437,6 +438,18 @@ const EditChannel = () => {
                 autoComplete='new-password'
               />
             </Form.Field>
+            <Form.Field>
+              <Form.TextArea
+                label='系统提示词'
+                placeholder={`此项可选,用于强制设置给定的系统提示词,请配合自定义模型 & 模型重定向使用,首先创建一个唯一的自定义模型名称并在上面填入,之后将该自定义模型重定向映射到该渠道一个原生支持的模型`}
+                name='system_prompt'
+                onChange={handleInputChange}
+                value={inputs.system_prompt}
+                style={{ minHeight: 150, fontFamily: 'JetBrains Mono, Consolas' }}
+                autoComplete='new-password'
+              />
+            </Form.Field>
+          </>
           )
         }
         {
@@ -2,7 +2,7 @@ import React from 'react';
 import { Header, Segment } from 'semantic-ui-react';
 import ChannelsTable from '../../components/ChannelsTable';

-const File = () => (
+const Channel = () => (
   <>
     <Segment>
       <Header as='h3'>管理渠道</Header>

@@ -11,4 +11,4 @@ const File = () => (
   </>
 );

-export default File;
+export default Channel;