Compare commits

..

188 Commits

Author SHA1 Message Date
JustSong
8df4a2670b docs: update ByteDance Doubao model link in README
2025-02-21 19:30:16 +08:00
longkeyy
7ac553541b
feat: update openrouter models and price 20250213 (#2084)
2025-02-16 18:01:59 +08:00
longkeyy
a5c517c27a
feat: update ali models and price 20250213 (#2086) 2025-02-16 18:01:24 +08:00
JustSong
3f421c4f04 feat: support Gemini openai compatible api 2025-02-16 17:59:39 +08:00
JustSong
1ce6a226f6 chore: update prompt 2025-02-16 17:42:20 +08:00
JustSong
cafd0a0327 feat: add OpenAI compatible channel (close #2091) 2025-02-16 17:38:06 +08:00
JustSong
8b8cd03e85 feat: add balance not supported message in ChannelsTable
2025-02-12 01:20:28 +08:00
JustSong
54c38de813 style: improve code formatting and structure in ChannelsTable and render helpers 2025-02-12 01:15:45 +08:00
JustSong
d6284bf6b0 feat: enhance error handling for utils.js 2025-02-12 00:46:13 +08:00
DobyAsa
df5d2ca93d
docs: fix README typo (#2060) 2025-02-12 00:35:29 +08:00
Laisky.Cai
fef7ae048b
feat: support gemini-2.0-flash (#2055)
* feat: support gemini-2.0-flash

- Enhance model support by adding new entries and refining checks for system instruction compatibility.
- Update logging display behavior and adjust default quotas for better user experience.
- Revamp pricing structures in the billing system to reflect current model values and deprecate outdated entries.
- Streamline code by replacing hardcoded values with configurations for maintainability.

* feat: add new Gemini 2.0 flash models to adapter and billing ratio

* fix: update GetRequestURL to support gemini-1.5 model in versioning
2025-02-12 00:34:25 +08:00
JustSong
6916debf66 feat: update TestPrompt to specify output format for model name 2025-02-12 00:28:23 +08:00
JustSong
53da209134 feat: add AliBailian adaptor and update channel options 2025-02-12 00:15:43 +08:00
JustSong
517f6ad211 feat: update date range to display at least 7 days of data in Dashboard
2025-02-11 01:48:26 +08:00
JustSong
10aba11f18 style: improve code formatting in ChannelsTable component 2025-02-11 00:38:15 +08:00
JustSong
4d011c5f98 feat: add OpenRouter balance update functionality and improve code formatting 2025-02-11 00:35:06 +08:00
JustSong
eb96aa635e feat: update OpenRouter channel name and add model list for OpenRouter adaptor 2025-02-11 00:20:55 +08:00
JustSong
c715f2bc1d feat: add new models for xai
2025-02-09 21:21:28 +08:00
JustSong
aed090dd55 fix: fix cannot select test model when searching
2025-02-09 19:09:53 +08:00
JustSong
696265774e feat: add MiniMax model constants to the adaptor 2025-02-09 18:55:32 +08:00
JustSong
974729426d feat: refactor Xunfei API version handling and update model list 2025-02-09 18:50:51 +08:00
JustSong
57c1367ec8 feat: add Xunfei V2 channel support and update related configurations 2025-02-09 18:31:54 +08:00
JustSong
44233d5c04 feat: add completion tokens details and reasoning effort fields to model (close #2050) 2025-02-09 18:14:01 +08:00
JustSong
bf45a955c3 fix: update system prompt handling by renaming field and ensuring proper usage in request processing (close #2069) 2025-02-09 14:41:42 +08:00
JustSong
20435fcbfc fix: simplify Docker build configuration by removing unnecessary platform and architecture settings 2025-02-09 14:33:25 +08:00
JustSong
6e7a1c2323 fix: format channel options for consistency and improve tips for user guidance 2025-02-09 12:42:31 +08:00
JustSong
dd65b997dd feat: add Baidu V2 channel support and improve model handling 2025-02-09 12:37:26 +08:00
JustSong
0b6d03d6c6 fix: update channel name from '火山引擎' to '字节火山引擎' for consistency 2025-02-09 12:08:40 +08:00
JustSong
4375246e24 feat: enhance channel options with tips and descriptions for better user guidance 2025-02-09 12:03:31 +08:00
longkeyy
3e3b8230ac
fix: add read/write locks for ModelRatio and GroupRatio to prevent concurrent map read/write issues (#2067)
2025-02-09 11:02:45 +08:00
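
The commit above fixes concurrent map access on ModelRatio and GroupRatio by adding read/write locks. A minimal sketch of the locking pattern it names, with the map and function names invented here for illustration (the repository's actual identifiers may differ):

```go
package ratio

import "sync"

var (
	modelRatioMu sync.RWMutex
	// modelRatio maps a model name to its price multiplier (illustrative data).
	modelRatio = map[string]float64{"gpt-4o": 2.5}
)

// GetModelRatio takes a read lock, so many readers can run concurrently but
// never while an update holds the write lock.
func GetModelRatio(name string) (float64, bool) {
	modelRatioMu.RLock()
	defer modelRatioMu.RUnlock()
	r, ok := modelRatio[name]
	return r, ok
}

// SetModelRatio swaps in a new map under the write lock, e.g. after the admin
// edits the ratios in the options page.
func SetModelRatio(newRatio map[string]float64) {
	modelRatioMu.Lock()
	defer modelRatioMu.Unlock()
	modelRatio = newRatio
}
```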
JustSong
07808122a6 fix: fix Debugf not using DebugEnabled (close #2068) 2025-02-09 10:57:22 +08:00
牡丹凤凰
c96895e35b
docs: add related project CherryStudio (#2059)
* Update README.md

Add the related project CherryStudio

* Update README.en.md

* Update README.ja.md
2025-02-08 00:07:55 +08:00
JustSong
2552c68249 fix: update doubao channel name
2025-02-07 01:51:28 +08:00
JustSong
5c81e40612 fix: update Dockerfile and workflow for improved multi-architecture support 2025-02-07 01:35:53 +08:00
JustSong
0d5318b1b7 revert: fix: revert sqlite build related changes
This reverts commit db65db2807.
2025-02-07 01:15:33 +08:00
JustSong
db65db2807 fix: revert sqlite build related changes 2025-02-07 00:48:23 +08:00
JustSong
e0b7e6a9e2 fix: unify version retrieval in Dockerfile build commands 2025-02-07 00:39:55 +08:00
JustSong
27c2abe80f fix: update Docker setup actions to latest versions 2025-02-07 00:33:15 +08:00
JustSong
2c867251b5 fix: improve code formatting and readability in Dashboard component 2025-02-07 00:23:13 +08:00
JustSong
108111ebd3 fix: exclude preview tags from release workflows 2025-02-07 00:19:23 +08:00
JustSong
293ba93ad6 fix: remove outdated model from ModelList and add new deepseek models 2025-02-07 00:13:57 +08:00
JustSong
faced40d5b fix: update Docker image workflow to conditionally include arm64 platform 2025-02-07 00:06:32 +08:00
JustSong
2ae9997f29 fix: enhance error handling for Gemini API key validation 2025-02-07 00:03:00 +08:00
JustSong
e146b14d46 fix: add default API version handling and enhance error message checks for Gemini 2025-02-07 00:01:38 +08:00
JustSong
e19045f925 chore: add deepseek-reasoner 2025-02-06 23:38:29 +08:00
JustSong
d2903b673d fix: update SiliconCloud link in README
2025-02-04 20:07:05 +08:00
JustSong
33edcf604c feat: add balance_not_supported translation to English and Chinese locales
2025-02-03 18:22:59 +08:00
JustSong
78fd6ef161 fix: update README for clarity on stable and alpha image addresses 2025-02-03 18:15:44 +08:00
JustSong
9b6d5f76e0 fix: enhance douban_notice_2 for clarity on model name conversion
2025-02-03 00:11:43 +08:00
JustSong
2250f311e1 feat: add latest version of Claude model to constants 2025-02-02 22:25:52 +08:00
lyj
e43f758623
Update model.go (#1963) 2025-02-02 22:24:31 +08:00
江南一根葱
6ded638f70
fix: audio transcription 400 error (#1882) (#2003) 2025-02-02 22:22:42 +08:00
jiz4oh
ea3331b79a
fix: remove duplicated model (#2012)
the model `llama-3.2-11b-vision-preview` was declared at 3915ce9814/relay/adaptor/groq/constants.go (L11)
2025-02-02 22:21:30 +08:00
Jeremy JIANG
7a97ddc03c
fix: fix hunyuan paramete & update price (#2019)
* fix: omit null TopP and Temperature fields in request body

* feat: update Hunyuan model ratios
2025-02-02 22:20:38 +08:00
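
The first bullet of the commit above drops null TopP and Temperature values from the request body. The usual Go pattern for that is pointer fields tagged `omitempty`, sketched here with an invented request struct (the repository's real field set is larger):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// chatRequest is an illustrative request body; only the fields relevant to
// the omission pattern are shown.
type chatRequest struct {
	Model       string   `json:"model"`
	TopP        *float64 `json:"top_p,omitempty"`
	Temperature *float64 `json:"temperature,omitempty"`
}

func main() {
	// Temperature is never set, so it is omitted instead of being sent as null.
	req := chatRequest{Model: "hunyuan-lite"}
	b, _ := json.Marshal(req)
	fmt.Println(string(b)) // {"model":"hunyuan-lite"}

	t := 0.7
	req.Temperature = &t
	b, _ = json.Marshal(req)
	fmt.Println(string(b)) // {"model":"hunyuan-lite","temperature":0.7}
}
```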
JustSong
d7028b55fd feat: update model list in Zhipu constants for expanded options 2025-02-02 22:15:42 +08:00
Jeremy JIANG
36a03f465b
feat: update Zhipu model prices (#2020) 2025-02-02 22:06:07 +08:00
JustSong
ae16647047 fix: change settings tab item color for improved visibility 2025-02-02 21:32:49 +08:00
JustSong
4dbc2ad86d fix: update tooltip keys in Dashboard for consistency in statistics display 2025-02-02 20:25:14 +08:00
JustSong
94479c2800 fix: add success message for completed operations in translation files 2025-02-02 20:07:08 +08:00
JustSong
614903b524 feat: comment out stat value displays in Dashboard for cleaner UI 2025-02-02 19:58:51 +08:00
JustSong
1dc255c219 feat: add Account ID input field for specific channel type and prevent potential bug with undefined key 2025-02-02 19:52:34 +08:00
JustSong
15c27e4b12 feat: increase toast notification timeouts for better user experience 2025-02-02 19:22:26 +08:00
JustSong
a910d3ba25 feat: increase global API and web rate limits for improved performance 2025-02-02 19:20:50 +08:00
JustSong
34f889437d feat: update Dashboard CSS to change active settings tab color to black for better visibility 2025-02-02 19:08:38 +08:00
JustSong
a56cb4a2d5 feat: update LogsTable input and button sizes to 'small' for improved UI consistency 2025-02-02 18:59:36 +08:00
JustSong
95009b254a feat: update TokensTable button sizes to 'tiny' and adjust translation for 'never expire' 2025-02-02 18:57:57 +08:00
JustSong
20c30a060b feat: enhance About page layout with Card component and improve Dashboard CSS styling 2025-02-02 18:49:42 +08:00
JustSong
ca334a5502 feat: refactor i18n setup to use local translation files and remove backend dependency 2025-02-02 18:40:04 +08:00
JustSong
c81ee68dd0 feat: improve error logging for token encoder initialization with offline usage guidance 2025-02-02 17:24:23 +08:00
JustSong
ececfe5ba0 feat: add SysWarn and SysWarnf logging functions; handle SMTP short response warning 2025-02-02 17:13:41 +08:00
JustSong
9e1f06df5c chore: update .gitignore to include .DS_Store and ensure temp directory is ignored 2025-02-02 16:50:32 +08:00
JustSong
8fa9df5b3c feat: enhance email notifications with HTML templates and improved content
2025-02-02 16:39:30 +08:00
JustSong
19998ebb8a feat: update i18n messages and improve error handling 2025-02-02 16:25:52 +08:00
JustSong
5659fde3fb chore: update style 2025-02-02 16:05:52 +08:00
JustSong
c64b7c891f chore: update default theme style 2025-02-02 16:02:38 +08:00
JustSong
9b83d63896 docs: update README files to remove references to deprecated Docker image 2025-02-02 15:19:53 +08:00
JustSong
0bf3fd4038 docs: update README files with stable and alpha Docker image addresses 2025-02-02 15:17:59 +08:00
JustSong
99b6200156 docs: update README with stable and alpha Docker image links 2025-02-02 15:16:24 +08:00
JustSong
8f40e11c97 feat: initial official i18n support for backend 2025-02-02 15:09:31 +08:00
JustSong
562964238c chore: refactor docker image workflow to use format function for repository URLs 2025-02-02 14:21:20 +08:00
JustSong
46b1d35d83 chore: update docker image workflow for alpha releases 2025-02-02 14:17:48 +08:00
JustSong
6a197ceb69 chore: update default theme style 2025-02-02 14:10:38 +08:00
JustSong
bd7d0d1e96 chore: update default theme style 2025-02-02 14:00:40 +08:00
JustSong
f325f58edf chore: update default theme style 2025-02-02 13:51:19 +08:00
JustSong
aef2100be1 chore: update dashboard 2025-02-02 13:40:09 +08:00
JustSong
4685c52a5d chore: update dashboard 2025-02-02 13:38:48 +08:00
JustSong
174dea7763 feat: i18n support 2025-02-02 13:32:52 +08:00
JustSong
9b81c88250 feat: i18n support 2025-02-02 13:21:14 +08:00
JustSong
72d911986c feat: i18n support 2025-02-02 13:16:56 +08:00
JustSong
a01d769a83 chore: make buttons tiny 2025-02-02 13:14:52 +08:00
JustSong
d0965050a9 feat: i18n support 2025-02-02 00:12:22 +08:00
JustSong
e7ea7c866f feat: i18n support 2025-02-02 00:08:02 +08:00
JustSong
b1fe81a84f feat: i18n support 2025-02-02 00:05:40 +08:00
JustSong
e183e3b9b0 feat: i18n support 2025-02-01 23:58:55 +08:00
JustSong
b7f008cd72 feat: i18n support 2025-02-01 23:52:42 +08:00
JustSong
33102c4586 feat: i18n support 2025-02-01 23:50:32 +08:00
JustSong
ee3ed65356 feat: i18n support 2025-02-01 23:48:05 +08:00
JustSong
958f2f4ea8 feat: i18n support 2025-02-01 23:42:00 +08:00
JustSong
4a5f872dce feat: i18n support 2025-02-01 23:32:36 +08:00
JustSong
6ca6a3ea74 feat: i18n support 2025-02-01 17:05:46 +08:00
JustSong
2c8c29bfc7 feat: i18n support 2025-02-01 17:04:31 +08:00
JustSong
ae20aea555 feat: i18n support 2025-02-01 17:00:24 +08:00
JustSong
60f2776795 feat: i18n for token related pages 2025-02-01 15:11:07 +08:00
JustSong
93ce6c4cd7 chore: disable arm64 build for now
2025-02-01 14:06:00 +08:00
JustSong
4fe5ab8d09 fix: fix syntax error 2025-02-01 13:38:57 +08:00
JustSong
afbbfbbf83 chore: do not static build anymore 2025-02-01 13:32:50 +08:00
JustSong
75d9d9d560 fix: fix Dockerfile 2025-02-01 13:30:58 +08:00
JustSong
d1af30ee5a chore: use ubuntu to replace alpine 2025-02-01 13:27:21 +08:00
JustSong
a3924a2353 chore: update go-sqlite version 2025-02-01 13:19:55 +08:00
JustSong
9af5a1d11d fix: try to fix docker build problem 2025-02-01 13:13:43 +08:00
JustSong
57f9f7dfbb chore: drop docker-image-en.yml 2025-02-01 12:27:09 +08:00
JustSong
d9f2df2baf feat: initial i18n support 2025-02-01 12:25:58 +08:00
JustSong
bdf312e5dc feat: initial i18n support 2025-02-01 12:15:38 +08:00
JustSong
1521df6551 fix: fix unable to login via wechat 2025-02-01 12:04:28 +08:00
JustSong
c67b167f4f fix: try to fix docker build error 2025-02-01 11:56:23 +08:00
JustSong
c351e196e6 chore: add build-base to Dockerfile for enhanced build capabilities
2025-02-01 02:22:38 +08:00
JustSong
a316ed7abc fix: handle empty dashboard data and improve summary calculation 2025-02-01 01:54:00 +08:00
JustSong
0895d8660e fix: fix about page 2025-02-01 01:42:38 +08:00
JustSong
be1ed114f4 chore: add gcc and sqlite-dev dependencies to Dockerfile 2025-02-01 01:37:21 +08:00
JustSong
eb6da573a3 chore: optimize Dockerfile for multi-directory npm installation and build 2025-02-01 01:12:26 +08:00
JustSong
0a6273fc08 chore: bug fix for home page 2025-02-01 01:08:28 +08:00
JustSong
5997fce454 chore: update button style 2025-02-01 00:36:33 +08:00
JustSong
0df6d7a131 chore: fix prompt 2025-02-01 00:27:05 +08:00
JustSong
93fdb60de5 feat: update log table style 2025-02-01 00:21:04 +08:00
JustSong
4db834da95 chore: update default theme style 2025-02-01 00:13:09 +08:00
JustSong
6818ed5ca8 chore: update default theme style 2025-02-01 00:07:41 +08:00
JustSong
7be3b5547d chore: update default theme style 2025-02-01 00:06:19 +08:00
JustSong
2d7ea61d67 chore: update default theme style 2025-02-01 00:02:30 +08:00
JustSong
83b34be067 chore: update default theme style 2025-02-01 00:01:06 +08:00
JustSong
d5d879afdc chore: update default theme style 2025-01-31 23:54:45 +08:00
JustSong
0f205a3aa3 chore: update default theme style 2025-01-31 23:53:00 +08:00
JustSong
76c3f87351 chore: update default theme style 2025-01-31 23:46:05 +08:00
JustSong
6d9a92f8f7 chore: update default theme style 2025-01-31 23:44:39 +08:00
JustSong
835f0e0d67 chore: update default theme style 2025-01-31 23:38:40 +08:00
JustSong
a6981f0d51 chore: update default theme style 2025-01-31 23:33:14 +08:00
JustSong
678d613179 chore: update default theme style 2025-01-31 23:31:41 +08:00
JustSong
be089a072b chore: update default theme style 2025-01-31 23:25:32 +08:00
JustSong
45d10aa3df chore: update default theme style 2025-01-31 23:24:11 +08:00
JustSong
9cdd48ac22 feat: update log table style
2025-01-31 23:21:42 +08:00
JustSong
310e7120e5 chore: update default theme style 2025-01-31 23:20:57 +08:00
JustSong
3d29713268 chore: update default theme style 2025-01-31 23:10:02 +08:00
JustSong
f2c7c424e9 chore: update default theme style 2025-01-31 23:08:07 +08:00
JustSong
38a42bb265 chore: update default theme style 2025-01-31 23:01:03 +08:00
JustSong
fa2e8f44b1 chore: update default theme style 2025-01-31 22:55:45 +08:00
JustSong
9f74101543 chore: update default theme style 2025-01-31 22:53:40 +08:00
JustSong
28a271a896 chore: update default theme style 2025-01-31 22:50:48 +08:00
JustSong
e8ea87fff3 chore: update home page style 2025-01-31 22:45:57 +08:00
JustSong
abe2d2dba8 chore: update style 2025-01-31 22:38:39 +08:00
JustSong
4bcaa064d6 chore: update style 2025-01-31 22:27:26 +08:00
JustSong
52d81e0e24 feat: remove first section for overview 2025-01-31 22:24:13 +08:00
JustSong
dc8c3bc69e feat: basic overview is done 2025-01-31 22:18:02 +08:00
JustSong
b4e69df802 fix: do not send access_token 2025-01-31 21:53:56 +08:00
JustSong
d9f74bdff3 feat: support new log type 2025-01-31 21:49:34 +08:00
JustSong
fa2a772731 feat: able to query test log 2025-01-31 21:23:12 +08:00
JustSong
4f68f3e1b3 chore: update log content 2025-01-31 20:16:56 +08:00
JustSong
0bab887b2d chore: update log content 2025-01-31 20:15:04 +08:00
JustSong
0230d36643 feat: update log table style 2025-01-31 20:06:43 +08:00
JustSong
bad57d049a feat: update log table style 2025-01-31 20:02:51 +08:00
JustSong
dc470ce82e feat: show stream & elapsed time in log detail 2025-01-31 19:34:22 +08:00
JustSong
ea0721d525 feat: update log content format 2025-01-31 18:15:43 +08:00
JustSong
d0402f9086 feat: record request_id 2025-01-31 17:54:04 +08:00
JustSong
1fead8e7f7 chore: add debug log for distributor 2025-01-31 17:26:33 +08:00
Fennng
09911a301d
feat: support hunyuan-embedding (#2035)
* feat: support hunyuan-embedding

* chore: improve implementation

---------

Co-authored-by: LUO Feng <luofeng@flowpp.com>
Co-authored-by: JustSong <quanpengsong@gmail.com>
2025-01-31 16:48:02 +08:00
chenzikun
f95e6b78b8
fix: fix berry copy token (#2041)
* [bugfix] fix the copy issue

* [update] build the code in two stages

---------

Co-authored-by: zicorn <a24395@autel.com>
2025-01-31 16:12:59 +08:00
JustSong
605bb06667 feat: update logger 2025-01-31 16:00:53 +08:00
Laisky.Cai
d88e07fd9a
feat: add deepseek-reasoner & gemini-2.0-flash-thinking-exp-01-21 (#2045)
* feat: add MILLI_USD constant and update pricing for deepseek services

* feat: add support for new Gemini model version 'gemini-2.0-flash-thinking-exp-01-21'
2025-01-31 15:15:59 +08:00
JustSong
3915ce9814 chore: update ci yaml
2024-12-27 22:01:37 +08:00
JustSong
999defc88b chore: update readme 2024-12-27 21:59:38 +08:00
JustSong
b51c47bc77
docs: update README.md 2024-12-27 21:45:51 +08:00
JustSong
4f25cde132 fix: add branch check 2024-12-27 20:41:20 +08:00
JustSong
d89e9d7e44 fix: add branch limitation and drop pull_request trigger for ci.yml 2024-12-27 20:34:04 +08:00
Qiying Wang
a858292b54
feat: support gpt-4o-2024-11-20 (#1941)
2024-12-22 19:49:50 +08:00
Yuwei Ba
ff589b5e4a
chore: update model mapping implementation for audio (#1932)
* fixed model mapping

* chore: update implementation

---------

Co-authored-by: JustSong <quanpengsong@gmail.com>
2024-12-22 19:33:11 +08:00
Ke Wang
95e8c16338
feat: add balance query support for DeepSeek (#1946)
* Support Balance Query for DeepSeek

* Fix
2024-12-22 19:26:33 +08:00
lihangfu
381172cb36
feat: support Redis Sentinel and Redis Cluster (#1952)
* feature: support Redis Sentinel and Redis Cluster

* chore: update implementation

---------

Co-authored-by: JustSong <quanpengsong@gmail.com>
2024-12-22 19:21:24 +08:00
ZhangTianrong
59eae186a3
fix: remove the duplicate claude-3-5-haiku-20241022 in Anthropic's base model list (#1957)
* Update constants.go

Remove the duplicate `claude-3-5-haiku-20241022` causing issue 1928

* fix: fix syntax error

---------

Co-authored-by: JustSong <quanpengsong@gmail.com>
2024-12-22 18:58:29 +08:00
bestlaw66
ce52f355bb
docs: add tutorial section for BT Panel installation (#1985)
* Update README.md

Most users in China manage their servers with the BT Panel (宝塔面板), so this adds a tutorial for deploying with it; the visual deployment flow makes it easier for users to deploy one-api.

* docs: update readme

---------

Co-authored-by: JustSong <quanpengsong@gmail.com>
2024-12-22 18:55:04 +08:00
Ke Wang
cb9d0a74c9
fix: fix balance query for siliconflow (#1960) 2024-12-22 18:48:47 +08:00
Wei Tingjiang
49ffb1c60d
feat: enhance response handling to support gemini-2.0-thinking (#1995) 2024-12-22 18:25:44 +08:00
longkeyy
2f16649896
feat: update qwen model and price (#1966) 2024-12-22 18:22:57 +08:00
dependabot[bot]
af3aa57bd6
chore(deps): bump golang.org/x/crypto from 0.24.0 to 0.31.0 (#1976)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.24.0 to 0.31.0.
- [Commits](https://github.com/golang/crypto/compare/v0.24.0...v0.31.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-22 18:21:00 +08:00
Laisky.Cai
e9f117ff72
feat: add gemini-2.0-flash-exp and fix race condition in processChannelRelayError (#1983)
Co-authored-by: JustSong <39998050+songquanpeng@users.noreply.github.com>
2024-12-21 20:32:30 +08:00
Laisky.Cai
6bb5247bd6
feat: add support for new OpenAI models and update billing ratios (#1990) 2024-12-21 20:28:51 +08:00
Laisky.Cai
305ce14fe3
feat: support replicate chat models (#1989)
* feat: add Replicate adaptor and integrate into channel and API types

* feat: support llm chat on replicate
2024-12-21 14:41:19 +08:00
JustSong
36c8f4f15c docs: update readme
2024-12-21 00:33:20 +08:00
JustSong
45b51ea0ee feat: update feishu oauth login 2024-12-20 23:27:00 +08:00
Calcium-Ion
7c8628bd95
feat: support gzip decode (#1962)
2024-12-04 23:34:24 +08:00
JustSong
6ab87f8a08 feat: add warning in log when system prompt is reset
2024-11-10 17:18:46 +08:00
158 changed files with 9498 additions and 3876 deletions


@ -12,8 +12,6 @@ name: CI
# would trigger our jobs twice on pull requests (once from "push" event and once
# from "pull_request->synchronize")
on:
pull_request:
types: [opened, reopened, synchronize]
push:
branches:
- 'main'


@ -1,64 +0,0 @@
name: Publish Docker image (English)
on:
push:
tags:
- 'v*.*.*'
workflow_dispatch:
inputs:
name:
description: 'reason'
required: false
jobs:
push_to_registries:
name: Push Docker image to multiple registries
runs-on: ubuntu-latest
permissions:
packages: write
contents: read
steps:
- name: Check out the repo
uses: actions/checkout@v3
- name: Check repository URL
run: |
REPO_URL=$(git config --get remote.origin.url)
if [[ $REPO_URL == *"pro" ]]; then
exit 1
fi
- name: Save version info
run: |
git describe --tags > VERSION
- name: Translate
run: |
python ./i18n/translate.py --repository_path . --json_file_path ./i18n/en.json
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Log in to Docker Hub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v4
with:
images: |
justsong/one-api-en
- name: Build and push Docker images
uses: docker/build-push-action@v3
with:
context: .
platforms: linux/amd64,linux/arm64
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}


@ -32,10 +32,10 @@ jobs:
git describe --tags > VERSION
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
uses: docker/setup-buildx-action@v3
- name: Log in to Docker Hub
uses: docker/login-action@v2
@ -55,14 +55,14 @@ jobs:
uses: docker/metadata-action@v4
with:
images: |
justsong/one-api
ghcr.io/${{ github.repository }}
${{ contains(github.ref, 'alpha') && 'justsong/one-api-alpha' || 'justsong/one-api' }}
${{ contains(github.ref, 'alpha') && format('ghcr.io/{0}-alpha', github.repository) || format('ghcr.io/{0}', github.repository) }}
- name: Build and push Docker images
uses: docker/build-push-action@v3
with:
context: .
platforms: linux/amd64,linux/arm64
platforms: ${{ contains(github.ref, 'alpha') && 'linux/amd64' || 'linux/amd64' }}
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}


@ -7,6 +7,7 @@ on:
tags:
- 'v*.*.*'
- '!*-alpha*'
- '!*-preview*'
workflow_dispatch:
inputs:
name:


@ -7,6 +7,7 @@ on:
tags:
- 'v*.*.*'
- '!*-alpha*'
- '!*-preview*'
workflow_dispatch:
inputs:
name:


@ -7,6 +7,7 @@ on:
tags:
- 'v*.*.*'
- '!*-alpha*'
- '!*-preview*'
workflow_dispatch:
inputs:
name:

.gitignore (vendored): 3 lines changed

@ -10,3 +10,6 @@ data
/web/node_modules
cmd.md
.env
/one-api
temp
.DS_Store


@ -4,41 +4,44 @@ WORKDIR /web
COPY ./VERSION .
COPY ./web .
WORKDIR /web/default
RUN npm install
RUN DISABLE_ESLINT_PLUGIN='true' REACT_APP_VERSION=$(cat VERSION) npm run build
RUN npm install --prefix /web/default & \
npm install --prefix /web/berry & \
npm install --prefix /web/air & \
wait
WORKDIR /web/berry
RUN npm install
RUN DISABLE_ESLINT_PLUGIN='true' REACT_APP_VERSION=$(cat VERSION) npm run build
WORKDIR /web/air
RUN npm install
RUN DISABLE_ESLINT_PLUGIN='true' REACT_APP_VERSION=$(cat VERSION) npm run build
RUN DISABLE_ESLINT_PLUGIN='true' REACT_APP_VERSION=$(cat ./VERSION) npm run build --prefix /web/default & \
DISABLE_ESLINT_PLUGIN='true' REACT_APP_VERSION=$(cat ./VERSION) npm run build --prefix /web/berry & \
DISABLE_ESLINT_PLUGIN='true' REACT_APP_VERSION=$(cat ./VERSION) npm run build --prefix /web/air & \
wait
FROM golang:alpine AS builder2
RUN apk add --no-cache g++
RUN apk add --no-cache \
gcc \
musl-dev \
sqlite-dev \
build-base
ENV GO111MODULE=on \
CGO_ENABLED=1 \
GOOS=linux
WORKDIR /build
ADD go.mod go.sum ./
RUN go mod download
COPY . .
COPY --from=builder /web/build ./web/build
RUN go build -trimpath -ldflags "-s -w -X 'github.com/songquanpeng/one-api/common.Version=$(cat VERSION)' -extldflags '-static'" -o one-api
FROM alpine
RUN go build -trimpath -ldflags "-s -w -X 'github.com/songquanpeng/one-api/common.Version=$(cat VERSION)' -linkmode external -extldflags '-static'" -o one-api
RUN apk update \
&& apk upgrade \
&& apk add --no-cache ca-certificates tzdata \
&& update-ca-certificates 2>/dev/null || true
FROM alpine:latest
RUN apk add --no-cache ca-certificates tzdata
COPY --from=builder2 /build/one-api /
EXPOSE 3000
WORKDIR /data
ENTRYPOINT ["/one-api"]


@ -52,8 +52,6 @@ _✨ Access all LLM through the standard OpenAI API format, easy to deploy & use
> **Warning**: This README is translated by ChatGPT. Please feel free to submit a PR if you find any translation errors.
> **Warning**: The Docker image for English version is `justsong/one-api-en`.
> **Note**: The latest image pulled from Docker may be an `alpha` release. Specify the version manually if you require stability.
## Features
@ -89,7 +87,9 @@ _✨ Access all LLM through the standard OpenAI API format, easy to deploy & use
## Deployment
### Docker Deployment
Deployment command: `docker run --name one-api -d --restart always -p 3000:3000 -e TZ=Asia/Shanghai -v /home/ubuntu/data/one-api:/data justsong/one-api-en`
Deployment command:
`docker run --name one-api -d --restart always -p 3000:3000 -e TZ=Asia/Shanghai -v /home/ubuntu/data/one-api:/data justsong/one-api`
Update command: `docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower -cR`
@ -315,6 +315,7 @@ If the channel ID is not provided, load balancing will be used to distribute the
* [FastGPT](https://github.com/labring/FastGPT): Knowledge question answering system based on the LLM
* [VChart](https://github.com/VisActor/VChart): More than just a cross-platform charting library, but also an expressive data storyteller.
* [VMind](https://github.com/VisActor/VMind): Not just automatic, but also fantastic. Open-source solution for intelligent visualization.
* * [CherryStudio](https://github.com/CherryHQ/cherry-studio): A cross-platform AI client that integrates multiple service providers and supports local knowledge base management.
## Note
This project is an open-source project. Please use it in compliance with OpenAI's [Terms of Use](https://openai.com/policies/terms-of-use) and **applicable laws and regulations**. It must not be used for illegal purposes.


@ -52,8 +52,6 @@ _✨ 標準的な OpenAI API フォーマットを通じてすべての LLM に
> **警告**: この README は ChatGPT によって翻訳されています。翻訳ミスを発見した場合は遠慮なく PR を投稿してください。
> **警告** 英語版の Docker イメージは `justsong/one-api-en` です。
> **注**: Docker からプルされた最新のイメージは、`alpha` リリースかもしれません。安定性が必要な場合は、手動でバージョンを指定してください。
## 特徴
@ -89,7 +87,9 @@ _✨ 標準的な OpenAI API フォーマットを通じてすべての LLM に
## デプロイメント
### Docker デプロイメント
デプロイコマンド: `docker run --name one-api -d --restart always -p 3000:3000 -e TZ=Asia/Shanghai -v /home/ubuntu/data/one-api:/data justsong/one-api-en`
デプロイコマンド:
`docker run --name one-api -d --restart always -p 3000:3000 -e TZ=Asia/Shanghai -v /home/ubuntu/data/one-api:/data justsong/one-api`
コマンドを更新する: `docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrr/watchtower -cR`
@ -287,8 +287,8 @@ graph LR
+ インターフェイスアドレスと API Key が正しいか再確認してください。
## 関連プロジェクト
[FastGPT](https://github.com/labring/FastGPT): LLM に基づく知識質問応答システム
* [FastGPT](https://github.com/labring/FastGPT): LLM に基づく知識質問応答システム
* [CherryStudio](https://github.com/CherryHQ/cherry-studio): マルチプラットフォーム対応のAIクライアント。複数のサービスプロバイダーを統合管理し、ローカル知識ベースをサポートします。
## 注
本プロジェクトはオープンソースプロジェクトです。OpenAI の[利用規約](https://openai.com/policies/terms-of-use)および**適用される法令**を遵守してご利用ください。違法な目的での利用はご遠慮ください。


@ -56,8 +56,12 @@ _✨ 通过标准的 OpenAI API 格式访问所有的大模型,开箱即用
>
> 根据[《生成式人工智能服务管理暂行办法》](http://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm)的要求,请勿对中国地区公众提供一切未经备案的生成式人工智能服务。
> [!WARNING]
> 使用 Docker 拉取的最新镜像可能是 `alpha` 版本,如果追求稳定性请手动指定版本。
> [!NOTE]
> 稳定版 / 预览版镜像地址:[justsong/one-api](https://hub.docker.com/repository/docker/justsong/one-api)
> 或者 [ghcr.io/songquanpeng/one-api](https://github.com/songquanpeng/one-api/pkgs/container/one-api)
>
> alpha 版镜像地址:[justsong/one-api-alpha](https://hub.docker.com/repository/docker/justsong/one-api-alpha)
> 或者 [ghcr.io/songquanpeng/one-api-alpha](https://github.com/songquanpeng/one-api/pkgs/container/one-api-alpha)
> [!WARNING]
> 使用 root 用户初次登录系统后,务必修改默认密码 `123456`
@ -68,7 +72,7 @@ _✨ 通过标准的 OpenAI API 格式访问所有的大模型,开箱即用
+ [x] [Anthropic Claude 系列模型](https://anthropic.com) (支持 AWS Claude)
+ [x] [Google PaLM2/Gemini 系列模型](https://developers.generativeai.google)
+ [x] [Mistral 系列模型](https://mistral.ai/)
+ [x] [字节跳动豆包大模型](https://console.volcengine.com/ark/region:ark+cn-beijing/model)
+ [x] [字节跳动豆包大模型(火山引擎)](https://www.volcengine.com/experience/ark?utm_term=202502dsinvite&ac=DSASUQY5&rc=2QXCA1VI)
+ [x] [百度文心一言系列模型](https://cloud.baidu.com/doc/WENXINWORKSHOP/index.html)
+ [x] [阿里通义千问系列模型](https://help.aliyun.com/document_detail/2400395.html)
+ [x] [讯飞星火认知大模型](https://www.xfyun.cn/doc/spark/Web.html)
@ -89,7 +93,7 @@ _✨ 通过标准的 OpenAI API 格式访问所有的大模型,开箱即用
+ [x] [DeepL](https://www.deepl.com/)
+ [x] [together.ai](https://www.together.ai/)
+ [x] [novita.ai](https://www.novita.ai/)
+ [x] [硅基流动 SiliconCloud](https://siliconflow.cn/siliconcloud)
+ [x] [硅基流动 SiliconCloud](https://cloud.siliconflow.cn/i/rKXmRobW)
+ [x] [xAI](https://x.ai/)
2. 支持配置镜像以及众多[第三方代理服务](https://iamazing.cn/page/openai-api-third-party-services)。
3. 支持通过**负载均衡**的方式访问多个渠道。
@ -111,12 +115,12 @@ _✨ 通过标准的 OpenAI API 格式访问所有的大模型,开箱即用
19. 支持丰富的**自定义**设置,
1. 支持自定义系统名称logo 以及页脚。
2. 支持自定义首页和关于页面,可以选择使用 HTML & Markdown 代码进行自定义,或者使用一个单独的网页通过 iframe 嵌入。
20. 支持通过系统访问令牌调用管理 API进而**在无需二开的情况下扩展和自定义** One API 的功能,详情请参考此处 [API 文档](./docs/API.md)。
20. 支持通过系统访问令牌调用管理 API进而**在无需二开的情况下扩展和自定义** One API 的功能,详情请参考此处 [API 文档](./docs/API.md)。
21. 支持 Cloudflare Turnstile 用户校验。
22. 支持用户管理,支持**多种用户登录注册方式**
+ 邮箱登录注册(支持注册邮箱白名单)以及通过邮箱进行密码重置。
+ 支持使用飞书进行授权登录
+ [GitHub 开放授权](https://github.com/settings/applications/new)。
+ 支持[飞书授权登录](https://open.feishu.cn/document/uAjLw4CM/ukTMukTMukTM/reference/authen-v1/authorize/get)[这里有 One API 的实现细节阐述供参考](https://iamazing.cn/page/feishu-oauth-login)
+ 支持 [GitHub 授权登录](https://github.com/settings/applications/new)。
+ 微信公众号授权(需要额外部署 [WeChat Server](https://github.com/songquanpeng/wechat-server))。
23. 支持主题切换,设置环境变量 `THEME` 即可,默认为 `default`,欢迎 PR 更多主题,具体参考[此处](./web/README.md)。
24. 配合 [Message Pusher](https://github.com/songquanpeng/message-pusher) 可将报警信息推送到多种 App 上。
@ -175,6 +179,10 @@ sudo service nginx restart
初始账号用户名为 `root`,密码为 `123456`
### 通过宝塔面板进行一键部署
1. 安装宝塔面板9.2.0及以上版本,前往 [宝塔面板](https://www.bt.cn/new/download.html?r=dk_oneapi) 官网,选择正式版的脚本下载安装;
2. 安装后登录宝塔面板,在左侧菜单栏中点击 `Docker`,首次进入会提示安装 `Docker` 服务,点击立即安装,按提示完成安装;
3. 安装完成后在应用商店中搜索 `One-API`,点击安装,配置域名等基本信息即可完成安装;
### 基于 Docker Compose 进行部署
@ -218,7 +226,7 @@ docker-compose ps
3. 所有从服务器必须设置 `NODE_TYPE``slave`,不设置则默认为主服务器。
4. 设置 `SYNC_FREQUENCY` 后服务器将定期从数据库同步配置,在使用远程数据库的情况下,推荐设置该项并启用 Redis无论主从。
5. 从服务器可以选择设置 `FRONTEND_BASE_URL`,以重定向页面请求到主服务器。
6. 从服务器上**分别**装好 Redis设置好 `REDIS_CONN_STRING`,这样可以做到在缓存未过期的情况下数据库零访问,可以减少延迟。
6. 从服务器上**分别**装好 Redis设置好 `REDIS_CONN_STRING`,这样可以做到在缓存未过期的情况下数据库零访问,可以减少延迟Redis 集群或者哨兵模式的支持请参考环境变量说明)
7. 如果主服务器访问数据库延迟也比较高,则也需要启用 Redis并设置 `SYNC_FREQUENCY`,以定期从数据库同步配置。
环境变量的具体使用方法详见[此处](#环境变量)。
@ -347,6 +355,11 @@ graph LR
1. `REDIS_CONN_STRING`:设置之后将使用 Redis 作为缓存使用。
+ 例子:`REDIS_CONN_STRING=redis://default:redispw@localhost:49153`
+ 如果数据库访问延迟很低,没有必要启用 Redis启用后反而会出现数据滞后的问题。
+ 如果需要使用哨兵或者集群模式:
+ 则需要把该环境变量设置为节点列表,例如:`localhost:49153,localhost:49154,localhost:49155`。
+ 除此之外还需要设置以下环境变量:
+ `REDIS_PASSWORD`Redis 集群或者哨兵模式下的密码设置。
+ `REDIS_MASTER_NAME`Redis 哨兵模式下主节点的名称。
2. `SESSION_SECRET`:设置之后将使用固定的会话密钥,这样系统重新启动后已登录用户的 cookie 将依旧有效。
+ 例子:`SESSION_SECRET=random_string`
3. `SQL_DSN`:设置之后将使用指定数据库而非 SQLite请使用 MySQL 或 PostgreSQL。
@ -401,6 +414,7 @@ graph LR
27. `INITIAL_ROOT_TOKEN`:如果设置了该值,则在系统首次启动时会自动创建一个值为该环境变量值的 root 用户令牌。
28. `INITIAL_ROOT_ACCESS_TOKEN`:如果设置了该值,则在系统首次启动时会自动创建一个值为该环境变量的 root 用户创建系统管理令牌。
29. `ENFORCE_INCLUDE_USAGE`:是否强制在 stream 模型下返回 usage默认不开启可选值为 `true``false`
30. `TEST_PROMPT`:测试模型时的用户 prompt默认为 `Print your model name exactly and do not output without any other text.`
### 命令行参数
1. `--port <port_number>`: 指定服务器监听的端口号,默认为 `3000`
@ -455,6 +469,7 @@ https://openai.justsong.cn
* [ChatGPT Next Web](https://github.com/Yidadaa/ChatGPT-Next-Web): 一键拥有你自己的跨平台 ChatGPT 应用
* [VChart](https://github.com/VisActor/VChart): 不只是开箱即用的多端图表库,更是生动灵活的数据故事讲述者。
* [VMind](https://github.com/VisActor/VMind): 不仅自动,还很智能。开源智能可视化解决方案。
* [CherryStudio](https://github.com/CherryHQ/cherry-studio): 全平台支持的AI客户端, 多服务商集成管理、本地知识库支持。
## 注意


@ -1,13 +1,14 @@
package config
import (
"github.com/songquanpeng/one-api/common/env"
"os"
"strconv"
"strings"
"sync"
"time"
"github.com/songquanpeng/one-api/common/env"
"github.com/google/uuid"
)
@ -125,10 +126,10 @@ var ValidThemes = map[string]bool{
// All duration's unit is seconds
// Shouldn't larger then RateLimitKeyExpirationDuration
var (
GlobalApiRateLimitNum = env.Int("GLOBAL_API_RATE_LIMIT", 240)
GlobalApiRateLimitNum = env.Int("GLOBAL_API_RATE_LIMIT", 480)
GlobalApiRateLimitDuration int64 = 3 * 60
GlobalWebRateLimitNum = env.Int("GLOBAL_WEB_RATE_LIMIT", 120)
GlobalWebRateLimitNum = env.Int("GLOBAL_WEB_RATE_LIMIT", 240)
GlobalWebRateLimitDuration int64 = 3 * 60
UploadRateLimitNum = 10
@ -162,3 +163,4 @@ var UserContentRequestProxy = env.String("USER_CONTENT_REQUEST_PROXY", "")
var UserContentRequestTimeout = env.Int("USER_CONTENT_REQUEST_TIMEOUT", 30)
var EnforceIncludeUsage = env.Bool("ENFORCE_INCLUDE_USAGE", false)
var TestPrompt = env.String("TEST_PROMPT", "Output only your specific model name with no additional text.")


@ -3,10 +3,11 @@ package common
import (
"bytes"
"encoding/json"
"github.com/gin-gonic/gin"
"github.com/songquanpeng/one-api/common/ctxkey"
"io"
"strings"
"github.com/gin-gonic/gin"
"github.com/songquanpeng/one-api/common/ctxkey"
)
func GetRequestBody(c *gin.Context) ([]byte, error) {
@ -31,7 +32,6 @@ func UnmarshalBodyReusable(c *gin.Context, v any) error {
contentType := c.Request.Header.Get("Content-Type")
if strings.HasPrefix(contentType, "application/json") {
err = json.Unmarshal(requestBody, &v)
c.Request.Body = io.NopCloser(bytes.NewBuffer(requestBody))
} else {
c.Request.Body = io.NopCloser(bytes.NewBuffer(requestBody))
err = c.ShouldBind(&v)
@ -40,6 +40,7 @@ func UnmarshalBodyReusable(c *gin.Context, v any) error {
return err
}
// Reset request body
c.Request.Body = io.NopCloser(bytes.NewBuffer(requestBody))
return nil
}
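
The hunk above moves the body reset so it always runs after binding. The underlying issue is that an HTTP request body is a one-shot reader; a self-contained sketch of the same reset trick using only the standard library:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http/httptest"
)

func main() {
	req := httptest.NewRequest("POST", "/v1/chat/completions", bytes.NewBufferString(`{"model":"gpt-4o"}`))

	// First read drains the body.
	body, _ := io.ReadAll(req.Body)
	fmt.Println("first read: ", string(body))

	// Without a reset, a second read would return nothing, so restore the body
	// from the buffered bytes exactly as UnmarshalBodyReusable does.
	req.Body = io.NopCloser(bytes.NewBuffer(body))

	again, _ := io.ReadAll(req.Body)
	fmt.Println("second read:", string(again))
}
```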


@ -1,9 +1,8 @@
package helper
import (
"context"
"fmt"
"github.com/gin-gonic/gin"
"github.com/songquanpeng/one-api/common/random"
"html/template"
"log"
"net"
@ -11,6 +10,10 @@ import (
"runtime"
"strconv"
"strings"
"github.com/gin-gonic/gin"
"github.com/songquanpeng/one-api/common/random"
)
func OpenBrowser(url string) {
@ -106,6 +109,18 @@ func GenRequestID() string {
return GetTimeString() + random.GetRandomNumberString(8)
}
func SetRequestID(ctx context.Context, id string) context.Context {
return context.WithValue(ctx, RequestIdKey, id)
}
func GetRequestID(ctx context.Context) string {
rawRequestId := ctx.Value(RequestIdKey)
if rawRequestId == nil {
return ""
}
return rawRequestId.(string)
}
func GetResponseID(c *gin.Context) string {
logID := c.GetString(RequestIdKey)
return fmt.Sprintf("chatcmpl-%s", logID)


@ -13,3 +13,8 @@ func GetTimeString() string {
now := time.Now()
return fmt.Sprintf("%s%d", now.Format("20060102150405"), now.UnixNano()%1e9)
}
// CalcElapsedTime return the elapsed time in milliseconds (ms)
func CalcElapsedTime(start time.Time) int64 {
return time.Now().Sub(start).Milliseconds()
}

common/i18n/i18n.go (new file): 72 lines

@ -0,0 +1,72 @@
package i18n
import (
"embed"
"encoding/json"
"strings"
"github.com/gin-gonic/gin"
)
//go:embed locales/*.json
var localesFS embed.FS
var (
translations = make(map[string]map[string]string)
defaultLang = "en"
ContextKey = "i18n"
)
// Init loads all translation files from embedded filesystem
func Init() error {
entries, err := localesFS.ReadDir("locales")
if err != nil {
return err
}
for _, entry := range entries {
if entry.IsDir() || !strings.HasSuffix(entry.Name(), ".json") {
continue
}
langCode := strings.TrimSuffix(entry.Name(), ".json")
content, err := localesFS.ReadFile("locales/" + entry.Name())
if err != nil {
return err
}
var translation map[string]string
if err := json.Unmarshal(content, &translation); err != nil {
return err
}
translations[langCode] = translation
}
return nil
}
func GetLang(c *gin.Context) string {
rawLang, ok := c.Get(ContextKey)
if !ok {
return defaultLang
}
lang, _ := rawLang.(string)
if lang != "" {
return lang
}
return defaultLang
}
func Translate(c *gin.Context, message string) string {
lang := GetLang(c)
return translateHelper(lang, message)
}
func translateHelper(lang, message string) string {
if trans, ok := translations[lang]; ok {
if translated, exists := trans[message]; exists {
return translated
}
}
return message
}
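
A sketch of how the new i18n package can be used from a gin handler. The Accept-Language middleware below and the `zh-CN` locale code are assumptions for illustration; the repository wires the language into the context elsewhere.

```go
package main

import (
	"net/http"
	"strings"

	"github.com/gin-gonic/gin"
	"github.com/songquanpeng/one-api/common/i18n"
)

func main() {
	if err := i18n.Init(); err != nil {
		panic(err)
	}
	r := gin.Default()
	// Illustrative middleware: derive the language from Accept-Language and
	// store it under the key the i18n package reads back in GetLang.
	r.Use(func(c *gin.Context) {
		lang := "en"
		if strings.HasPrefix(c.GetHeader("Accept-Language"), "zh") {
			lang = "zh-CN" // assumes the Chinese locale file is zh-CN.json
		}
		c.Set(i18n.ContextKey, lang)
		c.Next()
	})
	r.GET("/demo", func(c *gin.Context) {
		c.JSON(http.StatusOK, gin.H{
			"success": false,
			"message": i18n.Translate(c, "invalid_parameter"),
		})
	})
	_ = r.Run(":3000")
}
```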


@ -0,0 +1,5 @@
{
"invalid_input": "Invalid input, please check your input",
"send_email_failed": "failed to send email: ",
"invalid_parameter": "invalid parameter"
}


@ -0,0 +1,5 @@
{
"invalid_input": "无效的输入,请检查您的输入",
"send_email_failed": "发送邮件失败:",
"invalid_parameter": "无效的参数"
}


@ -7,19 +7,25 @@ import (
"log"
"os"
"path/filepath"
"runtime"
"strings"
"sync"
"time"
"github.com/gin-gonic/gin"
"github.com/songquanpeng/one-api/common/config"
"github.com/songquanpeng/one-api/common/helper"
)
type loggerLevel string
const (
loggerDEBUG = "DEBUG"
loggerINFO = "INFO"
loggerWarn = "WARN"
loggerError = "ERR"
loggerDEBUG loggerLevel = "DEBUG"
loggerINFO loggerLevel = "INFO"
loggerWarn loggerLevel = "WARN"
loggerError loggerLevel = "ERROR"
loggerFatal loggerLevel = "FATAL"
)
var setupLogOnce sync.Once
@ -44,27 +50,34 @@ func SetupLogger() {
}
func SysLog(s string) {
t := time.Now()
_, _ = fmt.Fprintf(gin.DefaultWriter, "[SYS] %v | %s \n", t.Format("2006/01/02 - 15:04:05"), s)
logHelper(nil, loggerINFO, s)
}
func SysLogf(format string, a ...any) {
SysLog(fmt.Sprintf(format, a...))
logHelper(nil, loggerINFO, fmt.Sprintf(format, a...))
}
func SysWarn(s string) {
logHelper(nil, loggerWarn, s)
}
func SysWarnf(format string, a ...any) {
logHelper(nil, loggerWarn, fmt.Sprintf(format, a...))
}
func SysError(s string) {
t := time.Now()
_, _ = fmt.Fprintf(gin.DefaultErrorWriter, "[SYS] %v | %s \n", t.Format("2006/01/02 - 15:04:05"), s)
logHelper(nil, loggerError, s)
}
func SysErrorf(format string, a ...any) {
SysError(fmt.Sprintf(format, a...))
logHelper(nil, loggerError, fmt.Sprintf(format, a...))
}
func Debug(ctx context.Context, msg string) {
if config.DebugEnabled {
logHelper(ctx, loggerDEBUG, msg)
if !config.DebugEnabled {
return
}
logHelper(ctx, loggerDEBUG, msg)
}
func Info(ctx context.Context, msg string) {
@ -80,37 +93,68 @@ func Error(ctx context.Context, msg string) {
}
func Debugf(ctx context.Context, format string, a ...any) {
Debug(ctx, fmt.Sprintf(format, a...))
if !config.DebugEnabled {
return
}
logHelper(ctx, loggerDEBUG, fmt.Sprintf(format, a...))
}
func Infof(ctx context.Context, format string, a ...any) {
Info(ctx, fmt.Sprintf(format, a...))
logHelper(ctx, loggerINFO, fmt.Sprintf(format, a...))
}
func Warnf(ctx context.Context, format string, a ...any) {
Warn(ctx, fmt.Sprintf(format, a...))
logHelper(ctx, loggerWarn, fmt.Sprintf(format, a...))
}
func Errorf(ctx context.Context, format string, a ...any) {
Error(ctx, fmt.Sprintf(format, a...))
logHelper(ctx, loggerError, fmt.Sprintf(format, a...))
}
func logHelper(ctx context.Context, level string, msg string) {
func FatalLog(s string) {
logHelper(nil, loggerFatal, s)
}
func FatalLogf(format string, a ...any) {
logHelper(nil, loggerFatal, fmt.Sprintf(format, a...))
}
func logHelper(ctx context.Context, level loggerLevel, msg string) {
writer := gin.DefaultErrorWriter
if level == loggerINFO {
writer = gin.DefaultWriter
}
id := ctx.Value(helper.RequestIdKey)
if id == nil {
id = helper.GenRequestID()
var requestId string
if ctx != nil {
rawRequestId := helper.GetRequestID(ctx)
if rawRequestId != "" {
requestId = fmt.Sprintf(" | %s", rawRequestId)
}
}
lineInfo, funcName := getLineInfo()
now := time.Now()
_, _ = fmt.Fprintf(writer, "[%s] %v | %s | %s \n", level, now.Format("2006/01/02 - 15:04:05"), id, msg)
_, _ = fmt.Fprintf(writer, "[%s] %v%s%s %s%s \n", level, now.Format("2006/01/02 - 15:04:05"), requestId, lineInfo, funcName, msg)
SetupLogger()
if level == loggerFatal {
os.Exit(1)
}
}
func FatalLog(v ...any) {
t := time.Now()
_, _ = fmt.Fprintf(gin.DefaultErrorWriter, "[FATAL] %v | %v \n", t.Format("2006/01/02 - 15:04:05"), v)
os.Exit(1)
func getLineInfo() (string, string) {
funcName := "[unknown] "
pc, file, line, ok := runtime.Caller(3)
if ok {
if fn := runtime.FuncForPC(pc); fn != nil {
parts := strings.Split(fn.Name(), ".")
funcName = "[" + parts[len(parts)-1] + "] "
}
} else {
file = "unknown"
line = 0
}
parts := strings.Split(file, "one-api/")
if len(parts) > 1 {
file = parts[1]
}
return fmt.Sprintf(" | %s:%d", file, line), funcName
}
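
Tying the logger rewrite to the request-ID helpers added earlier in this compare, a sketch of how a context-scoped ID flows into the leveled output; the wiring shown here is illustrative rather than the repository's actual middleware:

```go
package main

import (
	"context"

	"github.com/songquanpeng/one-api/common/helper"
	"github.com/songquanpeng/one-api/common/logger"
)

func main() {
	// Attach a generated request ID to the context once, e.g. in middleware.
	ctx := helper.SetRequestID(context.Background(), helper.GenRequestID())

	// Later log calls pick the ID back out of the context, so each line is
	// printed as: [INFO] <time> | <request id> | <file:line> [func] message
	logger.Infof(ctx, "relaying request for model %s", "gpt-4o")
	logger.Warnf(ctx, "channel #%d responded slowly", 42)
}
```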


@ -5,11 +5,13 @@ import (
"crypto/tls"
"encoding/base64"
"fmt"
"github.com/songquanpeng/one-api/common/config"
"net"
"net/smtp"
"strings"
"time"
"github.com/songquanpeng/one-api/common/config"
"github.com/songquanpeng/one-api/common/logger"
)
func shouldAuth() bool {
@ -98,8 +100,12 @@ func SendEmail(subject string, receiver string, content string) error {
if err != nil {
return err
}
} else {
err = smtp.SendMail(addr, auth, config.SMTPAccount, to, mail)
return nil
}
err = smtp.SendMail(addr, auth, config.SMTPAccount, to, mail)
if err != nil && strings.Contains(err.Error(), "short response") { // 部分提供商返回该错误,但实际上邮件已经发送成功
logger.SysWarnf("short response from SMTP server, return nil instead of error: %s", err.Error())
return nil
}
return err
}


@ -0,0 +1,34 @@
package message
import (
"fmt"
"github.com/songquanpeng/one-api/common/config"
)
// EmailTemplate 生成美观的 HTML 邮件内容
func EmailTemplate(title, content string) string {
return fmt.Sprintf(`
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<body style="margin: 0; padding: 20px; font-family: Arial, sans-serif; line-height: 1.6; background-color: #f4f4f4;">
<div style="max-width: 600px; margin: 20px auto; padding: 30px; background-color: #ffffff; border-radius: 8px; box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);">
<div style="text-align: center; margin-bottom: 30px;">
<h2 style="color: #333; margin: 0; font-size: 24px;">%s</h2>
</div>
<div style="color: #555; font-size: 16px;">
%s
</div>
<div style="margin-top: 40px; padding-top: 20px; border-top: 1px solid #eee; color: #888; font-size: 14px; text-align: center;">
<p style="margin: 5px 0;">此邮件由系统自动发送请勿直接回复</p>
<p style="margin: 5px 0;">%s</p>
</div>
</div>
</body>
</html>
`, title, content, config.SystemName)
}
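
A sketch of how the HTML template might be combined with the package's SendEmail helper (shown earlier in this compare); the verification-code content is invented for illustration, and sending requires SMTP settings to be configured:

```go
package main

import (
	"fmt"

	"github.com/songquanpeng/one-api/common/message"
)

func main() {
	code := "123456"
	// Wrap the body in the shared HTML layout; EmailTemplate appends the
	// "do not reply" footer and the configured system name itself.
	content := message.EmailTemplate(
		"Email Verification",
		fmt.Sprintf("<p>Your verification code is <strong>%s</strong>; it is valid for %d minutes.</p>", code, 10),
	)
	if err := message.SendEmail("Email Verification", "user@example.com", content); err != nil {
		fmt.Println("send failed:", err)
	}
}
```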


@ -2,13 +2,15 @@ package common
import (
"context"
"os"
"strings"
"time"
"github.com/go-redis/redis/v8"
"github.com/songquanpeng/one-api/common/logger"
"os"
"time"
)
var RDB *redis.Client
var RDB redis.Cmdable
var RedisEnabled = true
// InitRedisClient This function is called after init()
@ -23,13 +25,23 @@ func InitRedisClient() (err error) {
logger.SysLog("SYNC_FREQUENCY not set, Redis is disabled")
return nil
}
logger.SysLog("Redis is enabled")
opt, err := redis.ParseURL(os.Getenv("REDIS_CONN_STRING"))
if err != nil {
logger.FatalLog("failed to parse Redis connection string: " + err.Error())
redisConnString := os.Getenv("REDIS_CONN_STRING")
if os.Getenv("REDIS_MASTER_NAME") == "" {
logger.SysLog("Redis is enabled")
opt, err := redis.ParseURL(redisConnString)
if err != nil {
logger.FatalLog("failed to parse Redis connection string: " + err.Error())
}
RDB = redis.NewClient(opt)
} else {
// cluster mode
logger.SysLog("Redis cluster mode enabled")
RDB = redis.NewUniversalClient(&redis.UniversalOptions{
Addrs: strings.Split(redisConnString, ","),
Password: os.Getenv("REDIS_PASSWORD"),
MasterName: os.Getenv("REDIS_MASTER_NAME"),
})
}
RDB = redis.NewClient(opt)
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
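
The hunk above changes RDB to the redis.Cmdable interface and selects a client based on REDIS_MASTER_NAME. A self-contained sketch of that selection logic, with the connection values hard-coded instead of read from the environment:

```go
package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	"github.com/go-redis/redis/v8"
)

func newRedisClient(connString, password, masterName string) (redis.Cmdable, error) {
	if masterName == "" {
		// Single-node mode: REDIS_CONN_STRING is a URL such as
		// redis://default:redispw@localhost:49153
		opt, err := redis.ParseURL(connString)
		if err != nil {
			return nil, err
		}
		return redis.NewClient(opt), nil
	}
	// Sentinel/cluster mode: REDIS_CONN_STRING is a comma-separated node list,
	// e.g. localhost:49153,localhost:49154,localhost:49155
	return redis.NewUniversalClient(&redis.UniversalOptions{
		Addrs:      strings.Split(connString, ","),
		Password:   password,
		MasterName: masterName,
	}), nil
}

func main() {
	rdb, err := newRedisClient("localhost:26379,localhost:26380", "secret", "mymaster")
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	fmt.Println(rdb.Ping(ctx).Err())
}
```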


@ -3,9 +3,10 @@ package render
import (
"encoding/json"
"fmt"
"strings"
"github.com/gin-gonic/gin"
"github.com/songquanpeng/one-api/common"
"strings"
)
func StringData(c *gin.Context, str string) {

common/utils/array.go (new file): 13 lines

@ -0,0 +1,13 @@
package utils
func DeDuplication(slice []string) []string {
m := make(map[string]bool)
for _, v := range slice {
m[v] = true
}
result := make([]string, 0, len(m))
for v := range m {
result = append(result, v)
}
return result
}


@ -5,16 +5,18 @@ import (
"encoding/json"
"errors"
"fmt"
"net/http"
"strconv"
"time"
"github.com/gin-contrib/sessions"
"github.com/gin-gonic/gin"
"github.com/songquanpeng/one-api/common/config"
"github.com/songquanpeng/one-api/common/logger"
"github.com/songquanpeng/one-api/common/random"
"github.com/songquanpeng/one-api/controller"
"github.com/songquanpeng/one-api/model"
"net/http"
"strconv"
"time"
)
type GitHubOAuthResponse struct {
@ -81,6 +83,7 @@ func getGitHubUserInfoByCode(code string) (*GitHubUser, error) {
}
func GitHubOAuth(c *gin.Context) {
ctx := c.Request.Context()
session := sessions.Default(c)
state := c.Query("state")
if state == "" || session.Get("oauth_state") == nil || state != session.Get("oauth_state").(string) {
@ -136,7 +139,7 @@ func GitHubOAuth(c *gin.Context) {
user.Role = model.RoleCommonUser
user.Status = model.UserStatusEnabled
if err := user.Insert(0); err != nil {
if err := user.Insert(ctx, 0); err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": err.Error(),


@ -5,15 +5,17 @@ import (
"encoding/json"
"errors"
"fmt"
"net/http"
"strconv"
"time"
"github.com/gin-contrib/sessions"
"github.com/gin-gonic/gin"
"github.com/songquanpeng/one-api/common/config"
"github.com/songquanpeng/one-api/common/logger"
"github.com/songquanpeng/one-api/controller"
"github.com/songquanpeng/one-api/model"
"net/http"
"strconv"
"time"
)
type LarkOAuthResponse struct {
@ -40,7 +42,7 @@ func getLarkUserInfoByCode(code string) (*LarkUser, error) {
if err != nil {
return nil, err
}
req, err := http.NewRequest("POST", "https://passport.feishu.cn/suite/passport/oauth/token", bytes.NewBuffer(jsonData))
req, err := http.NewRequest("POST", "https://open.feishu.cn/open-apis/authen/v2/oauth/token", bytes.NewBuffer(jsonData))
if err != nil {
return nil, err
}
@ -79,6 +81,7 @@ func getLarkUserInfoByCode(code string) (*LarkUser, error) {
}
func LarkOAuth(c *gin.Context) {
ctx := c.Request.Context()
session := sessions.Default(c)
state := c.Query("state")
if state == "" || session.Get("oauth_state") == nil || state != session.Get("oauth_state").(string) {
@ -125,7 +128,7 @@ func LarkOAuth(c *gin.Context) {
user.Role = model.RoleCommonUser
user.Status = model.UserStatusEnabled
if err := user.Insert(0); err != nil {
if err := user.Insert(ctx, 0); err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": err.Error(),


@ -5,15 +5,17 @@ import (
"encoding/json"
"errors"
"fmt"
"net/http"
"strconv"
"time"
"github.com/gin-contrib/sessions"
"github.com/gin-gonic/gin"
"github.com/songquanpeng/one-api/common/config"
"github.com/songquanpeng/one-api/common/logger"
"github.com/songquanpeng/one-api/controller"
"github.com/songquanpeng/one-api/model"
"net/http"
"strconv"
"time"
)
type OidcResponse struct {
@ -87,6 +89,7 @@ func getOidcUserInfoByCode(code string) (*OidcUser, error) {
}
func OidcAuth(c *gin.Context) {
ctx := c.Request.Context()
session := sessions.Default(c)
state := c.Query("state")
if state == "" || session.Get("oauth_state") == nil || state != session.Get("oauth_state").(string) {
@ -142,7 +145,7 @@ func OidcAuth(c *gin.Context) {
} else {
user.DisplayName = "OIDC User"
}
err := user.Insert(0)
err := user.Insert(ctx, 0)
if err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,


@ -4,14 +4,16 @@ import (
"encoding/json"
"errors"
"fmt"
"net/http"
"strconv"
"time"
"github.com/gin-gonic/gin"
"github.com/songquanpeng/one-api/common/config"
"github.com/songquanpeng/one-api/common/ctxkey"
"github.com/songquanpeng/one-api/controller"
"github.com/songquanpeng/one-api/model"
"net/http"
"strconv"
"time"
)
type wechatLoginResponse struct {
@ -52,6 +54,7 @@ func getWeChatIdByCode(code string) (string, error) {
}
func WeChatAuth(c *gin.Context) {
ctx := c.Request.Context()
if !config.WeChatAuthEnabled {
c.JSON(http.StatusOK, gin.H{
"message": "管理员未开启通过微信登录以及注册",
@ -87,7 +90,7 @@ func WeChatAuth(c *gin.Context) {
user.Role = model.RoleCommonUser
user.Status = model.UserStatusEnabled
if err := user.Insert(0); err != nil {
if err := user.Insert(ctx, 0); err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": err.Error(),


@ -4,16 +4,17 @@ import (
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"strconv"
"time"
"github.com/songquanpeng/one-api/common/client"
"github.com/songquanpeng/one-api/common/config"
"github.com/songquanpeng/one-api/common/logger"
"github.com/songquanpeng/one-api/model"
"github.com/songquanpeng/one-api/monitor"
"github.com/songquanpeng/one-api/relay/channeltype"
"io"
"net/http"
"strconv"
"time"
"github.com/gin-gonic/gin"
)
@ -101,6 +102,23 @@ type SiliconFlowUsageResponse struct {
} `json:"data"`
}
type DeepSeekUsageResponse struct {
IsAvailable bool `json:"is_available"`
BalanceInfos []struct {
Currency string `json:"currency"`
TotalBalance string `json:"total_balance"`
GrantedBalance string `json:"granted_balance"`
ToppedUpBalance string `json:"topped_up_balance"`
} `json:"balance_infos"`
}
type OpenRouterResponse struct {
Data struct {
TotalCredits float64 `json:"total_credits"`
TotalUsage float64 `json:"total_usage"`
} `json:"data"`
}
// GetAuthHeader get auth header
func GetAuthHeader(token string) http.Header {
h := http.Header{}
@ -237,7 +255,7 @@ func updateChannelSiliconFlowBalance(channel *model.Channel) (float64, error) {
if response.Code != 20000 {
return 0, fmt.Errorf("code: %d, message: %s", response.Code, response.Message)
}
balance, err := strconv.ParseFloat(response.Data.Balance, 64)
balance, err := strconv.ParseFloat(response.Data.TotalBalance, 64)
if err != nil {
return 0, err
}
@ -245,6 +263,51 @@ func updateChannelSiliconFlowBalance(channel *model.Channel) (float64, error) {
return balance, nil
}
func updateChannelDeepSeekBalance(channel *model.Channel) (float64, error) {
url := "https://api.deepseek.com/user/balance"
body, err := GetResponseBody("GET", url, channel, GetAuthHeader(channel.Key))
if err != nil {
return 0, err
}
response := DeepSeekUsageResponse{}
err = json.Unmarshal(body, &response)
if err != nil {
return 0, err
}
index := -1
for i, balanceInfo := range response.BalanceInfos {
if balanceInfo.Currency == "CNY" {
index = i
break
}
}
if index == -1 {
return 0, errors.New("currency CNY not found")
}
balance, err := strconv.ParseFloat(response.BalanceInfos[index].TotalBalance, 64)
if err != nil {
return 0, err
}
channel.UpdateBalance(balance)
return balance, nil
}
func updateChannelOpenRouterBalance(channel *model.Channel) (float64, error) {
url := "https://openrouter.ai/api/v1/credits"
body, err := GetResponseBody("GET", url, channel, GetAuthHeader(channel.Key))
if err != nil {
return 0, err
}
response := OpenRouterResponse{}
err = json.Unmarshal(body, &response)
if err != nil {
return 0, err
}
balance := response.Data.TotalCredits - response.Data.TotalUsage
channel.UpdateBalance(balance)
return balance, nil
}
func updateChannelBalance(channel *model.Channel) (float64, error) {
baseURL := channeltype.ChannelBaseURLs[channel.Type]
if channel.GetBaseURL() == "" {
@ -271,6 +334,10 @@ func updateChannelBalance(channel *model.Channel) (float64, error) {
return updateChannelAIGC2DBalance(channel)
case channeltype.SiliconFlow:
return updateChannelSiliconFlowBalance(channel)
case channeltype.DeepSeek:
return updateChannelDeepSeekBalance(channel)
case channeltype.OpenRouter:
return updateChannelOpenRouterBalance(channel)
default:
return 0, errors.New("尚未实现")
}


@ -2,6 +2,7 @@ package controller
import (
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
@ -15,14 +16,17 @@ import (
"time"
"github.com/gin-gonic/gin"
"github.com/songquanpeng/one-api/common/config"
"github.com/songquanpeng/one-api/common/ctxkey"
"github.com/songquanpeng/one-api/common/helper"
"github.com/songquanpeng/one-api/common/logger"
"github.com/songquanpeng/one-api/common/message"
"github.com/songquanpeng/one-api/middleware"
"github.com/songquanpeng/one-api/model"
"github.com/songquanpeng/one-api/monitor"
relay "github.com/songquanpeng/one-api/relay"
"github.com/songquanpeng/one-api/relay"
"github.com/songquanpeng/one-api/relay/adaptor/openai"
"github.com/songquanpeng/one-api/relay/channeltype"
"github.com/songquanpeng/one-api/relay/controller"
"github.com/songquanpeng/one-api/relay/meta"
@ -35,18 +39,34 @@ func buildTestRequest(model string) *relaymodel.GeneralOpenAIRequest {
model = "gpt-3.5-turbo"
}
testRequest := &relaymodel.GeneralOpenAIRequest{
MaxTokens: 2,
Model: model,
Model: model,
}
testMessage := relaymodel.Message{
Role: "user",
Content: "hi",
Content: config.TestPrompt,
}
testRequest.Messages = append(testRequest.Messages, testMessage)
return testRequest
}
func testChannel(channel *model.Channel, request *relaymodel.GeneralOpenAIRequest) (err error, openaiErr *relaymodel.Error) {
func parseTestResponse(resp string) (*openai.TextResponse, string, error) {
var response openai.TextResponse
err := json.Unmarshal([]byte(resp), &response)
if err != nil {
return nil, "", err
}
if len(response.Choices) == 0 {
return nil, "", errors.New("response has no choices")
}
stringContent, ok := response.Choices[0].Content.(string)
if !ok {
return nil, "", errors.New("response content is not string")
}
return &response, stringContent, nil
}
func testChannel(ctx context.Context, channel *model.Channel, request *relaymodel.GeneralOpenAIRequest) (responseMessage string, err error, openaiErr *relaymodel.Error) {
startTime := time.Now()
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Request = &http.Request{
@ -66,7 +86,7 @@ func testChannel(channel *model.Channel, request *relaymodel.GeneralOpenAIReques
apiType := channeltype.ToAPIType(channel.Type)
adaptor := relay.GetAdaptor(apiType)
if adaptor == nil {
return fmt.Errorf("invalid api type: %d, adaptor is nil", apiType), nil
return "", fmt.Errorf("invalid api type: %d, adaptor is nil", apiType), nil
}
adaptor.Init(meta)
modelName := request.Model
@ -84,41 +104,70 @@ func testChannel(channel *model.Channel, request *relaymodel.GeneralOpenAIReques
request.Model = modelName
convertedRequest, err := adaptor.ConvertRequest(c, relaymode.ChatCompletions, request)
if err != nil {
return err, nil
return "", err, nil
}
jsonData, err := json.Marshal(convertedRequest)
if err != nil {
return err, nil
return "", err, nil
}
defer func() {
logContent := fmt.Sprintf("渠道 %s 测试成功,响应:%s", channel.Name, responseMessage)
if err != nil || openaiErr != nil {
errorMessage := ""
if err != nil {
errorMessage = err.Error()
} else {
errorMessage = openaiErr.Message
}
logContent = fmt.Sprintf("渠道 %s 测试失败,错误:%s", channel.Name, errorMessage)
}
go model.RecordTestLog(ctx, &model.Log{
ChannelId: channel.Id,
ModelName: modelName,
Content: logContent,
ElapsedTime: helper.CalcElapsedTime(startTime),
})
}()
logger.SysLog(string(jsonData))
requestBody := bytes.NewBuffer(jsonData)
c.Request.Body = io.NopCloser(requestBody)
resp, err := adaptor.DoRequest(c, meta, requestBody)
if err != nil {
return err, nil
return "", err, nil
}
if resp != nil && resp.StatusCode != http.StatusOK {
err := controller.RelayErrorHandler(resp)
return fmt.Errorf("status code %d: %s", resp.StatusCode, err.Error.Message), &err.Error
errorMessage := err.Error.Message
if errorMessage != "" {
errorMessage = ", error message: " + errorMessage
}
return "", fmt.Errorf("http status code: %d%s", resp.StatusCode, errorMessage), &err.Error
}
usage, respErr := adaptor.DoResponse(c, resp, meta)
if respErr != nil {
return fmt.Errorf("%s", respErr.Error.Message), &respErr.Error
return "", fmt.Errorf("%s", respErr.Error.Message), &respErr.Error
}
if usage == nil {
return errors.New("usage is nil"), nil
return "", errors.New("usage is nil"), nil
}
rawResponse := w.Body.String()
_, responseMessage, err = parseTestResponse(rawResponse)
if err != nil {
logger.SysError(fmt.Sprintf("failed to parse error: %s, \nresponse: %s", err.Error(), rawResponse))
return "", err, nil
}
result := w.Result()
// print result.Body
respBody, err := io.ReadAll(result.Body)
if err != nil {
return err, nil
return "", err, nil
}
logger.SysLog(fmt.Sprintf("testing channel #%d, response: \n%s", channel.Id, string(respBody)))
return nil, nil
return responseMessage, nil, nil
}
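
The new signature also threads a context through and defers a test-log write: because responseMessage, err and openaiErr are named return values, the deferred closure records whatever the function ultimately returns, and the write runs in a goroutine so it never blocks the test itself. A self-contained illustration of that capture pattern (not project code):

```go
package main

import (
	"errors"
	"log"
)

// Minimal illustration of the pattern used in testChannel above, where a deferred
// closure reads the function's named return values. Named results are captured by
// reference, so the defer sees whatever was ultimately returned.
func divide(a, b int) (result int, err error) {
	defer func() {
		log.Printf("divide(%d, %d) -> result=%d err=%v", a, b, result, err)
	}()
	if b == 0 {
		return 0, errors.New("division by zero")
	}
	return a / b, nil
}

func main() {
	_, _ = divide(6, 3)
	_, _ = divide(1, 0)
}
```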
func TestChannel(c *gin.Context) {
ctx := c.Request.Context()
id, err := strconv.Atoi(c.Param("id"))
if err != nil {
c.JSON(http.StatusOK, gin.H{
@ -135,10 +184,10 @@ func TestChannel(c *gin.Context) {
})
return
}
model := c.Query("model")
testRequest := buildTestRequest(model)
modelName := c.Query("model")
testRequest := buildTestRequest(modelName)
tik := time.Now()
err, _ = testChannel(channel, testRequest)
responseMessage, err, _ := testChannel(ctx, channel, testRequest)
tok := time.Now()
milliseconds := tok.Sub(tik).Milliseconds()
if err != nil {
@ -148,18 +197,18 @@ func TestChannel(c *gin.Context) {
consumedTime := float64(milliseconds) / 1000.0
if err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": err.Error(),
"time": consumedTime,
"model": model,
"success": false,
"message": err.Error(),
"time": consumedTime,
"modelName": modelName,
})
return
}
c.JSON(http.StatusOK, gin.H{
"success": true,
"message": "",
"time": consumedTime,
"model": model,
"success": true,
"message": responseMessage,
"time": consumedTime,
"modelName": modelName,
})
return
}
@ -167,7 +216,7 @@ func TestChannel(c *gin.Context) {
var testAllChannelsLock sync.Mutex
var testAllChannelsRunning bool = false
func testChannels(notify bool, scope string) error {
func testChannels(ctx context.Context, notify bool, scope string) error {
if config.RootUserEmail == "" {
config.RootUserEmail = model.GetRootUserEmail()
}
@ -191,7 +240,7 @@ func testChannels(notify bool, scope string) error {
isChannelEnabled := channel.Status == model.ChannelStatusEnabled
tik := time.Now()
testRequest := buildTestRequest("")
err, openaiErr := testChannel(channel, testRequest)
_, err, openaiErr := testChannel(ctx, channel, testRequest)
tok := time.Now()
milliseconds := tok.Sub(tik).Milliseconds()
if isChannelEnabled && milliseconds > disableThreshold {
@ -225,11 +274,12 @@ func testChannels(notify bool, scope string) error {
}
func TestChannels(c *gin.Context) {
ctx := c.Request.Context()
scope := c.Query("scope")
if scope == "" {
scope = "all"
}
err := testChannels(true, scope)
err := testChannels(ctx, true, scope)
if err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
@ -245,10 +295,11 @@ func TestChannels(c *gin.Context) {
}
func AutomaticallyTestChannels(frequency int) {
ctx := context.Background()
for {
time.Sleep(time.Duration(frequency) * time.Minute)
logger.SysLog("testing all channels")
_ = testChannels(false, "all")
_ = testChannels(ctx, false, "all")
logger.SysLog("channel test finished")
}
}

View File

@ -3,13 +3,15 @@ package controller
import (
"encoding/json"
"fmt"
"github.com/songquanpeng/one-api/common"
"github.com/songquanpeng/one-api/common/config"
"github.com/songquanpeng/one-api/common/message"
"github.com/songquanpeng/one-api/model"
"net/http"
"strings"
"github.com/songquanpeng/one-api/common"
"github.com/songquanpeng/one-api/common/config"
"github.com/songquanpeng/one-api/common/i18n"
"github.com/songquanpeng/one-api/common/message"
"github.com/songquanpeng/one-api/model"
"github.com/gin-gonic/gin"
)
@ -85,7 +87,7 @@ func SendEmailVerification(c *gin.Context) {
if err := common.Validate.Var(email, "required,email"); err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "无效的参数",
"message": i18n.Translate(c, "invalid_parameter"),
})
return
}
@ -114,10 +116,17 @@ func SendEmailVerification(c *gin.Context) {
}
code := common.GenerateVerificationCode(6)
common.RegisterVerificationCodeWithKey(email, code, common.EmailVerificationPurpose)
subject := fmt.Sprintf("%s邮箱验证邮件", config.SystemName)
content := fmt.Sprintf("<p>您好,你正在进行%s邮箱验证。</p>"+
"<p>您的验证码为: <strong>%s</strong></p>"+
"<p>验证码 %d 分钟内有效,如果不是本人操作,请忽略。</p>", config.SystemName, code, common.VerificationValidMinutes)
subject := fmt.Sprintf("%s 邮箱验证邮件", config.SystemName)
content := message.EmailTemplate(
subject,
fmt.Sprintf(`
<p>您好!</p>
<p>您正在进行 %s 邮箱验证。</p>
<p>您的验证码为:</p>
<p style="font-size: 24px; font-weight: bold; color: #333; background-color: #f8f8f8; padding: 10px; text-align: center; border-radius: 4px;">%s</p>
<p style="color: #666;">验证码 %d 分钟内有效,如果不是本人操作,请忽略。</p>
`, config.SystemName, code, common.VerificationValidMinutes),
)
err := message.SendEmail(subject, email, content)
if err != nil {
c.JSON(http.StatusOK, gin.H{
@ -138,7 +147,7 @@ func SendPasswordResetEmail(c *gin.Context) {
if err := common.Validate.Var(email, "required,email"); err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "无效的参数",
"message": i18n.Translate(c, "invalid_parameter"),
})
return
}
@ -152,16 +161,26 @@ func SendPasswordResetEmail(c *gin.Context) {
code := common.GenerateVerificationCode(0)
common.RegisterVerificationCodeWithKey(email, code, common.PasswordResetPurpose)
link := fmt.Sprintf("%s/user/reset?email=%s&token=%s", config.ServerAddress, email, code)
subject := fmt.Sprintf("%s密码重置", config.SystemName)
content := fmt.Sprintf("<p>您好,你正在进行%s密码重置。</p>"+
"<p>点击 <a href='%s'>此处</a> 进行密码重置。</p>"+
"<p>如果链接无法点击,请尝试点击下面的链接或将其复制到浏览器中打开:<br> %s </p>"+
"<p>重置链接 %d 分钟内有效,如果不是本人操作,请忽略。</p>", config.SystemName, link, link, common.VerificationValidMinutes)
subject := fmt.Sprintf("%s 密码重置", config.SystemName)
content := message.EmailTemplate(
subject,
fmt.Sprintf(`
<p>您好!</p>
<p>您正在进行 %s 密码重置。</p>
<p>请点击下面的按钮进行密码重置:</p>
<p style="text-align: center; margin: 30px 0;">
<a href="%s" style="background-color: #007bff; color: white; padding: 12px 24px; text-decoration: none; border-radius: 4px; display: inline-block;">重置密码</a>
</p>
<p style="color: #666;">如果按钮无法点击,请复制以下链接到浏览器中打开:</p>
<p style="background-color: #f8f8f8; padding: 10px; border-radius: 4px; word-break: break-all;">%s</p>
<p style="color: #666;">重置链接 %d 分钟内有效,如果不是本人操作,请忽略。</p>
`, config.SystemName, link, link, common.VerificationValidMinutes),
)
err := message.SendEmail(subject, email, content)
if err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": err.Error(),
"message": fmt.Sprintf("%s%s", i18n.Translate(c, "send_email_failed"), err.Error()),
})
return
}
@ -183,7 +202,7 @@ func ResetPassword(c *gin.Context) {
if req.Email == "" || req.Token == "" {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "无效的参数",
"message": i18n.Translate(c, "invalid_parameter"),
})
return
}
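
Across this file the hard-coded Chinese strings give way to two helpers: i18n.Translate, which presumably resolves a message key for the caller's locale, and message.EmailTemplate, which wraps the body in a shared HTML layout. A rough, hypothetical sketch of such helpers follows; the real implementations live in common/i18n and common/message, and Translate actually takes the *gin.Context rather than a language code.

```go
// Hypothetical sketches only; signatures and layout of the real helpers may differ.
var translations = map[string]map[string]string{
	"en": {
		"invalid_parameter": "Invalid parameter",
		"invalid_input":     "Invalid input",
		"send_email_failed": "Failed to send email: ",
	},
}

// Translate resolves a message key for a language, falling back to the key itself.
func Translate(lang, key string) string {
	if msg, ok := translations[lang][key]; ok {
		return msg
	}
	return key
}

// EmailTemplate wraps a subject and an HTML fragment in a minimal shared layout.
func EmailTemplate(subject, body string) string {
	return fmt.Sprintf("<html><body><h2>%s</h2>%s</body></html>", subject, body)
}
```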

View File

@ -2,12 +2,14 @@ package controller
import (
"encoding/json"
"github.com/songquanpeng/one-api/common/config"
"github.com/songquanpeng/one-api/common/helper"
"github.com/songquanpeng/one-api/model"
"net/http"
"strings"
"github.com/songquanpeng/one-api/common/config"
"github.com/songquanpeng/one-api/common/helper"
"github.com/songquanpeng/one-api/common/i18n"
"github.com/songquanpeng/one-api/model"
"github.com/gin-gonic/gin"
)
@ -38,7 +40,7 @@ func UpdateOption(c *gin.Context) {
if err != nil {
c.JSON(http.StatusBadRequest, gin.H{
"success": false,
"message": "无效的参数",
"message": i18n.Translate(c, "invalid_parameter"),
})
return
}

View File

@ -60,7 +60,7 @@ func Relay(c *gin.Context) {
channelName := c.GetString(ctxkey.ChannelName)
group := c.GetString(ctxkey.Group)
originalModel := c.GetString(ctxkey.OriginalModel)
go processChannelRelayError(ctx, userId, channelId, channelName, bizErr)
go processChannelRelayError(ctx, userId, channelId, channelName, *bizErr)
requestId := c.GetString(helper.RequestIdKey)
retryTimes := config.RetryTimes
if !shouldRetry(c, bizErr.StatusCode) {
@ -87,8 +87,7 @@ func Relay(c *gin.Context) {
channelId := c.GetInt(ctxkey.ChannelId)
lastFailedChannelId = channelId
channelName := c.GetString(ctxkey.ChannelName)
// BUG: bizErr is in race condition
go processChannelRelayError(ctx, userId, channelId, channelName, bizErr)
go processChannelRelayError(ctx, userId, channelId, channelName, *bizErr)
}
if bizErr != nil {
if bizErr.StatusCode == http.StatusTooManyRequests {
@ -122,7 +121,7 @@ func shouldRetry(c *gin.Context, statusCode int) bool {
return true
}
func processChannelRelayError(ctx context.Context, userId int, channelId int, channelName string, err *model.ErrorWithStatusCode) {
func processChannelRelayError(ctx context.Context, userId int, channelId int, channelName string, err model.ErrorWithStatusCode) {
logger.Errorf(ctx, "relay error (channel id %d, user id: %d): %s", channelId, userId, err.Message)
// https://platform.openai.com/docs/guides/error-codes/api-errors
if monitor.ShouldDisableChannel(&err.Error, err.StatusCode) {
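
This hunk replaces the shared *bizErr pointer with a value copy when spawning processChannelRelayError, which removes the data race the deleted BUG comment pointed at: the retry loop may reassign bizErr while an earlier goroutine is still reading it. A minimal, standalone illustration of the safe pattern (not project code):

```go
package main

import (
	"fmt"
	"sync"
)

type bizError struct{ Message string }

// Handing each goroutine a value copy is safe; sharing a single pointer that the
// retry loop keeps overwriting would let a goroutine read a later attempt's error,
// or race with the write outright.
func main() {
	var wg sync.WaitGroup
	for attempt := 0; attempt < 3; attempt++ {
		err := &bizError{Message: fmt.Sprintf("attempt %d failed", attempt)}
		wg.Add(1)
		go func(e bizError) { // pass *err, i.e. a copy, mirroring processChannelRelayError(..., *bizErr)
			defer wg.Done()
			fmt.Println(e.Message)
		}(*err)
	}
	wg.Wait()
}
```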

View File

@ -3,17 +3,19 @@ package controller
import (
"encoding/json"
"fmt"
"github.com/songquanpeng/one-api/common"
"github.com/songquanpeng/one-api/common/config"
"github.com/songquanpeng/one-api/common/ctxkey"
"github.com/songquanpeng/one-api/common/random"
"github.com/songquanpeng/one-api/model"
"net/http"
"strconv"
"time"
"github.com/gin-contrib/sessions"
"github.com/gin-gonic/gin"
"github.com/songquanpeng/one-api/common"
"github.com/songquanpeng/one-api/common/config"
"github.com/songquanpeng/one-api/common/ctxkey"
"github.com/songquanpeng/one-api/common/i18n"
"github.com/songquanpeng/one-api/common/random"
"github.com/songquanpeng/one-api/model"
)
type LoginRequest struct {
@ -33,7 +35,7 @@ func Login(c *gin.Context) {
err := json.NewDecoder(c.Request.Body).Decode(&loginRequest)
if err != nil {
c.JSON(http.StatusOK, gin.H{
"message": "无效的参数",
"message": i18n.Translate(c, "invalid_parameter"),
"success": false,
})
return
@ -42,7 +44,7 @@ func Login(c *gin.Context) {
password := loginRequest.Password
if username == "" || password == "" {
c.JSON(http.StatusOK, gin.H{
"message": "无效的参数",
"message": i18n.Translate(c, "invalid_parameter"),
"success": false,
})
return
@ -109,6 +111,7 @@ func Logout(c *gin.Context) {
}
func Register(c *gin.Context) {
ctx := c.Request.Context()
if !config.RegisterEnabled {
c.JSON(http.StatusOK, gin.H{
"message": "管理员关闭了新用户注册",
@ -128,14 +131,14 @@ func Register(c *gin.Context) {
if err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "无效的参数",
"message": i18n.Translate(c, "invalid_parameter"),
})
return
}
if err := common.Validate.Struct(&user); err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "输入不合法 " + err.Error(),
"message": i18n.Translate(c, "invalid_input"),
})
return
}
@ -166,7 +169,7 @@ func Register(c *gin.Context) {
if config.EmailVerificationEnabled {
cleanUser.Email = user.Email
}
if err := cleanUser.Insert(inviterId); err != nil {
if err := cleanUser.Insert(ctx, inviterId); err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": err.Error(),
@ -362,12 +365,13 @@ func GetSelf(c *gin.Context) {
}
func UpdateUser(c *gin.Context) {
ctx := c.Request.Context()
var updatedUser model.User
err := json.NewDecoder(c.Request.Body).Decode(&updatedUser)
if err != nil || updatedUser.Id == 0 {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "无效的参数",
"message": i18n.Translate(c, "invalid_parameter"),
})
return
}
@ -377,7 +381,7 @@ func UpdateUser(c *gin.Context) {
if err := common.Validate.Struct(&updatedUser); err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "输入不合法 " + err.Error(),
"message": i18n.Translate(c, "invalid_input"),
})
return
}
@ -416,7 +420,7 @@ func UpdateUser(c *gin.Context) {
return
}
if originUser.Quota != updatedUser.Quota {
model.RecordLog(originUser.Id, model.LogTypeManage, fmt.Sprintf("管理员将用户额度从 %s修改为 %s", common.LogQuota(originUser.Quota), common.LogQuota(updatedUser.Quota)))
model.RecordLog(ctx, originUser.Id, model.LogTypeManage, fmt.Sprintf("管理员将用户额度从 %s修改为 %s", common.LogQuota(originUser.Quota), common.LogQuota(updatedUser.Quota)))
}
c.JSON(http.StatusOK, gin.H{
"success": true,
@ -431,7 +435,7 @@ func UpdateSelf(c *gin.Context) {
if err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "无效的参数",
"message": i18n.Translate(c, "invalid_parameter"),
})
return
}
@ -535,19 +539,20 @@ func DeleteSelf(c *gin.Context) {
}
func CreateUser(c *gin.Context) {
ctx := c.Request.Context()
var user model.User
err := json.NewDecoder(c.Request.Body).Decode(&user)
if err != nil || user.Username == "" || user.Password == "" {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "无效的参数",
"message": i18n.Translate(c, "invalid_parameter"),
})
return
}
if err := common.Validate.Struct(&user); err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "输入不合法 " + err.Error(),
"message": i18n.Translate(c, "invalid_input"),
})
return
}
@ -568,7 +573,7 @@ func CreateUser(c *gin.Context) {
Password: user.Password,
DisplayName: user.DisplayName,
}
if err := cleanUser.Insert(0); err != nil {
if err := cleanUser.Insert(ctx, 0); err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": err.Error(),
@ -596,7 +601,7 @@ func ManageUser(c *gin.Context) {
if err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "无效的参数",
"message": i18n.Translate(c, "invalid_parameter"),
})
return
}
@ -747,6 +752,7 @@ type topUpRequest struct {
}
func TopUp(c *gin.Context) {
ctx := c.Request.Context()
req := topUpRequest{}
err := c.ShouldBindJSON(&req)
if err != nil {
@ -757,7 +763,7 @@ func TopUp(c *gin.Context) {
return
}
id := c.GetInt("id")
quota, err := model.Redeem(req.Key, id)
quota, err := model.Redeem(ctx, req.Key, id)
if err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
@ -780,6 +786,7 @@ type adminTopUpRequest struct {
}
func AdminTopUp(c *gin.Context) {
ctx := c.Request.Context()
req := adminTopUpRequest{}
err := c.ShouldBindJSON(&req)
if err != nil {
@ -800,7 +807,7 @@ func AdminTopUp(c *gin.Context) {
if req.Remark == "" {
req.Remark = fmt.Sprintf("通过 API 充值 %s", common.LogQuota(int64(req.Quota)))
}
model.RecordTopupLog(req.UserId, req.Remark, req.Quota)
model.RecordTopupLog(ctx, req.UserId, req.Remark, req.Quota)
c.JSON(http.StatusOK, gin.H{
"success": true,
"message": "",

go.mod (13 lines changed)
View File

@ -1,6 +1,5 @@
module github.com/songquanpeng/one-api
// +heroku goVersion go1.18
go 1.20
require (
@ -25,12 +24,13 @@ require (
github.com/pkoukk/tiktoken-go v0.1.7
github.com/smartystreets/goconvey v1.8.1
github.com/stretchr/testify v1.9.0
golang.org/x/crypto v0.24.0
golang.org/x/crypto v0.31.0
golang.org/x/image v0.18.0
golang.org/x/sync v0.10.0
google.golang.org/api v0.187.0
gorm.io/driver/mysql v1.5.6
gorm.io/driver/postgres v1.5.7
gorm.io/driver/sqlite v1.5.5
gorm.io/driver/sqlite v1.5.1
gorm.io/gorm v1.25.10
)
@ -82,7 +82,7 @@ require (
github.com/kr/text v0.2.0 // indirect
github.com/leodido/go-urn v1.4.0 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-sqlite3 v1.14.22 // indirect
github.com/mattn/go-sqlite3 v1.14.24 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/pelletier/go-toml/v2 v2.2.2 // indirect
@ -99,9 +99,8 @@ require (
golang.org/x/arch v0.8.0 // indirect
golang.org/x/net v0.26.0 // indirect
golang.org/x/oauth2 v0.21.0 // indirect
golang.org/x/sync v0.7.0 // indirect
golang.org/x/sys v0.21.0 // indirect
golang.org/x/text v0.16.0 // indirect
golang.org/x/sys v0.28.0 // indirect
golang.org/x/text v0.21.0 // indirect
golang.org/x/time v0.5.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20240617180043-68d350f18fd4 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20240624140628-dc46fd24d27d // indirect

go.sum (24 lines changed)
View File

@ -163,8 +163,8 @@ github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-sqlite3 v1.14.22 h1:2gZY6PC6kBnID23Tichd1K+Z0oS6nE/XwU+Vz/5o4kU=
github.com/mattn/go-sqlite3 v1.14.22/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
github.com/mattn/go-sqlite3 v1.14.24 h1:tpSp2G2KyMnnQu99ngJ47EIkWVmliIizyZBfPrBWDRM=
github.com/mattn/go-sqlite3 v1.14.24/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@ -222,8 +222,8 @@ golang.org/x/arch v0.8.0 h1:3wRIsP3pM4yUptoR96otTUOXI367OS0+c9eeRi9doIc=
golang.org/x/arch v0.8.0/go.mod h1:FEVrYAQjsQXMVJ1nsMoVVXPZg6p2JE2mx8psSWTDQys=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.24.0 h1:mnl8DM0o513X8fdIkmyFE/5hTYxbwYOjDS/+rK6qpRI=
golang.org/x/crypto v0.24.0/go.mod h1:Z1PMYSOR5nyMcyAVAIQSKCDwalqy85Aqn1x3Ws4L5DM=
golang.org/x/crypto v0.31.0 h1:ihbySMvVjLAeSH1IbfcRTkD/iNscyz8rGzjF/E5hV6U=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/image v0.18.0 h1:jGzIakQa/ZXI1I0Fxvaa9W7yP25TqT6cHIHn+6CqvSQ=
golang.org/x/image v0.18.0/go.mod h1:4yyo5vMFQjVjUcVk4jEQcU9MGy/rulF5WvUILseCM2E=
@ -244,20 +244,20 @@ golang.org/x/oauth2 v0.21.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbht
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.7.0 h1:YsImfSBoP9QPYL0xyKJPq0gcaJdG3rInoqxTWbfQu9M=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.21.0 h1:rF+pYz3DAGSQAxAu1CbC7catZg4ebC4UIeIhKxBZvws=
golang.org/x/sys v0.21.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.28.0 h1:Fksou7UEQUWlKvIdsqzJmUmCX3cZuD2+P3XyyzwMhlA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.16.0 h1:a94ExnEXNtEwYLGJSIUxnWoxoRz/ZcCsV63ROupILh4=
golang.org/x/text v0.16.0/go.mod h1:GhwF1Be+LQoKShO3cGOHzqOgRrGaYc9AvblQOmPVHnI=
golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/time v0.5.0 h1:o7cqy6amK/52YcAKIPlM3a+Fpj35zvRj2TP+e1xFSfk=
golang.org/x/time v0.5.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@ -306,8 +306,8 @@ gorm.io/driver/mysql v1.5.6 h1:Ld4mkIickM+EliaQZQx3uOJDJHtrd70MxAUqWqlx3Y8=
gorm.io/driver/mysql v1.5.6/go.mod h1:sEtPWMiqiN1N1cMXoXmBbd8C6/l+TESwriotuRRpkDM=
gorm.io/driver/postgres v1.5.7 h1:8ptbNJTDbEmhdr62uReG5BGkdQyeasu/FZHxI0IMGnM=
gorm.io/driver/postgres v1.5.7/go.mod h1:3e019WlBaYI5o5LIdNV+LyxCMNtLOQETBXL2h4chKpA=
gorm.io/driver/sqlite v1.5.5 h1:7MDMtUZhV065SilG62E0MquljeArQZNfJnjd9i9gx3E=
gorm.io/driver/sqlite v1.5.5/go.mod h1:6NgQ7sQWAIFsPrJJl1lSNSu2TABh0ZZ/zm5fosATavE=
gorm.io/driver/sqlite v1.5.1 h1:hYyrLkAWE71bcarJDPdZNTLWtr8XrSjOWyjUYI6xdL4=
gorm.io/driver/sqlite v1.5.1/go.mod h1:7MZZ2Z8bqyfSQA1gYEV6MagQWj3cpUkJj9Z+d1HEMEQ=
gorm.io/gorm v1.25.7/go.mod h1:hbnx/Oo0ChWMn1BIhpy1oYozzpM15i4YPuHDmfYtwg8=
gorm.io/gorm v1.25.10 h1:dQpO+33KalOA+aFYGlK+EfxcI5MbO7EP2yYygwh9h+s=
gorm.io/gorm v1.25.10/go.mod h1:hbnx/Oo0ChWMn1BIhpy1oYozzpM15i4YPuHDmfYtwg8=

View File

@ -1,778 +0,0 @@
{
"%.6f 额度": "%.6f quota",
"%d 点额度": "%d point quota",
"尚未实现": "Not yet implemented",
"余额不足": "Insufficient balance",
"危险操作": "Hazardous operations",
"输入你的账户名": "Enter your account name",
"确认删除": "Confirm Delete",
"确认绑定": "Confirm Binding",
"您正在删除自己的帐户,将清空所有数据且不可恢复": "You are deleting your account, all data will be cleared and unrecoverable.",
"\"渠道「%s」#%d已被禁用\"": "\"Channel %s (#%d) has been disabled\"",
"渠道「%s」#%d已被禁用原因%s": "Channel %s (#%d) has been disabled, reason: %s",
"测试已在运行中": "Test is already running",
"响应时间 %.2fs 超过阈值 %.2fs": "Response time %.2fs exceeds threshold %.2fs",
"渠道测试完成": "Channel test completed",
"渠道测试完成,如果没有收到禁用通知,说明所有渠道都正常": "Channel test completed, if you have not received the disable notification, it means that all channels are normal",
"无法连接至 GitHub 服务器,请稍后重试!": "Unable to connect to GitHub server, please try again later!",
"返回值非法,用户字段为空,请稍后重试!": "The return value is illegal, the user field is empty, please try again later!",
"管理员未开启通过 GitHub 登录以及注册": "The administrator did not turn on login and registration via GitHub",
"管理员关闭了新用户注册": "The administrator has turned off new user registration",
"用户已被封禁": "User has been banned",
"该 GitHub 账户已被绑定": "The GitHub account has been bound",
"邮箱地址已被占用": "Email address is occupied",
"%s邮箱验证邮件": "%s Email verification email",
"<p>您好,你正在进行%s邮箱验证。</p>": "<p>Hello, you are verifying %s email.</p>",
"<p>您的验证码为: <strong>%s</strong></p>": "<p>Your verification code is: <strong>%s</strong></p>",
"<p>验证码 %d 分钟内有效,如果不是本人操作,请忽略。</p>": "<p>The verification code is valid within %d minutes. If it is not your operation, please ignore it.</p>",
"无效的参数": "Invalid parameter",
"该邮箱地址未注册": "The email address is not registered",
"%s密码重置": "%s Password reset",
"<p>您好,你正在进行%s密码重置。</p>": "<p>Hello, you are resetting %s password.</p>",
"<p>点击<a href='%s'>此处</a>进行密码重置。</p>": "<p>Click <a href='%s'>here</a> to reset your password.</p>",
"<p>重置链接 %d 分钟内有效,如果不是本人操作,请忽略。</p>": "<p>The reset link is valid within %d minutes. If it is not your operation, please ignore it.</p>",
"重置链接非法或已过期": "Reset link is illegal or expired",
"无法启用 GitHub OAuth请先填入 GitHub Client ID 以及 GitHub Client Secret": "Unable to enable GitHub OAuth, please fill in GitHub Client ID and GitHub Client Secret first!",
"无法启用微信登录,请先填入微信登录相关配置信息!": "Unable to enable WeChat login, please fill in the relevant configuration information for WeChat login first!",
"无法启用 Turnstile 校验,请先填入 Turnstile 校验相关配置信息!": "Unable to enable Turnstile verification, please fill in the relevant configuration information for Turnstile verification first!",
"兑换码名称长度必须在1-20之间": "The length of the redemption code name must be between 1-20",
"兑换码个数必须大于0": "The number of redemption codes must be greater than 0",
"一次兑换码批量生成的个数不能大于 100": "The number of redemption codes generated in a batch cannot be greater than 100",
"通过令牌「%s」使用模型 %s 消耗 %s模型倍率 %.2f,分组倍率 %.2f": "Using model %s with token %s consumes %s (model rate %.2f, group rate %.2f)",
"当前分组上游负载已饱和,请稍后再试": "The current group load is saturated, please try again later",
"令牌名称过长": "Token name is too long",
"令牌已过期,无法启用,请先修改令牌过期时间,或者设置为永不过期": "The token has expired and cannot be enabled. Please modify the expiration time of the token, or set it to never expire.",
"令牌可用额度已用尽,无法启用,请先修改令牌剩余额度,或者设置为无限额度": "The available quota of the token has been used up and cannot be enabled. Please modify the remaining quota of the token, or set it to unlimited quota",
"管理员关闭了密码登录": "The administrator has turned off password login",
"无法保存会话信息,请重试": "Unable to save session information, please try again",
"管理员关闭了通过密码进行注册,请使用第三方账户验证的形式进行注册": "The administrator has turned off registration via password. Please use the form of third-party account verification to register",
"输入不合法 ": "Input is illegal ",
"管理员开启了邮箱验证,请输入邮箱地址和验证码": "The administrator has turned on email verification, please enter the email address and verification code",
"验证码错误或已过期": "Verification code error or expired",
"无权获取同级或更高等级用户的信息": "No permission to get information of users at the same level or higher",
"请重试,系统生成的 UUID 竟然重复了!": "Please try again, the system-generated UUID is actually duplicated!",
"输入不合法": "Input is illegal",
"无权更新同权限等级或更高权限等级的用户信息": "No permission to update user information with the same permission level or higher permission level",
"管理员将用户额度从 %s修改为 %s": "The administrator changed the user quota from %s to %s",
"无权删除同权限等级或更高权限等级的用户": "No permission to delete users with the same permission level or higher permission level",
"无法创建权限大于等于自己的用户": "Unable to create users with permissions greater than or equal to your own",
"用户不存在": "User does not exist",
"无法禁用超级管理员用户": "Unable to disable super administrator user",
"无法删除超级管理员用户": "Unable to delete super administrator user",
"普通管理员用户无法提升其他用户为管理员": "Ordinary administrator users cannot promote other users to administrators",
"该用户已经是管理员": "The user is already an administrator",
"无法降级超级管理员用户": "Unable to downgrade super administrator user",
"该用户已经是普通用户": "The user is already an ordinary user",
"管理员未开启通过微信登录以及注册": "The administrator has not enabled login and registration via WeChat",
"该微信账号已被绑定": "The WeChat account has been bound",
"无权进行此操作,未登录且未提供 access token": "No permission to perform this operation, not logged in and no access token provided",
"无权进行此操作access token 无效": "No permission to perform this operation, access token is invalid",
"无权进行此操作,权限不足": "No permission to perform this operation, insufficient permissions",
"普通用户不支持指定渠道": "Ordinary users do not support specifying channels",
"无效的渠道 ID": "Invalid channel ID",
"该渠道已被禁用": "The channel has been disabled",
"无效的请求": "Invalid request",
"无可用渠道": "No available channels",
"Turnstile token 为空": "Turnstile token is empty",
"Turnstile 校验失败,请刷新重试!": "Turnstile verification failed, please refresh and try again!",
"id 为空!": "id is empty!",
"未提供兑换码": "No redemption code provided",
"无效的 user id": "Invalid user id",
"无效的兑换码": "Invalid redemption code",
"该兑换码已被使用": "The redemption code has been used",
"通过兑换码充值 %s": "Recharge %s through redemption code",
"未提供令牌": "No token provided",
"该令牌状态不可用": "The token status is not available",
"该令牌已过期": "The token has expired",
"该令牌额度已用尽": "The token quota has been used up",
"无效的令牌": "Invalid token",
"令牌验证失败": "Token verification failed",
"id 或 userId 为空!": "id or userId is empty!",
"quota 不能为负数!": "quota cannot be negative!",
"令牌额度不足": "Insufficient token quota",
"用户额度不足": "Insufficient user quota",
"您的额度即将用尽": "Your quota is about to run out",
"您的额度已用尽": "Your quota has been used up",
"%s当前剩余额度为 %d为了不影响您的使用请及时充值。<br/>充值链接:<a href='%s'>%s</a>": "%s, the current remaining quota is %d, in order not to affect your use, please recharge in time. <br/> Recharge link: <a href='%s'>%s</a>",
"affCode 为空!": "affCode is empty!",
"新用户注册赠送 %s": "New user registration gives %s",
"使用邀请码赠送 %s": "Use invitation code to give %s",
"邀请用户赠送 %s": "Invite users to give %s",
"用户名或密码为空": "Username or password is empty",
"用户名或密码错误,或用户已被封禁": "Username or password is wrong, or user has been banned",
"email 为空!": "email is empty!",
"GitHub id 为空!": "GitHub id is empty!",
"WeChat id 为空!": "WeChat id is empty!",
"username 为空!": "username is empty!",
"邮箱地址或密码为空!": "Email address or password is empty!",
"OpenAI 接口聚合管理,支持多种渠道包括 Azure可用于二次分发管理 key仅单可执行文件已打包好 Docker 镜像,一键部署,开箱即用": "OpenAI interface aggregation management, supports multiple channels including Azure, can be used for secondary distribution management key, only single executable file, Docker image has been packaged, one-click deployment, out of the box",
"未知类型": "Unknown type",
"不支持": "Not supported",
"操作成功完成!": "Operation completed successfully!",
"已启用": "Enabled",
"已禁用": "Disabled",
"未知状态": "Unknown status",
" 秒": "s",
" 分钟 ": " m ",
" 小时 ": " h ",
" 天 ": " d ",
" 个月 ": " M ",
" 年 ": " y ",
"未测试": "Not tested",
"渠道 ${name} 测试成功,耗时 ${time.toFixed(2)} 秒。": "Channel ${name} test succeeded, time consumed ${time.toFixed(2)} s.",
"已成功开始测试所有渠道,请刷新页面查看结果。": "All channels have been successfully tested, please refresh the page to view the results.",
"已成功开始测试所有已启用渠道,请刷新页面查看结果。": "All enabled channels have been successfully tested, please refresh the page to view the results.",
"渠道 ${name} 余额更新成功!": "Channel ${name} balance updated successfully!",
"已更新完毕所有已启用渠道余额!": "The balance of all enabled channels has been updated!",
"搜索渠道的 ID名称和密钥 ...": "Search for channel ID, name and key ...",
"名称": "Name",
"分组": "Group",
"类型": "Type",
"状态": "Status",
"响应时间": "Response time",
"余额": "Balance",
"操作": "Operation",
"未更新": "Not updated",
"测试": "Test",
"更新余额": "Update balance",
"删除": "Delete",
"删除渠道 {channel.name}": "Delete channel {channel.name}",
"禁用": "Disable",
"启用": "Enable",
"编辑": "Edit",
"添加新的渠道": "Add a new channel",
"测试所有渠道": "Test all channels",
"测试所有已启用渠道": "Test all enabled channels",
"更新所有已启用渠道余额": "Update the balance of all enabled channels",
"刷新": "Refresh",
"处理中...": "Processing...",
"绑定成功!": "Binding succeeded!",
"登录成功!": "Login succeeded!",
"操作失败,重定向至登录界面中...": "Operation failed, redirecting to the login page...",
"出现错误,第 ${count} 次重试中...": "An error occurred, retrying for the ${count} time...",
"首页": "Home",
"渠道": "Channel",
"令牌": "Token",
"兑换": "Redeem",
"充值": "Recharge",
"用户": "User",
"日志": "Log",
"设置": "Settings",
"关于": "About",
"聊天": "Chat",
"注销成功!": "Logout succeeded!",
"注销": "Logout",
"登录": "Login",
"注册": "Register",
"加载{name}中...": "Loading {name}...",
"未登录或登录已过期,请重新登录!": "Not logged in or login has expired, please log in again!",
"用户登录": "User login",
"\"用户名\"": "\"Username\"",
"\"密码\"": "\"Password\"",
"忘记密码?": "Forget password?",
"点击重置": "Click to reset",
" 没有账户?": "; No account?",
"点击注册": "Click to register",
"微信扫码关注公众号,输入「验证码」获取验证码(三分钟内有效)": "Scan the QR code of WeChat to follow the official account, enter \"verification code\" to get the verification code (valid within three minutes)",
"\"验证码\"": "\"Verification code\"",
"全部用户": "All users",
"当前用户": "Current user",
"'全部'": "'All'",
"'充值'": "'Recharge'",
"'消费'": "'Consumption'",
"'管理'": "'Management'",
"'系统'": "'System'",
" 充值 ": " Recharge ",
" 消费 ": " Consumption ",
" 管理 ": " Management ",
" 系统 ": " System ",
" 未知 ": " Unknown ",
"时间": "Time",
"详情": "Details",
"选择模式": "Select mode",
"选择明细分类": "Select details category",
"模型倍率不是合法的 JSON 字符串": "Model rate is not a valid JSON string",
"分组倍率不是合法的 JSON 字符串": "Group rate is not a valid JSON string",
"通用设置": "General Settings",
"充值链接": "Recharge Link",
"例如发卡网站的购买链接": "For example, the purchase link of the card issuing website",
"聊天页面链接": "Chat Page Link",
"例如 ChatGPT Next Web 的部署地址": "For example, the deployment address of ChatGPT Next Web",
"单位美元额度": "Unit Dollar Quota",
"一单位货币能兑换的额度": "Quota that can be exchanged for one unit of currency",
"启用额度消费日志记录": "Enable quota consumption log recording",
"以货币形式显示额度": "Display quota in the form of currency",
"相关 API 显示令牌额度而非用户额度": "Related API displays token quota instead of user quota",
"保存通用设置": "Save General Settings",
"监控设置": "Monitoring Settings",
"最长响应时间": "Longest Response Time",
"单位秒": "Unit in seconds",
"当运行渠道全部测试时": "When all operating channels are tested",
"超过此时间将自动禁用渠道": "Channels will be automatically disabled if this time is exceeded",
"额度提醒阈值": "Quota reminder threshold",
"低于此额度时将发送邮件提醒用户": "Email will be sent to remind users when the quota is below this",
"失败时自动禁用渠道": "Automatically disable the channel when it fails",
"保存监控设置": "Save Monitoring Settings",
"额度设置": "Quota Settings",
"新用户初始额度": "Initial quota for new users",
"例如": "For example",
"请求预扣费额度": "Request for pre-deducted quota",
"请求结束后多退少补": "Refund more or less after the request ends",
"邀请新用户奖励额度": "Invite new users to reward quota",
"新用户使用邀请码奖励额度": "New user rewards quota using invitation code",
"保存额度设置": "Save Quota Settings",
"倍率设置": "Rate Settings",
"模型倍率": "Model rate",
"为一个 JSON 文本": "Is a JSON text",
"键为模型名称": "Key is model name",
"值为倍率": "Value is the rate",
"分组倍率": "Group rate",
"键为分组名称": "Key is group name",
"保存倍率设置": "Save Rate Settings",
"已是最新版本": "Is the latest version",
"检查更新": "Check for updates",
"公告": "Announcement",
"在此输入新的公告内容,支持 Markdown & HTML 代码": "Enter the new announcement content here, supports Markdown & HTML code",
"保存公告": "Save Announcement",
"个性化设置": "Personalization Settings",
"系统名称": "System Name",
"在此输入系统名称": "Enter the system name here",
"设置系统名称": "Set system name",
"图片地址": "Image URL",
"在此输入 Logo 图片地址": "Enter the Logo image URL here",
"首页内容": "Home Page Content",
"在此输入首页内容,支持 Markdown & HTML 代码,设置后首页的状态信息将不再显示。如果输入的是一个链接,则会使用该链接作为 iframe 的 src 属性,这允许你设置任意网页作为首页": "Enter the homepage content here, supports Markdown & HTML code. Once set, the status information of the homepage will not be displayed. If a link is entered, it will be used as the src attribute of the iframe, allowing you to set any webpage as the homepage.",
"保存首页内容": "Save Home Page Content",
"在此输入新的关于内容,支持 Markdown & HTML 代码。如果输入的是一个链接,则会使用该链接作为 iframe 的 src 属性,这允许你设置任意网页作为关于页面": "Enter new about content here, supports Markdown & HTML code. If a link is entered, it will be used as the src attribute of the iframe, allowing you to set any webpage as the about page.",
"保存关于": "Save About",
"移除 One API 的版权标识必须首先获得授权,项目维护需要花费大量精力,如果本项目对你有意义,请主动支持本项目": "Removal of One API copyright mark must first be authorized. Project maintenance requires a lot of effort. If this project is meaningful to you, please actively support it.",
"页脚": "Footer",
"在此输入新的页脚,留空则使用默认页脚,支持 HTML 代码": "Enter the new footer here, leave blank to use the default footer, supports HTML code.",
"设置页脚": "Set Footer",
"新版本": "New Version",
"关闭": "Close",
"密码已重置并已复制到剪贴板": "Password has been reset and copied to clipboard",
"密码重置确认": "Password Reset Confirmation",
"邮箱地址": "Email Address",
"提交": "Submit",
"请稍后几秒重试": "Please retry in a few seconds",
"正在检查用户环境": "Checking user environment",
"重置邮件发送成功": "Reset mail sent successfully",
"请检查邮箱": "Please check your email",
"密码重置": "Password Reset",
"令牌已重置并已复制到剪贴板": "Token has been reset and copied to clipboard",
"邀请链接已复制到剪切板": "Invitation link has been copied to clipboard",
"微信账户绑定成功": "WeChat account binding succeeded",
"验证码发送成功": "Verification code sent successfully",
"邮箱账户绑定成功": "Email account binding succeeded",
"注意": "Note",
"此处生成的令牌用于系统管理": "The token generated here is used for system management",
"而非用于请求 OpenAI 相关的服务": "Not for requesting OpenAI related services",
"请知悉": "Please be aware",
"更新个人信息": "Update Personal Information",
"生成系统访问令牌": "Generate System Access Token",
"复制邀请链接": "Copy Invitation Link",
"账号绑定": "Account Binding",
"绑定微信账号": "Bind WeChat Account",
"微信扫码关注公众号": "Scan the QR code with WeChat to follow the official account",
"输入": "Enter",
"验证码": "Verification Code",
"获取验证码": "Get Verification Code",
"三分钟内有效": "Valid for three minutes",
"绑定": "Bind",
"绑定 GitHub 账号": "Bind GitHub Account",
"绑定邮箱地址": "Bind Email Address",
"输入邮箱地址": "Enter Email Address",
"未使用": "Unused",
"已使用": "Used",
"操作成功完成": "Operation successfully completed",
"搜索兑换码的 ID 和名称": "Search for ID and name",
"额度": "Quota",
"创建时间": "Creation Time",
"兑换时间": "Redemption Time",
"尚未兑换": "Not yet redeemed",
"已复制到剪贴板": "Copied to clipboard",
"无法复制到剪贴板": "Unable to copy to clipboard",
"请手动复制": "Please copy manually",
"已将兑换码填入搜索框": "The voucher code has been filled into the search box",
"复制": "Copy",
"添加新的兑换码": "Add a new voucher",
"密码长度不得小于 8 位": "Password length must not be less than 8 characters",
"两次输入的密码不一致": "The two passwords entered do not match",
"注册成功": "Registration succeeded",
"请稍后几秒重试Turnstile 正在检查用户环境": "Please retry in a few seconds, Turnstile is checking user environment",
"验证码发送成功,请检查你的邮箱": "Verification code sent successfully, please check your email",
"新用户注册": "New User Registration",
"输入用户名,最长 12 位": "Enter username, up to 12 characters",
"输入密码,最短 8 位,最长 20 位": "Enter password, at least 8 characters and up to 20 characters",
"输入验证码": "Enter Verification Code",
"已有账户": "Already have an account",
"点击登录": "Click to log in",
"服务器地址": "Server Address",
"更新服务器地址": "Update Server Address",
"配置登录注册": "Configure Login/Registration",
"允许通过密码进行登录": "Allow login via password",
"允许通过密码进行注册": "Allow registration via password",
"通过密码注册时需要进行邮箱验证": "Email verification is required when registering via password",
"允许通过 GitHub 账户登录 & 注册": "Allow login & registration via GitHub account",
"允许通过微信登录 & 注册": "Allow login & registration via WeChat",
"允许新用户注册(此项为否时,新用户将无法以任何方式进行注册": "Allow new user registration (if this option is off, new users will not be able to register in any way",
"启用 Turnstile 用户校验": "Enable Turnstile user verification",
"配置 SMTP": "Configure SMTP",
"用以支持系统的邮件发送": "To support the system email sending",
"SMTP 服务器地址": "SMTP Server Address",
"例如smtp.qq.com": "For example: smtp.qq.com",
"SMTP 端口": "SMTP Port",
"默认: 587": "Default: 587",
"SMTP 账户": "SMTP Account",
"通常是邮箱地址": "Usually an email address",
"发送者邮箱": "Sender email",
"通常和邮箱地址保持一致": "Usually consistent with the email address",
"SMTP 访问凭证": "SMTP Access Credential",
"敏感信息不会发送到前端显示": "Sensitive information will not be displayed in the frontend",
"保存 SMTP 设置": "Save SMTP Settings",
"配置 GitHub OAuth App": "Configure GitHub OAuth App",
"用以支持通过 GitHub 进行登录注册": "To support login & registration via GitHub",
"点击此处": "Click here",
"管理你的 GitHub OAuth App": "Manage your GitHub OAuth App",
"输入你注册的 GitHub OAuth APP 的 ID": "Enter your registered GitHub OAuth APP ID",
"保存 GitHub OAuth 设置": "Save GitHub OAuth Settings",
"配置 WeChat Server": "Configure WeChat Server",
"用以支持通过微信进行登录注册": "To support login & registration via WeChat",
"了解 WeChat Server": "Learn about WeChat Server",
"WeChat Server 访问凭证": "WeChat Server Access Credential",
"微信公众号二维码图片链接": "WeChat Public Account QR Code Image Link",
"输入一个图片链接": "Enter an image link",
"保存 WeChat Server 设置": "Save WeChat Server Settings",
"配置 Turnstile": "Configure Turnstile",
"用以支持用户校验": "To support user verification",
"管理你的 Turnstile Sites推荐选择 Invisible Widget Type": "Manage your Turnstile Sites, recommend selecting Invisible Widget Type",
"输入你注册的 Turnstile Site Key": "Enter your registered Turnstile Site Key",
"保存 Turnstile 设置": "Save Turnstile Settings",
"已过期": "Expired",
"已耗尽": "Exhausted",
"搜索令牌的名称 ...": "Search for the name of the token...",
"已用额度": "Quota used",
"剩余额度": "Remaining quota",
"过期时间": "Expiration time",
"无": "None",
"无限制": "Unlimited",
"永不过期": "Never expires",
"无法复制到剪贴板,请手动复制,已将令牌填入搜索框": "Unable to copy to clipboard, please copy manually, the token has been entered into the search box",
"删除令牌": "Delete Token",
"添加新的令牌": "Add New Token",
"普通用户": "Regular User",
"管理员": "Admin",
"超级管理员": "Super Admin",
"未知身份": "Unknown Identity",
"已激活": "Activated",
"已封禁": "Banned",
"搜索用户的 ID用户名显示名称以及邮箱地址 ...": "Search user ID, username, display name, and email address...",
"用户名": "Username",
"统计信息": "Statistics",
"用户角色": "User Role",
"未绑定邮箱地址": "Email not bound",
"请求次数": "Number of Requests",
"提升": "Promote",
"降级": "Demote",
"删除用户": "Delete User",
"添加新的用户": "Add New User",
"自定义": "Custom",
"等价金额": "Equivalent Amount",
"未登录或登录已过期,请重新登录": "Not logged in or login has expired, please log in again",
"请求次数过多,请稍后再试": "Too many requests, please try again later",
"服务器内部错误,请联系管理员": "Server internal error, please contact the administrator",
"本站仅作演示之用,无服务端": "This site is for demonstration purposes only, no server-side",
"超级管理员未设置充值链接!": "Super administrator has not set the recharge link!",
"错误:": "Error: ",
"新版本可用:${data.version},请使用快捷键 Shift + F5 刷新页面": "New version available: ${data.version}, please refresh the page using shortcut Shift + F5",
"无法正常连接至服务器": "Unable to connect to the server normally",
"管理渠道": "Manage Channels",
"系统状况": "System Status",
"系统信息": "System Information",
"系统信息总览": "System Information Overview",
"版本": "Version",
"源码": "Source Code",
"启动时间": "Startup Time",
"系统配置": "System Configuration",
"系统配置总览": "System Configuration Overview",
"邮箱验证": "Email Verification",
"未启用": "Not Enabled",
"GitHub 身份验证": "GitHub Authentication",
"微信身份验证": "WeChat Authentication",
"Turnstile 用户校验": "Turnstile User Verification",
"创建新的渠道": "Create New Channel",
"镜像": "Mirror",
"请输入镜像站地址格式为https://domain.com可不填不填则使用渠道默认值": "Please enter the mirror site address, the format is: https://domain.com, it can be left blank, if left blank, the default value of the channel will be used",
"模型": "Model",
"请选择该渠道所支持的模型": "Please select the model supported by the channel",
"填入基础模型": "Fill in the basic model",
"填入所有模型": "Fill in all models",
"清除所有模型": "Clear all models",
"密钥": "Key",
"请输入密钥": "Please enter the key",
"批量创建": "Batch Create",
"更新渠道信息": "Update Channel Information",
"我的令牌": "My Tokens",
"管理兑换码": "Manage Redeem Codes",
"兑换码": "Redeem Code",
"管理用户": "Manage Users",
"额度明细": "Quota Details",
"个人设置": "Personal Settings",
"运营设置": "Operation Settings",
"系统设置": "System Settings",
"其他设置": "Other Settings",
"项目仓库地址": "Project Repository Address",
"可在设置页面设置关于内容,支持 HTML & Markdown": "You can set the content about in the settings page, support HTML & Markdown",
"由{' '}": "built by{' '}",
"构建,源代码遵循{' '}": ", the source code licensed under{' '}",
"MIT 协议": "MIT License",
"充值额度": "Recharge Quota",
"获取兑换码": "Get Redeem Code",
"一个月后过期": "Expires after one month",
"一天后过期": "Expires after one day",
"一小时后过期": "Expires after one hour",
"一分钟后过期": "Expires after one minute",
"创建新的令牌": "Create New Token",
"注意,令牌的额度仅用于限制令牌本身的最大额度使用量,实际的使用受到账户的剩余额度限制。": "Note that the quota of the token is only used to limit the maximum quota usage of the token itself, and the actual usage is limited by the remaining quota of the account.",
"设为无限额度": "Set to unlimited quota",
"更新令牌信息": "Update Token Information",
"请输入充值码!": "Please enter the recharge code!",
"请输入名称": "Please enter a name",
"请输入密钥,一行一个": "Please enter the key, one per line",
"请输入额度": "Please enter the quota",
"令牌创建成功": "Token created successfully",
"令牌更新成功": "Token updated successfully",
"充值成功!": "Recharge successful!",
"更新用户信息": "Update User Information",
"请输入新的用户名": "Please enter a new username",
"密码": "Password",
"请输入新的密码": "Please enter a new password",
"显示名称": "Display Name",
"请输入新的显示名称": "Please enter a new display name",
"已绑定的 GitHub 账户": "GitHub Account Bound",
"此项只读,需要用户通过个人设置页面的相关绑定按钮进行绑定,不可直接修改": "This item is read-only. Users need to bind through the relevant binding button on the personal settings page, and cannot be modified directly",
"已绑定的微信账户": "WeChat Account Bound",
"已绑定的邮箱账户": "Email Account Bound",
"用户信息更新成功!": "User information updated successfully!",
"模型倍率 %.2f,分组倍率 %.2f": "model rate %.2f, group rate %.2f",
"模型倍率 %.2f,分组倍率 %.2f,补全倍率 %.2f": "model rate %.2f, group rate %.2f, completion rate %.2f",
"使用明细(总消耗额度:{renderQuota(stat.quota)}": "Usage Details (Total Consumption Quota: {renderQuota(stat.quota)})",
"用户名称": "User Name",
"令牌名称": "Token Name",
"默认令牌": "Default Token",
"留空则查询全部用户": "Leave blank to query all users",
"留空则查询全部令牌": "Leave blank to query all tokens",
"模型名称": "Model Name",
"留空则查询全部模型": "Leave blank to query all models",
"起始时间": "Start Time",
"结束时间": "End Time",
"查询": "Query",
"提示": "Prompt",
"补全": "Completion",
"消耗额度": "Used Quota",
"可选值": "Optional Values",
"渠道不存在:%d": "Channel does not exist: %d",
"数据库一致性已被破坏,请联系管理员": "Database consistency has been broken, please contact the administrator",
"使用近似的方式估算 token 数以减少计算量": "Estimate the number of tokens in an approximate way to reduce computational load",
"请填写ChannelName和ChannelKey": "Please fill in the ChannelName and ChannelKey!",
"请至少选择一个Model": "Please select at least one Model!",
"加载首页内容失败": "Failed to load the homepage content",
"加载关于内容失败": "Failed to load the About content",
"兑换码更新成功!": "Redemption code updated successfully!",
"兑换码创建成功!": "Redemption code created successfully!",
"用户账户创建成功!": "User account created successfully!",
"生成数量": "Generate quantity",
"请输入生成数量": "Please enter the quantity to generate",
"创建新用户账户": "Create new user account",
"渠道更新成功!": "Channel updated successfully!",
"渠道创建成功!": "Channel created successfully!",
"请选择分组": "Please select a group",
"更新兑换码信息": "Update redemption code information",
"创建新的兑换码": "Create a new redemption code",
"请在系统设置页面编辑分组倍率以添加新的分组:": "Please edit the group ratio in the system settings page to add a new group:",
"未找到所请求的页面": "The requested page was not found",
"过期时间格式错误!": "Expiration time format error!",
"请输入过期时间,格式为 yyyy-MM-dd HH:mm:ss-1 表示无限制": "Please enter the expiration time, the format is yyyy-MM-dd HH:mm:ss, -1 means no limit",
"此项可选,为一个 JSON 文本,键为用户请求的模型名称,值为要替换的模型名称,例如:": "This is optional, it's a JSON text, the key is the model name requested by the user, and the value is the model name to be replaced, for example:",
"此项可选,输入镜像站地址,格式为:": "This is optional, enter the mirror site address, the format is:",
"模型映射": "Model mapping",
"请输入默认 API 版本例如2023-03-15-preview该配置可以被实际的请求查询参数所覆盖": "Please enter the default API version, for example: 2023-03-15-preview, this configuration can be overridden by the actual request query parameters",
"默认": "Default",
"图片演示": "Image demo",
"参数替换为你的部署名称(模型名称中的点会被剔除)": "Replace the parameter with your deployment name (dots in the model name will be removed)",
"模型映射必须是合法的 JSON 格式!": "Model mapping must be in valid JSON format!",
"取消无限额度": "Cancel unlimited quota",
"取消": "Cancel",
"请输入新的剩余额度": "Please enter the new remaining quota",
"请输入单个兑换码中包含的额度": "Please enter the quota included in a single redemption code",
"请输入用户名": "Please enter username",
"请输入显示名称": "Please enter display name",
"请输入密码": "Please enter password",
"模型部署名称必须和模型名称保持一致": "The model deployment name must be consistent with the model name",
",因为 One API 会把请求体中的 model": ", because One API will take the model in the request body",
"请输入 AZURE_OPENAI_ENDPOINT": "Please enter AZURE_OPENAI_ENDPOINT",
"请输入自定义渠道的 Base URL": "Please enter the Base URL of the custom channel",
"Homepage URL 填": "Fill in the Homepage URL",
"Authorization callback URL 填": "Fill in the Authorization callback URL",
"请为渠道命名": "Please name the channel",
"此项可选,用于修改请求体中的模型名称,为一个 JSON 字符串,键为请求中模型名称,值为要替换的模型名称,例如:": "This is optional, used to modify the model name in the request body, it's a JSON string, the key is the model name in the request, and the value is the model name to be replaced, for example:",
"模型重定向": "Model redirection",
"请输入渠道对应的鉴权密钥": "Please enter the authentication key corresponding to the channel",
"注意,": "Note that, ",
",图片演示。": "related image demo.",
"令牌创建成功,请在列表页面点击复制获取令牌!": "Token created successfully, please click copy on the list page to get the token!",
"代理": "Proxy",
"此项可选,用于通过代理站来进行 API 调用请输入代理站地址格式为https://domain.com": "This is optional, used to make API calls through the proxy site, please enter the proxy site address, the format is: https://domain.com",
"取消密码登录将导致所有未绑定其他登录方式的用户(包括管理员)无法通过密码登录,确认取消?": "Canceling password login will cause all users (including administrators) who have not bound other login methods to be unable to log in via password, confirm cancel?",
"按照如下格式输入:": "Enter in the following format:",
"模型版本": "Model version",
"请输入星火大模型版本注意是接口地址中的版本号例如v2.1": "Please enter the version of the Starfire model, note that it is the version number in the interface address, for example: v2.1",
"点击查看": "click to view",
"请确保已在 Azure 上创建了 gpt-35-turbo 模型,并且 apiVersion 已正确填写!": "Please make sure that the gpt-35-turbo model has been created on Azure, and the apiVersion has been filled in correctly!",
"处理中...": "Processing...",
"绑定成功!": "Binding successful!",
"登录成功!": "Login successful!",
"操作失败,重定向至登录界面中...": "Operation failed, redirecting to login screen...",
"出现错误,第 ${count} 次重试中...": "An error occurred, retrying ${count}...",
"首页": "Home",
"渠道": "Channel",
"令牌": "API Keys",
"兑换": "Redeem",
"充值": "Recharge",
"用户": "Users",
"日志": "Logs",
"设置": "Settings",
"关于": "About",
"聊天": "Chat",
"注销成功!": "Logout successful!",
"注销": "Log out",
"登录": "Log in",
"注册": "Sign up",
"加载{name}中...": "Loading {name}...",
"未登录或登录已过期,请重新登录!": "Not logged in or login has expired, please log in again!",
"请立刻修改默认密码!": "Please change the default password immediately!",
"欢迎回来": "Welcome back",
"没有账户?": "No account?",
"立刻注册": "Sign up now",
"用户名": "Username",
"密码": "Password",
"正在登录……": "Logging in...",
"忘记密码": "Forgot password",
"其他方式": "Other methods",
"微信扫码关注公众号,输入「验证码」获取验证码(三分钟内有效)": "Scan the QR code with WeChat, follow the official account and enter 'verification code' to get the verification code (valid within three minutes)",
"验证码": "Verification code",
"全部用户": "All users",
"当前用户": "Current user",
"全部": "All",
"消费": "Consumption",
"管理": "Management",
"系统": "System",
"未知": "Unknown",
"其他模型": "Other models",
"复制成功": "Copy successful",
"使用明细": "Usages",
"刷新": "Refresh",
"收起面板": "Collapse panel",
"展开面板": "Expand panel",
"显示查询选项": "Show search options",
"隐藏查询选项": "Hide search options",
"用户名称": "User name",
"可选值": "Optional values",
"渠道 ID": "Channel ID",
"令牌名称": "Key name",
"模型名称": "Model name",
"起始时间": "Start time",
"结束时间": "End time",
"查询": "Query",
"隐藏条形图": "Hide bar chart",
"显示条形图": "Show bar chart",
"折线条形图只展示最新50条数据": "Line and bar charts only show the latest 50 pieces of data",
"总消耗": "Total consumption",
"总共调用了 {payload[0].value} 次": "A total of {payload[0].value} calls were made",
"{model.name}: {model.value} 次": "{model.name}: {model.value} times",
"总共调用了 {payload[0].value} 次 {payload[0].name}": "A total of {payload[0].value} {payload[0].name} calls were made",
"总消耗额度": "Total consumption limit",
"暂无数据": "No data available",
"更多数据统计图形即将到来,敬请期待!": "More data statistics graphics are coming soon, stay tuned!",
"复制用户名": "Copy username",
"{`共 ${counts} 条数据`}": "{`A total of ${counts} pieces of data`}",
"共 0 条数据": "A total of 0 pieces of data",
"选择明细分类": "Select detail category",
"模型倍率": "model rate",
"分组倍率": "group rate",
"新密码已复制到剪贴板:": "New password has been copied to the clipboard:",
"密码重置确认": "Password reset confirmation",
"邮箱地址": "Email address",
"新密码": "New password",
"密码已复制到剪贴板:": "Password has been copied to the clipboard:",
"密码重置完成": "Password reset complete",
"提交": "Submit",
"返回登录": "Return to login",
"请稍后重试,浏览器环境检查未通过": "Please try again later, browser environment check failed",
"重置邮件发送成功,请检查邮箱!": "Reset email sent successfully, please check your email!",
"密码重置": "Password reset",
"重试": "Retry",
"组": "Group",
"令牌已重置并已复制到剪贴板": "Token has been reset and copied to the clipboard",
"邀请链接已复制到剪切板": "Invitation link has been copied to the clipboard",
"系统令牌已复制到剪切板": "System token has been copied to the clipboard",
"请输入你的账户名以确认删除!": "Please enter your account name to confirm deletion!",
"账户已删除!": "Account has been deleted!",
"微信账户绑定成功!": "WeChat account binding successful!",
"请稍后几秒重试Turnstile 正在检查用户环境!": "Please try again in a few seconds, Turnstile is checking the user environment!",
"验证码发送成功,请检查邮箱!": "Verification code sent successfully, please check your email!",
"邮箱账户绑定成功!": "Email account binding successful!",
"个人信息": "Personal information",
"编辑个人信息": "Edit personal information",
"生成系统访问令牌": "Generate system access token",
"复制邀请链接": "Copy invitation link",
"删除个人帐户": "Delete personal account",
"普通用户": "Regular user",
"管理员": "Administrator",
"超级管理员": "Super administrator",
"显示名称": "Display name",
"GitHub 账号": "GitHub account",
"微信账号": "WeChat account",
"修改个人信息只允许在电脑端进行。生成的令牌用于系统管理,而非用于请求 OpenAI 相关的服务,请知悉。": "Modifying personal information is only allowed on a computer. The generated token is for system management, not for requesting OpenAI related services. Please be aware.",
"可用模型": "Available models",
"账号绑定": "Account binding",
"绑定微信": "Bind WeChat",
"绑定 GitHub": "Bind GitHub",
"绑定邮箱": "Bind Email",
"绑定": "Bind",
"绑定邮箱地址": "Bind email address",
"输入邮箱地址": "Enter email address",
"重新发送": "Resend",
"获取验证码": "Get verification code",
"确认绑定": "Confirm binding",
"取消": "Cancel",
"危险操作": "Dangerous operation",
"您正在删除自己的帐户,将清空所有数据且不可恢复": "You are deleting your own account, all data will be cleared and cannot be recovered",
"输入你的账户名": "Enter your account name",
"以确认删除": "To confirm deletion",
"确认删除": "Confirm deletion",
"未使用": "Not used",
"已禁用": "Disabled",
"已使用": "Used",
"未知状态": "Unknown status",
"操作成功完成!": "Operation successfully completed!",
"搜索兑换码的 ID 和名称 ...": "Search for the ID and name of the redemption code ...",
"名称": "Name",
"状态": "Status",
"额度": "Quota",
"创建时间": "Creation time",
"兑换时间": "Redemption time",
"操作": "Operation",
"尚未兑换": "Not yet redeemed",
"已复制到剪贴板!": "Copied to clipboard!",
"无法复制到剪贴板,请手动复制,已将兑换码填入搜索框。": "Unable to copy to clipboard, please copy manually. The redemption code has been filled in the search box.",
"复制": "Copy",
"删除": "Delete",
"禁用": "Disable",
"启用": "Enable",
"编辑": "Edit",
"添加新的兑换码": "Add new redemption code",
"密码长度不得小于 8 位!": "Password length must not be less than 8 characters!",
"两次输入的密码不一致": "The two passwords entered do not match",
"注册成功!": "Registration successful!",
"请填写注册邮箱!": "Please fill in the registration email!",
"请在${verificationTimeout}秒后再试": "Please try again after ${verificationTimeout} seconds",
"验证码发送成功,请检查你的邮箱!": "Verification code sent successfully, please check your email!",
"已有账户?": "Already have an account?",
"请输入用户名(最长 12 位)": "Please enter a username (up to 12 characters)",
"请输入密码(最短 8 位,最长 20 位)": "Please enter a password (minimum 8 characters, maximum 20 characters)",
"请再次输入密码": "Please enter the password again",
"请输入邮箱地址": "Please enter an email address",
"秒后可重发": "Can be resent after seconds",
"请输入邮箱验证码": "Please enter the email verification code",
"已过期": "Expired",
"已启用": "Enabled",
"已耗尽": "Exhausted",
"无": "None",
"令牌密钥": "API Key",
"令牌状态": "Key status",
"已用额度": "Used quota",
"剩余额度": "Remaining quota",
"过期时间": "Expiration time",
"你确定要删除这个令牌吗?": "Are you sure you want to delete this key?",
"无法复制到剪贴板,请手动复制,已将令牌密钥填入搜索框": "Unable to copy to clipboard, please copy manually. The key key has been filled in the search box.",
"无限制": "Unlimited",
"永不过期": "Never expires",
"使用 API 访问令牌进行服务鉴权和计费。": "Use API Key for service authentication and billing.",
"API 访问令牌关系到您的个人利益,请妥善留存,不要与其他人共享,也不要保存在客户端代码中。": "API Key is related to your personal interests. Please keep it properly. Do not share it with others or save it in client code.",
"创建令牌": "Create Key",
"什么都还没有,快去创建一个令牌开始使用吧!": "Nothing yet, go create a key to start using!",
"你确定要删除该令牌吗": "Are you sure you want to delete this key",
"导出令牌信息": "Export key information",
"错误:未登录或登录已过期,请重新登录!": "Error: Not logged in or login has expired, please log in again!",
"错误:请求次数过多,请稍后再试!": "Error: Too many requests, please try again later!",
"错误:服务器内部错误,请联系管理员!": "Error: Server internal error, please contact the online customer service!",
"本站仅作演示之用,无服务端!": "This site is for demonstration purposes only, no server!",
"错误:": "Error:",
"加载首页内容失败...": "Failed to load homepage content...",
"系统状况": "System status",
"系统信息": "System information",
"系统信息总览": "System information overview",
"名称:": "Name:",
"版本:": "Version:",
"源码:": "Source code:",
"启动时间:": "Startup time:",
"系统配置": "System configuration",
"系统配置总览": "System configuration overview",
"邮箱验证:": "Email verification:",
"未启用": "Not enabled",
"Turnstile 用户校验:": "Turnstile user verification:",
"页面不存在": "Page does not exist",
"请检查你的浏览器地址是否正确": "Please check if your browser address is correct",
"个人设置": "Personal settings",
"运营设置": "Operations settings",
"系统设置": "System settings",
"其他设置": "Other settings",
"默认令牌": "Default key",
"过期时间必须在当前时间之后!": "Expiration time must be after the current time!",
"额度必须大于等于 0": "Quota must be greater than or equal to 0!",
"过期时间格式错误!": "Expiration time format error!",
"创建令牌数量必须大于等于 1": "The number of keys to create must be greater than or equal to 1!",
"令牌修改成功": "API Key modification successful",
"令牌创建成功": "API Key creation successful",
"更新令牌信息": "Update key information",
"创建新的令牌": "Create a new key",
"请输入名称": "Please enter a name",
"请输入过期时间,格式为 yyyy-MM-dd HH:mm:ss-1 表示无限制": "Please enter the expiration time, the format is yyyy-MM-dd HH:mm:ss, -1 means unlimited",
"无限额度": "Unlimited quota",
"注意:启用无限额度后,已用额度将不再进行计算。": "Note: After enabling unlimited quota, the used quota will no longer be calculated.",
"等于": "Equals",
"请输入额度单位token": "Please enter the quota (unit: token)",
"创建令牌数量": "Create key quantity",
"请输入令牌数量": "Please enter the number of keys",
"注意:令牌的额度仅用于限制令牌本身的最大额度使用量,实际的使用受到账户的剩余额度限制。": "Note: The quota of the key is only used to limit the maximum quota usage of the key itself, and the actual usage is subject to the remaining quota of the account.",
"我的令牌": "My keys",
"请输入额度兑换码!": "Please enter the redeem code!",
"充值成功!": "Recharge successful!",
"请求失败": "Request failed",
"超级管理员未设置充值链接!": "The super administrator did not set a recharge link!",
"充值额度": "Recharge quota",
"兑换中...": "Redeeming...",
"请点击充值以获取额度兑换码。": "Please click recharge to get the quota redemption code.",
"用户信息更新成功!": "User information updated successfully!",
"更新用户信息": "Update user information",
"请输入新的用户名": "Please enter a new username",
"请输入新的密码,最短 8 位": "Please enter a new password, at least 8 characters",
"请输入新的显示名称": "Please enter a new display name",
"分组": "Group",
"请选择分组": "Please select a group",
"请在系统设置页面编辑分组倍率以添加新的分组:": "Please edit the group rate on the system settings page to add a new group:",
"请输入新的剩余额度": "Please enter a new remaining quota",
"已绑定的 GitHub 账户": "Bound GitHub account",
"此项只读,需要用户通过个人设置页面的相关绑定按钮进行绑定,不可直接修改": "This item is read-only, users need to bind through the relevant binding button on the personal settings page, cannot be directly modified",
"已绑定的微信账户": "Bound WeChat account",
"已绑定的邮箱账户": "Bound email account",
"新版本可用:${data.version},请使用快捷键 Shift + F5 刷新页面": "New version available: ${data.version}, please refresh the page using the shortcut key Shift + F5",
"无法正常连接至服务器!": "Unable to connect to the server normally!",
"提示:": "Input:",
"补全:": "Output:",
"搜索令牌名称": "Search key name",
"测试所有渠道": "Test all channels",
"更新已启用渠道余额": "Update the balance of enabled channels"
}

View File

@ -1,61 +0,0 @@
import argparse
import json
import os
def list_file_paths(path):
file_paths = []
for root, dirs, files in os.walk(path):
if "node_modules" in dirs:
dirs.remove("node_modules")
if "build" in dirs:
dirs.remove("build")
if "i18n" in dirs:
dirs.remove("i18n")
for file in files:
file_path = os.path.join(root, file)
if file_path.endswith("png") or file_path.endswith("ico") or file_path.endswith("db") or file_path.endswith("exe"):
continue
file_paths.append(file_path)
for dir in dirs:
dir_path = os.path.join(root, dir)
file_paths += list_file_paths(dir_path)
return file_paths
def replace_keys_in_repository(repo_path, json_file_path):
with open(json_file_path, 'r', encoding="utf-8") as json_file:
key_value_pairs = json.load(json_file)
pairs = []
for key, value in key_value_pairs.items():
pairs.append((key, value))
pairs.sort(key=lambda x: len(x[0]), reverse=True)
files = list_file_paths(repo_path)
print('Total files: {}'.format(len(files)))
for file_path in files:
replace_keys_in_file(file_path, pairs)
def replace_keys_in_file(file_path, pairs):
try:
with open(file_path, 'r', encoding="utf-8") as file:
content = file.read()
for key, value in pairs:
content = content.replace(key, value)
with open(file_path, 'w', encoding="utf-8") as file:
file.write(content)
except UnicodeDecodeError:
print('UnicodeDecodeError: {}'.format(file_path))
if __name__ == "__main__":
parser = argparse.ArgumentParser(description='Replace keys in repository.')
parser.add_argument('--repository_path', help='Path to repository')
parser.add_argument('--json_file_path', help='Path to JSON file')
args = parser.parse_args()
replace_keys_in_repository(args.repository_path, args.json_file_path)

13
main.go
View File

@ -3,21 +3,24 @@ package main
import (
"embed"
"fmt"
"os"
"strconv"
"github.com/gin-contrib/sessions"
"github.com/gin-contrib/sessions/cookie"
"github.com/gin-gonic/gin"
_ "github.com/joho/godotenv/autoload"
"github.com/songquanpeng/one-api/common"
"github.com/songquanpeng/one-api/common/client"
"github.com/songquanpeng/one-api/common/config"
"github.com/songquanpeng/one-api/common/i18n"
"github.com/songquanpeng/one-api/common/logger"
"github.com/songquanpeng/one-api/controller"
"github.com/songquanpeng/one-api/middleware"
"github.com/songquanpeng/one-api/model"
"github.com/songquanpeng/one-api/relay/adaptor/openai"
"github.com/songquanpeng/one-api/router"
"os"
"strconv"
)
//go:embed web/build/*
@ -91,12 +94,18 @@ func main() {
openai.InitTokenEncoders()
client.Init()
// Initialize i18n
if err := i18n.Init(); err != nil {
logger.FatalLog("failed to initialize i18n: " + err.Error())
}
// Initialize HTTP server
server := gin.New()
server.Use(gin.Recovery())
// This will cause SSE not to work!!!
//server.Use(gzip.Gzip(gzip.DefaultCompression))
server.Use(middleware.RequestId())
server.Use(middleware.Language())
middleware.SetUpLogger(server)
// Initialize session store
store := cookie.NewStore([]byte(config.SessionSecret))

View File

@ -2,13 +2,15 @@ package middleware
import (
"fmt"
"net/http"
"strconv"
"github.com/gin-gonic/gin"
"github.com/songquanpeng/one-api/common/ctxkey"
"github.com/songquanpeng/one-api/common/logger"
"github.com/songquanpeng/one-api/model"
"github.com/songquanpeng/one-api/relay/channeltype"
"net/http"
"strconv"
)
type ModelRequest struct {
@ -17,6 +19,7 @@ type ModelRequest struct {
func Distribute() func(c *gin.Context) {
return func(c *gin.Context) {
ctx := c.Request.Context()
userId := c.GetInt(ctxkey.Id)
userGroup, _ := model.CacheGetUserGroup(userId)
c.Set(ctxkey.Group, userGroup)
@ -52,6 +55,7 @@ func Distribute() func(c *gin.Context) {
return
}
}
logger.Debugf(ctx, "user id %d, user group: %s, request model: %s, using channel #%d", userId, userGroup, requestModel, channel.Id)
SetupContextForSelectedChannel(c, channel, requestModel)
c.Next()
}

27
middleware/gzip.go Normal file
View File

@ -0,0 +1,27 @@
package middleware
import (
"compress/gzip"
"github.com/gin-gonic/gin"
"io"
"net/http"
)
func GzipDecodeMiddleware() gin.HandlerFunc {
return func(c *gin.Context) {
if c.GetHeader("Content-Encoding") == "gzip" {
gzipReader, err := gzip.NewReader(c.Request.Body)
if err != nil {
c.AbortWithStatus(http.StatusBadRequest)
return
}
defer gzipReader.Close()
// Replace the request body with the decompressed data
c.Request.Body = io.NopCloser(gzipReader)
}
// Continue processing the request
c.Next()
}
}
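A minimal sketch of how this decoder could be mounted. The registration point and the route group below are assumptions for illustration; this diff does not show where GzipDecodeMiddleware is actually attached.
package router
import (
"github.com/gin-gonic/gin"
"github.com/songquanpeng/one-api/middleware"
)
// Hypothetical wiring: decompress gzip-encoded request bodies before the relay handlers run.
func setRelayRouterSketch(server *gin.Engine) {
relayGroup := server.Group("/v1")
relayGroup.Use(middleware.GzipDecodeMiddleware())
// relayGroup.POST("/chat/completions", controller.Relay) // actual handlers omitted
}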

25
middleware/language.go Normal file
View File

@ -0,0 +1,25 @@
package middleware
import (
"strings"
"github.com/gin-gonic/gin"
"github.com/songquanpeng/one-api/common/i18n"
)
func Language() gin.HandlerFunc {
return func(c *gin.Context) {
lang := c.GetHeader("Accept-Language")
if lang == "" {
lang = "en"
}
if strings.HasPrefix(strings.ToLower(lang), "zh") {
lang = "zh-CN"
} else {
lang = "en"
}
c.Set(i18n.ContextKey, lang)
c.Next()
}
}
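The mapping is deliberately coarse: any Accept-Language value starting with "zh" resolves to zh-CN, and everything else, including a missing header, falls back to en. A small table-test sketch of that behaviour, assuming i18n.ContextKey is a plain string key:
package middleware
import (
"net/http/httptest"
"testing"
"github.com/gin-gonic/gin"
"github.com/songquanpeng/one-api/common/i18n"
)
func TestLanguageHeaderMapping(t *testing.T) {
cases := map[string]string{"zh-CN": "zh-CN", "zh-TW": "zh-CN", "en-US": "en", "": "en"}
for header, want := range cases {
c, _ := gin.CreateTestContext(httptest.NewRecorder())
c.Request = httptest.NewRequest("GET", "/", nil)
if header != "" {
c.Request.Header.Set("Accept-Language", header)
}
Language()(c) // run the middleware under test
if got := c.GetString(i18n.ContextKey); got != want {
t.Errorf("header %q: got %q, want %q", header, got, want)
}
}
}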

View File

@ -7,6 +7,7 @@ import (
"time"
"github.com/gin-gonic/gin"
"github.com/songquanpeng/one-api/common"
"github.com/songquanpeng/one-api/common/config"
)
@ -71,7 +72,7 @@ func memoryRateLimiter(c *gin.Context, maxRequestNum int, duration int64, mark s
}
func rateLimitFactory(maxRequestNum int, duration int64, mark string) func(c *gin.Context) {
if maxRequestNum == 0 {
if maxRequestNum == 0 || config.DebugEnabled {
return func(c *gin.Context) {
c.Next()
}

View File

@ -1,8 +1,8 @@
package middleware
import (
"context"
"github.com/gin-gonic/gin"
"github.com/songquanpeng/one-api/common/helper"
)
@ -10,7 +10,7 @@ func RequestId() func(c *gin.Context) {
return func(c *gin.Context) {
id := helper.GenRequestID()
c.Set(helper.RequestIdKey, id)
ctx := context.WithValue(c.Request.Context(), helper.RequestIdKey, id)
ctx := helper.SetRequestID(c.Request.Context(), id)
c.Request = c.Request.WithContext(ctx)
c.Header(helper.RequestIdKey, id)
c.Next()

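The raw context.WithValue call is replaced by a helper pair. A plausible sketch of those helpers, for orientation only; the real code in common/helper (and the key's value) may differ:
package helper
import "context"
const RequestIdKey = "X-Oneapi-Request-Id" // assumed value; only the name appears in this diff
// SetRequestID stores the request id on the context so downstream code (e.g. model.recordLogHelper) can read it back.
func SetRequestID(ctx context.Context, id string) context.Context {
return context.WithValue(ctx, RequestIdKey, id)
}
// GetRequestID returns the stored request id, or an empty string when none was set.
func GetRequestID(ctx context.Context) string {
if id, ok := ctx.Value(RequestIdKey).(string); ok {
return id
}
return ""
}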
View File

@ -2,10 +2,13 @@ package model
import (
"context"
"github.com/songquanpeng/one-api/common"
"gorm.io/gorm"
"sort"
"strings"
"gorm.io/gorm"
"github.com/songquanpeng/one-api/common"
"github.com/songquanpeng/one-api/common/utils"
)
type Ability struct {
@ -49,6 +52,7 @@ func GetRandomSatisfiedChannel(group string, model string, ignoreFirstPriority b
func (channel *Channel) AddAbilities() error {
models_ := strings.Split(channel.Models, ",")
models_ = utils.DeDuplication(models_)
groups_ := strings.Split(channel.Group, ",")
abilities := make([]Ability, 0, len(models_))
for _, model := range models_ {

View File

@ -4,26 +4,31 @@ import (
"context"
"fmt"
"gorm.io/gorm"
"github.com/songquanpeng/one-api/common"
"github.com/songquanpeng/one-api/common/config"
"github.com/songquanpeng/one-api/common/helper"
"github.com/songquanpeng/one-api/common/logger"
"gorm.io/gorm"
)
type Log struct {
Id int `json:"id"`
UserId int `json:"user_id" gorm:"index"`
CreatedAt int64 `json:"created_at" gorm:"bigint;index:idx_created_at_type"`
Type int `json:"type" gorm:"index:idx_created_at_type"`
Content string `json:"content"`
Username string `json:"username" gorm:"index:index_username_model_name,priority:2;default:''"`
TokenName string `json:"token_name" gorm:"index;default:''"`
ModelName string `json:"model_name" gorm:"index;index:index_username_model_name,priority:1;default:''"`
Quota int `json:"quota" gorm:"default:0"`
PromptTokens int `json:"prompt_tokens" gorm:"default:0"`
CompletionTokens int `json:"completion_tokens" gorm:"default:0"`
ChannelId int `json:"channel" gorm:"index"`
Id int `json:"id"`
UserId int `json:"user_id" gorm:"index"`
CreatedAt int64 `json:"created_at" gorm:"bigint;index:idx_created_at_type"`
Type int `json:"type" gorm:"index:idx_created_at_type"`
Content string `json:"content"`
Username string `json:"username" gorm:"index:index_username_model_name,priority:2;default:''"`
TokenName string `json:"token_name" gorm:"index;default:''"`
ModelName string `json:"model_name" gorm:"index;index:index_username_model_name,priority:1;default:''"`
Quota int `json:"quota" gorm:"default:0"`
PromptTokens int `json:"prompt_tokens" gorm:"default:0"`
CompletionTokens int `json:"completion_tokens" gorm:"default:0"`
ChannelId int `json:"channel" gorm:"index"`
RequestId string `json:"request_id" gorm:"default:''"`
ElapsedTime int64 `json:"elapsed_time" gorm:"default:0"` // unit is ms
IsStream bool `json:"is_stream" gorm:"default:false"`
SystemPromptReset bool `json:"system_prompt_reset" gorm:"default:false"`
}
const (
@ -32,9 +37,21 @@ const (
LogTypeConsume
LogTypeManage
LogTypeSystem
LogTypeTest
)
func RecordLog(userId int, logType int, content string) {
func recordLogHelper(ctx context.Context, log *Log) {
requestId := helper.GetRequestID(ctx)
log.RequestId = requestId
err := LOG_DB.Create(log).Error
if err != nil {
logger.Error(ctx, "failed to record log: "+err.Error())
return
}
logger.Infof(ctx, "record log: %+v", log)
}
func RecordLog(ctx context.Context, userId int, logType int, content string) {
if logType == LogTypeConsume && !config.LogConsumeEnabled {
return
}
@ -45,13 +62,10 @@ func RecordLog(userId int, logType int, content string) {
Type: logType,
Content: content,
}
err := LOG_DB.Create(log).Error
if err != nil {
logger.SysError("failed to record log: " + err.Error())
}
recordLogHelper(ctx, log)
}
func RecordTopupLog(userId int, content string, quota int) {
func RecordTopupLog(ctx context.Context, userId int, content string, quota int) {
log := &Log{
UserId: userId,
Username: GetUsernameById(userId),
@ -60,34 +74,23 @@ func RecordTopupLog(userId int, content string, quota int) {
Content: content,
Quota: quota,
}
err := LOG_DB.Create(log).Error
if err != nil {
logger.SysError("failed to record log: " + err.Error())
}
recordLogHelper(ctx, log)
}
func RecordConsumeLog(ctx context.Context, userId int, channelId int, promptTokens int, completionTokens int, modelName string, tokenName string, quota int64, content string) {
logger.Info(ctx, fmt.Sprintf("record consume log: userId=%d, channelId=%d, promptTokens=%d, completionTokens=%d, modelName=%s, tokenName=%s, quota=%d, content=%s", userId, channelId, promptTokens, completionTokens, modelName, tokenName, quota, content))
func RecordConsumeLog(ctx context.Context, log *Log) {
if !config.LogConsumeEnabled {
return
}
log := &Log{
UserId: userId,
Username: GetUsernameById(userId),
CreatedAt: helper.GetTimestamp(),
Type: LogTypeConsume,
Content: content,
PromptTokens: promptTokens,
CompletionTokens: completionTokens,
TokenName: tokenName,
ModelName: modelName,
Quota: int(quota),
ChannelId: channelId,
}
err := LOG_DB.Create(log).Error
if err != nil {
logger.Error(ctx, "failed to record log: "+err.Error())
}
log.Username = GetUsernameById(log.UserId)
log.CreatedAt = helper.GetTimestamp()
log.Type = LogTypeConsume
recordLogHelper(ctx, log)
}
func RecordTestLog(ctx context.Context, log *Log) {
log.CreatedAt = helper.GetTimestamp()
log.Type = LogTypeTest
recordLogHelper(ctx, log)
}
func GetAllLogs(logType int, startTimestamp int64, endTimestamp int64, modelName string, username string, tokenName string, startIdx int, num int, channel int) (logs []*Log, err error) {

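Every RecordLog variant now takes a context, so recordLogHelper can attach the request id to the row it writes. A sketch of an updated call site; the handler name and log content are illustrative, not taken from this diff:
package controller
import (
"fmt"
"github.com/gin-gonic/gin"
"github.com/songquanpeng/one-api/common"
"github.com/songquanpeng/one-api/model"
)
// adjustQuotaExample is a hypothetical handler helper showing the new signature in use.
func adjustQuotaExample(c *gin.Context, userId int, delta int64) {
ctx := c.Request.Context() // carries the request id set by middleware.RequestId
model.RecordLog(ctx, userId, model.LogTypeManage,
fmt.Sprintf("管理员调整用户额度 %s", common.LogQuota(delta)))
}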
View File

@ -1,11 +1,14 @@
package model
import (
"context"
"errors"
"fmt"
"gorm.io/gorm"
"github.com/songquanpeng/one-api/common"
"github.com/songquanpeng/one-api/common/helper"
"gorm.io/gorm"
)
const (
@ -48,7 +51,7 @@ func GetRedemptionById(id int) (*Redemption, error) {
return &redemption, err
}
func Redeem(key string, userId int) (quota int64, err error) {
func Redeem(ctx context.Context, key string, userId int) (quota int64, err error) {
if key == "" {
return 0, errors.New("未提供兑换码")
}
@ -82,7 +85,7 @@ func Redeem(key string, userId int) (quota int64, err error) {
if err != nil {
return 0, errors.New("兑换失败," + err.Error())
}
RecordLog(userId, LogTypeTopup, fmt.Sprintf("通过兑换码充值 %s", common.LogQuota(redemption.Quota)))
RecordLog(ctx, userId, LogTypeTopup, fmt.Sprintf("通过兑换码充值 %s", common.LogQuota(redemption.Quota)))
return redemption.Quota, nil
}

View File

@ -3,12 +3,14 @@ package model
import (
"errors"
"fmt"
"gorm.io/gorm"
"github.com/songquanpeng/one-api/common"
"github.com/songquanpeng/one-api/common/config"
"github.com/songquanpeng/one-api/common/helper"
"github.com/songquanpeng/one-api/common/logger"
"github.com/songquanpeng/one-api/common/message"
"gorm.io/gorm"
)
const (
@ -238,16 +240,31 @@ func PreConsumeTokenQuota(tokenId int, quota int64) (err error) {
if err != nil {
logger.SysError("failed to fetch user email: " + err.Error())
}
prompt := "您的额度即将用尽"
prompt := "额度提醒"
var contentText string
if noMoreQuota {
prompt = "您的额度已用尽"
contentText = "您的额度已用尽"
} else {
contentText = "您的额度即将用尽"
}
if email != "" {
topUpLink := fmt.Sprintf("%s/topup", config.ServerAddress)
err = message.SendEmail(prompt, email,
fmt.Sprintf("%s当前剩余额度为 %d为了不影响您的使用请及时充值。<br/>充值链接:<a href='%s'>%s</a>", prompt, userQuota, topUpLink, topUpLink))
content := message.EmailTemplate(
prompt,
fmt.Sprintf(`
<p>您好</p>
<p>%s当前剩余额度为 <strong>%d</strong></p>
<p>为了不影响您的使用请及时充值</p>
<p style="text-align: center; margin: 30px 0;">
<a href="%s" style="background-color: #007bff; color: white; padding: 12px 24px; text-decoration: none; border-radius: 4px; display: inline-block;">立即充值</a>
</p>
<p style="color: #666;">如果按钮无法点击请复制以下链接到浏览器中打开</p>
<p style="background-color: #f8f8f8; padding: 10px; border-radius: 4px; word-break: break-all;">%s</p>
`, contentText, userQuota, topUpLink, topUpLink),
)
err = message.SendEmail(prompt, email, content)
if err != nil {
logger.SysError("failed to send email" + err.Error())
logger.SysError("failed to send email: " + err.Error())
}
}
}()

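The notification paths in this diff build their HTML bodies with message.EmailTemplate(title, content). A plausible signature sketch, for orientation only; the real layout produced in common/message is not shown here and may differ:
package message
import "fmt"
// EmailTemplate (sketch) wraps an HTML fragment in a shared layout.
func EmailTemplate(title string, content string) string {
return fmt.Sprintf(`<div style="font-family: sans-serif;">
<h2>%s</h2>
%s
</div>`, title, content)
}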
View File

@ -1,16 +1,19 @@
package model
import (
"context"
"errors"
"fmt"
"strings"
"gorm.io/gorm"
"github.com/songquanpeng/one-api/common"
"github.com/songquanpeng/one-api/common/blacklist"
"github.com/songquanpeng/one-api/common/config"
"github.com/songquanpeng/one-api/common/helper"
"github.com/songquanpeng/one-api/common/logger"
"github.com/songquanpeng/one-api/common/random"
"gorm.io/gorm"
"strings"
)
const (
@ -92,7 +95,7 @@ func GetUserById(id int, selectAll bool) (*User, error) {
if selectAll {
err = DB.First(&user, "id = ?", id).Error
} else {
err = DB.Omit("password").First(&user, "id = ?", id).Error
err = DB.Omit("password", "access_token").First(&user, "id = ?", id).Error
}
return &user, err
}
@ -114,7 +117,7 @@ func DeleteUserById(id int) (err error) {
return user.Delete()
}
func (user *User) Insert(inviterId int) error {
func (user *User) Insert(ctx context.Context, inviterId int) error {
var err error
if user.Password != "" {
user.Password, err = common.Password2Hash(user.Password)
@ -130,16 +133,16 @@ func (user *User) Insert(inviterId int) error {
return result.Error
}
if config.QuotaForNewUser > 0 {
RecordLog(user.Id, LogTypeSystem, fmt.Sprintf("新用户注册赠送 %s", common.LogQuota(config.QuotaForNewUser)))
RecordLog(ctx, user.Id, LogTypeSystem, fmt.Sprintf("新用户注册赠送 %s", common.LogQuota(config.QuotaForNewUser)))
}
if inviterId != 0 {
if config.QuotaForInvitee > 0 {
_ = IncreaseUserQuota(user.Id, config.QuotaForInvitee)
RecordLog(user.Id, LogTypeSystem, fmt.Sprintf("使用邀请码赠送 %s", common.LogQuota(config.QuotaForInvitee)))
RecordLog(ctx, user.Id, LogTypeSystem, fmt.Sprintf("使用邀请码赠送 %s", common.LogQuota(config.QuotaForInvitee)))
}
if config.QuotaForInviter > 0 {
_ = IncreaseUserQuota(inviterId, config.QuotaForInviter)
RecordLog(inviterId, LogTypeSystem, fmt.Sprintf("邀请用户赠送 %s", common.LogQuota(config.QuotaForInviter)))
RecordLog(ctx, inviterId, LogTypeSystem, fmt.Sprintf("邀请用户赠送 %s", common.LogQuota(config.QuotaForInviter)))
}
}
// create default token

View File

@ -2,6 +2,7 @@ package monitor
import (
"fmt"
"github.com/songquanpeng/one-api/common/config"
"github.com/songquanpeng/one-api/common/logger"
"github.com/songquanpeng/one-api/common/message"
@ -30,17 +31,32 @@ func notifyRootUser(subject string, content string) {
func DisableChannel(channelId int, channelName string, reason string) {
model.UpdateChannelStatusById(channelId, model.ChannelStatusAutoDisabled)
logger.SysLog(fmt.Sprintf("channel #%d has been disabled: %s", channelId, reason))
subject := fmt.Sprintf("渠道「%s」#%d已被禁用", channelName, channelId)
content := fmt.Sprintf("渠道「%s」#%d已被禁用原因%s", channelName, channelId, reason)
subject := fmt.Sprintf("渠道状态变更提醒")
content := message.EmailTemplate(
subject,
fmt.Sprintf(`
<p>您好</p>
<p>渠道<strong>%s</strong>#%d已被禁用</p>
<p>禁用原因</p>
<p style="background-color: #f8f8f8; padding: 10px; border-radius: 4px;">%s</p>
`, channelName, channelId, reason),
)
notifyRootUser(subject, content)
}
func MetricDisableChannel(channelId int, successRate float64) {
model.UpdateChannelStatusById(channelId, model.ChannelStatusAutoDisabled)
logger.SysLog(fmt.Sprintf("channel #%d has been disabled due to low success rate: %.2f", channelId, successRate*100))
subject := fmt.Sprintf("渠道 #%d 已被禁用", channelId)
content := fmt.Sprintf("该渠道(#%d在最近 %d 次调用中成功率为 %.2f%%,低于阈值 %.2f%%,因此被系统自动禁用。",
channelId, config.MetricQueueSize, successRate*100, config.MetricSuccessRateThreshold*100)
subject := fmt.Sprintf("渠道状态变更提醒")
content := message.EmailTemplate(
subject,
fmt.Sprintf(`
<p>您好</p>
<p>渠道 #%d 已被系统自动禁用</p>
<p>禁用原因</p>
<p style="background-color: #f8f8f8; padding: 10px; border-radius: 4px;">该渠道在最近 %d 次调用中成功率为 <strong>%.2f%%</strong>低于系统阈值 <strong>%.2f%%</strong></p>
`, channelId, config.MetricQueueSize, successRate*100, config.MetricSuccessRateThreshold*100),
)
notifyRootUser(subject, content)
}
@ -48,7 +64,14 @@ func MetricDisableChannel(channelId int, successRate float64) {
func EnableChannel(channelId int, channelName string) {
model.UpdateChannelStatusById(channelId, model.ChannelStatusEnabled)
logger.SysLog(fmt.Sprintf("channel #%d has been enabled", channelId))
subject := fmt.Sprintf("渠道「%s」#%d已被启用", channelName, channelId)
content := fmt.Sprintf("渠道「%s」#%d已被启用", channelName, channelId)
subject := fmt.Sprintf("渠道状态变更提醒")
content := message.EmailTemplate(
subject,
fmt.Sprintf(`
<p>您好</p>
<p>渠道<strong>%s</strong>#%d已被重新启用</p>
<p>您现在可以继续使用该渠道了</p>
`, channelName, channelId),
)
notifyRootUser(subject, content)
}

View File

@ -34,7 +34,9 @@ func ShouldDisableChannel(err *model.Error, statusCode int) bool {
strings.Contains(lowerMessage, "credit") ||
strings.Contains(lowerMessage, "balance") ||
strings.Contains(lowerMessage, "permission denied") ||
strings.Contains(lowerMessage, "organization has been restricted") || // groq
strings.Contains(lowerMessage, "organization has been restricted") || // groq
strings.Contains(lowerMessage, "api key not valid") || // gemini
strings.Contains(lowerMessage, "api key expired") || // gemini
strings.Contains(lowerMessage, "已欠费") {
return true
}

BIN
one-api

Binary file not shown.

View File

@ -16,6 +16,7 @@ import (
"github.com/songquanpeng/one-api/relay/adaptor/openai"
"github.com/songquanpeng/one-api/relay/adaptor/palm"
"github.com/songquanpeng/one-api/relay/adaptor/proxy"
"github.com/songquanpeng/one-api/relay/adaptor/replicate"
"github.com/songquanpeng/one-api/relay/adaptor/tencent"
"github.com/songquanpeng/one-api/relay/adaptor/vertexai"
"github.com/songquanpeng/one-api/relay/adaptor/xunfei"
@ -61,6 +62,8 @@ func GetAdaptor(apiType int) adaptor.Adaptor {
return &vertexai.Adaptor{}
case apitype.Proxy:
return &proxy.Adaptor{}
case apitype.Replicate:
return &replicate.Adaptor{}
}
return nil
}

View File

@ -1,7 +1,27 @@
package ali
var ModelList = []string{
"qwen-turbo", "qwen-plus", "qwen-max", "qwen-max-longcontext",
"text-embedding-v1",
"qwen-turbo", "qwen-turbo-latest",
"qwen-plus", "qwen-plus-latest",
"qwen-max", "qwen-max-latest",
"qwen-max-longcontext",
"qwen-vl-max", "qwen-vl-max-latest", "qwen-vl-plus", "qwen-vl-plus-latest",
"qwen-vl-ocr", "qwen-vl-ocr-latest",
"qwen-audio-turbo",
"qwen-math-plus", "qwen-math-plus-latest", "qwen-math-turbo", "qwen-math-turbo-latest",
"qwen-coder-plus", "qwen-coder-plus-latest", "qwen-coder-turbo", "qwen-coder-turbo-latest",
"qwq-32b-preview", "qwen2.5-72b-instruct", "qwen2.5-32b-instruct", "qwen2.5-14b-instruct", "qwen2.5-7b-instruct", "qwen2.5-3b-instruct", "qwen2.5-1.5b-instruct", "qwen2.5-0.5b-instruct",
"qwen2-72b-instruct", "qwen2-57b-a14b-instruct", "qwen2-7b-instruct", "qwen2-1.5b-instruct", "qwen2-0.5b-instruct",
"qwen1.5-110b-chat", "qwen1.5-72b-chat", "qwen1.5-32b-chat", "qwen1.5-14b-chat", "qwen1.5-7b-chat", "qwen1.5-1.8b-chat", "qwen1.5-0.5b-chat",
"qwen-72b-chat", "qwen-14b-chat", "qwen-7b-chat", "qwen-1.8b-chat", "qwen-1.8b-longcontext-chat",
"qvq-72b-preview",
"qwen2.5-vl-72b-instruct", "qwen2.5-vl-7b-instruct", "qwen2.5-vl-2b-instruct", "qwen2.5-vl-1b-instruct", "qwen2.5-vl-0.5b-instruct",
"qwen2-vl-7b-instruct", "qwen2-vl-2b-instruct", "qwen-vl-v1", "qwen-vl-chat-v1",
"qwen2-audio-instruct", "qwen-audio-chat",
"qwen2.5-math-72b-instruct", "qwen2.5-math-7b-instruct", "qwen2.5-math-1.5b-instruct", "qwen2-math-72b-instruct", "qwen2-math-7b-instruct", "qwen2-math-1.5b-instruct",
"qwen2.5-coder-32b-instruct", "qwen2.5-coder-14b-instruct", "qwen2.5-coder-7b-instruct", "qwen2.5-coder-3b-instruct", "qwen2.5-coder-1.5b-instruct", "qwen2.5-coder-0.5b-instruct",
"text-embedding-v1", "text-embedding-v3", "text-embedding-v2", "text-embedding-async-v2", "text-embedding-async-v1",
"ali-stable-diffusion-xl", "ali-stable-diffusion-v1.5", "wanx-v1",
"qwen-mt-plus", "qwen-mt-turbo",
"deepseek-r1", "deepseek-v3", "deepseek-r1-distill-qwen-1.5b", "deepseek-r1-distill-qwen-7b", "deepseek-r1-distill-qwen-14b", "deepseek-r1-distill-qwen-32b", "deepseek-r1-distill-llama-8b", "deepseek-r1-distill-llama-70b",
}

View File

@ -0,0 +1,20 @@
package alibailian
// https://help.aliyun.com/zh/model-studio/getting-started/models
var ModelList = []string{
"qwen-turbo",
"qwen-plus",
"qwen-long",
"qwen-max",
"qwen-coder-plus",
"qwen-coder-plus-latest",
"qwen-coder-turbo",
"qwen-coder-turbo-latest",
"qwen-mt-plus",
"qwen-mt-turbo",
"qwq-32b-preview",
"deepseek-r1",
"deepseek-v3",
}

View File

@ -0,0 +1,19 @@
package alibailian
import (
"fmt"
"github.com/songquanpeng/one-api/relay/meta"
"github.com/songquanpeng/one-api/relay/relaymode"
)
func GetRequestURL(meta *meta.Meta) (string, error) {
switch meta.Mode {
case relaymode.ChatCompletions:
return fmt.Sprintf("%s/compatible-mode/v1/chat/completions", meta.BaseURL), nil
case relaymode.Embeddings:
return fmt.Sprintf("%s/compatible-mode/v1/embeddings", meta.BaseURL), nil
default:
}
return "", fmt.Errorf("unsupported relay mode %d for ali bailian", meta.Mode)
}

View File

@ -4,10 +4,10 @@ var ModelList = []string{
"claude-instant-1.2", "claude-2.0", "claude-2.1",
"claude-3-haiku-20240307",
"claude-3-5-haiku-20241022",
"claude-3-5-haiku-latest",
"claude-3-sonnet-20240229",
"claude-3-opus-20240229",
"claude-3-5-sonnet-20240620",
"claude-3-5-sonnet-20241022",
"claude-3-5-sonnet-latest",
"claude-3-5-haiku-20241022",
}

View File

@ -0,0 +1,30 @@
package baiduv2
// https://console.bce.baidu.com/support/?_=1692863460488&timestamp=1739074632076#/api?product=QIANFAN&project=%E5%8D%83%E5%B8%86ModelBuilder&parent=%E5%AF%B9%E8%AF%9DChat%20V2&api=v2%2Fchat%2Fcompletions&method=post
// https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Fm2vrveyu#%E6%94%AF%E6%8C%81%E6%A8%A1%E5%9E%8B%E5%88%97%E8%A1%A8
var ModelList = []string{
"ernie-4.0-8k-latest",
"ernie-4.0-8k-preview",
"ernie-4.0-8k",
"ernie-4.0-turbo-8k-latest",
"ernie-4.0-turbo-8k-preview",
"ernie-4.0-turbo-8k",
"ernie-4.0-turbo-128k",
"ernie-3.5-8k-preview",
"ernie-3.5-8k",
"ernie-3.5-128k",
"ernie-speed-8k",
"ernie-speed-128k",
"ernie-speed-pro-128k",
"ernie-lite-8k",
"ernie-lite-pro-128k",
"ernie-tiny-8k",
"ernie-char-8k",
"ernie-char-fiction-8k",
"ernie-novel-8k",
"deepseek-v3",
"deepseek-r1",
"deepseek-r1-distill-qwen-32b",
"deepseek-r1-distill-qwen-14b",
}

View File

@ -0,0 +1,17 @@
package baiduv2
import (
"fmt"
"github.com/songquanpeng/one-api/relay/meta"
"github.com/songquanpeng/one-api/relay/relaymode"
)
func GetRequestURL(meta *meta.Meta) (string, error) {
switch meta.Mode {
case relaymode.ChatCompletions:
return fmt.Sprintf("%s/v2/chat/completions", meta.BaseURL), nil
default:
}
return "", fmt.Errorf("unsupported relay mode %d for baidu v2", meta.Mode)
}

View File

@ -2,5 +2,5 @@ package deepseek
var ModelList = []string{
"deepseek-chat",
"deepseek-coder",
"deepseek-reasoner",
}

View File

@ -5,6 +5,7 @@ import (
"fmt"
"io"
"net/http"
"strings"
"github.com/gin-gonic/gin"
"github.com/songquanpeng/one-api/common/config"
@ -20,11 +21,16 @@ type Adaptor struct {
}
func (a *Adaptor) Init(meta *meta.Meta) {
}
func (a *Adaptor) GetRequestURL(meta *meta.Meta) (string, error) {
version := helper.AssignOrDefault(meta.Config.APIVersion, config.GeminiVersion)
defaultVersion := config.GeminiVersion
if strings.Contains(meta.ActualModelName, "gemini-2.0") ||
strings.Contains(meta.ActualModelName, "gemini-1.5") {
defaultVersion = "v1beta"
}
version := helper.AssignOrDefault(meta.Config.APIVersion, defaultVersion)
action := ""
switch meta.Mode {
case relaymode.Embeddings:
@ -36,6 +42,7 @@ func (a *Adaptor) GetRequestURL(meta *meta.Meta) (string, error) {
if meta.IsStream {
action = "streamGenerateContent?alt=sse"
}
return fmt.Sprintf("%s/%s/models/%s:%s", meta.BaseURL, version, meta.ActualModelName, action), nil
}

View File

@ -1,7 +1,35 @@
package gemini
import (
"github.com/songquanpeng/one-api/relay/adaptor/geminiv2"
)
// https://ai.google.dev/models/gemini
var ModelList = []string{
"gemini-pro", "gemini-1.0-pro", "gemini-1.5-flash", "gemini-1.5-pro", "text-embedding-004", "aqa",
var ModelList = geminiv2.ModelList
// ModelsSupportSystemInstruction is the list of models that support system instruction.
//
// https://cloud.google.com/vertex-ai/generative-ai/docs/learn/prompts/system-instructions
var ModelsSupportSystemInstruction = []string{
// "gemini-1.0-pro-002",
// "gemini-1.5-flash", "gemini-1.5-flash-001", "gemini-1.5-flash-002",
// "gemini-1.5-flash-8b",
// "gemini-1.5-pro", "gemini-1.5-pro-001", "gemini-1.5-pro-002",
// "gemini-1.5-pro-experimental",
"gemini-2.0-flash", "gemini-2.0-flash-exp",
"gemini-2.0-flash-thinking-exp-01-21",
}
// IsModelSupportSystemInstruction check if the model support system instruction.
//
// Because the project's Go version is 1.20, slices.Contains cannot be used
func IsModelSupportSystemInstruction(model string) bool {
for _, m := range ModelsSupportSystemInstruction {
if m == model {
return true
}
}
return false
}

View File

@ -55,6 +55,10 @@ func ConvertRequest(textRequest model.GeneralOpenAIRequest) *ChatRequest {
Category: "HARM_CATEGORY_DANGEROUS_CONTENT",
Threshold: config.GeminiSafetySetting,
},
{
Category: "HARM_CATEGORY_CIVIC_INTEGRITY",
Threshold: config.GeminiSafetySetting,
},
},
GenerationConfig: ChatGenerationConfig{
Temperature: textRequest.Temperature,
@ -128,9 +132,16 @@ func ConvertRequest(textRequest model.GeneralOpenAIRequest) *ChatRequest {
}
// Converting system prompt to prompt from user for the same reason
if content.Role == "system" {
content.Role = "user"
shouldAddDummyModelMessage = true
if IsModelSupportSystemInstruction(textRequest.Model) {
geminiRequest.SystemInstruction = &content
geminiRequest.SystemInstruction.Role = ""
continue
} else {
content.Role = "user"
}
}
geminiRequest.Contents = append(geminiRequest.Contents, content)
// If a system message is the last message, we need to add a dummy model message to make gemini happy
@ -247,7 +258,14 @@ func responseGeminiChat2OpenAI(response *ChatResponse) *openai.TextResponse {
if candidate.Content.Parts[0].FunctionCall != nil {
choice.Message.ToolCalls = getToolCalls(&candidate)
} else {
choice.Message.Content = candidate.Content.Parts[0].Text
var builder strings.Builder
for i, part := range candidate.Content.Parts {
if i > 0 {
builder.WriteString("\n")
}
builder.WriteString(part.Text)
}
choice.Message.Content = builder.String()
}
} else {
choice.Message.Content = ""

View File

@ -1,10 +1,11 @@
package gemini
type ChatRequest struct {
Contents []ChatContent `json:"contents"`
SafetySettings []ChatSafetySettings `json:"safety_settings,omitempty"`
GenerationConfig ChatGenerationConfig `json:"generation_config,omitempty"`
Tools []ChatTools `json:"tools,omitempty"`
Contents []ChatContent `json:"contents"`
SafetySettings []ChatSafetySettings `json:"safety_settings,omitempty"`
GenerationConfig ChatGenerationConfig `json:"generation_config,omitempty"`
Tools []ChatTools `json:"tools,omitempty"`
SystemInstruction *ChatContent `json:"system_instruction,omitempty"`
}
type EmbeddingRequest struct {

View File

@ -0,0 +1,15 @@
package geminiv2
// https://ai.google.dev/models/gemini
var ModelList = []string{
"gemini-pro", "gemini-1.0-pro",
// "gemma-2-2b-it", "gemma-2-9b-it", "gemma-2-27b-it",
"gemini-1.5-flash", "gemini-1.5-flash-8b",
"gemini-1.5-pro", "gemini-1.5-pro-experimental",
"text-embedding-004", "aqa",
"gemini-2.0-flash", "gemini-2.0-flash-exp",
"gemini-2.0-flash-lite-preview-02-05",
"gemini-2.0-flash-thinking-exp-01-21",
"gemini-2.0-pro-exp-02-05",
}

View File

@ -0,0 +1,14 @@
package geminiv2
import (
"fmt"
"strings"
"github.com/songquanpeng/one-api/relay/meta"
)
func GetRequestURL(meta *meta.Meta) (string, error) {
baseURL := strings.TrimSuffix(meta.BaseURL, "/")
requestPath := strings.TrimPrefix(meta.RequestURLPath, "/v1")
return fmt.Sprintf("%s%s", baseURL, requestPath), nil
}
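The helper strips a trailing slash from the base URL and the leading /v1 from the incoming path, so an OpenAI-style request maps onto the Gemini-compatible endpoint unchanged. For example (the base URL below is an assumption, not taken from this diff):
meta.BaseURL        = "https://generativelanguage.googleapis.com/v1beta/openai"
meta.RequestURLPath = "/v1/chat/completions"
result              = "https://generativelanguage.googleapis.com/v1beta/openai/chat/completions"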

View File

@ -3,7 +3,6 @@ package groq
// https://console.groq.com/docs/models
var ModelList = []string{
"gemma-7b-it",
"gemma2-9b-it",
"llama-3.1-70b-versatile",
"llama-3.1-8b-instant",
@ -11,7 +10,6 @@ var ModelList = []string{
"llama-3.2-11b-vision-preview",
"llama-3.2-1b-preview",
"llama-3.2-3b-preview",
"llama-3.2-11b-vision-preview",
"llama-3.2-90b-text-preview",
"llama-3.2-90b-vision-preview",
"llama-guard-3-8b",
@ -24,4 +22,6 @@ var ModelList = []string{
"distil-whisper-large-v3-en",
"whisper-large-v3",
"whisper-large-v3-turbo",
"deepseek-r1-distill-llama-70b-specdec",
"deepseek-r1-distill-llama-70b",
}

View File

@ -8,4 +8,6 @@ var ModelList = []string{
"abab6-chat",
"abab5.5-chat",
"abab5.5s-chat",
"MiniMax-VL-01",
"MiniMax-Text-01",
}

View File

@ -31,8 +31,8 @@ func ConvertRequest(request model.GeneralOpenAIRequest) *ChatRequest {
TopP: request.TopP,
FrequencyPenalty: request.FrequencyPenalty,
PresencePenalty: request.PresencePenalty,
NumPredict: request.MaxTokens,
NumCtx: request.NumCtx,
NumPredict: request.MaxTokens,
NumCtx: request.NumCtx,
},
Stream: request.Stream,
}
@ -122,7 +122,7 @@ func StreamHandler(c *gin.Context, resp *http.Response) (*model.ErrorWithStatusC
for scanner.Scan() {
data := scanner.Text()
if strings.HasPrefix(data, "}") {
data = strings.TrimPrefix(data, "}") + "}"
data = strings.TrimPrefix(data, "}") + "}"
}
var ollamaResponse ChatResponse

View File

@ -8,8 +8,12 @@ import (
"strings"
"github.com/gin-gonic/gin"
"github.com/songquanpeng/one-api/relay/adaptor"
"github.com/songquanpeng/one-api/relay/adaptor/alibailian"
"github.com/songquanpeng/one-api/relay/adaptor/baiduv2"
"github.com/songquanpeng/one-api/relay/adaptor/doubao"
"github.com/songquanpeng/one-api/relay/adaptor/geminiv2"
"github.com/songquanpeng/one-api/relay/adaptor/minimax"
"github.com/songquanpeng/one-api/relay/adaptor/novita"
"github.com/songquanpeng/one-api/relay/channeltype"
@ -52,6 +56,12 @@ func (a *Adaptor) GetRequestURL(meta *meta.Meta) (string, error) {
return doubao.GetRequestURL(meta)
case channeltype.Novita:
return novita.GetRequestURL(meta)
case channeltype.BaiduV2:
return baiduv2.GetRequestURL(meta)
case channeltype.AliBailian:
return alibailian.GetRequestURL(meta)
case channeltype.GeminiOpenAICompatible:
return geminiv2.GetRequestURL(meta)
default:
return GetFullRequestURL(meta.BaseURL, meta.RequestURLPath, meta.ChannelType), nil
}

View File

@ -2,19 +2,24 @@ package openai
import (
"github.com/songquanpeng/one-api/relay/adaptor/ai360"
"github.com/songquanpeng/one-api/relay/adaptor/alibailian"
"github.com/songquanpeng/one-api/relay/adaptor/baichuan"
"github.com/songquanpeng/one-api/relay/adaptor/baiduv2"
"github.com/songquanpeng/one-api/relay/adaptor/deepseek"
"github.com/songquanpeng/one-api/relay/adaptor/doubao"
"github.com/songquanpeng/one-api/relay/adaptor/geminiv2"
"github.com/songquanpeng/one-api/relay/adaptor/groq"
"github.com/songquanpeng/one-api/relay/adaptor/lingyiwanwu"
"github.com/songquanpeng/one-api/relay/adaptor/minimax"
"github.com/songquanpeng/one-api/relay/adaptor/mistral"
"github.com/songquanpeng/one-api/relay/adaptor/moonshot"
"github.com/songquanpeng/one-api/relay/adaptor/novita"
"github.com/songquanpeng/one-api/relay/adaptor/openrouter"
"github.com/songquanpeng/one-api/relay/adaptor/siliconflow"
"github.com/songquanpeng/one-api/relay/adaptor/stepfun"
"github.com/songquanpeng/one-api/relay/adaptor/togetherai"
"github.com/songquanpeng/one-api/relay/adaptor/xai"
"github.com/songquanpeng/one-api/relay/adaptor/xunfeiv2"
"github.com/songquanpeng/one-api/relay/channeltype"
)
@ -34,6 +39,8 @@ var CompatibleChannels = []int{
channeltype.Novita,
channeltype.SiliconFlow,
channeltype.XAI,
channeltype.BaiduV2,
channeltype.XunfeiV2,
}
func GetCompatibleChannelMeta(channelType int) (string, []string) {
@ -68,6 +75,16 @@ func GetCompatibleChannelMeta(channelType int) (string, []string) {
return "siliconflow", siliconflow.ModelList
case channeltype.XAI:
return "xai", xai.ModelList
case channeltype.BaiduV2:
return "baiduv2", baiduv2.ModelList
case channeltype.XunfeiV2:
return "xunfeiv2", xunfeiv2.ModelList
case channeltype.OpenRouter:
return "openrouter", openrouter.ModelList
case channeltype.AliBailian:
return "alibailian", alibailian.ModelList
case channeltype.GeminiOpenAICompatible:
return "geminiv2", geminiv2.ModelList
default:
return "openai", ModelList
}

View File

@ -9,6 +9,7 @@ var ModelList = []string{
"gpt-4-turbo-preview", "gpt-4-turbo", "gpt-4-turbo-2024-04-09",
"gpt-4o", "gpt-4o-2024-05-13",
"gpt-4o-2024-08-06",
"gpt-4o-2024-11-20",
"chatgpt-4o-latest",
"gpt-4o-mini", "gpt-4o-mini-2024-07-18",
"gpt-4-vision-preview",
@ -20,4 +21,7 @@ var ModelList = []string{
"dall-e-2", "dall-e-3",
"whisper-1",
"tts-1", "tts-1-1106", "tts-1-hd", "tts-1-hd-1106",
"o1", "o1-2024-12-17",
"o1-preview", "o1-preview-2024-09-12",
"o1-mini", "o1-mini-2024-09-12",
}

View File

@ -2,20 +2,24 @@ package openai
import (
"fmt"
"strings"
"github.com/songquanpeng/one-api/relay/channeltype"
"github.com/songquanpeng/one-api/relay/model"
"strings"
)
func ResponseText2Usage(responseText string, modeName string, promptTokens int) *model.Usage {
func ResponseText2Usage(responseText string, modelName string, promptTokens int) *model.Usage {
usage := &model.Usage{}
usage.PromptTokens = promptTokens
usage.CompletionTokens = CountTokenText(responseText, modeName)
usage.CompletionTokens = CountTokenText(responseText, modelName)
usage.TotalTokens = usage.PromptTokens + usage.CompletionTokens
return usage
}
func GetFullRequestURL(baseURL string, requestURL string, channelType int) string {
if channelType == channeltype.OpenAICompatible {
return fmt.Sprintf("%s%s", strings.TrimSuffix(baseURL, "/"), strings.TrimPrefix(requestURL, "/v1"))
}
fullRequestURL := fmt.Sprintf("%s%s", baseURL, requestURL)
if strings.HasPrefix(baseURL, "https://gateway.ai.cloudflare.com") {

View File

@ -3,14 +3,16 @@ package openai
import (
"errors"
"fmt"
"math"
"strings"
"github.com/pkoukk/tiktoken-go"
"github.com/songquanpeng/one-api/common/config"
"github.com/songquanpeng/one-api/common/image"
"github.com/songquanpeng/one-api/common/logger"
billingratio "github.com/songquanpeng/one-api/relay/billing/ratio"
"github.com/songquanpeng/one-api/relay/model"
"math"
"strings"
)
// tokenEncoderMap won't grow after initialization
@ -21,7 +23,8 @@ func InitTokenEncoders() {
logger.SysLog("initializing token encoders")
gpt35TokenEncoder, err := tiktoken.EncodingForModel("gpt-3.5-turbo")
if err != nil {
logger.FatalLog(fmt.Sprintf("failed to get gpt-3.5-turbo token encoder: %s", err.Error()))
logger.FatalLog(fmt.Sprintf("failed to get gpt-3.5-turbo token encoder: %s, "+
"if you are using in offline environment, please set TIKTOKEN_CACHE_DIR to use exsited files, check this link for more information: https://stackoverflow.com/questions/76106366/how-to-use-tiktoken-in-offline-mode-computer ", err.Error()))
}
defaultTokenEncoder = gpt35TokenEncoder
gpt4oTokenEncoder, err := tiktoken.EncodingForModel("gpt-4o")

View File

@ -1,8 +1,16 @@
package openai
import "github.com/songquanpeng/one-api/relay/model"
import (
"context"
"fmt"
"github.com/songquanpeng/one-api/common/logger"
"github.com/songquanpeng/one-api/relay/model"
)
func ErrorWrapper(err error, code string, statusCode int) *model.ErrorWithStatusCode {
logger.Error(context.TODO(), fmt.Sprintf("[%s]%+v", code, err))
Error := model.Error{
Message: err.Error(),
Type: "one_api_error",

View File

@ -0,0 +1,235 @@
package openrouter
var ModelList = []string{
"01-ai/yi-large",
"aetherwiing/mn-starcannon-12b",
"ai21/jamba-1-5-large",
"ai21/jamba-1-5-mini",
"ai21/jamba-instruct",
"aion-labs/aion-1.0",
"aion-labs/aion-1.0-mini",
"aion-labs/aion-rp-llama-3.1-8b",
"allenai/llama-3.1-tulu-3-405b",
"alpindale/goliath-120b",
"alpindale/magnum-72b",
"amazon/nova-lite-v1",
"amazon/nova-micro-v1",
"amazon/nova-pro-v1",
"anthracite-org/magnum-v2-72b",
"anthracite-org/magnum-v4-72b",
"anthropic/claude-2",
"anthropic/claude-2.0",
"anthropic/claude-2.0:beta",
"anthropic/claude-2.1",
"anthropic/claude-2.1:beta",
"anthropic/claude-2:beta",
"anthropic/claude-3-haiku",
"anthropic/claude-3-haiku:beta",
"anthropic/claude-3-opus",
"anthropic/claude-3-opus:beta",
"anthropic/claude-3-sonnet",
"anthropic/claude-3-sonnet:beta",
"anthropic/claude-3.5-haiku",
"anthropic/claude-3.5-haiku-20241022",
"anthropic/claude-3.5-haiku-20241022:beta",
"anthropic/claude-3.5-haiku:beta",
"anthropic/claude-3.5-sonnet",
"anthropic/claude-3.5-sonnet-20240620",
"anthropic/claude-3.5-sonnet-20240620:beta",
"anthropic/claude-3.5-sonnet:beta",
"cognitivecomputations/dolphin-mixtral-8x22b",
"cognitivecomputations/dolphin-mixtral-8x7b",
"cohere/command",
"cohere/command-r",
"cohere/command-r-03-2024",
"cohere/command-r-08-2024",
"cohere/command-r-plus",
"cohere/command-r-plus-04-2024",
"cohere/command-r-plus-08-2024",
"cohere/command-r7b-12-2024",
"databricks/dbrx-instruct",
"deepseek/deepseek-chat",
"deepseek/deepseek-chat-v2.5",
"deepseek/deepseek-chat:free",
"deepseek/deepseek-r1",
"deepseek/deepseek-r1-distill-llama-70b",
"deepseek/deepseek-r1-distill-llama-70b:free",
"deepseek/deepseek-r1-distill-llama-8b",
"deepseek/deepseek-r1-distill-qwen-1.5b",
"deepseek/deepseek-r1-distill-qwen-14b",
"deepseek/deepseek-r1-distill-qwen-32b",
"deepseek/deepseek-r1:free",
"eva-unit-01/eva-llama-3.33-70b",
"eva-unit-01/eva-qwen-2.5-32b",
"eva-unit-01/eva-qwen-2.5-72b",
"google/gemini-2.0-flash-001",
"google/gemini-2.0-flash-exp:free",
"google/gemini-2.0-flash-lite-preview-02-05:free",
"google/gemini-2.0-flash-thinking-exp-1219:free",
"google/gemini-2.0-flash-thinking-exp:free",
"google/gemini-2.0-pro-exp-02-05:free",
"google/gemini-exp-1206:free",
"google/gemini-flash-1.5",
"google/gemini-flash-1.5-8b",
"google/gemini-flash-1.5-8b-exp",
"google/gemini-pro",
"google/gemini-pro-1.5",
"google/gemini-pro-vision",
"google/gemma-2-27b-it",
"google/gemma-2-9b-it",
"google/gemma-2-9b-it:free",
"google/gemma-7b-it",
"google/learnlm-1.5-pro-experimental:free",
"google/palm-2-chat-bison",
"google/palm-2-chat-bison-32k",
"google/palm-2-codechat-bison",
"google/palm-2-codechat-bison-32k",
"gryphe/mythomax-l2-13b",
"gryphe/mythomax-l2-13b:free",
"huggingfaceh4/zephyr-7b-beta:free",
"infermatic/mn-inferor-12b",
"inflection/inflection-3-pi",
"inflection/inflection-3-productivity",
"jondurbin/airoboros-l2-70b",
"liquid/lfm-3b",
"liquid/lfm-40b",
"liquid/lfm-7b",
"mancer/weaver",
"meta-llama/llama-2-13b-chat",
"meta-llama/llama-2-70b-chat",
"meta-llama/llama-3-70b-instruct",
"meta-llama/llama-3-8b-instruct",
"meta-llama/llama-3-8b-instruct:free",
"meta-llama/llama-3.1-405b",
"meta-llama/llama-3.1-405b-instruct",
"meta-llama/llama-3.1-70b-instruct",
"meta-llama/llama-3.1-8b-instruct",
"meta-llama/llama-3.2-11b-vision-instruct",
"meta-llama/llama-3.2-11b-vision-instruct:free",
"meta-llama/llama-3.2-1b-instruct",
"meta-llama/llama-3.2-3b-instruct",
"meta-llama/llama-3.2-90b-vision-instruct",
"meta-llama/llama-3.3-70b-instruct",
"meta-llama/llama-3.3-70b-instruct:free",
"meta-llama/llama-guard-2-8b",
"microsoft/phi-3-medium-128k-instruct",
"microsoft/phi-3-medium-128k-instruct:free",
"microsoft/phi-3-mini-128k-instruct",
"microsoft/phi-3-mini-128k-instruct:free",
"microsoft/phi-3.5-mini-128k-instruct",
"microsoft/phi-4",
"microsoft/wizardlm-2-7b",
"microsoft/wizardlm-2-8x22b",
"minimax/minimax-01",
"mistralai/codestral-2501",
"mistralai/codestral-mamba",
"mistralai/ministral-3b",
"mistralai/ministral-8b",
"mistralai/mistral-7b-instruct",
"mistralai/mistral-7b-instruct-v0.1",
"mistralai/mistral-7b-instruct-v0.3",
"mistralai/mistral-7b-instruct:free",
"mistralai/mistral-large",
"mistralai/mistral-large-2407",
"mistralai/mistral-large-2411",
"mistralai/mistral-medium",
"mistralai/mistral-nemo",
"mistralai/mistral-nemo:free",
"mistralai/mistral-small",
"mistralai/mistral-small-24b-instruct-2501",
"mistralai/mistral-small-24b-instruct-2501:free",
"mistralai/mistral-tiny",
"mistralai/mixtral-8x22b-instruct",
"mistralai/mixtral-8x7b",
"mistralai/mixtral-8x7b-instruct",
"mistralai/pixtral-12b",
"mistralai/pixtral-large-2411",
"neversleep/llama-3-lumimaid-70b",
"neversleep/llama-3-lumimaid-8b",
"neversleep/llama-3-lumimaid-8b:extended",
"neversleep/llama-3.1-lumimaid-70b",
"neversleep/llama-3.1-lumimaid-8b",
"neversleep/noromaid-20b",
"nothingiisreal/mn-celeste-12b",
"nousresearch/hermes-2-pro-llama-3-8b",
"nousresearch/hermes-3-llama-3.1-405b",
"nousresearch/hermes-3-llama-3.1-70b",
"nousresearch/nous-hermes-2-mixtral-8x7b-dpo",
"nousresearch/nous-hermes-llama2-13b",
"nvidia/llama-3.1-nemotron-70b-instruct",
"nvidia/llama-3.1-nemotron-70b-instruct:free",
"openai/chatgpt-4o-latest",
"openai/gpt-3.5-turbo",
"openai/gpt-3.5-turbo-0125",
"openai/gpt-3.5-turbo-0613",
"openai/gpt-3.5-turbo-1106",
"openai/gpt-3.5-turbo-16k",
"openai/gpt-3.5-turbo-instruct",
"openai/gpt-4",
"openai/gpt-4-0314",
"openai/gpt-4-1106-preview",
"openai/gpt-4-32k",
"openai/gpt-4-32k-0314",
"openai/gpt-4-turbo",
"openai/gpt-4-turbo-preview",
"openai/gpt-4o",
"openai/gpt-4o-2024-05-13",
"openai/gpt-4o-2024-08-06",
"openai/gpt-4o-2024-11-20",
"openai/gpt-4o-mini",
"openai/gpt-4o-mini-2024-07-18",
"openai/gpt-4o:extended",
"openai/o1",
"openai/o1-mini",
"openai/o1-mini-2024-09-12",
"openai/o1-preview",
"openai/o1-preview-2024-09-12",
"openai/o3-mini",
"openai/o3-mini-high",
"openchat/openchat-7b",
"openchat/openchat-7b:free",
"openrouter/auto",
"perplexity/llama-3.1-sonar-huge-128k-online",
"perplexity/llama-3.1-sonar-large-128k-chat",
"perplexity/llama-3.1-sonar-large-128k-online",
"perplexity/llama-3.1-sonar-small-128k-chat",
"perplexity/llama-3.1-sonar-small-128k-online",
"perplexity/sonar",
"perplexity/sonar-reasoning",
"pygmalionai/mythalion-13b",
"qwen/qvq-72b-preview",
"qwen/qwen-2-72b-instruct",
"qwen/qwen-2-7b-instruct",
"qwen/qwen-2-7b-instruct:free",
"qwen/qwen-2-vl-72b-instruct",
"qwen/qwen-2-vl-7b-instruct",
"qwen/qwen-2.5-72b-instruct",
"qwen/qwen-2.5-7b-instruct",
"qwen/qwen-2.5-coder-32b-instruct",
"qwen/qwen-max",
"qwen/qwen-plus",
"qwen/qwen-turbo",
"qwen/qwen-vl-plus:free",
"qwen/qwen2.5-vl-72b-instruct:free",
"qwen/qwq-32b-preview",
"raifle/sorcererlm-8x22b",
"sao10k/fimbulvetr-11b-v2",
"sao10k/l3-euryale-70b",
"sao10k/l3-lunaris-8b",
"sao10k/l3.1-70b-hanami-x1",
"sao10k/l3.1-euryale-70b",
"sao10k/l3.3-euryale-70b",
"sophosympatheia/midnight-rose-70b",
"sophosympatheia/rogue-rose-103b-v0.2:free",
"teknium/openhermes-2.5-mistral-7b",
"thedrummer/rocinante-12b",
"thedrummer/unslopnemo-12b",
"undi95/remm-slerp-l2-13b",
"undi95/toppy-m-7b",
"undi95/toppy-m-7b:free",
"x-ai/grok-2-1212",
"x-ai/grok-2-vision-1212",
"x-ai/grok-beta",
"x-ai/grok-vision-beta",
"xwin-lm/xwin-lm-70b",
}

View File

@ -0,0 +1,136 @@
package replicate
import (
"fmt"
"io"
"net/http"
"slices"
"strings"
"time"
"github.com/gin-gonic/gin"
"github.com/pkg/errors"
"github.com/songquanpeng/one-api/common/logger"
"github.com/songquanpeng/one-api/relay/adaptor"
"github.com/songquanpeng/one-api/relay/adaptor/openai"
"github.com/songquanpeng/one-api/relay/meta"
"github.com/songquanpeng/one-api/relay/model"
"github.com/songquanpeng/one-api/relay/relaymode"
)
type Adaptor struct {
meta *meta.Meta
}
// ConvertImageRequest implements adaptor.Adaptor.
func (*Adaptor) ConvertImageRequest(request *model.ImageRequest) (any, error) {
return DrawImageRequest{
Input: ImageInput{
Steps: 25,
Prompt: request.Prompt,
Guidance: 3,
Seed: int(time.Now().UnixNano()),
SafetyTolerance: 5,
NImages: 1, // replicate will always return 1 image
Width: 1440,
Height: 1440,
AspectRatio: "1:1",
},
}, nil
}
func (a *Adaptor) ConvertRequest(c *gin.Context, relayMode int, request *model.GeneralOpenAIRequest) (any, error) {
if !request.Stream {
// TODO: support non-stream mode
return nil, errors.Errorf("replicate models only support stream mode now, please set stream=true")
}
// Build the prompt from OpenAI messages
var promptBuilder strings.Builder
for _, message := range request.Messages {
switch msgCnt := message.Content.(type) {
case string:
promptBuilder.WriteString(message.Role)
promptBuilder.WriteString(": ")
promptBuilder.WriteString(msgCnt)
promptBuilder.WriteString("\n")
default:
}
}
replicateRequest := ReplicateChatRequest{
Input: ChatInput{
Prompt: promptBuilder.String(),
MaxTokens: request.MaxTokens,
Temperature: 1.0,
TopP: 1.0,
PresencePenalty: 0.0,
FrequencyPenalty: 0.0,
},
}
// Map optional fields
if request.Temperature != nil {
replicateRequest.Input.Temperature = *request.Temperature
}
if request.TopP != nil {
replicateRequest.Input.TopP = *request.TopP
}
if request.PresencePenalty != nil {
replicateRequest.Input.PresencePenalty = *request.PresencePenalty
}
if request.FrequencyPenalty != nil {
replicateRequest.Input.FrequencyPenalty = *request.FrequencyPenalty
}
if request.MaxTokens > 0 {
replicateRequest.Input.MaxTokens = request.MaxTokens
} else if request.MaxTokens == 0 {
replicateRequest.Input.MaxTokens = 500
}
return replicateRequest, nil
}
func (a *Adaptor) Init(meta *meta.Meta) {
a.meta = meta
}
func (a *Adaptor) GetRequestURL(meta *meta.Meta) (string, error) {
if !slices.Contains(ModelList, meta.OriginModelName) {
return "", errors.Errorf("model %s not supported", meta.OriginModelName)
}
return fmt.Sprintf("https://api.replicate.com/v1/models/%s/predictions", meta.OriginModelName), nil
}
func (a *Adaptor) SetupRequestHeader(c *gin.Context, req *http.Request, meta *meta.Meta) error {
adaptor.SetupCommonRequestHeader(c, req, meta)
req.Header.Set("Authorization", "Bearer "+meta.APIKey)
return nil
}
func (a *Adaptor) DoRequest(c *gin.Context, meta *meta.Meta, requestBody io.Reader) (*http.Response, error) {
logger.Info(c, "send request to replicate")
return adaptor.DoRequestHelper(a, c, meta, requestBody)
}
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, meta *meta.Meta) (usage *model.Usage, err *model.ErrorWithStatusCode) {
switch meta.Mode {
case relaymode.ImagesGenerations:
err, usage = ImageHandler(c, resp)
case relaymode.ChatCompletions:
err, usage = ChatHandler(c, resp)
default:
err = openai.ErrorWrapper(errors.New("not implemented"), "not_implemented", http.StatusInternalServerError)
}
return
}
func (a *Adaptor) GetModelList() []string {
return ModelList
}
func (a *Adaptor) GetChannelName() string {
return "replicate"
}

View File

@ -0,0 +1,191 @@
package replicate
import (
"bufio"
"encoding/json"
"io"
"net/http"
"strings"
"time"
"github.com/gin-gonic/gin"
"github.com/pkg/errors"
"github.com/songquanpeng/one-api/common"
"github.com/songquanpeng/one-api/common/render"
"github.com/songquanpeng/one-api/relay/adaptor/openai"
"github.com/songquanpeng/one-api/relay/meta"
"github.com/songquanpeng/one-api/relay/model"
)
func ChatHandler(c *gin.Context, resp *http.Response) (
srvErr *model.ErrorWithStatusCode, usage *model.Usage) {
if resp.StatusCode != http.StatusCreated {
payload, _ := io.ReadAll(resp.Body)
return openai.ErrorWrapper(
errors.Errorf("bad_status_code [%d]%s", resp.StatusCode, string(payload)),
"bad_status_code", http.StatusInternalServerError),
nil
}
respBody, err := io.ReadAll(resp.Body)
if err != nil {
return openai.ErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
}
respData := new(ChatResponse)
if err = json.Unmarshal(respBody, respData); err != nil {
return openai.ErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
}
for {
err = func() error {
// get task
taskReq, err := http.NewRequestWithContext(c.Request.Context(),
http.MethodGet, respData.URLs.Get, nil)
if err != nil {
return errors.Wrap(err, "new request")
}
taskReq.Header.Set("Authorization", "Bearer "+meta.GetByContext(c).APIKey)
taskResp, err := http.DefaultClient.Do(taskReq)
if err != nil {
return errors.Wrap(err, "get task")
}
defer taskResp.Body.Close()
if taskResp.StatusCode != http.StatusOK {
payload, _ := io.ReadAll(taskResp.Body)
return errors.Errorf("bad status code [%d]%s",
taskResp.StatusCode, string(payload))
}
taskBody, err := io.ReadAll(taskResp.Body)
if err != nil {
return errors.Wrap(err, "read task response")
}
taskData := new(ChatResponse)
if err = json.Unmarshal(taskBody, taskData); err != nil {
return errors.Wrap(err, "decode task response")
}
switch taskData.Status {
case "succeeded":
case "failed", "canceled":
return errors.Errorf("task failed, [%s]%s", taskData.Status, taskData.Error)
default:
time.Sleep(time.Second * 3)
return errNextLoop
}
if taskData.URLs.Stream == "" {
return errors.New("stream url is empty")
}
// request stream url
responseText, err := chatStreamHandler(c, taskData.URLs.Stream)
if err != nil {
return errors.Wrap(err, "chat stream handler")
}
ctxMeta := meta.GetByContext(c)
usage = openai.ResponseText2Usage(responseText,
ctxMeta.ActualModelName, ctxMeta.PromptTokens)
return nil
}()
if err != nil {
if errors.Is(err, errNextLoop) {
continue
}
return openai.ErrorWrapper(err, "chat_task_failed", http.StatusInternalServerError), nil
}
break
}
return nil, usage
}
const (
eventPrefix = "event: "
dataPrefix = "data: "
done = "[DONE]"
)
func chatStreamHandler(c *gin.Context, streamUrl string) (responseText string, err error) {
// request stream endpoint
streamReq, err := http.NewRequestWithContext(c.Request.Context(), http.MethodGet, streamUrl, nil)
if err != nil {
return "", errors.Wrap(err, "new request to stream")
}
streamReq.Header.Set("Authorization", "Bearer "+meta.GetByContext(c).APIKey)
streamReq.Header.Set("Accept", "text/event-stream")
streamReq.Header.Set("Cache-Control", "no-store")
resp, err := http.DefaultClient.Do(streamReq)
if err != nil {
return "", errors.Wrap(err, "do request to stream")
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
payload, _ := io.ReadAll(resp.Body)
return "", errors.Errorf("bad status code [%d]%s", resp.StatusCode, string(payload))
}
scanner := bufio.NewScanner(resp.Body)
scanner.Split(bufio.ScanLines)
common.SetEventStreamHeaders(c)
doneRendered := false
for scanner.Scan() {
line := strings.TrimSpace(scanner.Text())
if line == "" {
continue
}
// Handle comments starting with ':'
if strings.HasPrefix(line, ":") {
continue
}
// Parse SSE fields
if strings.HasPrefix(line, eventPrefix) {
event := strings.TrimSpace(line[len(eventPrefix):])
var data string
// Read the following lines to get data and id
for scanner.Scan() {
nextLine := scanner.Text()
if nextLine == "" {
break
}
if strings.HasPrefix(nextLine, dataPrefix) {
data = nextLine[len(dataPrefix):]
} else if strings.HasPrefix(nextLine, "id:") {
// id = strings.TrimSpace(nextLine[len("id:"):])
}
}
if event == "output" {
render.StringData(c, data)
responseText += data
} else if event == "done" {
render.Done(c)
doneRendered = true
break
}
}
}
if err := scanner.Err(); err != nil {
return "", errors.Wrap(err, "scan stream")
}
if !doneRendered {
render.Done(c)
}
return responseText, nil
}

View File

@ -0,0 +1,58 @@
package replicate
// ModelList is a list of models that can be used with Replicate.
//
// https://replicate.com/pricing
var ModelList = []string{
// -------------------------------------
// image model
// -------------------------------------
"black-forest-labs/flux-1.1-pro",
"black-forest-labs/flux-1.1-pro-ultra",
"black-forest-labs/flux-canny-dev",
"black-forest-labs/flux-canny-pro",
"black-forest-labs/flux-depth-dev",
"black-forest-labs/flux-depth-pro",
"black-forest-labs/flux-dev",
"black-forest-labs/flux-dev-lora",
"black-forest-labs/flux-fill-dev",
"black-forest-labs/flux-fill-pro",
"black-forest-labs/flux-pro",
"black-forest-labs/flux-redux-dev",
"black-forest-labs/flux-redux-schnell",
"black-forest-labs/flux-schnell",
"black-forest-labs/flux-schnell-lora",
"ideogram-ai/ideogram-v2",
"ideogram-ai/ideogram-v2-turbo",
"recraft-ai/recraft-v3",
"recraft-ai/recraft-v3-svg",
"stability-ai/stable-diffusion-3",
"stability-ai/stable-diffusion-3.5-large",
"stability-ai/stable-diffusion-3.5-large-turbo",
"stability-ai/stable-diffusion-3.5-medium",
// -------------------------------------
// language model
// -------------------------------------
"ibm-granite/granite-20b-code-instruct-8k",
"ibm-granite/granite-3.0-2b-instruct",
"ibm-granite/granite-3.0-8b-instruct",
"ibm-granite/granite-8b-code-instruct-128k",
"meta/llama-2-13b",
"meta/llama-2-13b-chat",
"meta/llama-2-70b",
"meta/llama-2-70b-chat",
"meta/llama-2-7b",
"meta/llama-2-7b-chat",
"meta/meta-llama-3.1-405b-instruct",
"meta/meta-llama-3-70b",
"meta/meta-llama-3-70b-instruct",
"meta/meta-llama-3-8b",
"meta/meta-llama-3-8b-instruct",
"mistralai/mistral-7b-instruct-v0.2",
"mistralai/mistral-7b-v0.1",
"mistralai/mixtral-8x7b-instruct-v0.1",
// -------------------------------------
// video model
// -------------------------------------
// "minimax/video-01", // TODO: implement the adaptor
}

View File

@ -0,0 +1,222 @@
package replicate
import (
"bytes"
"encoding/base64"
"encoding/json"
"fmt"
"image"
_ "image/jpeg" // registered so image.Decode can handle the JPEG branch in ConvertImageToPNG
"image/png"
"io"
"net/http"
"sync"
"time"
"github.com/gin-gonic/gin"
"github.com/pkg/errors"
"github.com/songquanpeng/one-api/common/logger"
"github.com/songquanpeng/one-api/relay/adaptor/openai"
"github.com/songquanpeng/one-api/relay/meta"
"github.com/songquanpeng/one-api/relay/model"
"golang.org/x/image/webp"
"golang.org/x/sync/errgroup"
)
// ImagesEditsHandler simply copies the response body to the client
//
// https://replicate.com/black-forest-labs/flux-fill-pro
// func ImagesEditsHandler(c *gin.Context, resp *http.Response) (*model.ErrorWithStatusCode, *model.Usage) {
// c.Writer.WriteHeader(resp.StatusCode)
// for k, v := range resp.Header {
// c.Writer.Header().Set(k, v[0])
// }
// if _, err := io.Copy(c.Writer, resp.Body); err != nil {
// return ErrorWrapper(err, "copy_response_body_failed", http.StatusInternalServerError), nil
// }
// defer resp.Body.Close()
// return nil, nil
// }
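// errNextLoop is a sentinel error: the polling closure below sleeps for three
// seconds and returns it to signal "prediction not finished yet"; the outer
// loop matches it with errors.Is and polls again instead of reporting a failure.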
var errNextLoop = errors.New("next_loop")
func ImageHandler(c *gin.Context, resp *http.Response) (*model.ErrorWithStatusCode, *model.Usage) {
if resp.StatusCode != http.StatusCreated {
payload, _ := io.ReadAll(resp.Body)
return openai.ErrorWrapper(
errors.Errorf("bad_status_code [%d]%s", resp.StatusCode, string(payload)),
"bad_status_code", http.StatusInternalServerError),
nil
}
respBody, err := io.ReadAll(resp.Body)
if err != nil {
return openai.ErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
}
respData := new(ImageResponse)
if err = json.Unmarshal(respBody, respData); err != nil {
return openai.ErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
}
for {
err = func() error {
// get task
taskReq, err := http.NewRequestWithContext(c.Request.Context(),
http.MethodGet, respData.URLs.Get, nil)
if err != nil {
return errors.Wrap(err, "new request")
}
taskReq.Header.Set("Authorization", "Bearer "+meta.GetByContext(c).APIKey)
taskResp, err := http.DefaultClient.Do(taskReq)
if err != nil {
return errors.Wrap(err, "get task")
}
defer taskResp.Body.Close()
if taskResp.StatusCode != http.StatusOK {
payload, _ := io.ReadAll(taskResp.Body)
return errors.Errorf("bad status code [%d]%s",
taskResp.StatusCode, string(payload))
}
taskBody, err := io.ReadAll(taskResp.Body)
if err != nil {
return errors.Wrap(err, "read task response")
}
taskData := new(ImageResponse)
if err = json.Unmarshal(taskBody, taskData); err != nil {
return errors.Wrap(err, "decode task response")
}
switch taskData.Status {
case "succeeded":
case "failed", "canceled":
return errors.Errorf("task failed: %s", taskData.Status)
default:
time.Sleep(time.Second * 3)
return errNextLoop
}
output, err := taskData.GetOutput()
if err != nil {
return errors.Wrap(err, "get output")
}
if len(output) == 0 {
return errors.New("response output is empty")
}
var mu sync.Mutex
var pool errgroup.Group
respBody := &openai.ImageResponse{
Created: taskData.CompletedAt.Unix(),
Data: []openai.ImageData{},
}
for _, imgOut := range output {
imgOut := imgOut
pool.Go(func() error {
// download image
downloadReq, err := http.NewRequestWithContext(c.Request.Context(),
http.MethodGet, imgOut, nil)
if err != nil {
return errors.Wrap(err, "new request")
}
imgResp, err := http.DefaultClient.Do(downloadReq)
if err != nil {
return errors.Wrap(err, "download image")
}
defer imgResp.Body.Close()
if imgResp.StatusCode != http.StatusOK {
payload, _ := io.ReadAll(imgResp.Body)
return errors.Errorf("bad status code [%d]%s",
imgResp.StatusCode, string(payload))
}
imgData, err := io.ReadAll(imgResp.Body)
if err != nil {
return errors.Wrap(err, "read image")
}
imgData, err = ConvertImageToPNG(imgData)
if err != nil {
return errors.Wrap(err, "convert image")
}
mu.Lock()
respBody.Data = append(respBody.Data, openai.ImageData{
B64Json: fmt.Sprintf("data:image/png;base64,%s",
base64.StdEncoding.EncodeToString(imgData)),
})
mu.Unlock()
return nil
})
}
if err := pool.Wait(); err != nil {
if len(respBody.Data) == 0 {
return errors.WithStack(err)
}
logger.Error(c, fmt.Sprintf("some images failed to download: %+v", err))
}
c.JSON(http.StatusOK, respBody)
return nil
}()
if err != nil {
if errors.Is(err, errNextLoop) {
continue
}
return openai.ErrorWrapper(err, "image_task_failed", http.StatusInternalServerError), nil
}
break
}
return nil, nil
}
// ConvertImageToPNG converts a WebP image to PNG format
func ConvertImageToPNG(webpData []byte) ([]byte, error) {
// bypass if it's already a PNG image
if bytes.HasPrefix(webpData, []byte("\x89PNG")) {
return webpData, nil
}
// if it is a JPEG, decode it and re-encode as PNG
if bytes.HasPrefix(webpData, []byte("\xff\xd8\xff")) {
img, _, err := image.Decode(bytes.NewReader(webpData))
if err != nil {
return nil, errors.Wrap(err, "decode jpeg")
}
var pngBuffer bytes.Buffer
if err := png.Encode(&pngBuffer, img); err != nil {
return nil, errors.Wrap(err, "encode png")
}
return pngBuffer.Bytes(), nil
}
// Decode the WebP image
img, err := webp.Decode(bytes.NewReader(webpData))
if err != nil {
return nil, errors.Wrap(err, "decode webp")
}
// Encode the image as PNG
var pngBuffer bytes.Buffer
if err := png.Encode(&pngBuffer, img); err != nil {
return nil, errors.Wrap(err, "encode png")
}
return pngBuffer.Bytes(), nil
}
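A minimal, hypothetical caller for ConvertImageToPNG (the standalone main package, file names, and import path are assumptions for illustration, not part of this diff):

package main

import (
	"log"
	"os"

	"github.com/songquanpeng/one-api/relay/adaptor/replicate"
)

func main() {
	// Read an image that may be WebP, JPEG, or already PNG.
	raw, err := os.ReadFile("input.webp")
	if err != nil {
		log.Fatal(err)
	}
	// PNG input is passed through unchanged; WebP and JPEG are re-encoded as PNG.
	pngBytes, err := replicate.ConvertImageToPNG(raw)
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("output.png", pngBytes, 0o644); err != nil {
		log.Fatal(err)
	}
}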

View File

@ -0,0 +1,159 @@
package replicate
import (
"time"
"github.com/pkg/errors"
)
// DrawImageRequest is the request body for drawing an image with flux-pro
//
// https://replicate.com/black-forest-labs/flux-pro?prediction=kg1krwsdf9rg80ch1sgsrgq7h8&output=json
type DrawImageRequest struct {
Input ImageInput `json:"input"`
}
// ImageInput is the input of DrawImageRequest
//
// https://replicate.com/black-forest-labs/flux-1.1-pro/api/schema
type ImageInput struct {
Steps int `json:"steps" binding:"required,min=1"`
Prompt string `json:"prompt" binding:"required,min=5"`
ImagePrompt string `json:"image_prompt"`
Guidance int `json:"guidance" binding:"required,min=2,max=5"`
Interval int `json:"interval" binding:"required,min=1,max=4"`
AspectRatio string `json:"aspect_ratio" binding:"required,oneof=1:1 16:9 2:3 3:2 4:5 5:4 9:16"`
SafetyTolerance int `json:"safety_tolerance" binding:"required,min=1,max=5"`
Seed int `json:"seed"`
NImages int `json:"n_images" binding:"required,min=1,max=8"`
Width int `json:"width" binding:"required,min=256,max=1440"`
Height int `json:"height" binding:"required,min=256,max=1440"`
}
// InpaintingImageByFlusReplicateRequest is the request for inpainting an image with flux-fill-pro
//
// https://replicate.com/black-forest-labs/flux-fill-pro/api/schema
type InpaintingImageByFlusReplicateRequest struct {
Input FluxInpaintingInput `json:"input"`
}
// FluxInpaintingInput is the input of InpaintingImageByFlusReplicateRequest
//
// https://replicate.com/black-forest-labs/flux-fill-pro/api/schema
type FluxInpaintingInput struct {
Mask string `json:"mask" binding:"required"`
Image string `json:"image" binding:"required"`
Seed int `json:"seed"`
Steps int `json:"steps" binding:"required,min=1"`
Prompt string `json:"prompt" binding:"required,min=5"`
Guidance int `json:"guidance" binding:"required,min=2,max=5"`
OutputFormat string `json:"output_format"`
SafetyTolerance int `json:"safety_tolerance" binding:"required,min=1,max=5"`
PromptUnsampling bool `json:"prompt_unsampling"`
}
// ImageResponse is the response of DrawImageRequest
//
// https://replicate.com/black-forest-labs/flux-pro?prediction=kg1krwsdf9rg80ch1sgsrgq7h8&output=json
type ImageResponse struct {
CompletedAt time.Time `json:"completed_at"`
CreatedAt time.Time `json:"created_at"`
DataRemoved bool `json:"data_removed"`
Error string `json:"error"`
ID string `json:"id"`
Input DrawImageRequest `json:"input"`
Logs string `json:"logs"`
Metrics FluxMetrics `json:"metrics"`
// Output could be `string` or `[]string`
Output any `json:"output"`
StartedAt time.Time `json:"started_at"`
Status string `json:"status"`
URLs FluxURLs `json:"urls"`
Version string `json:"version"`
}
func (r *ImageResponse) GetOutput() ([]string, error) {
switch v := r.Output.(type) {
case string:
return []string{v}, nil
case []string:
return v, nil
case nil:
return nil, nil
case []interface{}:
// convert []interface{} to []string
ret := make([]string, len(v))
for idx, vv := range v {
if vvv, ok := vv.(string); ok {
ret[idx] = vvv
} else {
return nil, errors.Errorf("unknown output type: [%T]%v", vv, vv)
}
}
return ret, nil
default:
return nil, errors.Errorf("unknown output type: [%T]%v", r.Output, r.Output)
}
}
// FluxMetrics holds the metrics of ImageResponse
type FluxMetrics struct {
ImageCount int `json:"image_count"`
PredictTime float64 `json:"predict_time"`
TotalTime float64 `json:"total_time"`
}
// FluxURLs holds the result URLs of ImageResponse
type FluxURLs struct {
Get string `json:"get"`
Cancel string `json:"cancel"`
}
type ReplicateChatRequest struct {
Input ChatInput `json:"input" form:"input" binding:"required"`
}
// ChatInput is the input of ReplicateChatRequest
//
// https://replicate.com/meta/meta-llama-3.1-405b-instruct/api/schema
type ChatInput struct {
TopK int `json:"top_k"`
TopP float64 `json:"top_p"`
Prompt string `json:"prompt"`
MaxTokens int `json:"max_tokens"`
MinTokens int `json:"min_tokens"`
Temperature float64 `json:"temperature"`
SystemPrompt string `json:"system_prompt"`
StopSequences string `json:"stop_sequences"`
PromptTemplate string `json:"prompt_template"`
PresencePenalty float64 `json:"presence_penalty"`
FrequencyPenalty float64 `json:"frequency_penalty"`
}
// ChatResponse is the response of ReplicateChatRequest
//
// https://replicate.com/meta/meta-llama-3.1-405b-instruct/examples?input=http&output=json
type ChatResponse struct {
CompletedAt time.Time `json:"completed_at"`
CreatedAt time.Time `json:"created_at"`
DataRemoved bool `json:"data_removed"`
Error string `json:"error"`
ID string `json:"id"`
Input ChatInput `json:"input"`
Logs string `json:"logs"`
Metrics FluxMetrics `json:"metrics"`
// Output could be `string` or `[]string`
Output []string `json:"output"`
StartedAt time.Time `json:"started_at"`
Status string `json:"status"`
URLs ChatResponseUrl `json:"urls"`
Version string `json:"version"`
}
// ChatResponseUrl holds the task URLs of ChatResponse
type ChatResponseUrl struct {
Stream string `json:"stream"`
Get string `json:"get"`
Cancel string `json:"cancel"`
}

View File

@ -2,16 +2,19 @@ package tencent
import (
"errors"
"io"
"net/http"
"strconv"
"strings"
"github.com/gin-gonic/gin"
"github.com/songquanpeng/one-api/common/helper"
"github.com/songquanpeng/one-api/relay/adaptor"
"github.com/songquanpeng/one-api/relay/adaptor/openai"
"github.com/songquanpeng/one-api/relay/meta"
"github.com/songquanpeng/one-api/relay/model"
"io"
"net/http"
"strconv"
"strings"
"github.com/songquanpeng/one-api/relay/relaymode"
)
// https://cloud.tencent.com/document/api/1729/101837
@ -52,10 +55,18 @@ func (a *Adaptor) ConvertRequest(c *gin.Context, relayMode int, request *model.G
if err != nil {
return nil, err
}
tencentRequest := ConvertRequest(*request)
var convertedRequest any
switch relayMode {
case relaymode.Embeddings:
a.Action = "GetEmbedding"
convertedRequest = ConvertEmbeddingRequest(*request)
default:
a.Action = "ChatCompletions"
convertedRequest = ConvertRequest(*request)
}
// we have to calculate the sign here
a.Sign = GetSign(*tencentRequest, a, secretId, secretKey)
return tencentRequest, nil
a.Sign = GetSign(convertedRequest, a, secretId, secretKey)
return convertedRequest, nil
}
func (a *Adaptor) ConvertImageRequest(request *model.ImageRequest) (any, error) {
@ -75,7 +86,12 @@ func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, meta *meta.Met
err, responseText = StreamHandler(c, resp)
usage = openai.ResponseText2Usage(responseText, meta.ActualModelName, meta.PromptTokens)
} else {
err, usage = Handler(c, resp)
switch meta.Mode {
case relaymode.Embeddings:
err, usage = EmbeddingHandler(c, resp)
default:
err, usage = Handler(c, resp)
}
}
return
}

View File

@ -6,4 +6,5 @@ var ModelList = []string{
"hunyuan-standard-256K",
"hunyuan-pro",
"hunyuan-vision",
"hunyuan-embedding",
}

View File

@ -8,7 +8,6 @@ import (
"encoding/json"
"errors"
"fmt"
"github.com/songquanpeng/one-api/common/render"
"io"
"net/http"
"strconv"
@ -16,11 +15,14 @@ import (
"time"
"github.com/gin-gonic/gin"
"github.com/songquanpeng/one-api/common"
"github.com/songquanpeng/one-api/common/conv"
"github.com/songquanpeng/one-api/common/ctxkey"
"github.com/songquanpeng/one-api/common/helper"
"github.com/songquanpeng/one-api/common/logger"
"github.com/songquanpeng/one-api/common/random"
"github.com/songquanpeng/one-api/common/render"
"github.com/songquanpeng/one-api/relay/adaptor/openai"
"github.com/songquanpeng/one-api/relay/constant"
"github.com/songquanpeng/one-api/relay/model"
@ -44,8 +46,68 @@ func ConvertRequest(request model.GeneralOpenAIRequest) *ChatRequest {
}
}
func ConvertEmbeddingRequest(request model.GeneralOpenAIRequest) *EmbeddingRequest {
return &EmbeddingRequest{
InputList: request.ParseInput(),
}
}
func EmbeddingHandler(c *gin.Context, resp *http.Response) (*model.ErrorWithStatusCode, *model.Usage) {
var tencentResponseP EmbeddingResponseP
err := json.NewDecoder(resp.Body).Decode(&tencentResponseP)
if err != nil {
return openai.ErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
}
err = resp.Body.Close()
if err != nil {
return openai.ErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
}
tencentResponse := tencentResponseP.Response
if tencentResponse.Error.Code != "" {
return &model.ErrorWithStatusCode{
Error: model.Error{
Message: tencentResponse.Error.Message,
Code: tencentResponse.Error.Code,
},
StatusCode: resp.StatusCode,
}, nil
}
requestModel := c.GetString(ctxkey.RequestModel)
fullTextResponse := embeddingResponseTencent2OpenAI(&tencentResponse)
fullTextResponse.Model = requestModel
jsonResponse, err := json.Marshal(fullTextResponse)
if err != nil {
return openai.ErrorWrapper(err, "marshal_response_body_failed", http.StatusInternalServerError), nil
}
c.Writer.Header().Set("Content-Type", "application/json")
c.Writer.WriteHeader(resp.StatusCode)
_, err = c.Writer.Write(jsonResponse)
return nil, &fullTextResponse.Usage
}
func embeddingResponseTencent2OpenAI(response *EmbeddingResponse) *openai.EmbeddingResponse {
openAIEmbeddingResponse := openai.EmbeddingResponse{
Object: "list",
Data: make([]openai.EmbeddingResponseItem, 0, len(response.Data)),
Model: "hunyuan-embedding",
Usage: model.Usage{TotalTokens: response.EmbeddingUsage.TotalTokens},
}
for _, item := range response.Data {
openAIEmbeddingResponse.Data = append(openAIEmbeddingResponse.Data, openai.EmbeddingResponseItem{
Object: item.Object,
Index: item.Index,
Embedding: item.Embedding,
})
}
return &openAIEmbeddingResponse
}
func responseTencent2OpenAI(response *ChatResponse) *openai.TextResponse {
fullTextResponse := openai.TextResponse{
Id: response.ReqID,
Object: "chat.completion",
Created: helper.GetTimestamp(),
Usage: model.Usage{
@ -148,7 +210,7 @@ func Handler(c *gin.Context, resp *http.Response) (*model.ErrorWithStatusCode, *
return openai.ErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
}
TencentResponse = responseP.Response
if TencentResponse.Error.Code != 0 {
if TencentResponse.Error.Code != "" {
return &model.ErrorWithStatusCode{
Error: model.Error{
Message: TencentResponse.Error.Message,
@ -195,7 +257,7 @@ func hmacSha256(s, key string) string {
return string(hashed.Sum(nil))
}
func GetSign(req ChatRequest, adaptor *Adaptor, secId, secKey string) string {
func GetSign(req any, adaptor *Adaptor, secId, secKey string) string {
// build canonical request string
host := "hunyuan.tencentcloudapi.com"
httpRequestMethod := "POST"

View File

@ -35,16 +35,16 @@ type ChatRequest struct {
// 1. Affects the diversity of the generated text; the larger the value, the more diverse the output.
// 2. Valid range is [0.0, 1.0]; if omitted, each model's recommended value is used.
// 3. Not recommended unless necessary; unreasonable values can degrade the results.
TopP *float64 `json:"TopP"`
TopP *float64 `json:"TopP,omitempty"`
// Notes:
// 1. Higher values make the output more random; lower values make it more focused and deterministic.
// 2. Valid range is [0.0, 2.0]; if omitted, each model's recommended value is used.
// 3. Not recommended unless necessary; unreasonable values can degrade the results.
Temperature *float64 `json:"Temperature"`
Temperature *float64 `json:"Temperature,omitempty"`
}
type Error struct {
Code int `json:"Code"`
Code string `json:"Code"`
Message string `json:"Message"`
}
@ -61,15 +61,41 @@ type ResponseChoices struct {
}
type ChatResponse struct {
Choices []ResponseChoices `json:"Choices,omitempty"` // results
Created int64 `json:"Created,omitempty"` // unix timestamp as a string
Id string `json:"Id,omitempty"` // conversation id
Usage Usage `json:"Usage,omitempty"` // token counts
Error Error `json:"Error,omitempty"` // error info; note: this field may be null, meaning no valid value was returned
Note string `json:"Note,omitempty"` // remark
ReqID string `json:"Req_id,omitempty"` // unique request id, returned with every request; include it when reporting issues with the call
Choices []ResponseChoices `json:"Choices,omitempty"` // results
Created int64 `json:"Created,omitempty"` // unix timestamp as a string
Id string `json:"Id,omitempty"` // conversation id
Usage Usage `json:"Usage,omitempty"` // token counts
Error Error `json:"Error,omitempty"` // error info; note: this field may be null, meaning no valid value was returned
Note string `json:"Note,omitempty"` // remark
ReqID string `json:"RequestId,omitempty"` // unique request id, returned with every request; include it when reporting issues with the call
}
type ChatResponseP struct {
Response ChatResponse `json:"Response,omitempty"`
}
type EmbeddingRequest struct {
InputList []string `json:"InputList"`
}
type EmbeddingData struct {
Embedding []float64 `json:"Embedding"`
Index int `json:"Index"`
Object string `json:"Object"`
}
type EmbeddingUsage struct {
PromptTokens int `json:"PromptTokens"`
TotalTokens int `json:"TotalTokens"`
}
type EmbeddingResponse struct {
Data []EmbeddingData `json:"Data"`
EmbeddingUsage EmbeddingUsage `json:"Usage,omitempty"`
RequestId string `json:"RequestId,omitempty"`
Error Error `json:"Error,omitempty"`
}
type EmbeddingResponseP struct {
Response EmbeddingResponse `json:"Response,omitempty"`
}

View File

@ -15,7 +15,13 @@ import (
)
var ModelList = []string{
"gemini-1.5-pro-001", "gemini-1.5-flash-001", "gemini-pro", "gemini-pro-vision", "gemini-1.5-pro-002", "gemini-1.5-flash-002",
"gemini-pro", "gemini-pro-vision",
"gemini-exp-1206",
"gemini-1.5-pro-001", "gemini-1.5-pro-002",
"gemini-1.5-flash-001", "gemini-1.5-flash-002",
"gemini-2.0-flash-exp", "gemini-2.0-flash-001",
"gemini-2.0-flash-lite-preview-02-05",
"gemini-2.0-flash-thinking-exp-01-21",
}
type Adaptor struct {

View File

@ -1,5 +1,14 @@
package xai
//https://console.x.ai/
var ModelList = []string{
"grok-2",
"grok-vision-beta",
"grok-2-vision-1212",
"grok-2-vision",
"grok-2-vision-latest",
"grok-2-1212",
"grok-2-latest",
"grok-beta",
}

View File

@ -1,12 +1,10 @@
package xunfei
var ModelList = []string{
"SparkDesk",
"SparkDesk-v1.1",
"SparkDesk-v2.1",
"SparkDesk-v3.1",
"SparkDesk-v3.1-128K",
"SparkDesk-v3.5",
"SparkDesk-v3.5-32K",
"SparkDesk-v4.0",
"Spark-Lite",
"Spark-Pro",
"Spark-Pro-128K",
"Spark-Max",
"Spark-Max-32K",
"Spark-4.0-Ultra",
}

View File

@ -0,0 +1,97 @@
package xunfei
import (
"fmt"
"strings"
)
// https://www.xfyun.cn/doc/spark/Web.html#_1-%E6%8E%A5%E5%8F%A3%E8%AF%B4%E6%98%8E
//Spark 4.0 Ultra: the request URL's domain parameter is 4.0Ultra
//
//wss://spark-api.xf-yun.com/v4.0/chat
//Spark Max-32K: the request URL's domain parameter is max-32k
//
//wss://spark-api.xf-yun.com/chat/max-32k
//Spark Max: the request URL's domain parameter is generalv3.5
//
//wss://spark-api.xf-yun.com/v3.5/chat
//Spark Pro-128K: the request URL's domain parameter is pro-128k
//
// wss://spark-api.xf-yun.com/chat/pro-128k
//Spark Pro: the request URL's domain parameter is generalv3
//
//wss://spark-api.xf-yun.com/v3.1/chat
//Spark Lite: the request URL's domain parameter is lite
//
//wss://spark-api.xf-yun.com/v1.1/chat
// Supported tiers: Lite, Pro, Pro-128K, Max, Max-32K and 4.0 Ultra
func parseAPIVersionByModelName(modelName string) string {
apiVersion := modelName2APIVersion(modelName)
if apiVersion != "" {
return apiVersion
}
index := strings.IndexAny(modelName, "-")
if index != -1 {
return modelName[index+1:]
}
return ""
}
func modelName2APIVersion(modelName string) string {
switch modelName {
case "Spark-Lite":
return "v1.1"
case "Spark-Pro":
return "v3.1"
case "Spark-Pro-128K":
return "v3.1-128K"
case "Spark-Max":
return "v3.5"
case "Spark-Max-32K":
return "v3.5-32K"
case "Spark-4.0-Ultra":
return "v4.0"
}
return ""
}
// https://www.xfyun.cn/doc/spark/Web.html#_1-%E6%8E%A5%E5%8F%A3%E8%AF%B4%E6%98%8E
func apiVersion2domain(apiVersion string) string {
switch apiVersion {
case "v1.1":
return "lite"
case "v2.1":
return "generalv2"
case "v3.1":
return "generalv3"
case "v3.1-128K":
return "pro-128k"
case "v3.5":
return "generalv3.5"
case "v3.5-32K":
return "max-32k"
case "v4.0":
return "4.0Ultra"
}
return "general" + apiVersion
}
func getXunfeiAuthUrl(apiVersion string, apiKey string, apiSecret string) (string, string) {
var authUrl string
domain := apiVersion2domain(apiVersion)
switch apiVersion {
case "v3.1-128K":
authUrl = buildXunfeiAuthUrl("wss://spark-api.xf-yun.com/chat/pro-128k", apiKey, apiSecret)
case "v3.5-32K":
authUrl = buildXunfeiAuthUrl("wss://spark-api.xf-yun.com/chat/max-32k", apiKey, apiSecret)
default:
authUrl = buildXunfeiAuthUrl(fmt.Sprintf("wss://spark-api.xf-yun.com/%s/chat", apiVersion), apiKey, apiSecret)
}
return domain, authUrl
}
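For example, the model name Spark-Max-32K parses to API version v3.5-32K, apiVersion2domain maps that to the domain max-32k, and getXunfeiAuthUrl signs the endpoint wss://spark-api.xf-yun.com/chat/max-32k.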

View File

@ -15,6 +15,7 @@ import (
"github.com/gin-gonic/gin"
"github.com/gorilla/websocket"
"github.com/songquanpeng/one-api/common"
"github.com/songquanpeng/one-api/common/helper"
"github.com/songquanpeng/one-api/common/logger"
@ -270,48 +271,3 @@ func xunfeiMakeRequest(textRequest model.GeneralOpenAIRequest, domain, authUrl,
return dataChan, stopChan, nil
}
func parseAPIVersionByModelName(modelName string) string {
index := strings.IndexAny(modelName, "-")
if index != -1 {
return modelName[index+1:]
}
return ""
}
// https://www.xfyun.cn/doc/spark/Web.html#_1-%E6%8E%A5%E5%8F%A3%E8%AF%B4%E6%98%8E
func apiVersion2domain(apiVersion string) string {
switch apiVersion {
case "v1.1":
return "lite"
case "v2.1":
return "generalv2"
case "v3.1":
return "generalv3"
case "v3.1-128K":
return "pro-128k"
case "v3.5":
return "generalv3.5"
case "v3.5-32K":
return "max-32k"
case "v4.0":
return "4.0Ultra"
}
return "general" + apiVersion
}
func getXunfeiAuthUrl(apiVersion string, apiKey string, apiSecret string) (string, string) {
var authUrl string
domain := apiVersion2domain(apiVersion)
switch apiVersion {
case "v3.1-128K":
authUrl = buildXunfeiAuthUrl(fmt.Sprintf("wss://spark-api.xf-yun.com/chat/pro-128k"), apiKey, apiSecret)
break
case "v3.5-32K":
authUrl = buildXunfeiAuthUrl(fmt.Sprintf("wss://spark-api.xf-yun.com/chat/max-32k"), apiKey, apiSecret)
break
default:
authUrl = buildXunfeiAuthUrl(fmt.Sprintf("wss://spark-api.xf-yun.com/%s/chat", apiVersion), apiKey, apiSecret)
}
return domain, authUrl
}

View File

@ -0,0 +1,12 @@
package xunfeiv2
// https://www.xfyun.cn/doc/spark/HTTP%E8%B0%83%E7%94%A8%E6%96%87%E6%A1%A3.html#_3-%E8%AF%B7%E6%B1%82%E8%AF%B4%E6%98%8E
var ModelList = []string{
"lite",
"generalv3",
"pro-128k",
"generalv3.5",
"max-32k",
"4.0Ultra",
}

View File

@ -1,7 +1,14 @@
package zhipu
// https://open.bigmodel.cn/pricing
var ModelList = []string{
"chatglm_turbo", "chatglm_pro", "chatglm_std", "chatglm_lite",
"glm-4", "glm-4v", "glm-3-turbo", "embedding-2",
"cogview-3",
"glm-zero-preview", "glm-4-plus", "glm-4-0520", "glm-4-airx",
"glm-4-air", "glm-4-long", "glm-4-flashx", "glm-4-flash",
"glm-4", "glm-3-turbo",
"glm-4v-plus", "glm-4v", "glm-4v-flash",
"cogview-3-plus", "cogview-3", "cogview-3-flash",
"cogviewx", "cogviewx-flash",
"charglm-4", "emohaa", "codegeex-4",
"embedding-2", "embedding-3",
}

View File

@ -19,6 +19,7 @@ const (
DeepL
VertexAI
Proxy
Replicate
Dummy // this one is only for count, do not add any channel after this
)

View File

@ -3,6 +3,7 @@ package billing
import (
"context"
"fmt"
"github.com/songquanpeng/one-api/common/logger"
"github.com/songquanpeng/one-api/model"
)
@ -31,8 +32,17 @@ func PostConsumeQuota(ctx context.Context, tokenId int, quotaDelta int64, totalQ
}
// totalQuota is total quota consumed
if totalQuota != 0 {
logContent := fmt.Sprintf("模型倍率 %.2f,分组倍率 %.2f", modelRatio, groupRatio)
model.RecordConsumeLog(ctx, userId, channelId, int(totalQuota), 0, modelName, tokenName, totalQuota, logContent)
logContent := fmt.Sprintf("倍率:%.2f × %.2f", modelRatio, groupRatio)
model.RecordConsumeLog(ctx, &model.Log{
UserId: userId,
ChannelId: channelId,
PromptTokens: int(totalQuota),
CompletionTokens: 0,
ModelName: modelName,
TokenName: tokenName,
Quota: int(totalQuota),
Content: logContent,
})
model.UpdateUserUsedQuotaAndRequestCount(userId, totalQuota)
model.UpdateChannelUsedQuota(channelId, totalQuota)
}

View File

@ -3,8 +3,10 @@ package ratio
import (
"encoding/json"
"github.com/songquanpeng/one-api/common/logger"
"sync"
)
var groupRatioLock sync.RWMutex
var GroupRatio = map[string]float64{
"default": 1,
"vip": 1,
@ -20,11 +22,15 @@ func GroupRatio2JSONString() string {
}
func UpdateGroupRatioByJSONString(jsonStr string) error {
groupRatioLock.Lock()
defer groupRatioLock.Unlock()
GroupRatio = make(map[string]float64)
return json.Unmarshal([]byte(jsonStr), &GroupRatio)
}
func GetGroupRatio(name string) float64 {
groupRatioLock.RLock()
defer groupRatioLock.RUnlock()
ratio, ok := GroupRatio[name]
if !ok {
logger.SysError("group ratio not found: " + name)

View File

@ -4,16 +4,20 @@ import (
"encoding/json"
"fmt"
"strings"
"sync"
"github.com/songquanpeng/one-api/common/logger"
)
const (
USD2RMB = 7
USD = 500 // $0.002 = 1 -> $1 = 500
RMB = USD / USD2RMB
USD2RMB = 7
USD = 500 // $0.002 = 1 -> $1 = 500
MILLI_USD = 1.0 / 1000 * USD
RMB = USD / USD2RMB
)
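A worked example of these units (editorial note, not part of the diff): USD = 500 means a price of $0.002 per 1K tokens corresponds to a ratio of 1, and MILLI_USD = USD / 1000 = 0.5, so a model priced at $0.14 per million input tokens gets a ratio of 0.14 * MILLI_USD = 0.07, which is the value used for deepseek-chat below.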
var modelRatioLock sync.RWMutex
// ModelRatio
// https://platform.openai.com/docs/models/model-endpoint-compatibility
// https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Blfmc9dlf
@ -37,6 +41,7 @@ var ModelRatio = map[string]float64{
"chatgpt-4o-latest": 2.5, // $0.005 / 1K tokens
"gpt-4o-2024-05-13": 2.5, // $0.005 / 1K tokens
"gpt-4o-2024-08-06": 1.25, // $0.0025 / 1K tokens
"gpt-4o-2024-11-20": 1.25, // $0.0025 / 1K tokens
"gpt-4o-mini": 0.075, // $0.00015 / 1K tokens
"gpt-4o-mini-2024-07-18": 0.075, // $0.00015 / 1K tokens
"gpt-4-vision-preview": 5, // $0.01 / 1K tokens
@ -48,8 +53,16 @@ var ModelRatio = map[string]float64{
"gpt-3.5-turbo-instruct": 0.75, // $0.0015 / 1K tokens
"gpt-3.5-turbo-1106": 0.5, // $0.001 / 1K tokens
"gpt-3.5-turbo-0125": 0.25, // $0.0005 / 1K tokens
"davinci-002": 1, // $0.002 / 1K tokens
"babbage-002": 0.2, // $0.0004 / 1K tokens
"o1": 7.5, // $15.00 / 1M input tokens
"o1-2024-12-17": 7.5,
"o1-preview": 7.5, // $15.00 / 1M input tokens
"o1-preview-2024-09-12": 7.5,
"o1-mini": 1.5, // $3.00 / 1M input tokens
"o1-mini-2024-09-12": 1.5,
"o3-mini": 1.5, // $3.00 / 1M input tokens
"o3-mini-2025-01-31": 1.5,
"davinci-002": 1, // $0.002 / 1K tokens
"babbage-002": 0.2, // $0.0004 / 1K tokens
"text-ada-001": 0.2,
"text-babbage-001": 0.25,
"text-curie-001": 1,
@ -74,15 +87,17 @@ var ModelRatio = map[string]float64{
"text-moderation-latest": 0.1,
"dall-e-2": 0.02 * USD, // $0.016 - $0.020 / image
"dall-e-3": 0.04 * USD, // $0.040 - $0.120 / image
// https://www.anthropic.com/api#pricing
// https://docs.anthropic.com/en/docs/about-claude/models
"claude-instant-1.2": 0.8 / 1000 * USD,
"claude-2.0": 8.0 / 1000 * USD,
"claude-2.1": 8.0 / 1000 * USD,
"claude-3-haiku-20240307": 0.25 / 1000 * USD,
"claude-3-5-haiku-20241022": 1.0 / 1000 * USD,
"claude-3-5-haiku-latest": 1.0 / 1000 * USD,
"claude-3-sonnet-20240229": 3.0 / 1000 * USD,
"claude-3-5-sonnet-20240620": 3.0 / 1000 * USD,
"claude-3-5-sonnet-20241022": 3.0 / 1000 * USD,
"claude-3-5-sonnet-latest": 3.0 / 1000 * USD,
"claude-3-opus-20240229": 15.0 / 1000 * USD,
// https://cloud.baidu.com/doc/WENXINWORKSHOP/s/hlrk4akp7
"ERNIE-4.0-8K": 0.120 * RMB,
@ -102,45 +117,162 @@ var ModelRatio = map[string]float64{
"bge-large-en": 0.002 * RMB,
"tao-8k": 0.002 * RMB,
// https://ai.google.dev/pricing
"gemini-pro": 1, // $0.00025 / 1k characters -> $0.001 / 1k tokens
"gemini-1.0-pro": 1,
"gemini-1.5-flash": 1,
"gemini-1.5-pro": 1,
"aqa": 1,
// https://cloud.google.com/vertex-ai/generative-ai/pricing
// "gemma-2-2b-it": 0,
// "gemma-2-9b-it": 0,
// "gemma-2-27b-it": 0,
"gemini-pro": 0.25 * MILLI_USD, // $0.00025 / 1k characters -> $0.001 / 1k tokens
"gemini-1.0-pro": 0.125 * MILLI_USD,
"gemini-1.5-pro": 1.25 * MILLI_USD,
"gemini-1.5-pro-001": 1.25 * MILLI_USD,
"gemini-1.5-pro-experimental": 1.25 * MILLI_USD,
"gemini-1.5-flash": 0.075 * MILLI_USD,
"gemini-1.5-flash-001": 0.075 * MILLI_USD,
"gemini-1.5-flash-8b": 0.0375 * MILLI_USD,
"gemini-2.0-flash-exp": 0.075 * MILLI_USD,
"gemini-2.0-flash": 0.15 * MILLI_USD,
"gemini-2.0-flash-001": 0.15 * MILLI_USD,
"gemini-2.0-flash-lite-preview-02-05": 0.075 * MILLI_USD,
"gemini-2.0-flash-thinking-exp-01-21": 0.075 * MILLI_USD,
"gemini-2.0-pro-exp-02-05": 1.25 * MILLI_USD,
"aqa": 1,
// https://open.bigmodel.cn/pricing
"glm-4": 0.1 * RMB,
"glm-4v": 0.1 * RMB,
"glm-3-turbo": 0.005 * RMB,
"embedding-2": 0.0005 * RMB,
"chatglm_turbo": 0.3572, // ¥0.005 / 1k tokens
"chatglm_pro": 0.7143, // ¥0.01 / 1k tokens
"chatglm_std": 0.3572, // ¥0.005 / 1k tokens
"chatglm_lite": 0.1429, // ¥0.002 / 1k tokens
"cogview-3": 0.25 * RMB,
"glm-zero-preview": 0.01 * RMB,
"glm-4-plus": 0.05 * RMB,
"glm-4-0520": 0.1 * RMB,
"glm-4-airx": 0.01 * RMB,
"glm-4-air": 0.0005 * RMB,
"glm-4-long": 0.001 * RMB,
"glm-4-flashx": 0.0001 * RMB,
"glm-4-flash": 0,
"glm-4": 0.1 * RMB, // deprecated model, available until 2025/06
"glm-3-turbo": 0.001 * RMB, // deprecated model, available until 2025/06
"glm-4v-plus": 0.004 * RMB,
"glm-4v": 0.05 * RMB,
"glm-4v-flash": 0,
"cogview-3-plus": 0.06 * RMB,
"cogview-3": 0.1 * RMB,
"cogview-3-flash": 0,
"cogviewx": 0.5 * RMB,
"cogviewx-flash": 0,
"charglm-4": 0.001 * RMB,
"emohaa": 0.015 * RMB,
"codegeex-4": 0.0001 * RMB,
"embedding-2": 0.0005 * RMB,
"embedding-3": 0.0005 * RMB,
// https://help.aliyun.com/zh/dashscope/developer-reference/tongyi-thousand-questions-metering-and-billing
"qwen-turbo": 0.5715, // ¥0.008 / 1k tokens
"qwen-plus": 1.4286, // ¥0.02 / 1k tokens
"qwen-max": 1.4286, // ¥0.02 / 1k tokens
"qwen-max-longcontext": 1.4286, // ¥0.02 / 1k tokens
"text-embedding-v1": 0.05, // ¥0.0007 / 1k tokens
"ali-stable-diffusion-xl": 8,
"ali-stable-diffusion-v1.5": 8,
"wanx-v1": 8,
"SparkDesk": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v1.1": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v2.1": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v3.1": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v3.1-128K": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v3.5": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v3.5-32K": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v4.0": 1.2858, // ¥0.018 / 1k tokens
"360GPT_S2_V9": 0.8572, // ¥0.012 / 1k tokens
"embedding-bert-512-v1": 0.0715, // ¥0.001 / 1k tokens
"embedding_s1_v1": 0.0715, // ¥0.001 / 1k tokens
"semantic_similarity_s1_v1": 0.0715, // ¥0.001 / 1k tokens
"hunyuan": 7.143, // ¥0.1 / 1k tokens // https://cloud.tencent.com/document/product/1729/97731#e0e6be58-60c8-469f-bdeb-6c264ce3b4d0
"ChatStd": 0.01 * RMB,
"ChatPro": 0.1 * RMB,
"qwen-turbo": 0.0003 * RMB,
"qwen-turbo-latest": 0.0003 * RMB,
"qwen-plus": 0.0008 * RMB,
"qwen-plus-latest": 0.0008 * RMB,
"qwen-max": 0.0024 * RMB,
"qwen-max-latest": 0.0024 * RMB,
"qwen-max-longcontext": 0.0005 * RMB,
"qwen-vl-max": 0.003 * RMB,
"qwen-vl-max-latest": 0.003 * RMB,
"qwen-vl-plus": 0.0015 * RMB,
"qwen-vl-plus-latest": 0.0015 * RMB,
"qwen-vl-ocr": 0.005 * RMB,
"qwen-vl-ocr-latest": 0.005 * RMB,
"qwen-audio-turbo": 1.4286,
"qwen-math-plus": 0.004 * RMB,
"qwen-math-plus-latest": 0.004 * RMB,
"qwen-math-turbo": 0.002 * RMB,
"qwen-math-turbo-latest": 0.002 * RMB,
"qwen-coder-plus": 0.0035 * RMB,
"qwen-coder-plus-latest": 0.0035 * RMB,
"qwen-coder-turbo": 0.002 * RMB,
"qwen-coder-turbo-latest": 0.002 * RMB,
"qwen-mt-plus": 0.015 * RMB,
"qwen-mt-turbo": 0.001 * RMB,
"qwq-32b-preview": 0.002 * RMB,
"qwen2.5-72b-instruct": 0.004 * RMB,
"qwen2.5-32b-instruct": 0.03 * RMB,
"qwen2.5-14b-instruct": 0.001 * RMB,
"qwen2.5-7b-instruct": 0.0005 * RMB,
"qwen2.5-3b-instruct": 0.006 * RMB,
"qwen2.5-1.5b-instruct": 0.0003 * RMB,
"qwen2.5-0.5b-instruct": 0.0003 * RMB,
"qwen2-72b-instruct": 0.004 * RMB,
"qwen2-57b-a14b-instruct": 0.0035 * RMB,
"qwen2-7b-instruct": 0.001 * RMB,
"qwen2-1.5b-instruct": 0.001 * RMB,
"qwen2-0.5b-instruct": 0.001 * RMB,
"qwen1.5-110b-chat": 0.007 * RMB,
"qwen1.5-72b-chat": 0.005 * RMB,
"qwen1.5-32b-chat": 0.0035 * RMB,
"qwen1.5-14b-chat": 0.002 * RMB,
"qwen1.5-7b-chat": 0.001 * RMB,
"qwen1.5-1.8b-chat": 0.001 * RMB,
"qwen1.5-0.5b-chat": 0.001 * RMB,
"qwen-72b-chat": 0.02 * RMB,
"qwen-14b-chat": 0.008 * RMB,
"qwen-7b-chat": 0.006 * RMB,
"qwen-1.8b-chat": 0.006 * RMB,
"qwen-1.8b-longcontext-chat": 0.006 * RMB,
"qvq-72b-preview": 0.012 * RMB,
"qwen2.5-vl-72b-instruct": 0.016 * RMB,
"qwen2.5-vl-7b-instruct": 0.002 * RMB,
"qwen2.5-vl-3b-instruct": 0.0012 * RMB,
"qwen2-vl-7b-instruct": 0.016 * RMB,
"qwen2-vl-2b-instruct": 0.002 * RMB,
"qwen-vl-v1": 0.002 * RMB,
"qwen-vl-chat-v1": 0.002 * RMB,
"qwen2-audio-instruct": 0.002 * RMB,
"qwen-audio-chat": 0.002 * RMB,
"qwen2.5-math-72b-instruct": 0.004 * RMB,
"qwen2.5-math-7b-instruct": 0.001 * RMB,
"qwen2.5-math-1.5b-instruct": 0.001 * RMB,
"qwen2-math-72b-instruct": 0.004 * RMB,
"qwen2-math-7b-instruct": 0.001 * RMB,
"qwen2-math-1.5b-instruct": 0.001 * RMB,
"qwen2.5-coder-32b-instruct": 0.002 * RMB,
"qwen2.5-coder-14b-instruct": 0.002 * RMB,
"qwen2.5-coder-7b-instruct": 0.001 * RMB,
"qwen2.5-coder-3b-instruct": 0.001 * RMB,
"qwen2.5-coder-1.5b-instruct": 0.001 * RMB,
"qwen2.5-coder-0.5b-instruct": 0.001 * RMB,
"text-embedding-v1": 0.0007 * RMB, // ¥0.0007 / 1k tokens
"text-embedding-v3": 0.0007 * RMB,
"text-embedding-v2": 0.0007 * RMB,
"text-embedding-async-v2": 0.0007 * RMB,
"text-embedding-async-v1": 0.0007 * RMB,
"ali-stable-diffusion-xl": 8.00,
"ali-stable-diffusion-v1.5": 8.00,
"wanx-v1": 8.00,
"deepseek-r1": 0.002 * RMB,
"deepseek-v3": 0.001 * RMB,
"deepseek-r1-distill-qwen-1.5b": 0.001 * RMB,
"deepseek-r1-distill-qwen-7b": 0.0005 * RMB,
"deepseek-r1-distill-qwen-14b": 0.001 * RMB,
"deepseek-r1-distill-qwen-32b": 0.002 * RMB,
"deepseek-r1-distill-llama-8b": 0.0005 * RMB,
"deepseek-r1-distill-llama-70b": 0.004 * RMB,
"SparkDesk": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v1.1": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v2.1": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v3.1": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v3.1-128K": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v3.5": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v3.5-32K": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v4.0": 1.2858, // ¥0.018 / 1k tokens
"360GPT_S2_V9": 0.8572, // ¥0.012 / 1k tokens
"embedding-bert-512-v1": 0.0715, // ¥0.001 / 1k tokens
"embedding_s1_v1": 0.0715, // ¥0.001 / 1k tokens
"semantic_similarity_s1_v1": 0.0715, // ¥0.001 / 1k tokens
// https://cloud.tencent.com/document/product/1729/97731#e0e6be58-60c8-469f-bdeb-6c264ce3b4d0
"hunyuan-turbo": 0.015 * RMB,
"hunyuan-large": 0.004 * RMB,
"hunyuan-large-longcontext": 0.006 * RMB,
"hunyuan-standard": 0.0008 * RMB,
"hunyuan-standard-256K": 0.0005 * RMB,
"hunyuan-translation-lite": 0.005 * RMB,
"hunyuan-role": 0.004 * RMB,
"hunyuan-functioncall": 0.004 * RMB,
"hunyuan-code": 0.004 * RMB,
"hunyuan-turbo-vision": 0.08 * RMB,
"hunyuan-vision": 0.018 * RMB,
"hunyuan-embedding": 0.0007 * RMB,
// https://platform.moonshot.cn/pricing
"moonshot-v1-8k": 0.012 * RMB,
"moonshot-v1-32k": 0.024 * RMB,
@ -203,20 +335,301 @@ var ModelRatio = map[string]float64{
"command-r": 0.5 / 1000 * USD,
"command-r-plus": 3.0 / 1000 * USD,
// https://platform.deepseek.com/api-docs/pricing/
"deepseek-chat": 1.0 / 1000 * RMB,
"deepseek-coder": 1.0 / 1000 * RMB,
"deepseek-chat": 0.14 * MILLI_USD,
"deepseek-reasoner": 0.55 * MILLI_USD,
// https://www.deepl.com/pro?cta=header-prices
"deepl-zh": 25.0 / 1000 * USD,
"deepl-en": 25.0 / 1000 * USD,
"deepl-ja": 25.0 / 1000 * USD,
// https://console.x.ai/
"grok-beta": 5.0 / 1000 * USD,
// replicate charges based on the number of generated images
// https://replicate.com/pricing
"black-forest-labs/flux-1.1-pro": 0.04 * USD,
"black-forest-labs/flux-1.1-pro-ultra": 0.06 * USD,
"black-forest-labs/flux-canny-dev": 0.025 * USD,
"black-forest-labs/flux-canny-pro": 0.05 * USD,
"black-forest-labs/flux-depth-dev": 0.025 * USD,
"black-forest-labs/flux-depth-pro": 0.05 * USD,
"black-forest-labs/flux-dev": 0.025 * USD,
"black-forest-labs/flux-dev-lora": 0.032 * USD,
"black-forest-labs/flux-fill-dev": 0.04 * USD,
"black-forest-labs/flux-fill-pro": 0.05 * USD,
"black-forest-labs/flux-pro": 0.055 * USD,
"black-forest-labs/flux-redux-dev": 0.025 * USD,
"black-forest-labs/flux-redux-schnell": 0.003 * USD,
"black-forest-labs/flux-schnell": 0.003 * USD,
"black-forest-labs/flux-schnell-lora": 0.02 * USD,
"ideogram-ai/ideogram-v2": 0.08 * USD,
"ideogram-ai/ideogram-v2-turbo": 0.05 * USD,
"recraft-ai/recraft-v3": 0.04 * USD,
"recraft-ai/recraft-v3-svg": 0.08 * USD,
"stability-ai/stable-diffusion-3": 0.035 * USD,
"stability-ai/stable-diffusion-3.5-large": 0.065 * USD,
"stability-ai/stable-diffusion-3.5-large-turbo": 0.04 * USD,
"stability-ai/stable-diffusion-3.5-medium": 0.035 * USD,
// replicate chat models
"ibm-granite/granite-20b-code-instruct-8k": 0.100 * USD,
"ibm-granite/granite-3.0-2b-instruct": 0.030 * USD,
"ibm-granite/granite-3.0-8b-instruct": 0.050 * USD,
"ibm-granite/granite-8b-code-instruct-128k": 0.050 * USD,
"meta/llama-2-13b": 0.100 * USD,
"meta/llama-2-13b-chat": 0.100 * USD,
"meta/llama-2-70b": 0.650 * USD,
"meta/llama-2-70b-chat": 0.650 * USD,
"meta/llama-2-7b": 0.050 * USD,
"meta/llama-2-7b-chat": 0.050 * USD,
"meta/meta-llama-3.1-405b-instruct": 9.500 * USD,
"meta/meta-llama-3-70b": 0.650 * USD,
"meta/meta-llama-3-70b-instruct": 0.650 * USD,
"meta/meta-llama-3-8b": 0.050 * USD,
"meta/meta-llama-3-8b-instruct": 0.050 * USD,
"mistralai/mistral-7b-instruct-v0.2": 0.050 * USD,
"mistralai/mistral-7b-v0.1": 0.050 * USD,
"mistralai/mixtral-8x7b-instruct-v0.1": 0.300 * USD,
//https://openrouter.ai/models
"01-ai/yi-large": 1.5,
"aetherwiing/mn-starcannon-12b": 0.6,
"ai21/jamba-1-5-large": 4.0,
"ai21/jamba-1-5-mini": 0.2,
"ai21/jamba-instruct": 0.35,
"aion-labs/aion-1.0": 6.0,
"aion-labs/aion-1.0-mini": 1.2,
"aion-labs/aion-rp-llama-3.1-8b": 0.1,
"allenai/llama-3.1-tulu-3-405b": 5.0,
"alpindale/goliath-120b": 4.6875,
"alpindale/magnum-72b": 1.125,
"amazon/nova-lite-v1": 0.12,
"amazon/nova-micro-v1": 0.07,
"amazon/nova-pro-v1": 1.6,
"anthracite-org/magnum-v2-72b": 1.5,
"anthracite-org/magnum-v4-72b": 1.125,
"anthropic/claude-2": 12.0,
"anthropic/claude-2.0": 12.0,
"anthropic/claude-2.0:beta": 12.0,
"anthropic/claude-2.1": 12.0,
"anthropic/claude-2.1:beta": 12.0,
"anthropic/claude-2:beta": 12.0,
"anthropic/claude-3-haiku": 0.625,
"anthropic/claude-3-haiku:beta": 0.625,
"anthropic/claude-3-opus": 37.5,
"anthropic/claude-3-opus:beta": 37.5,
"anthropic/claude-3-sonnet": 7.5,
"anthropic/claude-3-sonnet:beta": 7.5,
"anthropic/claude-3.5-haiku": 2.0,
"anthropic/claude-3.5-haiku-20241022": 2.0,
"anthropic/claude-3.5-haiku-20241022:beta": 2.0,
"anthropic/claude-3.5-haiku:beta": 2.0,
"anthropic/claude-3.5-sonnet": 7.5,
"anthropic/claude-3.5-sonnet-20240620": 7.5,
"anthropic/claude-3.5-sonnet-20240620:beta": 7.5,
"anthropic/claude-3.5-sonnet:beta": 7.5,
"cognitivecomputations/dolphin-mixtral-8x22b": 0.45,
"cognitivecomputations/dolphin-mixtral-8x7b": 0.25,
"cohere/command": 0.95,
"cohere/command-r": 0.7125,
"cohere/command-r-03-2024": 0.7125,
"cohere/command-r-08-2024": 0.285,
"cohere/command-r-plus": 7.125,
"cohere/command-r-plus-04-2024": 7.125,
"cohere/command-r-plus-08-2024": 4.75,
"cohere/command-r7b-12-2024": 0.075,
"databricks/dbrx-instruct": 0.6,
"deepseek/deepseek-chat": 0.445,
"deepseek/deepseek-chat-v2.5": 1.0,
"deepseek/deepseek-chat:free": 0.0,
"deepseek/deepseek-r1": 1.2,
"deepseek/deepseek-r1-distill-llama-70b": 0.345,
"deepseek/deepseek-r1-distill-llama-70b:free": 0.0,
"deepseek/deepseek-r1-distill-llama-8b": 0.02,
"deepseek/deepseek-r1-distill-qwen-1.5b": 0.09,
"deepseek/deepseek-r1-distill-qwen-14b": 0.075,
"deepseek/deepseek-r1-distill-qwen-32b": 0.09,
"deepseek/deepseek-r1:free": 0.0,
"eva-unit-01/eva-llama-3.33-70b": 3.0,
"eva-unit-01/eva-qwen-2.5-32b": 1.7,
"eva-unit-01/eva-qwen-2.5-72b": 3.0,
"google/gemini-2.0-flash-001": 0.2,
"google/gemini-2.0-flash-exp:free": 0.0,
"google/gemini-2.0-flash-lite-preview-02-05:free": 0.0,
"google/gemini-2.0-flash-thinking-exp-1219:free": 0.0,
"google/gemini-2.0-flash-thinking-exp:free": 0.0,
"google/gemini-2.0-pro-exp-02-05:free": 0.0,
"google/gemini-exp-1206:free": 0.0,
"google/gemini-flash-1.5": 0.15,
"google/gemini-flash-1.5-8b": 0.075,
"google/gemini-flash-1.5-8b-exp": 0.0,
"google/gemini-pro": 0.75,
"google/gemini-pro-1.5": 2.5,
"google/gemini-pro-vision": 0.75,
"google/gemma-2-27b-it": 0.135,
"google/gemma-2-9b-it": 0.03,
"google/gemma-2-9b-it:free": 0.0,
"google/gemma-7b-it": 0.075,
"google/learnlm-1.5-pro-experimental:free": 0.0,
"google/palm-2-chat-bison": 1.0,
"google/palm-2-chat-bison-32k": 1.0,
"google/palm-2-codechat-bison": 1.0,
"google/palm-2-codechat-bison-32k": 1.0,
"gryphe/mythomax-l2-13b": 0.0325,
"gryphe/mythomax-l2-13b:free": 0.0,
"huggingfaceh4/zephyr-7b-beta:free": 0.0,
"infermatic/mn-inferor-12b": 0.6,
"inflection/inflection-3-pi": 5.0,
"inflection/inflection-3-productivity": 5.0,
"jondurbin/airoboros-l2-70b": 0.25,
"liquid/lfm-3b": 0.01,
"liquid/lfm-40b": 0.075,
"liquid/lfm-7b": 0.005,
"mancer/weaver": 1.125,
"meta-llama/llama-2-13b-chat": 0.11,
"meta-llama/llama-2-70b-chat": 0.45,
"meta-llama/llama-3-70b-instruct": 0.2,
"meta-llama/llama-3-8b-instruct": 0.03,
"meta-llama/llama-3-8b-instruct:free": 0.0,
"meta-llama/llama-3.1-405b": 1.0,
"meta-llama/llama-3.1-405b-instruct": 0.4,
"meta-llama/llama-3.1-70b-instruct": 0.15,
"meta-llama/llama-3.1-8b-instruct": 0.025,
"meta-llama/llama-3.2-11b-vision-instruct": 0.0275,
"meta-llama/llama-3.2-11b-vision-instruct:free": 0.0,
"meta-llama/llama-3.2-1b-instruct": 0.005,
"meta-llama/llama-3.2-3b-instruct": 0.0125,
"meta-llama/llama-3.2-90b-vision-instruct": 0.8,
"meta-llama/llama-3.3-70b-instruct": 0.15,
"meta-llama/llama-3.3-70b-instruct:free": 0.0,
"meta-llama/llama-guard-2-8b": 0.1,
"microsoft/phi-3-medium-128k-instruct": 0.5,
"microsoft/phi-3-medium-128k-instruct:free": 0.0,
"microsoft/phi-3-mini-128k-instruct": 0.05,
"microsoft/phi-3-mini-128k-instruct:free": 0.0,
"microsoft/phi-3.5-mini-128k-instruct": 0.05,
"microsoft/phi-4": 0.07,
"microsoft/wizardlm-2-7b": 0.035,
"microsoft/wizardlm-2-8x22b": 0.25,
"minimax/minimax-01": 0.55,
"mistralai/codestral-2501": 0.45,
"mistralai/codestral-mamba": 0.125,
"mistralai/ministral-3b": 0.02,
"mistralai/ministral-8b": 0.05,
"mistralai/mistral-7b-instruct": 0.0275,
"mistralai/mistral-7b-instruct-v0.1": 0.1,
"mistralai/mistral-7b-instruct-v0.3": 0.0275,
"mistralai/mistral-7b-instruct:free": 0.0,
"mistralai/mistral-large": 3.0,
"mistralai/mistral-large-2407": 3.0,
"mistralai/mistral-large-2411": 3.0,
"mistralai/mistral-medium": 4.05,
"mistralai/mistral-nemo": 0.04,
"mistralai/mistral-nemo:free": 0.0,
"mistralai/mistral-small": 0.3,
"mistralai/mistral-small-24b-instruct-2501": 0.07,
"mistralai/mistral-small-24b-instruct-2501:free": 0.0,
"mistralai/mistral-tiny": 0.125,
"mistralai/mixtral-8x22b-instruct": 0.45,
"mistralai/mixtral-8x7b": 0.3,
"mistralai/mixtral-8x7b-instruct": 0.12,
"mistralai/pixtral-12b": 0.05,
"mistralai/pixtral-large-2411": 3.0,
"neversleep/llama-3-lumimaid-70b": 2.25,
"neversleep/llama-3-lumimaid-8b": 0.5625,
"neversleep/llama-3-lumimaid-8b:extended": 0.5625,
"neversleep/llama-3.1-lumimaid-70b": 2.25,
"neversleep/llama-3.1-lumimaid-8b": 0.5625,
"neversleep/noromaid-20b": 1.125,
"nothingiisreal/mn-celeste-12b": 0.6,
"nousresearch/hermes-2-pro-llama-3-8b": 0.02,
"nousresearch/hermes-3-llama-3.1-405b": 0.4,
"nousresearch/hermes-3-llama-3.1-70b": 0.15,
"nousresearch/nous-hermes-2-mixtral-8x7b-dpo": 0.3,
"nousresearch/nous-hermes-llama2-13b": 0.085,
"nvidia/llama-3.1-nemotron-70b-instruct": 0.15,
"nvidia/llama-3.1-nemotron-70b-instruct:free": 0.0,
"openai/chatgpt-4o-latest": 7.5,
"openai/gpt-3.5-turbo": 0.75,
"openai/gpt-3.5-turbo-0125": 0.75,
"openai/gpt-3.5-turbo-0613": 1.0,
"openai/gpt-3.5-turbo-1106": 1.0,
"openai/gpt-3.5-turbo-16k": 2.0,
"openai/gpt-3.5-turbo-instruct": 1.0,
"openai/gpt-4": 30.0,
"openai/gpt-4-0314": 30.0,
"openai/gpt-4-1106-preview": 15.0,
"openai/gpt-4-32k": 60.0,
"openai/gpt-4-32k-0314": 60.0,
"openai/gpt-4-turbo": 15.0,
"openai/gpt-4-turbo-preview": 15.0,
"openai/gpt-4o": 5.0,
"openai/gpt-4o-2024-05-13": 7.5,
"openai/gpt-4o-2024-08-06": 5.0,
"openai/gpt-4o-2024-11-20": 5.0,
"openai/gpt-4o-mini": 0.3,
"openai/gpt-4o-mini-2024-07-18": 0.3,
"openai/gpt-4o:extended": 9.0,
"openai/o1": 30.0,
"openai/o1-mini": 2.2,
"openai/o1-mini-2024-09-12": 2.2,
"openai/o1-preview": 30.0,
"openai/o1-preview-2024-09-12": 30.0,
"openai/o3-mini": 2.2,
"openai/o3-mini-high": 2.2,
"openchat/openchat-7b": 0.0275,
"openchat/openchat-7b:free": 0.0,
"openrouter/auto": -500000.0,
"perplexity/llama-3.1-sonar-huge-128k-online": 2.5,
"perplexity/llama-3.1-sonar-large-128k-chat": 0.5,
"perplexity/llama-3.1-sonar-large-128k-online": 0.5,
"perplexity/llama-3.1-sonar-small-128k-chat": 0.1,
"perplexity/llama-3.1-sonar-small-128k-online": 0.1,
"perplexity/sonar": 0.5,
"perplexity/sonar-reasoning": 2.5,
"pygmalionai/mythalion-13b": 0.6,
"qwen/qvq-72b-preview": 0.25,
"qwen/qwen-2-72b-instruct": 0.45,
"qwen/qwen-2-7b-instruct": 0.027,
"qwen/qwen-2-7b-instruct:free": 0.0,
"qwen/qwen-2-vl-72b-instruct": 0.2,
"qwen/qwen-2-vl-7b-instruct": 0.05,
"qwen/qwen-2.5-72b-instruct": 0.2,
"qwen/qwen-2.5-7b-instruct": 0.025,
"qwen/qwen-2.5-coder-32b-instruct": 0.08,
"qwen/qwen-max": 3.2,
"qwen/qwen-plus": 0.6,
"qwen/qwen-turbo": 0.1,
"qwen/qwen-vl-plus:free": 0.0,
"qwen/qwen2.5-vl-72b-instruct:free": 0.0,
"qwen/qwq-32b-preview": 0.09,
"raifle/sorcererlm-8x22b": 2.25,
"sao10k/fimbulvetr-11b-v2": 0.6,
"sao10k/l3-euryale-70b": 0.4,
"sao10k/l3-lunaris-8b": 0.03,
"sao10k/l3.1-70b-hanami-x1": 1.5,
"sao10k/l3.1-euryale-70b": 0.4,
"sao10k/l3.3-euryale-70b": 0.4,
"sophosympatheia/midnight-rose-70b": 0.4,
"sophosympatheia/rogue-rose-103b-v0.2:free": 0.0,
"teknium/openhermes-2.5-mistral-7b": 0.085,
"thedrummer/rocinante-12b": 0.25,
"thedrummer/unslopnemo-12b": 0.25,
"undi95/remm-slerp-l2-13b": 0.6,
"undi95/toppy-m-7b": 0.035,
"undi95/toppy-m-7b:free": 0.0,
"x-ai/grok-2-1212": 5.0,
"x-ai/grok-2-vision-1212": 5.0,
"x-ai/grok-beta": 7.5,
"x-ai/grok-vision-beta": 7.5,
"xwin-lm/xwin-lm-70b": 1.875,
}
var CompletionRatio = map[string]float64{
// aws llama3
"llama3-8b-8192(33)": 0.0006 / 0.0003,
"llama3-70b-8192(33)": 0.0035 / 0.00265,
// whisper
"whisper-1": 0, // only count input tokens
// deepseek
"deepseek-chat": 0.28 / 0.14,
"deepseek-reasoner": 2.19 / 0.55,
}
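As the literal expressions show, these completion ratios are the output-token price divided by the input-token price: deepseek-chat is 0.28 / 0.14 = 2, and deepseek-reasoner is 2.19 / 0.55 ≈ 3.98.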
var (
@ -264,11 +677,15 @@ func ModelRatio2JSONString() string {
}
func UpdateModelRatioByJSONString(jsonStr string) error {
modelRatioLock.Lock()
defer modelRatioLock.Unlock()
ModelRatio = make(map[string]float64)
return json.Unmarshal([]byte(jsonStr), &ModelRatio)
}
func GetModelRatio(name string, channelType int) float64 {
modelRatioLock.RLock()
defer modelRatioLock.RUnlock()
if strings.HasPrefix(name, "qwen-") && strings.HasSuffix(name, "-internet") {
name = strings.TrimSuffix(name, "-internet")
}
@ -334,16 +751,22 @@ func GetCompletionRatio(name string, channelType int) float64 {
return 4.0 / 3.0
}
if strings.HasPrefix(name, "gpt-4") {
if strings.HasPrefix(name, "gpt-4o-mini") || name == "gpt-4o-2024-08-06" {
if strings.HasPrefix(name, "gpt-4o") {
if name == "gpt-4o-2024-05-13" {
return 3
}
return 4
}
if strings.HasPrefix(name, "gpt-4-turbo") ||
strings.HasPrefix(name, "gpt-4o") ||
strings.HasSuffix(name, "preview") {
return 3
}
return 2
}
// including o1, o1-preview, o1-mini
if strings.HasPrefix(name, "o1") {
return 4
}
if name == "chatgpt-4o-latest" {
return 3
}
@ -362,6 +785,7 @@ func GetCompletionRatio(name string, channelType int) float64 {
if strings.HasPrefix(name, "deepseek-") {
return 2
}
switch name {
case "llama2-70b-4096":
return 0.8 / 0.64
@ -377,6 +801,35 @@ func GetCompletionRatio(name string, channelType int) float64 {
return 5
case "grok-beta":
return 3
// Replicate Models
// https://replicate.com/pricing
case "ibm-granite/granite-20b-code-instruct-8k":
return 5
case "ibm-granite/granite-3.0-2b-instruct":
return 8.333333333333334
case "ibm-granite/granite-3.0-8b-instruct",
"ibm-granite/granite-8b-code-instruct-128k":
return 5
case "meta/llama-2-13b",
"meta/llama-2-13b-chat",
"meta/llama-2-7b",
"meta/llama-2-7b-chat",
"meta/meta-llama-3-8b",
"meta/meta-llama-3-8b-instruct":
return 5
case "meta/llama-2-70b",
"meta/llama-2-70b-chat",
"meta/meta-llama-3-70b",
"meta/meta-llama-3-70b-instruct":
return 2.750 / 0.650 // ≈4.230769
case "meta/meta-llama-3.1-405b-instruct":
return 1
case "mistralai/mistral-7b-instruct-v0.2",
"mistralai/mistral-7b-v0.1":
return 5
case "mistralai/mixtral-8x7b-instruct-v0.1":
return 1.000 / 0.300 // ≈3.333333
}
return 1
}

View File

@ -47,5 +47,11 @@ const (
Proxy
SiliconFlow
XAI
Replicate
BaiduV2
XunfeiV2
AliBailian
OpenAICompatible
GeminiOpenAICompatible
Dummy
)

View File

@ -37,6 +37,8 @@ func ToAPIType(channelType int) int {
apiType = apitype.DeepL
case VertextAI:
apiType = apitype.VertexAI
case Replicate:
apiType = apitype.Replicate
case Proxy:
apiType = apitype.Proxy
}

View File

@ -47,6 +47,13 @@ var ChannelBaseURLs = []string{
"", // 43
"https://api.siliconflow.cn", // 44
"https://api.x.ai", // 45
"https://api.replicate.com/v1/models/", // 46
"https://qianfan.baidubce.com", // 47
"https://spark-api-open.xf-yun.com", // 48
"https://dashscope.aliyuncs.com", // 49
"", // 50
"https://generativelanguage.googleapis.com/v1beta/openai/", // 51
}
func init() {
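The new base-URL slots line up with the channel type constants added above: 46 Replicate, 47 BaiduV2, 48 XunfeiV2, 49 AliBailian, 50 OpenAICompatible (no fixed base URL), and 51 GeminiOpenAICompatible.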

View File

@ -110,16 +110,9 @@ func RelayAudioHelper(c *gin.Context, relayMode int) *relaymodel.ErrorWithStatus
}()
// map model name
modelMapping := c.GetString(ctxkey.ModelMapping)
if modelMapping != "" {
modelMap := make(map[string]string)
err := json.Unmarshal([]byte(modelMapping), &modelMap)
if err != nil {
return openai.ErrorWrapper(err, "unmarshal_model_mapping_failed", http.StatusInternalServerError)
}
if modelMap[audioModel] != "" {
audioModel = modelMap[audioModel]
}
modelMapping := c.GetStringMapString(ctxkey.ModelMapping)
if modelMapping != nil && modelMapping[audioModel] != "" {
audioModel = modelMapping[audioModel]
}
baseURL := channeltype.ChannelBaseURLs[channelType]

Some files were not shown because too many files have changed in this diff.