binary-husky
163e59c0f3
minor bug fix
2025-02-09 19:33:02 +08:00
Steven Moder
991a903fa9
fix: f-string expression part cannot include a backslash (#2139)
2025-02-08 20:50:54 +08:00
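The fix above works around a Python limitation: prior to Python 3.12, the expression part of an f-string may not contain a backslash. A minimal, hypothetical sketch of the failing pattern and the usual workaround (not the repository's actual code):

```python
lines = ["first", "second"]

# On Python < 3.12 this raises SyntaxError, because the f-string expression
# contains a backslash:
#     f"joined: {'\n'.join(lines)}"

# The common workaround is to hoist the backslash into a plain variable first.
newline = "\n"
print(f"joined: {newline.join(lines)}")
```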
Steven Moder
cf7c81170c
fix: consider the number of return parameters and the return type (#2129)
2025-02-07 21:33:06 +08:00
barry
6dda2061dd
Update bridge_openrouter.py (#2132)
...
fix openrouter api 400 post bug
Co-authored-by: lan <56376794+lostatnight@users.noreply.github.com>
2025-02-07 21:28:05 +08:00
Memento mori.
caaebe4296
add support for Deepseek R1 model and display CoT (#2118)
...
* feat: add support for R1 model and display CoT
* fix unpacking
* feat: customized font & font size
* auto-hide tooltip when scrolling down
* tooltip glass transparent css
* fix: Enhance API key validation in is_any_api_key function (#2113)
* support qwen2.5-max!
* minor adjustments
---------
Co-authored-by: binary-husky <qingxu.fu@outlook.com>
Co-authored-by: Steven Moder <java20131114@gmail.com>
2025-02-04 16:02:02 +08:00
binary-husky
0458590a77
support qwen2.5-max!
2025-01-29 23:29:38 +08:00
Southlandi
fd93622840
Fix Gemini conversation error (when the number of stop words is 0) (#2092)
2024-12-28 23:22:10 +08:00
whyXVI
09a82a572d
Fix RuntimeError in predict_no_ui_long_connection() (#2095)
...
Bug fix: Fix RuntimeError in predict_no_ui_long_connection()
In the original code, calling predict_no_ui_long_connection() would trigger a RuntimeError("OpenAI拒绝了请求:" + error_msg) even when the server responded normally. The issue occurred due to incorrect handling of SSE protocol comment lines (lines starting with ":").
Modified the parsing logic in both `predict` and `predict_no_ui_long_connection` to handle these lines correctly, making the logic more intuitive and robust.
2024-12-28 23:21:14 +08:00
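The fix described above hinges on the SSE convention that lines beginning with ":" are comments (often keep-alive pings) rather than data or errors. A minimal sketch of that parsing rule, with a hypothetical helper name and a fake stream; it is not the repository's actual implementation:

```python
def iter_sse_data(raw_lines):
    """Yield the payload of 'data:' lines, skipping blank lines and SSE comments."""
    for raw in raw_lines:
        line = raw.decode("utf-8", errors="replace") if isinstance(raw, bytes) else raw
        line = line.strip()
        if not line or line.startswith(":"):  # SSE comment / keep-alive -> not an error
            continue
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            if payload == "[DONE]":
                break
            yield payload

# Example with a fake stream: the comment line is ignored instead of
# being treated as a rejected request.
fake_stream = [b": keep-alive", b'data: {"choices": []}', b"data: [DONE]"]
for chunk in iter_sse_data(fake_stream):
    print(chunk)  # -> {"choices": []}
```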
binary-husky
ac64a77c2d
allow disable openai proxy in WHEN_TO_USE_PROXY
2024-12-28 07:14:54 +08:00
binary-husky
dae8a0affc
compat bug fix
2024-12-25 01:21:58 +08:00
binary-husky
97a81e9388
fix temp issue of o1
2024-12-25 00:54:03 +08:00
binary-husky
1dd1d0ed6c
fix cookie overflow bug
2024-12-25 00:33:20 +08:00
YIQI JIANG
f60a12f8b4
Add o1 and o1-2024-12-17 model support (#2090)
...
* Add o1 and o1-2024-12-17 model support
* patch api key selection
---------
Co-authored-by: 蒋翌琪 <jiangyiqi99@jiangyiqideMacBook-Pro.local>
Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-12-19 22:32:57 +08:00
binary-husky
72b2ce9b62
ollama patch
2024-12-18 23:05:55 +08:00
YE Ke 叶柯
294df6c2d5
Add ChatGLM4 local deployment support and refactor ChatGLM bridge's path configuration (#2062)
...
* ✨ feat(request_llms and config.py): ChatGLM4 Deployment
Add support for local deployment of ChatGLM4 model
* 🦄 refactor(bridge_chatglm3.py): ChatGLM3 model path
Added ChatGLM3 path customization (in config.py).
Removed unused quantization model options that had been commented out
---------
Co-authored-by: MarkDeia <17290550+MarkDeia@users.noreply.github.com>
2024-12-07 23:43:51 +08:00
Zhenhong Du
239894544e
Add support for grok-beta model from x.ai (#2060)
...
* Update config.py
add support for `grok-beta` model
* Update bridge_all.py
add support for `grok-beta` model
2024-12-07 23:41:53 +08:00
binary-husky
e62decac21
change some open fn encoding to utf-8
2024-11-19 15:53:50 +00:00
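For context on the commit above: open() without an explicit encoding falls back to the platform's locale encoding, which is not UTF-8 on many Windows setups. A small, hypothetical illustration of the explicit-encoding pattern (file name invented for the example):

```python
# Writing and reading with an explicit encoding is deterministic across platforms;
# relying on the locale default can raise UnicodeDecodeError for non-ASCII content.
with open("notes.txt", "w", encoding="utf-8") as f:
    f.write("中文 content round-trips safely\n")

with open("notes.txt", "r", encoding="utf-8") as f:
    print(f.read())
```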
binary-husky
34cc484635
chatgpt-4o-latest
2024-11-11 15:58:57 +00:00
hcy2206
4f0851f703
Add support for glm-4-plus (#2014)
...
* Add support for the iFlytek Spark 4.0 large model
* Create github action sync.yml
* Add support for Zhipu glm-4-plus
* feat: change arxiv io param
* catch comment source code exception
* upgrade auto comment
* add security patch
---------
Co-authored-by: GH Action - Upstream Sync <action@github.com>
Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-11-03 22:41:16 +08:00
binary-husky
074b3c9828
explicitly declare default value
2024-10-15 06:41:12 +00:00
Nextstrain
b8e8457a01
Fix o1-series models failing to make requests normally, and fix KeyError: 'finish_reason' during multi-model polling (#1992)
...
* Update bridge_all.py
* Update bridge_chatgpt.py
* Update bridge_chatgpt.py
* Update bridge_all.py
* Update bridge_all.py
2024-10-15 14:36:51 +08:00
binary-husky
adbed044e4
fix o1 compat problem
2024-10-13 17:02:07 +00:00
binary-husky
a01ca93362
Merge Latest Frontier (#1991)
...
* logging sys to loguru: stage 1 complete
* import loguru: stage 2
* logging -> loguru: stage 3
* support o1-preview and o1-mini
* logging -> loguru stage 4
* update social helper
* logging -> loguru: final stage
* fix: console output
* update translation matrix
* fix: loguru argument error with proxy enabled (#1977)
* relax llama index version
* remove comment
* Added some modules to support openrouter (#1975)
* Added some modules for supporting openrouter model
* Update config.py
* Update .gitignore
* Update bridge_openrouter.py
* Not changed actually
* Refactor logging in bridge_openrouter.py
---------
Co-authored-by: binary-husky <qingxu.fu@outlook.com>
* remove logging extra
---------
Co-authored-by: Steven Moder <java20131114@gmail.com>
Co-authored-by: Ren Lifei <2602264455@qq.com>
2024-10-05 17:09:18 +08:00
binary-husky
597c320808
fix: system prompt err when using o1 models
2024-09-14 17:04:01 +00:00
binary-husky
18290fd138
fix: support o1 models
2024-09-14 17:00:02 +00:00
binary-husky
0d0575a639
support o1-preview and o1-mini
2024-09-13 03:12:18 +00:00
binary-husky
8b91d2ac0a
add milvus vector store
2024-09-08 15:19:03 +00:00
binary-husky
34784c1d40
Merge branch 'rag' into frontier
2024-09-02 15:01:12 +00:00
binary-husky
08c3c56f53
rag version one
2024-08-28 15:14:13 +00:00
binary-husky
294716c832
begin rag project with llama index
2024-08-21 14:24:37 +00:00
moetayuko
a95b3daab9
fix loading chatglm3 (#1937)
...
* update welcome svg
* update welcome message
* fix loading chatglm3
---------
Co-authored-by: binary-husky <qingxu.fu@outlook.com>
Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>
2024-08-19 23:32:45 +08:00
moetayuko
a119ab36fe
fix enabling sparkv4 (#1936)
2024-08-12 21:45:08 +08:00
FatShibaInu
f9384e4e5f
Add Support for Gemini 1.5 Pro & Gemini 1.5 Flash (#1926)
...
* Add Support for Gemini 1.5 Pro & 1.5 Flash.
* Update bridge_all.py
fix a spelling error in comments.
* Add Support for Gemini 1.5 Pro & Gemini 1.5 Flash
2024-08-12 21:44:24 +08:00
hongyi-zhao
573dc4d184
Add claude-3-5-sonnet-20240620 (#1907)
...
See https://docs.anthropic.com/en/docs/about-claude/models#model-names for model names.
2024-08-02 18:04:42 +08:00
jiangfy-ihep
60b3491513
add gpt-4o-mini (#1904)
...
Co-authored-by: Fayu Jiang <jiangfayu@hotmail.com>
2024-07-23 00:55:34 +08:00
binary-husky
68838da8ad
finish test
2024-07-12 04:19:07 +00:00
binary-husky
7ebc2d00e7
Merge branch 'master' into frontier
2024-07-09 03:19:35 +00:00
binary-husky
41f25a6a9b
Merge branch 'bold_frontier' into frontier
2024-07-04 14:16:08 +00:00
Menghuan1918
114192e025
Bug fix: cannot chat with deepseek (#1879)
2024-07-04 20:28:53 +08:00
binary-husky
0c6c357e9c
revise qwen
2024-07-02 14:22:45 +00:00
binary-husky
9d11b17f25
Merge branch 'master' into frontier
2024-07-02 08:06:34 +00:00
Menghuan1918
6cd2d80dfd
Bug fix: some non-standard error responses are not caught (#1877)
2024-07-01 20:35:49 +08:00
hcy2206
194e665a3b
Add support for the iFlytek Spark 4.0 large model (#1875)
2024-06-29 23:20:04 +08:00
binary-husky
26e7677dc3
fix new api for taichu
2024-06-26 15:18:11 +00:00
binary-husky
ba484c55a0
Merge branch 'master' into frontier
2024-06-10 14:19:26 +00:00
Frank Lee
ca64a592f5
Update zhipu models (#1852)
2024-06-10 22:17:51 +08:00
binary-husky
2262a4d80a
taichu model fix
2024-06-06 09:35:05 +00:00
binary-husky
24a21ae320
Zidong Taichu large model
2024-06-06 09:05:06 +00:00
binary-husky
3d5790cc2c
resolve fallback to non-multimodal problem
2024-06-06 08:00:30 +00:00
binary-husky
7de6015800
multimodal support for gpt-4o etc
2024-06-06 07:36:37 +00:00