Compare commits


45 Commits

Author SHA1 Message Date
binary-husky
5e0f327237 Merge branch 'master' into frontier 2025-02-04 16:12:42 +08:00
binary-husky
7f4b87a633 update readme 2025-02-04 16:08:18 +08:00
binary-husky
2ddd1bb634 Merge branch 'memset0-master' 2025-02-04 16:03:53 +08:00
binary-husky
c68285aeac update config and version 2025-02-04 16:03:01 +08:00
Memento mori.
caaebe4296 add support for Deepseek R1 model and display CoT (#2118)
* feat: add support for R1 model and display CoT

* fix unpacking

* feat: customized font & font size

* auto hide tooltip when scroll down

* tooltip glass transparent css

* fix: Enhance API key validation in is_any_api_key function (#2113)

* support qwen2.5-max!

* update minor adjustment

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
Co-authored-by: Steven Moder <java20131114@gmail.com>
2025-02-04 16:02:02 +08:00
binary-husky
39d50c1c95 update minor adjustment 2025-02-04 15:57:35 +08:00
binary-husky
25dc7bf912 Merge branch 'master' of https://github.com/memset0/gpt_academic into memset0-master 2025-01-30 22:03:31 +08:00
binary-husky
0458590a77 support qwen2.5-max! 2025-01-29 23:29:38 +08:00
Steven Moder
44fe78fff5 fix: Enhance API key validation in is_any_api_key function (#2113) 2025-01-29 21:40:30 +08:00
binary-husky
6a6eba5f16 support qwen2.5-max! 2025-01-29 21:30:54 +08:00
binary-husky
722a055879 Merge branch 'master' into frontier 2025-01-29 00:00:08 +08:00
binary-husky
5ddd657ebc tooltip glass transparent css 2025-01-28 23:50:21 +08:00
binary-husky
9b0b2cf260 auto hide tooltip when scroll down 2025-01-28 23:32:40 +08:00
binary-husky
9f39a6571a feat: customized font & font size 2025-01-28 02:52:56 +08:00
memset0
d07e736214 fix unpacking 2025-01-25 00:00:13 +08:00
memset0
a1f7ae5b55 feat: add support for R1 model and display CoT 2025-01-24 14:43:49 +08:00
binary-husky
1213ef19e5 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2025-01-22 01:50:08 +08:00
binary-husky
aaafe2a797 fix xelatex font problem in all-cap image 2025-01-22 01:49:53 +08:00
binary-husky
2716606f0c Update README.md 2025-01-16 23:40:24 +08:00
binary-husky
286f7303be fix image display bug 2025-01-12 21:54:43 +08:00
binary-husky
7eeab9e376 fix code block display bug 2025-01-09 22:31:59 +08:00
binary-husky
4ca331fb28 prevent html rendering for input 2025-01-05 21:20:12 +08:00
binary-husky
9487829930 change max_chat_preserve = 10 2025-01-03 00:34:36 +08:00
binary-husky
8254930495 Merge branch 'master' into frontier 2025-01-03 00:31:30 +08:00
binary-husky
a73074b89e upgrade chat checkpoint 2025-01-03 00:31:03 +08:00
binary-husky
ca1ab57f5d Merge branch 'master' into frontier 2024-12-29 00:08:59 +08:00
Yuki
e20177cb7d Support new azure ai key pattern (#2098)
* fix cookie overflow bug

* fix temp issue of o1

* compat bug fix

* support new azure ai key pattern

* support new azure ai key pattern

* allow disable openai proxy in `WHEN_TO_USE_PROXY`

* change padding

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-12-29 00:04:16 +08:00
Southlandi
fd93622840 Fix Gemini conversation error (the case where the stop-sequence count is 0) (#2092) 2024-12-28 23:22:10 +08:00
whyXVI
09a82a572d Fix RuntimeError in predict_no_ui_long_connection() (#2095)
Bug fix: Fix RuntimeError in predict_no_ui_long_connection()

In the original code, calling predict_no_ui_long_connection() would trigger a RuntimeError("OpenAI拒绝了请求:" + error_msg) even when the server responded normally. The issue occurred due to incorrect handling of SSE protocol comment lines (lines starting with ":"). 

Modified the parsing logic in both `predict` and `predict_no_ui_long_connection` to handle these lines correctly, making the logic more intuitive and robust.
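For reference, the SSE convention this fix relies on can be shown in a few lines (a stand-alone illustration with a hypothetical helper name, not the project's actual parser):

# Per the SSE spec, any line starting with ":" is a comment (often a
# keep-alive ping) and carries no payload; treating it as data is what
# tripped the bogus RuntimeError.
def iter_sse_payloads(lines):
    for raw in lines:
        line = raw.decode() if isinstance(raw, bytes) else raw
        if not line or line.startswith(':'):
            continue  # skip blank lines and SSE comment lines instead of raising
        if line.startswith('data:'):
            yield line[len('data:'):].strip()

# e.g. list(iter_sse_payloads([b': keep-alive', b'data: {"x": 1}'])) == ['{"x": 1}']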
2024-12-28 23:21:14 +08:00
G.RQ
c53ddf65aa Fix "Reset" button error bug (#2102)
* fix Reset button bug

* fix version control bug

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-12-28 23:19:25 +08:00
binary-husky
6bd410582b Merge branch 'master' into frontier 2024-12-28 07:15:37 +08:00
binary-husky
ac64a77c2d allow disable openai proxy in WHEN_TO_USE_PROXY 2024-12-28 07:14:54 +08:00
binary-husky
dae8a0affc compat bug fix 2024-12-25 01:21:58 +08:00
binary-husky
97a81e9388 fix temp issue of o1 2024-12-25 00:54:03 +08:00
binary-husky
1dd1d0ed6c fix cookie overflow bug 2024-12-25 00:33:20 +08:00
Aibot
4fe638ffa8 Dev/aibot/bug fix (#2086)
* Add Windows environment packaging and a one-click launch script (#2068)

* Add automatic packaging of environment dependencies under Windows

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>

* update requirements

* update readme

* idor-vuln-bug-fix

* vuln-bug-fix: validate file size, default 500M

* add tts test

* remove welcome card when layout overflows

---------

Co-authored-by: Menghuan <menghuan2003@outlook.com>
Co-authored-by: binary-husky <qingxu.fu@outlook.com>
Co-authored-by: aibot <hangyuntang@qq.com>
2024-12-23 10:17:43 +08:00
binary-husky
060af0d2e6 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2024-12-22 23:33:44 +08:00
binary-husky
a848f714b6 fix welcome card bugs 2024-12-22 23:33:22 +08:00
binary-husky
924f8e30c7 Update issue stale.yml 2024-12-22 14:16:18 +08:00
binary-husky
f40347665b github action change 2024-12-22 14:15:16 +08:00
binary-husky
734c40bbde fix non-localhost javascript error 2024-12-22 14:01:22 +08:00
binary-husky
4ec87fbb54 history ng patch 1 2024-12-21 11:27:53 +08:00
binary-husky
17b5c22e61 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2024-12-19 22:46:14 +08:00
binary-husky
c6cd04a407 promote the rank of DASHSCOPE_API_KEY 2024-12-19 22:39:14 +08:00
YIQI JIANG
f60a12f8b4 Add o1 and o1-2024-12-17 model support (#2090)
* Add o1 and o1-2024-12-17 model support

* patch api key selection

---------

Co-authored-by: 蒋翌琪 <jiangyiqi99@jiangyiqideMacBook-Pro.local>
Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-12-19 22:32:57 +08:00
45 changed files with 772 additions and 266 deletions

View File

@@ -7,7 +7,7 @@
 name: 'Close stale issues and PRs'
 on:
   schedule:
-    - cron: '*/5 * * * *'
+    - cron: '*/30 * * * *'

 jobs:
   stale:
@@ -19,7 +19,6 @@ jobs:
     steps:
       - uses: actions/stale@v8
         with:
-          stale-issue-message: 'This issue is stale because it has been open 100 days with no activity. Remove stale label or comment or this will be closed in 1 days.'
+          stale-issue-message: 'This issue is stale because it has been open 100 days with no activity. Remove stale label or comment or this will be closed in 7 days.'
           days-before-stale: 100
-          days-before-close: 1
+          days-before-close: 7
-          debug-only: true

View File

@@ -15,6 +15,7 @@ RUN echo '[global]' > /etc/pip.conf && \
 # 语音输出功能(以下两行,第一行更换阿里源,第二行安装ffmpeg,都可以删除)
 RUN UBUNTU_VERSION=$(awk -F= '/^VERSION_CODENAME=/{print $2}' /etc/os-release); echo "deb https://mirrors.aliyun.com/debian/ $UBUNTU_VERSION main non-free contrib" > /etc/apt/sources.list; apt-get update
 RUN apt-get install ffmpeg -y
+RUN apt-get clean

 # 进入工作路径(必要)
@@ -33,6 +34,7 @@ RUN pip3 install -r requirements.txt
 # 非必要步骤,用于预热模块(可以删除)
 RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
+RUN python3 -m pip cache purge

 # 启动(必要)

View File

@@ -1,8 +1,10 @@
 > [!IMPORTANT]
+> `master主分支`最新动态(2025.2.4): 增加deepseek-r1支持,增加字体自定义功能
+> `master主分支`最新动态(2025.2.2): 三分钟快速接入最强qwen2.5-max:[视频](https://www.bilibili.com/video/BV1LeFuerEG4)
 > `frontier开发分支`最新动态(2024.12.9): 更新对话时间线功能,优化xelatex论文翻译
 > `wiki文档`最新动态(2024.12.5): 更新ollama接入指南
 >
-> 2024.10.10: 突发停电,紧急恢复了提供[whl包](https://drive.google.com/file/d/19U_hsLoMrjOlQSzYS3pzWX9fTzyusArP/view?usp=sharing)的文件服务器
+> 2024.10.10: 突发停电,紧急恢复了提供[whl包](https://drive.google.com/drive/folders/14kR-3V-lIbvGxri4AHc8TpiA1fqsw7SK?usp=sharing)的文件服务器
 > 2024.10.8: 版本3.90加入对llama-index的初步支持,版本3.80加入插件二级菜单功能,详见wiki
 > 2024.5.1: 加入Doc2x翻译PDF论文的功能,[查看详情](https://github.com/binary-husky/gpt_academic/wiki/Doc2x)
 > 2024.3.11: 全力支持Qwen、GLM、DeepseekCoder等中文大语言模型! SoVits语音克隆模块,[查看详情](https://www.bilibili.com/video/BV1Rp421S7tF/)

View File

@@ -7,11 +7,16 @@
 Configuration reading priority: environment variable > config_private.py > config.py
 """

-# [step 1]>> API_KEY = "sk-123456789xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx123456789"。极少数情况下,还需要填写组织(格式如org-123456789abcdefghijklmno的),请向下翻,找 API_ORG 设置项
-API_KEY = "此处填API密钥"    # 可同时填写多个API-KEY,用英文逗号分割,例如API_KEY = "sk-openaikey1,sk-openaikey2,fkxxxx-api2dkey3,azure-apikey4"
+# [step 1-1]>> ( 接入GPT等模型 ) API_KEY = "sk-123456789xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx123456789"。极少数情况下,还需要填写组织(格式如org-123456789abcdefghijklmno的),请向下翻,找 API_ORG 设置项
+API_KEY = "此处填APIKEY"    # 可同时填写多个API-KEY,用英文逗号分割,例如API_KEY = "sk-openaikey1,sk-openaikey2,fkxxxx-api2dkey3,azure-apikey4"
+
+# [step 1-2]>> ( 接入通义 qwen-max ) 接入通义千问在线大模型,api-key获取地址 https://dashscope.console.aliyun.com/
+DASHSCOPE_API_KEY = ""  # 阿里灵积云API_KEY
+
+# [step 1-3]>> ( 接入 deepseek-reasoner, 即 deepseek-r1 ) 深度求索(DeepSeek) API KEY,默认请求地址为"https://api.deepseek.com/v1/chat/completions"
+DEEPSEEK_API_KEY = ""

-# [step 2]>> 改为True应用代理,如果直接在海外服务器部署,此处不修改;如果使用本地或无地域限制的大模型时,此处也不需要修改
+# [step 2]>> 改为True应用代理。如果使用本地或无地域限制的大模型时,此处不修改;如果直接在海外服务器部署,此处不修改
 USE_PROXY = False
 if USE_PROXY:
     """
@@ -32,11 +37,13 @@ else:
 # [step 3]>> 模型选择是 (注意: LLM_MODEL是默认选中的模型, 它*必须*被包含在AVAIL_LLM_MODELS列表中 )
 LLM_MODEL = "gpt-3.5-turbo-16k" # 可选 ↓↓↓
-AVAIL_LLM_MODELS = ["gpt-4-1106-preview", "gpt-4-turbo-preview", "gpt-4-vision-preview",
+AVAIL_LLM_MODELS = ["qwen-max", "o1-mini", "o1-mini-2024-09-12", "o1", "o1-2024-12-17", "o1-preview", "o1-preview-2024-09-12",
+                    "gpt-4-1106-preview", "gpt-4-turbo-preview", "gpt-4-vision-preview",
                     "gpt-4o", "gpt-4o-mini", "gpt-4-turbo", "gpt-4-turbo-2024-04-09",
                     "gpt-3.5-turbo-1106", "gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5",
                     "gpt-4", "gpt-4-32k", "azure-gpt-4", "glm-4", "glm-4v", "glm-3-turbo",
-                    "gemini-1.5-pro", "chatglm3", "chatglm4"
+                    "gemini-1.5-pro", "chatglm3", "chatglm4",
+                    "deepseek-chat", "deepseek-coder", "deepseek-reasoner"
                     ]
 EMBEDDING_MODEL = "text-embedding-3-small"
@@ -47,7 +54,7 @@ EMBEDDING_MODEL = "text-embedding-3-small"
 # "glm-4-0520", "glm-4-air", "glm-4-airx", "glm-4-flash",
 # "qianfan", "deepseekcoder",
 # "spark", "sparkv2", "sparkv3", "sparkv3.5", "sparkv4",
-# "qwen-turbo", "qwen-plus", "qwen-max", "qwen-local",
+# "qwen-turbo", "qwen-plus", "qwen-local",
 # "moonshot-v1-128k", "moonshot-v1-32k", "moonshot-v1-8k",
 # "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-turbo-0125", "gpt-4o-2024-05-13"
 # "claude-3-haiku-20240307","claude-3-sonnet-20240229","claude-3-opus-20240229", "claude-2.1", "claude-instant-1.2",
@@ -82,6 +89,30 @@ DEFAULT_WORKER_NUM = 3
 THEME = "Default"
 AVAIL_THEMES = ["Default", "Chuanhu-Small-and-Beautiful", "High-Contrast", "Gstaff/Xkcd", "NoCrypt/Miku"]
+FONT = "Theme-Default-Font"
+AVAIL_FONTS = [
+    "默认值(Theme-Default-Font)",
+    "宋体(SimSun)",
+    "黑体(SimHei)",
+    "楷体(KaiTi)",
+    "仿宋(FangSong)",
+    "华文细黑(STHeiti Light)",
+    "华文楷体(STKaiti)",
+    "华文仿宋(STFangsong)",
+    "华文宋体(STSong)",
+    "华文中宋(STZhongsong)",
+    "华文新魏(STXinwei)",
+    "华文隶书(STLiti)",
+    "思源宋体(Source Han Serif CN VF@https://chinese-fonts-cdn.deno.dev/packages/syst/dist/SourceHanSerifCN/result.css)",
+    "月星楷(Moon Stars Kai HW@https://chinese-fonts-cdn.deno.dev/packages/moon-stars-kai/dist/MoonStarsKaiHW-Regular/result.css)",
+    "珠圆体(MaokenZhuyuanTi@https://chinese-fonts-cdn.deno.dev/packages/mkzyt/dist/猫啃珠圆体/result.css)",
+    "平方萌萌哒(PING FANG MENG MNEG DA@https://chinese-fonts-cdn.deno.dev/packages/pfmmd/dist/平方萌萌哒/result.css)",
+    "Helvetica",
+    "ui-sans-serif",
+    "sans-serif",
+    "system-ui"
+]

 # 默认的系统提示词(system prompt)
 INIT_SYS_PROMPT = "Serve me as a writing and programming assistant."
@@ -133,10 +164,6 @@ MULTI_QUERY_LLM_MODELS = "gpt-3.5-turbo&chatglm3"
 QWEN_LOCAL_MODEL_SELECTION = "Qwen/Qwen-1_8B-Chat-Int8"

-# 接入通义千问在线大模型 https://dashscope.console.aliyun.com/
-DASHSCOPE_API_KEY = "" # 阿里灵积云API_KEY

 # 百度千帆(LLM_MODEL="qianfan")
 BAIDU_CLOUD_API_KEY = ''
 BAIDU_CLOUD_SECRET_KEY = ''
@@ -238,9 +265,6 @@ MOONSHOT_API_KEY = ""
 # 零一万物(Yi Model) API KEY
 YIMODEL_API_KEY = ""

-# 深度求索(DeepSeek) API KEY,默认请求地址为"https://api.deepseek.com/v1/chat/completions"
-DEEPSEEK_API_KEY = ""

 # 紫东太初大模型 https://ai-maas.wair.ac.cn
 TAICHU_API_KEY = ""
@@ -303,7 +327,7 @@ ARXIV_CACHE_DIR = "gpt_log/arxiv_cache"
 # 除了连接OpenAI之外,还有哪些场合允许使用代理,请尽量不要修改
-WHEN_TO_USE_PROXY = ["Download_LLM", "Download_Gradio_Theme", "Connect_Grobid",
+WHEN_TO_USE_PROXY = ["Connect_OpenAI", "Download_LLM", "Download_Gradio_Theme", "Connect_Grobid",
                      "Warmup_Modules", "Nougat_Download", "AutoGen", "Connect_OpenAI_Embedding"]

View File

@@ -172,7 +172,7 @@ def 载入对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
     user_request 当前用户的请求信息(IP地址等)
     """
     from crazy_functions.crazy_utils import get_files_from_everything
-    success, file_manifest, _ = get_files_from_everything(txt, type='.html')
+    success, file_manifest, _ = get_files_from_everything(txt, type='.html', chatbot=chatbot)
     if not success:
         if txt == "": txt = '空空如也的输入栏'

View File

@@ -1,3 +1,4 @@
+from shared_utils.fastapi_server import validate_path_safety
 from toolbox import update_ui, trimmed_format_exc, promote_file_to_downloadzone, get_log_folder
 from toolbox import CatchException, report_exception, write_history_to_file, zip_folder
 from loguru import logger
@@ -155,6 +156,7 @@ def Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
     import glob, os
     if os.path.exists(txt):
         project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "": txt = '空空如也的输入栏'
         report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
@@ -193,6 +195,7 @@ def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
     import glob, os
     if os.path.exists(txt):
         project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "": txt = '空空如也的输入栏'
         report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
@@ -229,6 +232,7 @@ def Latex英文纠错(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
     import glob, os
     if os.path.exists(txt):
         project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "": txt = '空空如也的输入栏'
         report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")

View File

@@ -1,5 +1,6 @@
 import glob, shutil, os, re
 from loguru import logger
+from shared_utils.fastapi_server import validate_path_safety
 from toolbox import update_ui, trimmed_format_exc, gen_time_str
 from toolbox import CatchException, report_exception, get_log_folder
 from toolbox import write_history_to_file, promote_file_to_downloadzone
@@ -118,7 +119,7 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

-def get_files_from_everything(txt, preference=''):
+def get_files_from_everything(txt, preference='', chatbox=None):
     if txt == "": return False, None, None
     success = True
     if txt.startswith('http'):
@@ -146,9 +147,11 @@ def get_files_from_everything(txt, preference=''):
         # 直接给定文件
         file_manifest = [txt]
         project_folder = os.path.dirname(txt)
+        validate_path_safety(project_folder, chatbot.get_user())
     elif os.path.exists(txt):
         # 本地路径,递归搜索
         project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
         file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.md', recursive=True)]
     else:
         project_folder = None
@@ -177,7 +180,7 @@ def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
         return
     history = []  # 清空历史,以免输入溢出
-    success, file_manifest, project_folder = get_files_from_everything(txt, preference="Github")
+    success, file_manifest, project_folder = get_files_from_everything(txt, preference="Github", chatbox=chatbot)
     if not success:
         # 什么都没有

View File

@@ -26,7 +26,7 @@ def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
     # 清空历史,以免输入溢出
     history = []
-    success, file_manifest, project_folder = get_files_from_everything(txt, type='.pdf')
+    success, file_manifest, project_folder = get_files_from_everything(txt, type='.pdf', chatbot=chatbot)
     # 检测输入参数,如没有给定输入参数,直接退出
     if (not success) and txt == "": txt = '空空如也的输入栏。提示:请先上传文件,把PDF文件拖入对话'

View File

@@ -2,6 +2,7 @@ import os
 import threading
 from loguru import logger
 from shared_utils.char_visual_effect import scolling_visual_effect
+from shared_utils.fastapi_server import validate_path_safety
 from toolbox import update_ui, get_conf, trimmed_format_exc, get_max_token, Singleton

 def input_clipping(inputs, history, max_token_limit, return_clip_flags=False):
@@ -539,7 +540,7 @@ def read_and_clean_pdf_text(fp):
     return meta_txt, page_one_meta

-def get_files_from_everything(txt, type): # type='.md'
+def get_files_from_everything(txt, type, chatbot=None): # type='.md'
     """
     这个函数是用来获取指定目录下所有指定类型(如.md)的文件,并且对于网络上的文件,也可以获取它。
     下面是对每个参数和返回值的说明:
@@ -551,6 +552,7 @@ def get_files_from_everything(txt, type): # type='.md'
     - file_manifest: 文件路径列表,里面包含以指定类型为后缀名的所有文件的绝对路径。
     - project_folder: 字符串,表示文件所在的文件夹路径。如果是网络上的文件,就是临时文件夹的路径。
     该函数详细注释已添加,请确认是否满足您的需要。
+    - chatbot: 带Cookies的Chatbot类,为实现更多强大的功能做基础
     """
     import glob, os
@@ -573,9 +575,13 @@ def get_files_from_everything(txt, type): # type='.md'
         # 直接给定文件
         file_manifest = [txt]
         project_folder = os.path.dirname(txt)
+        if chatbot is not None:
+            validate_path_safety(project_folder, chatbot.get_user())
     elif os.path.exists(txt):
         # 本地路径,递归搜索
         project_folder = txt
+        if chatbot is not None:
+            validate_path_safety(project_folder, chatbot.get_user())
         file_manifest = [f for f in glob.glob(f'{project_folder}/**/*'+type, recursive=True)]
         if len(file_manifest) == 0:
             success = False
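The same guard recurs throughout these plugin patches: resolve project_folder, then check it against the requesting user before globbing. A condensed sketch of the pattern (assuming, as the IDOR fix implies, that validate_path_safety raises for paths outside the user's workspace; list_user_files is a hypothetical helper, not project code):

import glob, os
from shared_utils.fastapi_server import validate_path_safety

def list_user_files(txt, file_type, chatbot=None):
    # resolve the folder the same way the patched get_files_from_everything does
    if not os.path.exists(txt):
        return False, [], None
    project_folder = txt if os.path.isdir(txt) else os.path.dirname(txt)
    if chatbot is not None:
        # refuse folders the requesting user must not read (the IDOR fix)
        validate_path_safety(project_folder, chatbot.get_user())
    manifest = glob.glob(f'{project_folder}/**/*{file_type}', recursive=True)
    return len(manifest) > 0, manifest, project_folder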

View File

@@ -373,7 +373,7 @@ def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_f
 # 根据编译器类型返回编译命令
 def get_compile_command(compiler, filename):
     compile_command = f'{compiler} -interaction=batchmode -file-line-error {filename}.tex'
-    logger.info('Latex 编译指令: ', compile_command)
+    logger.info('Latex 编译指令: ' + compile_command)
     return compile_command

 # 确定使用的编译器

View File

@@ -242,9 +242,7 @@ def 解析PDF_DOC2X_单文件(
         extract_archive(file_path=this_file_path, dest_dir=ex_folder)

         # edit markdown files
-        success, file_manifest, project_folder = get_files_from_everything(
-            ex_folder, type=".md"
-        )
+        success, file_manifest, project_folder = get_files_from_everything(ex_folder, type='.md', chatbot=chatbot)
         for generated_fp in file_manifest:
             # 修正一些公式问题
             with open(generated_fp, "r", encoding="utf8") as f:

View File

@@ -27,10 +27,10 @@ def extract_text_from_files(txt, chatbot, history):
         return False, final_result, page_one, file_manifest, excption #如输入区内容不是文件则直接返回输入区内容

     #查找输入区内容中的文件
-    file_pdf,pdf_manifest,folder_pdf = get_files_from_everything(txt, '.pdf')
-    file_md,md_manifest,folder_md = get_files_from_everything(txt, '.md')
-    file_word,word_manifest,folder_word = get_files_from_everything(txt, '.docx')
-    file_doc,doc_manifest,folder_doc = get_files_from_everything(txt, '.doc')
+    file_pdf,pdf_manifest,folder_pdf = get_files_from_everything(txt, '.pdf', chatbot=chatbot)
+    file_md,md_manifest,folder_md = get_files_from_everything(txt, '.md', chatbot=chatbot)
+    file_word,word_manifest,folder_word = get_files_from_everything(txt, '.docx', chatbot=chatbot)
+    file_doc,doc_manifest,folder_doc = get_files_from_everything(txt, '.doc', chatbot=chatbot)

     if file_doc:
         excption = "word"

View File

@@ -104,6 +104,8 @@ def 总结word文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pr
     # 检测输入参数,如没有给定输入参数,直接退出
     if os.path.exists(txt):
         project_folder = txt
+        from shared_utils.fastapi_server import validate_path_safety
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "": txt = '空空如也的输入栏'
         report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")

View File

@@ -61,7 +61,7 @@ def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
     history = []
     from crazy_functions.crazy_utils import get_files_from_everything
-    success, file_manifest, project_folder = get_files_from_everything(txt, type='.pdf')
+    success, file_manifest, project_folder = get_files_from_everything(txt, type='.pdf', chatbot=chatbot)

     if len(file_manifest) > 0:
         # 尝试导入依赖,如果缺少依赖,则给出安装建议
         try:
@@ -73,7 +73,7 @@ def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
                              b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法:```pip install --upgrade nougat-ocr tiktoken```。")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
-    success_mmd, file_manifest_mmd, _ = get_files_from_everything(txt, type='.mmd')
+    success_mmd, file_manifest_mmd, _ = get_files_from_everything(txt, type='.mmd', chatbot=chatbot)
     success = success or success_mmd
     file_manifest += file_manifest_mmd
     chatbot.append(["文件列表:", ", ".join([e.split('/')[-1] for e in file_manifest])]);

View File

@@ -87,6 +87,8 @@ def 理解PDF文档内容标准文件输入(txt, llm_kwargs, plugin_kwargs, chat
     # 检测输入参数,如没有给定输入参数,直接退出
     if os.path.exists(txt):
         project_folder = txt
+        from shared_utils.fastapi_server import validate_path_safety
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "":
             txt = '空空如也的输入栏'

View File

@@ -39,6 +39,8 @@ def 批量生成函数注释(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
     import glob, os
     if os.path.exists(txt):
         project_folder = txt
+        from shared_utils.fastapi_server import validate_path_safety
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "": txt = '空空如也的输入栏'
         report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")

View File

@@ -49,7 +49,7 @@ def 知识库文件注入(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
     file_manifest = []
     spl = ["txt", "doc", "docx", "email", "epub", "html", "json", "md", "msg", "pdf", "ppt", "pptx", "rtf"]
     for sp in spl:
-        _, file_manifest_tmp, _ = get_files_from_everything(txt, type=f'.{sp}')
+        _, file_manifest_tmp, _ = get_files_from_everything(txt, type=f'.{sp}', chatbot=chatbot)
         file_manifest += file_manifest_tmp
     if len(file_manifest) == 0:

View File

@@ -126,6 +126,8 @@ def 解析ipynb文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
     import os
     if os.path.exists(txt):
         project_folder = txt
+        from shared_utils.fastapi_server import validate_path_safety
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "":
             txt = '空空如也的输入栏'

View File

@@ -48,6 +48,8 @@ def 读文章写摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
     import glob, os
     if os.path.exists(txt):
         project_folder = txt
+        from shared_utils.fastapi_server import validate_path_safety
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "": txt = '空空如也的输入栏'
         report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")

View File

@@ -5,6 +5,10 @@ FROM fuqingxu/11.3.1-runtime-ubuntu20.04-with-texlive:latest

 # edge-tts需要的依赖,某些pip包所需的依赖
 RUN apt update && apt install ffmpeg build-essential -y
+RUN apt-get install -y fontconfig
+RUN ln -s /usr/local/texlive/2023/texmf-dist/fonts/truetype /usr/share/fonts/truetype/texlive
+RUN fc-cache -fv
+RUN apt-get clean

 # use python3 as the system default python
 WORKDIR /gpt
@@ -30,7 +34,7 @@ RUN python3 -m pip install -r request_llms/requirements_qwen.txt
 RUN python3 -m pip install -r request_llms/requirements_chatglm.txt
 RUN python3 -m pip install -r request_llms/requirements_newbing.txt
 RUN python3 -m pip install nougat-ocr
+RUN python3 -m pip cache purge

 # 预热Tiktoken模块
 RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'

View File

@@ -7,6 +7,7 @@ RUN apt-get install -y git python python3 python-dev python3-dev --fix-missing

 # edge-tts需要的依赖,某些pip包所需的依赖
 RUN apt update && apt install ffmpeg build-essential -y
+RUN apt-get clean

 # use python3 as the system default python
 RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.8
@@ -22,6 +23,7 @@ RUN python3 -m pip install -r request_llms/requirements_moss.txt
 RUN python3 -m pip install -r request_llms/requirements_qwen.txt
 RUN python3 -m pip install -r request_llms/requirements_chatglm.txt
 RUN python3 -m pip install -r request_llms/requirements_newbing.txt
+RUN python3 -m pip cache purge

 # 预热Tiktoken模块

View File

@@ -18,5 +18,7 @@ RUN apt update && apt install ffmpeg -y
 # 可选步骤,用于预热模块
 RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
+RUN python3 -m pip cache purge && apt-get clean

 # 启动
 CMD ["python3", "-u", "main.py"]

View File

@@ -30,5 +30,7 @@ COPY --chown=gptuser:gptuser . .
 # 可选步骤,用于预热模块
 RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
+RUN python3 -m pip cache purge

 # 启动
 CMD ["python3", "-u", "main.py"]

View File

@@ -24,6 +24,8 @@ RUN apt update && apt install ffmpeg -y
 # 可选步骤,用于预热模块
 RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
+RUN python3 -m pip cache purge && apt-get clean

 # 启动
 CMD ["python3", "-u", "main.py"]

main.py
View File

@@ -1,4 +1,4 @@
-import os, json; os.environ['no_proxy'] = '*' # 避免代理网络产生意外污染
+import os; os.environ['no_proxy'] = '*' # 避免代理网络产生意外污染

 help_menu_description = \
 """Github源代码开源和更新[地址🚀](https://github.com/binary-husky/gpt_academic),
@@ -49,7 +49,7 @@ def main():
     # 读取配置
     proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION = get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION')
     CHATBOT_HEIGHT, LAYOUT, AVAIL_LLM_MODELS, AUTO_CLEAR_TXT = get_conf('CHATBOT_HEIGHT', 'LAYOUT', 'AVAIL_LLM_MODELS', 'AUTO_CLEAR_TXT')
-    ENABLE_AUDIO, AUTO_CLEAR_TXT, PATH_LOGGING, AVAIL_THEMES, THEME, ADD_WAIFU = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT', 'PATH_LOGGING', 'AVAIL_THEMES', 'THEME', 'ADD_WAIFU')
+    ENABLE_AUDIO, AUTO_CLEAR_TXT, AVAIL_FONTS, AVAIL_THEMES, THEME, ADD_WAIFU = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT', 'AVAIL_FONTS', 'AVAIL_THEMES', 'THEME', 'ADD_WAIFU')
     NUM_CUSTOM_BASIC_BTN, SSL_KEYFILE, SSL_CERTFILE = get_conf('NUM_CUSTOM_BASIC_BTN', 'SSL_KEYFILE', 'SSL_CERTFILE')
     DARK_MODE, INIT_SYS_PROMPT, ADD_WAIFU, TTS_TYPE = get_conf('DARK_MODE', 'INIT_SYS_PROMPT', 'ADD_WAIFU', 'TTS_TYPE')
     if LLM_MODEL not in AVAIL_LLM_MODELS: AVAIL_LLM_MODELS += [LLM_MODEL]
@@ -178,7 +178,7 @@ def main():
         # 左上角工具栏定义
         from themes.gui_toolbar import define_gui_toolbar
         checkboxes, checkboxes_2, max_length_sl, theme_dropdown, system_prompt, file_upload_2, md_dropdown, top_p, temperature = \
-            define_gui_toolbar(AVAIL_LLM_MODELS, LLM_MODEL, INIT_SYS_PROMPT, THEME, AVAIL_THEMES, ADD_WAIFU, help_menu_description, js_code_for_toggle_darkmode)
+            define_gui_toolbar(AVAIL_LLM_MODELS, LLM_MODEL, INIT_SYS_PROMPT, THEME, AVAIL_THEMES, AVAIL_FONTS, ADD_WAIFU, help_menu_description, js_code_for_toggle_darkmode)
         # 浮动菜单定义
         from themes.gui_floating_menu import define_gui_floating_menu
@@ -226,11 +226,8 @@ def main():
         multiplex_sel.select(
             None, [multiplex_sel], None, _js=f"""(multiplex_sel)=>run_multiplex_shift(multiplex_sel)""")
         cancel_handles.append(submit_btn.click(**predict_args))
-        resetBtn.click(None, None, [chatbot, history, status], _js="""(a,b,c)=>clear_conversation(a,b,c)""") # 先在前端快速清除chatbot&status
-        resetBtn2.click(None, None, [chatbot, history, status], _js="""(a,b,c)=>clear_conversation(a,b,c)""") # 先在前端快速清除chatbot&status
+        resetBtn.click(None, None, [chatbot, history, status], _js="""clear_conversation""") # 先在前端快速清除chatbot&status
+        resetBtn2.click(None, None, [chatbot, history, status], _js="""clear_conversation""") # 先在前端快速清除chatbot&status
-        # reset_server_side_args = (lambda history: ([], [], "已重置"), [history], [chatbot, history, status])
-        # resetBtn.click(*reset_server_side_args) # 再在后端清除history
-        # resetBtn2.click(*reset_server_side_args) # 再在后端清除history
         clearBtn.click(None, None, [txt, txt2], _js=js_code_clear)
         clearBtn2.click(None, None, [txt, txt2], _js=js_code_clear)
         if AUTO_CLEAR_TXT:

View File

@@ -273,7 +273,9 @@ model_info = {
         "token_cnt": get_token_num_gpt4,
         "openai_disable_system_prompt": True,
         "openai_disable_stream": True,
+        "openai_force_temperature_one": True,
     },

     "o1-mini": {
         "fn_with_ui": chatgpt_ui,
         "fn_without_ui": chatgpt_noui,
@@ -283,6 +285,31 @@ model_info = {
         "token_cnt": get_token_num_gpt4,
         "openai_disable_system_prompt": True,
         "openai_disable_stream": True,
+        "openai_force_temperature_one": True,
+    },
+
+    "o1-2024-12-17": {
+        "fn_with_ui": chatgpt_ui,
+        "fn_without_ui": chatgpt_noui,
+        "endpoint": openai_endpoint,
+        "max_token": 200000,
+        "tokenizer": tokenizer_gpt4,
+        "token_cnt": get_token_num_gpt4,
+        "openai_disable_system_prompt": True,
+        "openai_disable_stream": True,
+        "openai_force_temperature_one": True,
+    },
+
+    "o1": {
+        "fn_with_ui": chatgpt_ui,
+        "fn_without_ui": chatgpt_noui,
+        "endpoint": openai_endpoint,
+        "max_token": 200000,
+        "tokenizer": tokenizer_gpt4,
+        "token_cnt": get_token_num_gpt4,
+        "openai_disable_system_prompt": True,
+        "openai_disable_stream": True,
+        "openai_force_temperature_one": True,
     },

     "gpt-4-turbo": {
@@ -785,7 +812,8 @@ if "qwen-local" in AVAIL_LLM_MODELS:
     except:
         logger.error(trimmed_format_exc())
 # -=-=-=-=-=-=- 通义-在线模型 -=-=-=-=-=-=-
-if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-max" in AVAIL_LLM_MODELS: # zhipuai
+qwen_models = ["qwen-max-latest", "qwen-max-2025-01-25","qwen-max","qwen-turbo","qwen-plus"]
+if any(item in qwen_models for item in AVAIL_LLM_MODELS):
     try:
         from .bridge_qwen import predict_no_ui_long_connection as qwen_noui
         from .bridge_qwen import predict as qwen_ui
@@ -795,7 +823,7 @@ if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-
                 "fn_without_ui": qwen_noui,
                 "can_multi_thread": True,
                 "endpoint": None,
-                "max_token": 6144,
+                "max_token": 100000,
                 "tokenizer": tokenizer_gpt35,
                 "token_cnt": get_token_num_gpt35,
             },
@@ -804,7 +832,7 @@ if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-
                 "fn_without_ui": qwen_noui,
                 "can_multi_thread": True,
                 "endpoint": None,
-                "max_token": 30720,
+                "max_token": 129024,
                 "tokenizer": tokenizer_gpt35,
                 "token_cnt": get_token_num_gpt35,
             },
@@ -813,7 +841,25 @@ if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-
                 "fn_without_ui": qwen_noui,
                 "can_multi_thread": True,
                 "endpoint": None,
-                "max_token": 28672,
+                "max_token": 30720,
+                "tokenizer": tokenizer_gpt35,
+                "token_cnt": get_token_num_gpt35,
+            },
+            "qwen-max-latest": {
+                "fn_with_ui": qwen_ui,
+                "fn_without_ui": qwen_noui,
+                "can_multi_thread": True,
+                "endpoint": None,
+                "max_token": 30720,
+                "tokenizer": tokenizer_gpt35,
+                "token_cnt": get_token_num_gpt35,
+            },
+            "qwen-max-2025-01-25": {
+                "fn_with_ui": qwen_ui,
+                "fn_without_ui": qwen_noui,
+                "can_multi_thread": True,
+                "endpoint": None,
+                "max_token": 30720,
                 "tokenizer": tokenizer_gpt35,
                 "token_cnt": get_token_num_gpt35,
             }
@@ -1044,18 +1090,18 @@ if "deepseekcoder" in AVAIL_LLM_MODELS: # deepseekcoder
     except:
         logger.error(trimmed_format_exc())
 # -=-=-=-=-=-=- 幻方-深度求索大模型在线API -=-=-=-=-=-=-
-if "deepseek-chat" in AVAIL_LLM_MODELS or "deepseek-coder" in AVAIL_LLM_MODELS:
+if "deepseek-chat" in AVAIL_LLM_MODELS or "deepseek-coder" in AVAIL_LLM_MODELS or "deepseek-reasoner" in AVAIL_LLM_MODELS:
     try:
         deepseekapi_noui, deepseekapi_ui = get_predict_function(
             api_key_conf_name="DEEPSEEK_API_KEY", max_output_token=4096, disable_proxy=False
         )
         model_info.update({
             "deepseek-chat":{
                 "fn_with_ui": deepseekapi_ui,
                 "fn_without_ui": deepseekapi_noui,
                 "endpoint": deepseekapi_endpoint,
                 "can_multi_thread": True,
-                "max_token": 32000,
+                "max_token": 64000,
                 "tokenizer": tokenizer_gpt35,
                 "token_cnt": get_token_num_gpt35,
             },
@@ -1068,6 +1114,16 @@ if "deepseek-chat" in AVAIL_LLM_MODELS or "deepseek-coder" in AVAIL_LLM_MODELS:
                 "tokenizer": tokenizer_gpt35,
                 "token_cnt": get_token_num_gpt35,
             },
+            "deepseek-reasoner":{
+                "fn_with_ui": deepseekapi_ui,
+                "fn_without_ui": deepseekapi_noui,
+                "endpoint": deepseekapi_endpoint,
+                "can_multi_thread": True,
+                "max_token": 64000,
+                "tokenizer": tokenizer_gpt35,
+                "token_cnt": get_token_num_gpt35,
+                "enable_reasoning": True
+            },
         })
     except:
         logger.error(trimmed_format_exc())
@@ -1335,6 +1391,11 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot,
     inputs = apply_gpt_academic_string_mask(inputs, mode="show_llm")

+    if llm_kwargs['llm_model'] not in model_info:
+        from toolbox import update_ui
+        chatbot.append([inputs, f"很抱歉,模型 '{llm_kwargs['llm_model']}' 暂不支持<br/>(1) 检查config中的AVAIL_LLM_MODELS选项<br/>(2) 检查request_llms/bridge_all.py中的模型路由"])
+        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
+
     method = model_info[llm_kwargs['llm_model']]["fn_with_ui"] # 如果这里报错,检查config中的AVAIL_LLM_MODELS选项
     if additional_fn: # 根据基础功能区 ModelOverride 参数调整模型类型
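The last hunk adds a routing guard to predict(). A condensed, self-contained sketch of its logic (not the project code; resolve_model is a hypothetical helper): fail soft with a UI hint instead of a KeyError when the selected model has no route.

# toy stand-in for the real routing table in request_llms/bridge_all.py
model_info = {"gpt-3.5-turbo": {"fn_with_ui": lambda *args, **kwargs: None}}

def resolve_model(llm_model):
    if llm_model not in model_info:
        return None, (f"model '{llm_model}' is not supported: check AVAIL_LLM_MODELS "
                      f"and the model routing in request_llms/bridge_all.py")
    return model_info[llm_model]["fn_with_ui"], None

method, err = resolve_model("some-unrouted-model")
assert method is None and err is not None  # surfaced in the chat UI, not raised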

View File

@@ -23,8 +23,13 @@ from loguru import logger
 from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history
 from toolbox import trimmed_format_exc, is_the_upload_folder, read_one_api_model_name, log_chat
 from toolbox import ChatBotWithCookies, have_any_recent_upload_image_files, encode_image

-proxies, TIMEOUT_SECONDS, MAX_RETRY, API_ORG, AZURE_CFG_ARRAY = \
-    get_conf('proxies', 'TIMEOUT_SECONDS', 'MAX_RETRY', 'API_ORG', 'AZURE_CFG_ARRAY')
+proxies, WHEN_TO_USE_PROXY, TIMEOUT_SECONDS, MAX_RETRY, API_ORG, AZURE_CFG_ARRAY = \
+    get_conf('proxies', 'WHEN_TO_USE_PROXY', 'TIMEOUT_SECONDS', 'MAX_RETRY', 'API_ORG', 'AZURE_CFG_ARRAY')
+
+if "Connect_OpenAI" not in WHEN_TO_USE_PROXY:
+    if proxies is not None:
+        logger.error("虽然您配置了代理设置,但不会在连接OpenAI的过程中起作用,请检查WHEN_TO_USE_PROXY配置。")
+    proxies = None

 timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check proxy settings in config.py.' + \
                   '网络错误,检查代理服务器是否可用,以及代理设置的格式是否正确,格式须是[协议]://[地址]:[端口],缺一不可。'
@@ -180,14 +185,20 @@ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[],
                 raise ConnectionAbortedError("正常结束,但显示Token不足,导致输出不完整,请削减单次输入的文本量。")
             else:
                 raise RuntimeError("OpenAI拒绝了请求:" + error_msg)
-        if ('data: [DONE]' in chunk_decoded): break # api2d 正常完成
+        if ('data: [DONE]' in chunk_decoded): break # api2d & one-api 正常完成
         # 提前读取一些信息 (用于判断异常)
         if has_choices and not choice_valid:
             # 一些垃圾第三方接口的出现这样的错误
             continue
         json_data = chunkjson['choices'][0]
         delta = json_data["delta"]
-        if len(delta) == 0: break
+        if len(delta) == 0:
+            is_termination_certain = False
+            if (has_choices) and (chunkjson['choices'][0].get('finish_reason', 'null') == 'stop'): is_termination_certain = True
+            if is_termination_certain: break
+            else: continue # 对于不符合规范的狗屎接口,这里需要继续
         if (not has_content) and has_role: continue
        if (not has_content) and (not has_role): continue # raise RuntimeError("发现不标准的第三方接口:"+delta)
         if has_content: # has_role = True/False
@@ -285,6 +296,8 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
     history.extend([inputs, ""])
     retry = 0
+    previous_ui_reflesh_time = 0
+    ui_reflesh_min_interval = 0.0
     while True:
         try:
             # make a POST request to the API endpoint, stream=True
@@ -297,13 +310,13 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
             yield from update_ui(chatbot=chatbot, history=history, msg="请求超时"+retry_msg) # 刷新界面
             if retry > MAX_RETRY: raise TimeoutError

         if not stream:
             # 该分支仅适用于不支持stream的o1模型,其他情形一律不适用
             yield from handle_o1_model_special(response, inputs, llm_kwargs, chatbot, history)
             return

         if stream:
+            reach_termination = False # 处理一些 new-api 的奇葩异常
             gpt_replying_buffer = ""
             is_head_of_the_stream = True
             stream_response = response.iter_lines()
@@ -316,11 +329,14 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
                     error_msg = chunk_decoded
                     # 首先排除一个one-api没有done数据包的第三方Bug情形
                     if len(gpt_replying_buffer.strip()) > 0 and len(error_msg) == 0:
-                        yield from update_ui(chatbot=chatbot, history=history, msg="检测到有缺陷的非OpenAI官方接口,建议选择更稳定的接口。")
+                        yield from update_ui(chatbot=chatbot, history=history, msg="检测到有缺陷的接口,建议选择更稳定的接口。")
+                        if not reach_termination:
+                            reach_termination = True
+                            log_chat(llm_model=llm_kwargs["llm_model"], input_str=inputs, output_str=gpt_replying_buffer)
                         break
                     # 其他情况,直接返回报错
                     chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
-                    yield from update_ui(chatbot=chatbot, history=history, msg="非OpenAI官方接口返回了错误:" + chunk.decode()) # 刷新界面
+                    yield from update_ui(chatbot=chatbot, history=history, msg="接口返回了错误:" + chunk.decode()) # 刷新界面
                     return
@@ -330,6 +346,8 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
                     # 数据流的第一帧不携带content
                     is_head_of_the_stream = False; continue

+                if "error" in chunk_decoded: logger.error(f"接口返回了未知错误: {chunk_decoded}")
+
                 if chunk:
                     try:
                         if has_choices and not choice_valid:
@@ -338,14 +356,25 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
                         if ('data: [DONE]' not in chunk_decoded) and len(chunk_decoded) > 0 and (chunkjson is None):
                             # 传递进来一些奇怪的东西
                             raise ValueError(f'无法读取以下数据,请检查配置。\n\n{chunk_decoded}')
-                        # 前者是API2D的结束条件,后者是OPENAI的结束条件
-                        if ('data: [DONE]' in chunk_decoded) or (len(chunkjson['choices'][0]["delta"]) == 0):
-                            # 判定为数据流的结束,gpt_replying_buffer也写完了
-                            log_chat(llm_model=llm_kwargs["llm_model"], input_str=inputs, output_str=gpt_replying_buffer)
-                            break
+                        # 前者是API2D & One-API的结束条件,后者是OPENAI的结束条件
+                        one_api_terminate = ('data: [DONE]' in chunk_decoded)
+                        openai_terminate = (has_choices) and (len(chunkjson['choices'][0]["delta"]) == 0)
+                        if one_api_terminate or openai_terminate:
+                            is_termination_certain = False
+                            if one_api_terminate: is_termination_certain = True # 抓取符合规范的结束条件
+                            elif (has_choices) and (chunkjson['choices'][0].get('finish_reason', 'null') == 'stop'): is_termination_certain = True # 抓取符合规范的结束条件
+                            if is_termination_certain:
+                                reach_termination = True
+                                log_chat(llm_model=llm_kwargs["llm_model"], input_str=inputs, output_str=gpt_replying_buffer)
+                                break # 对于符合规范的接口,这里可以break
+                            else:
+                                continue # 对于不符合规范的狗屎接口,这里需要继续
+                        # 到这里,我们已经可以假定必须包含choice了
+                        try:
+                            status_text = f"finish_reason: {chunkjson['choices'][0].get('finish_reason', 'null')}"
+                        except:
+                            logger.error(f"一些垃圾第三方接口出现这样的错误,兼容一下吧: {chunk_decoded}")
                         # 处理数据流的主体
-                        status_text = f"finish_reason: {chunkjson['choices'][0].get('finish_reason', 'null')}"
-                        # 如果这里抛出异常,一般是文本过长,详情见get_full_error的输出
                         if has_content:
                             # 正常情况
                             gpt_replying_buffer = gpt_replying_buffer + chunkjson['choices'][0]["delta"]["content"]
@@ -354,21 +383,26 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
                             continue
                         else:
                             # 至此已经超出了正常接口应该进入的范围,一些垃圾第三方接口会出现这样的错误
-                            if chunkjson['choices'][0]["delta"]["content"] is None: continue # 一些垃圾第三方接口出现这样的错误,兼容一下吧
+                            if chunkjson['choices'][0]["delta"].get("content", None) is None:
+                                logger.error(f"一些垃圾第三方接口出现这样的错误,兼容一下吧: {chunk_decoded}")
+                                continue
                             gpt_replying_buffer = gpt_replying_buffer + chunkjson['choices'][0]["delta"]["content"]
                         history[-1] = gpt_replying_buffer
                         chatbot[-1] = (history[-2], history[-1])
-                        yield from update_ui(chatbot=chatbot, history=history, msg=status_text) # 刷新界面
+                        if time.time() - previous_ui_reflesh_time > ui_reflesh_min_interval:
+                            yield from update_ui(chatbot=chatbot, history=history, msg=status_text) # 刷新界面
+                            previous_ui_reflesh_time = time.time()
                     except Exception as e:
                         yield from update_ui(chatbot=chatbot, history=history, msg="Json解析不合常规") # 刷新界面
                         chunk = get_full_error(chunk, stream_response)
                         chunk_decoded = chunk.decode()
                         error_msg = chunk_decoded
                         chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
-                        yield from update_ui(chatbot=chatbot, history=history, msg="Json解析异常" + error_msg) # 刷新界面
                         logger.error(error_msg)
+                        yield from update_ui(chatbot=chatbot, history=history, msg="Json解析异常" + error_msg) # 刷新界面
                         return
+            yield from update_ui(chatbot=chatbot, history=history, msg="完成") # 刷新界面
             return # return from stream-branch

 def handle_o1_model_special(response, inputs, llm_kwargs, chatbot, history):
@@ -536,6 +570,8 @@ def generate_payload(inputs:str, llm_kwargs:dict, history:list, system_prompt:st
         "n": 1,
         "stream": stream,
     }
+    openai_force_temperature_one = model_info[llm_kwargs['llm_model']].get('openai_force_temperature_one', False)
+    if openai_force_temperature_one:
+        payload.pop('temperature')
     return headers,payload
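The net effect of the streaming changes: an empty delta alone no longer ends the stream; termination has to be confirmed by 'data: [DONE]' or by finish_reason == 'stop'. A stand-alone sketch of that decision (stream_should_end is a hypothetical helper; chunk shapes as in the diff, with the caller continuing when it returns False):

def stream_should_end(chunk_decoded: str, chunkjson: dict) -> bool:
    if 'data: [DONE]' in chunk_decoded:
        return True  # API2D / one-api style terminator: always trustworthy
    choices = (chunkjson or {}).get('choices') or []
    if choices and len(choices[0].get('delta', {})) == 0:
        # an empty delta is only a certain ending when finish_reason says so;
        # otherwise the (non-conforming) stream is allowed to continue
        return choices[0].get('finish_reason') == 'stop'
    return False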

View File

@@ -170,7 +170,7 @@ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[],
         except requests.exceptions.ConnectionError:
             chunk = next(stream_response) # 失败了,重试一次?再失败就没办法了。
         chunk_decoded, chunkjson, has_choices, choice_valid, has_content, has_role = decode_chunk(chunk)
-        if len(chunk_decoded)==0: continue
+        if len(chunk_decoded)==0 or chunk_decoded.startswith(':'): continue
         if not chunk_decoded.startswith('data:'):
             error_msg = get_full_error(chunk, stream_response).decode()
             if "reduce the length" in error_msg:
@@ -181,9 +181,6 @@ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[],
             raise RuntimeError("OpenAI拒绝了请求:" + error_msg)
         if ('data: [DONE]' in chunk_decoded): break # api2d 正常完成
         # 提前读取一些信息 (用于判断异常)
-        if (has_choices and not choice_valid) or ('OPENROUTER PROCESSING' in chunk_decoded):
-            # 一些垃圾第三方接口的出现这样的错误,openrouter的特殊处理
-            continue
         json_data = chunkjson['choices'][0]
         delta = json_data["delta"]
         if len(delta) == 0: break
@@ -328,8 +325,7 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
             if chunk:
                 try:
-                    if (has_choices and not choice_valid) or ('OPENROUTER PROCESSING' in chunk_decoded):
-                        # 一些垃圾第三方接口的出现这样的错误, 或者OPENROUTER的特殊处理,因为OPENROUTER的数据流未连接到模型时会出现OPENROUTER PROCESSING
+                    if (has_choices and not choice_valid) or chunk_decoded.startswith(':'):
                         continue
                     if ('data: [DONE]' not in chunk_decoded) and len(chunk_decoded) > 0 and (chunkjson is None):
                         # 传递进来一些奇怪的东西

View File

@@ -202,16 +202,29 @@ class GoogleChatInit:
         ) # 处理 history
         messages.append(self.__conversation_user(inputs, llm_kwargs, enable_multimodal_capacity)) # 处理用户对话
-        payload = {
-            "contents": messages,
-            "generationConfig": {
-                # "maxOutputTokens": llm_kwargs.get("max_token", 1024),
-                "stopSequences": str(llm_kwargs.get("stop", "")).split(" "),
-                "temperature": llm_kwargs.get("temperature", 1),
-                "topP": llm_kwargs.get("top_p", 0.8),
-                "topK": 10,
-            },
-        }
+        stop_sequences = str(llm_kwargs.get("stop", "")).split(" ")
+        # 过滤空字符串并确保至少有一个停止序列
+        stop_sequences = [s for s in stop_sequences if s]
+        if not stop_sequences:
+            payload = {
+                "contents": messages,
+                "generationConfig": {
+                    "temperature": llm_kwargs.get("temperature", 1),
+                    "topP": llm_kwargs.get("top_p", 0.8),
+                    "topK": 10,
+                },
+            }
+        else:
+            payload = {
+                "contents": messages,
+                "generationConfig": {
+                    # "maxOutputTokens": llm_kwargs.get("max_token", 1024),
+                    "stopSequences": stop_sequences,
+                    "temperature": llm_kwargs.get("temperature", 1),
+                    "topP": llm_kwargs.get("top_p", 0.8),
+                    "topK": 10,
+                },
+            }
         return header, payload
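Why the filtering matters: with no stop words configured, the old code sent "stopSequences": [""], an empty string split on spaces, which is presumably the zero-stop-word case the Gemini fix in the commit list targets. A two-line check:

stop_sequences = [s for s in str("").split(" ") if s]  # str("").split(" ") -> [""]
assert stop_sequences == []  # -> payload is built without "stopSequences"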

View File

@@ -24,18 +24,13 @@ class QwenRequestInstance():
     def generate(self, inputs, llm_kwargs, history, system_prompt):
         # import _thread as thread
         from dashscope import Generation
-        QWEN_MODEL = {
-            'qwen-turbo': Generation.Models.qwen_turbo,
-            'qwen-plus': Generation.Models.qwen_plus,
-            'qwen-max': Generation.Models.qwen_max,
-        }[llm_kwargs['llm_model']]
         top_p = llm_kwargs.get('top_p', 0.8)
         if top_p == 0: top_p += 1e-5
         if top_p == 1: top_p -= 1e-5
         self.result_buf = ""
         responses = Generation.call(
-            model=QWEN_MODEL,
+            model=llm_kwargs['llm_model'],
             messages=generate_message_payload(inputs, llm_kwargs, history, system_prompt),
             top_p=top_p,
             temperature=llm_kwargs.get('temperature', 1.0),
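Dropping the hard-coded `Generation.Models` table means any model id DashScope accepts (e.g. `qwen2.5-max`) passes straight through. A hedged usage sketch, assuming the DashScope SDK is installed and the key below is a placeholder:

```python
# Sketch only: pass the model name string directly, mirroring the diff above,
# so newly released models need no code change.
import dashscope
from dashscope import Generation

dashscope.api_key = "sk-..."  # placeholder; normally read from configuration
responses = Generation.call(
    model="qwen2.5-max",      # any model id DashScope accepts
    messages=[{"role": "user", "content": "你好"}],
    top_p=0.8,                # caller should keep this strictly inside (0, 1)
    temperature=1.0,
)
```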


@@ -36,10 +36,11 @@ def get_full_error(chunk, stream_response):
 def decode_chunk(chunk):
     """
-    用于解读"content"和"finish_reason"的内容
+    用于解读"content"和"finish_reason"的内容（如果支持思维链也会返回"reasoning_content"内容）
     """
     chunk = chunk.decode()
     respose = ""
+    reasoning_content = ""
     finish_reason = "False"
     try:
         chunk = json.loads(chunk[6:])
@@ -57,14 +58,20 @@ def decode_chunk(chunk):
         return respose, finish_reason
     try:
-        respose = chunk["choices"][0]["delta"]["content"]
+        if chunk["choices"][0]["delta"]["content"] is not None:
+            respose = chunk["choices"][0]["delta"]["content"]
+    except:
+        pass
+    try:
+        if chunk["choices"][0]["delta"]["reasoning_content"] is not None:
+            reasoning_content = chunk["choices"][0]["delta"]["reasoning_content"]
     except:
         pass
     try:
         finish_reason = chunk["choices"][0]["finish_reason"]
     except:
         pass
-    return respose, finish_reason
+    return respose, reasoning_content, finish_reason


 def generate_message(input, model, key, history, max_output_token, system_prompt, temperature):
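The three-tuple return is what lets callers stream DeepSeek-R1-style chain-of-thought separately from the answer. A self-contained sketch of the parsing rule (field names follow the OpenAI-compatible delta format shown above; the helper name is illustrative):

```python
import json

# Illustrative: extract answer text and chain-of-thought from one SSE chunk.
def parse_delta(raw: bytes) -> tuple[str, str, str]:
    respose, reasoning, finish_reason = "", "", "False"
    try:
        choice = json.loads(raw.decode()[len("data: "):])["choices"][0]
        finish_reason = choice.get("finish_reason") or finish_reason
        d = choice.get("delta", {})
        respose = d.get("content") or ""              # visible answer tokens
        reasoning = d.get("reasoning_content") or ""  # CoT tokens (R1-style models)
    except (json.JSONDecodeError, KeyError, IndexError):
        pass
    return respose, reasoning, finish_reason

chunk = b'data: {"choices":[{"delta":{"reasoning_content":"thinking..."},"finish_reason":null}]}'
print(parse_delta(chunk))  # ('', 'thinking...', 'False')
```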
@@ -149,6 +156,7 @@ def get_predict_function(
         observe_window = None：
         用于负责跨越线程传递已经输出的部分，大部分时候仅仅为了fancy的视觉效果，留空即可。observe_window[0]：观测窗。observe_window[1]：看门狗
     """
+    from .bridge_all import model_info
     watch_dog_patience = 5  # 看门狗的耐心，设置5秒不准咬人(咬的也不是人)
     if len(APIKEY) == 0:
         raise RuntimeError(f"APIKEY为空，请检查配置文件的{APIKEY}")
@@ -163,29 +171,21 @@ def get_predict_function(
         system_prompt=sys_prompt,
         temperature=llm_kwargs["temperature"],
     )
+    reasoning = model_info[llm_kwargs['llm_model']].get('enable_reasoning', False)
     retry = 0
     while True:
         try:
-            from .bridge_all import model_info
             endpoint = model_info[llm_kwargs["llm_model"]]["endpoint"]
-            if not disable_proxy:
-                response = requests.post(
-                    endpoint,
-                    headers=headers,
-                    proxies=proxies,
-                    json=playload,
-                    stream=True,
-                    timeout=TIMEOUT_SECONDS,
-                )
-            else:
-                response = requests.post(
-                    endpoint,
-                    headers=headers,
-                    json=playload,
-                    stream=True,
-                    timeout=TIMEOUT_SECONDS,
-                )
+            response = requests.post(
+                endpoint,
+                headers=headers,
+                proxies=None if disable_proxy else proxies,
+                json=playload,
+                stream=True,
+                timeout=TIMEOUT_SECONDS,
+            )
             break
         except:
             retry += 1
@@ -195,9 +195,12 @@ def get_predict_function(
             if MAX_RETRY != 0:
                 logger.error(f"请求超时，正在重试 ({retry}/{MAX_RETRY}) ……")
-    stream_response = response.iter_lines()
     result = ""
     finish_reason = ""
+    if reasoning:
+        resoning_buffer = ""
+    stream_response = response.iter_lines()
     while True:
         try:
             chunk = next(stream_response)
@@ -207,9 +210,9 @@ def get_predict_function(
             break
         except requests.exceptions.ConnectionError:
             chunk = next(stream_response)  # 失败了，重试一次？再失败就没办法了。
-        response_text, finish_reason = decode_chunk(chunk)
+        response_text, reasoning_content, finish_reason = decode_chunk(chunk)
         # 返回的数据流第一次为空，继续等待
-        if response_text == "" and finish_reason != "False":
+        if response_text == "" and (reasoning == False or reasoning_content == "") and finish_reason != "False":
             continue
         if response_text == "API_ERROR" and (
             finish_reason != "False" or finish_reason != "stop"
@@ -227,6 +230,8 @@ def get_predict_function(
                 print(f"[response] {result}")
             break
         result += response_text
+        if reasoning:
+            resoning_buffer += reasoning_content
         if observe_window is not None:
             # 观测窗，把已经获取的数据显示出去
             if len(observe_window) >= 1:
@@ -241,6 +246,10 @@ def get_predict_function(
             error_msg = chunk_decoded
             logger.error(error_msg)
             raise RuntimeError("Json解析不合常规")
+    if reasoning:
+        # reasoning 的部分加上框 (>)
+        return '\n'.join(map(lambda x: '> ' + x, resoning_buffer.split('\n'))) + \
+            '\n\n' + result
     return result


 def predict(
@@ -262,6 +271,7 @@ def get_predict_function(
         chatbot 为WebUI中显示的对话列表，修改它，然后yeild出去，可以直接修改对话界面内容
         additional_fn代表点击的哪个按钮，按钮见functional.py
         """
+        from .bridge_all import model_info
         if len(APIKEY) == 0:
             raise RuntimeError(f"APIKEY为空，请检查配置文件的{APIKEY}")
         if inputs == "":
@@ -299,31 +309,22 @@ def get_predict_function(
             temperature=llm_kwargs["temperature"],
         )
+        reasoning = model_info[llm_kwargs['llm_model']].get('enable_reasoning', False)
         history.append(inputs)
         history.append("")
         retry = 0
         while True:
             try:
-                from .bridge_all import model_info
                 endpoint = model_info[llm_kwargs["llm_model"]]["endpoint"]
-                if not disable_proxy:
-                    response = requests.post(
-                        endpoint,
-                        headers=headers,
-                        proxies=proxies,
-                        json=playload,
-                        stream=True,
-                        timeout=TIMEOUT_SECONDS,
-                    )
-                else:
-                    response = requests.post(
-                        endpoint,
-                        headers=headers,
-                        json=playload,
-                        stream=True,
-                        timeout=TIMEOUT_SECONDS,
-                    )
+                response = requests.post(
+                    endpoint,
+                    headers=headers,
+                    proxies=None if disable_proxy else proxies,
+                    json=playload,
+                    stream=True,
+                    timeout=TIMEOUT_SECONDS,
+                )
                 break
             except:
                 retry += 1
@@ -338,6 +339,8 @@ def get_predict_function(
                 raise TimeoutError
         gpt_replying_buffer = ""
+        if reasoning:
+            gpt_reasoning_buffer = ""
         stream_response = response.iter_lines()
         while True:
@@ -347,9 +350,9 @@ def get_predict_function(
                 break
             except requests.exceptions.ConnectionError:
                 chunk = next(stream_response)  # 失败了，重试一次？再失败就没办法了。
-            response_text, finish_reason = decode_chunk(chunk)
+            response_text, reasoning_content, finish_reason = decode_chunk(chunk)
             # 返回的数据流第一次为空，继续等待
-            if response_text == "" and finish_reason != "False":
+            if response_text == "" and (reasoning == False or reasoning_content == "") and finish_reason != "False":
                 status_text = f"finish_reason: {finish_reason}"
                 yield from update_ui(
                     chatbot=chatbot, history=history, msg=status_text
@@ -379,9 +382,14 @@ def get_predict_function(
                     logger.info(f"[response] {gpt_replying_buffer}")
                 break
             status_text = f"finish_reason: {finish_reason}"
-            gpt_replying_buffer += response_text
-            # 如果这里抛出异常，一般是文本过长，详情见get_full_error的输出
-            history[-1] = gpt_replying_buffer
+            if reasoning:
+                gpt_replying_buffer += response_text
+                gpt_reasoning_buffer += reasoning_content
+                history[-1] = '\n'.join(map(lambda x: '> ' + x, gpt_reasoning_buffer.split('\n'))) + '\n\n' + gpt_replying_buffer
+            else:
+                gpt_replying_buffer += response_text
+                # 如果这里抛出异常，一般是文本过长，详情见get_full_error的输出
+                history[-1] = gpt_replying_buffer
             chatbot[-1] = (history[-2], history[-1])
             yield from update_ui(
                 chatbot=chatbot, history=history, msg=status_text
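Both code paths render the chain-of-thought as a Markdown blockquote by prefixing every line with `> `, so the CoT displays visually boxed above the answer. A compact sketch of that formatting step:

```python
# Sketch: render reasoning tokens as a Markdown blockquote above the answer,
# matching the '> '-prefixing used in both return paths of the diff above.
def format_with_reasoning(reasoning: str, answer: str) -> str:
    quoted = '\n'.join('> ' + line for line in reasoning.split('\n'))
    return f"{quoted}\n\n{answer}"

print(format_with_reasoning("step 1\nstep 2", "Final answer."))
# > step 1
# > step 2
#
# Final answer.
```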


@@ -13,7 +13,7 @@ scipdf_parser>=0.52
 spacy==3.7.4
 anthropic>=0.18.1
 python-markdown-math
-pymdown-extensions
+pymdown-extensions>=10.14
 websocket-client
 beautifulsoup4
 prompt_toolkit


@@ -2,6 +2,7 @@ import markdown
 import re
 import os
 import math
+import html
 from loguru import logger
 from textwrap import dedent
@@ -384,6 +385,24 @@ def markdown_convertion(txt):
     )


+def code_block_title_replace_format(match):
+    lang = match.group(1)
+    filename = match.group(2)
+    return f"```{lang} {{title=\"{filename}\"}}\n"
+
+
+def get_last_backticks_indent(text):
+    # 从后向前查找最后一个 ```
+    lines = text.splitlines()
+    for line in reversed(lines):
+        if '```' in line:
+            # 计算前面的空格数量
+            indent = len(line) - len(line.lstrip())
+            return indent
+    return 0  # 如果没找到，返回0
+
+
+@lru_cache(maxsize=16)  # 使用lru缓存
 def close_up_code_segment_during_stream(gpt_reply):
     """
     在gpt输出代码的中途（输出了前面的```，但还没输出完后面的```），补上后面的```
@@ -397,6 +416,12 @@ def close_up_code_segment_during_stream(gpt_reply):
""" """
if "```" not in gpt_reply: if "```" not in gpt_reply:
return gpt_reply return gpt_reply
# replace [```python:warp.py] to [```python {title="warp.py"}]
pattern = re.compile(r"```([a-z]{1,12}):([^:\n]{1,35}\.([a-zA-Z^:\n]{1,3}))\n")
if pattern.search(gpt_reply):
gpt_reply = pattern.sub(code_block_title_replace_format, gpt_reply)
if gpt_reply.endswith("```"): if gpt_reply.endswith("```"):
return gpt_reply return gpt_reply
@@ -404,7 +429,11 @@ def close_up_code_segment_during_stream(gpt_reply):
     segments = gpt_reply.split("```")
     n_mark = len(segments) - 1
     if n_mark % 2 == 1:
-        return gpt_reply + "\n```"  # 输出代码片段中！
+        try:
+            num_padding = get_last_backticks_indent(gpt_reply)
+        except:
+            num_padding = 0
+        return gpt_reply + "\n" + " "*num_padding + "```"  # 输出代码片段中！
     else:
         return gpt_reply
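The indent matching matters for fences nested inside lists: the temporary closing ``` must line up with the opening one, or Markdown treats it as literal text. A small standalone sketch of the behavior (hypothetical helper name):

```python
# Sketch: close a dangling fence at the same indentation as its opener, so a
# fence opened inside a list item still parses while the stream is incomplete.
def close_dangling_fence(text: str) -> str:
    if text.count("```") % 2 == 0:
        return text                      # all fences already closed
    indent = 0
    for line in reversed(text.splitlines()):
        if "```" in line:
            indent = len(line) - len(line.lstrip())
            break
    return text + "\n" + " " * indent + "```"

print(close_dangling_fence("- item\n  ```python\n  x = 1"))
# the added closer is indented two spaces to match the opener
```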
@@ -421,6 +450,19 @@ def special_render_issues_for_mermaid(text):
     return text


+def contain_html_tag(text):
+    """
+    判断文本中是否包含HTML标签。
+    """
+    pattern = r'</?([a-zA-Z0-9_]{3,16})>|<script\s+[^>]*src=["\']([^"\']+)["\'][^>]*>'
+    return re.search(pattern, text) is not None
+
+
+def contain_image(text):
+    pattern = r'<br/><br/><div align="center"><img src="file=(.*?)" base64="(.*?)"></div>'
+    return re.search(pattern, text) is not None
+
+
 def compat_non_markdown_input(text):
     """
     改善非markdown输入的显示效果（例如将空格转换为&nbsp;，将换行符转换为</br>等）。
@@ -429,9 +471,13 @@ def compat_non_markdown_input(text):
         # careful input：markdown输入
         text = special_render_issues_for_mermaid(text)  # 处理特殊的渲染问题
         return text
-    elif "</div>" in text:
+    elif ("<" in text) and (">" in text) and contain_html_tag(text):
         # careful input：html输入
-        return text
+        if contain_image(text):
+            return text
+        else:
+            escaped_text = html.escape(text)
+            return escaped_text
     else:
         # whatever input：非markdown输入
         lines = text.split("\n")
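This is the "prevent html rendering for input" change: user-supplied HTML is escaped to plain text unless it matches the app's own trusted image markup. A reduced sketch of the decision, with illustrative names and an abbreviated trusted pattern:

```python
import html
import re

# Sketch: escape untrusted HTML-looking input unless it matches the app's own
# internally generated image block (pattern abbreviated for illustration).
TRUSTED_IMAGE = re.compile(r'<div align="center"><img src="file=.*?"')

def sanitize_display_text(text: str) -> str:
    if "<" in text and ">" in text and re.search(r'</?[a-zA-Z0-9_]{3,16}>', text):
        if TRUSTED_IMAGE.search(text):
            return text                  # trusted, internally generated markup
        return html.escape(text)         # user HTML: render as plain text
    return text

print(sanitize_display_text("<script>alert(1)</script>"))
# &lt;script&gt;alert(1)&lt;/script&gt;
```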


@@ -51,7 +51,7 @@ def validate_path_safety(path_or_url, user):
     from toolbox import get_conf, default_user_name
     from toolbox import FriendlyException
     PATH_PRIVATE_UPLOAD, PATH_LOGGING = get_conf('PATH_PRIVATE_UPLOAD', 'PATH_LOGGING')
-    sensitive_path = None
+    sensitive_path = None  # 必须不能包含 '/'，即不能是多级路径
     path_or_url = os.path.relpath(path_or_url)
     if path_or_url.startswith(PATH_LOGGING):  # 日志文件（按用户划分）
         sensitive_path = PATH_LOGGING


@@ -4,7 +4,6 @@ from functools import wraps, lru_cache
 from shared_utils.advanced_markdown_format import format_io
 from shared_utils.config_loader import get_conf as get_conf

 pj = os.path.join

 default_user_name = 'default_user'
@@ -12,11 +11,13 @@ default_user_name = 'default_user'
 openai_regex = re.compile(
     r"sk-[a-zA-Z0-9_-]{48}$|" +
     r"sk-[a-zA-Z0-9_-]{92}$|" +
-    r"sk-proj-[a-zA-Z0-9_-]{48}$|"+
-    r"sk-proj-[a-zA-Z0-9_-]{124}$|"+
-    r"sk-proj-[a-zA-Z0-9_-]{156}$|"+ #新版apikey位数不匹配，故修改此正则表达式
+    r"sk-proj-[a-zA-Z0-9_-]{48}$|" +
+    r"sk-proj-[a-zA-Z0-9_-]{124}$|" +
+    r"sk-proj-[a-zA-Z0-9_-]{156}$|" + #新版apikey位数不匹配，故修改此正则表达式
     r"sess-[a-zA-Z0-9]{40}$"
 )

 def is_openai_api_key(key):
     CUSTOM_API_KEY_PATTERN = get_conf('CUSTOM_API_KEY_PATTERN')
     if len(CUSTOM_API_KEY_PATTERN) != 0:
@@ -27,7 +28,7 @@ def is_openai_api_key(key):
 def is_azure_api_key(key):
-    API_MATCH_AZURE = re.match(r"[a-zA-Z0-9]{32}$", key)
+    API_MATCH_AZURE = re.match(r"^[a-zA-Z0-9]{32}$|^[a-zA-Z0-9]{84}", key)
     return bool(API_MATCH_AZURE)
@@ -35,16 +36,25 @@ def is_api2d_key(key):
     API_MATCH_API2D = re.match(r"fk[a-zA-Z0-9]{6}-[a-zA-Z0-9]{32}$", key)
     return bool(API_MATCH_API2D)


 def is_openroute_api_key(key):
     API_MATCH_OPENROUTE = re.match(r"sk-or-v1-[a-zA-Z0-9]{64}$", key)
     return bool(API_MATCH_OPENROUTE)


 def is_cohere_api_key(key):
     API_MATCH_AZURE = re.match(r"[a-zA-Z0-9]{40}$", key)
     return bool(API_MATCH_AZURE)


 def is_any_api_key(key):
+    # key 一般只包含字母、数字、下划线、逗号、中划线
+    if not re.match(r"^[a-zA-Z0-9_\-,]+$", key):
+        # 如果配置了 CUSTOM_API_KEY_PATTERN，再检查以下，以免误杀
+        if CUSTOM_API_KEY_PATTERN := get_conf('CUSTOM_API_KEY_PATTERN'):
+            return bool(re.match(CUSTOM_API_KEY_PATTERN, key))
+        return False
     if ',' in key:
         keys = key.split(',')
         for k in keys:
@@ -79,7 +89,7 @@ def select_api_key(keys, llm_model):
     key_list = keys.split(',')

     if llm_model.startswith('gpt-') or llm_model.startswith('chatgpt-') or \
-        llm_model.startswith('one-api-') or llm_model.startswith('o1-'):
+        llm_model.startswith('one-api-') or llm_model == 'o1' or llm_model.startswith('o1-'):
         for k in key_list:
             if is_openai_api_key(k): avail_key_list.append(k)
@@ -102,7 +112,7 @@ def select_api_key(keys, llm_model):
     if len(avail_key_list) == 0:
         raise RuntimeError(f"您提供的api-key不满足要求，不包含任何可用于{llm_model}的api-key。您可能选择了错误的模型或请求源（左上角更换模型菜单中可切换openai,azure,claude,cohere等请求源）。")

     api_key = random.choice(avail_key_list)  # 随机负载均衡
     return api_key
@@ -118,5 +128,5 @@ def select_api_key_for_embed_models(keys, llm_model):
     if len(avail_key_list) == 0:
         raise RuntimeError(f"您提供的api-key不满足要求，不包含任何可用于{llm_model}的api-key。您可能选择了错误的模型或请求源。")

     api_key = random.choice(avail_key_list)  # 随机负载均衡
     return api_key
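The new precheck in `is_any_api_key` rejects anything containing characters outside the usual key alphabet before the per-provider regexes run, while the widened Azure pattern accepts the new 84-character Azure AI keys. A reduced sketch (CUSTOM_API_KEY_PATTERN handling omitted; helper names are illustrative):

```python
import re

# Reduced sketch: keys are letters, digits, '_', '-', and ',' (comma separates
# multiple keys); anything else is rejected up front, before provider checks.
KEY_CHARSET = re.compile(r"^[a-zA-Z0-9_\-,]+$")
AZURE_KEY = re.compile(r"^[a-zA-Z0-9]{32}$|^[a-zA-Z0-9]{84}")

def looks_like_any_key(key: str) -> bool:
    if not KEY_CHARSET.match(key):
        return False
    return any(AZURE_KEY.match(k) for k in key.split(','))

print(looks_like_any_key("a" * 84))       # True  (new long Azure AI pattern)
print(looks_like_any_key("<not a key>"))  # False (fails the charset precheck)
```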


@@ -20,7 +20,7 @@ Replace 'Tex/' with the actual directory path where your files are located befor
md = """ md = """
Following code including wrapper Following code including wrapper
```mermaid ```python:wrapper.py
graph TD graph TD
A[Enter Chart Definition] --> B(Preview) A[Enter Chart Definition] --> B(Preview)
B --> C{decide} B --> C{decide}
@@ -41,6 +41,33 @@ Any folded content here. It requires an empty line just above it.
 </details>

+"""
+
+md ="""
+在这种场景中，您希望机器 B 能够通过轮询机制来间接地"请求"机器 A，而实际上机器 A 只能主动向机器 B 发出请求。这是一种典型的客户端-服务器轮询模式。下面是如何实现这种机制的详细步骤：
+
+### 机器 B 的实现
+
+1. **安装 FastAPI 和必要的依赖库**：
+
+```bash
+pip install fastapi uvicorn
+```
+
+2. **创建 FastAPI 服务**：
+
+```python
+from fastapi import FastAPI
+from fastapi.responses import JSONResponse
+from uuid import uuid4
+from threading import Lock
+import time
+
+app = FastAPI()
+
+# 字典用于存储请求和状态
+requests = {}
+process_lock = Lock()
 """

 def validate_path():
     import os, sys
@@ -53,10 +80,12 @@ def validate_path():
 validate_path()  # validate path so you can run from base directory
 from toolbox import markdown_convertion
-from shared_utils.advanced_markdown_format import markdown_convertion_for_file
+# from shared_utils.advanced_markdown_format import markdown_convertion_for_file
+from shared_utils.advanced_markdown_format import close_up_code_segment_during_stream

 # with open("gpt_log/default_user/shared/2024-04-22-01-27-43.zip.extract/translated_markdown.md", "r", encoding="utf-8") as f:
 #     md = f.read()

-html = markdown_convertion_for_file(md)
+md = close_up_code_segment_during_stream(md)
+html = markdown_convertion(md)
 # print(html)
 with open("test.html", "w", encoding="utf-8") as f:
     f.write(html)


@@ -1,9 +1,16 @@
+:root {
+    --gpt-academic-message-font-size: 15px;
+}
+
+.message {
+    font-size: var(--gpt-academic-message-font-size) !important;
+}
+
 #plugin_arg_menu {
     transform: translate(-50%, -50%);
     border: dashed;
 }

 /* hide remove all button */
 .remove-all.svelte-aqlk7e.svelte-aqlk7e.svelte-aqlk7e {
     visibility: hidden;
@@ -25,7 +32,6 @@
     visibility: hidden;
 }
-
 /* height of the upload box */
 .wrap.svelte-xwlu1w {
     min-height: var(--size-32);
@@ -97,13 +103,9 @@
     min-width: min(80px, 100%);
 }

-#cbs {
-    background-color: var(--block-background-fill) !important;
-}
-
+#cbs,
 #cbsc {
-    background-color: var(--block-background-fill) !important;
+    background-color: rgba(var(--block-background-fill), 0.5) !important;
 }
 #interact-panel .form {
@@ -155,7 +157,7 @@
     transform: translate(-50%, -50%);
     flex-wrap: wrap;
     justify-content: center;
-    transition: opacity 1s ease-in-out;
+    transition: opacity 0.6s ease-in-out;
     opacity: 0;
 }

 .welcome-card-container.show {
@@ -207,6 +209,7 @@
 .welcome-content {
     text-wrap: balance;
     height: 55px;
+    font-size: 13px;
     display: flex;
     align-items: center;
 }
@@ -276,3 +279,35 @@
     box-shadow: 10px 10px 15px rgba(0, 0, 0, 0.5);
     left: 10px;
 }
+
+#tooltip .hidden {
+    /* display: none; */
+    opacity: 0;
+    transition: opacity 0.5s ease;
+}
+
+#tooltip .visible {
+    /* display: block; */
+    opacity: 1;
+    transition: opacity 0.5s ease;
+}
+
+#elem_fontsize,
+#elem_top_p,
+#elem_temperature,
+#elem_max_length_sl,
+#elem_prompt {
+    /* 左右为0，顶部为0，底部为2px */
+    padding: 0 0 4px 0;
+    backdrop-filter: blur(10px);
+    background-color: rgba(var(--block-background-fill), 0.5);
+}
+
+#tooltip #cbs,
+#tooltip #cbsc,
+#tooltip .svelte-b6y5bg,
+#tooltip .tabitem {
+    backdrop-filter: blur(10px);
+    background-color: rgba(var(--block-background-fill), 0.5);
+}


@@ -392,7 +392,8 @@ function chatbotContentChanged(attempt = 1, force = false) {
                 // Now pass both the message element and the is_last_in_arr boolean to addCopyButton
                 addCopyButton(message, index, is_last_in_arr);
-                save_conversation_history(); // save_conversation_history
+                save_conversation_history_slow_down();
             });
             // gradioApp().querySelectorAll('#gpt-chatbot .message-wrap .message.bot').forEach(addCopyButton);
         }, i === 0 ? 0 : 200);
@@ -749,10 +750,24 @@ function minor_ui_adjustment() {
     var bar_btn_width = [];
     // 自动隐藏超出范围的toolbar按钮
     function auto_hide_toolbar() {
-        var qq = document.getElementById('tooltip');
-        var tab_nav = qq.getElementsByClassName('tab-nav');
+        // if chatbot hit upper page boarder, hide all
+        const elem_chatbot = document.getElementById('gpt-chatbot');
+        const chatbot_top = elem_chatbot.getBoundingClientRect().top;
+        var tooltip = document.getElementById('tooltip');
+        var tab_nav = tooltip.getElementsByClassName('tab-nav')[0];
+        // 20 px 大概是一个字的高度
+        if (chatbot_top < 20) {
+            // tab_nav.style.display = 'none';
+            if (tab_nav.classList.contains('visible')) { tab_nav.classList.remove('visible'); }
+            if (!tab_nav.classList.contains('hidden')) { tab_nav.classList.add('hidden'); }
+            return;
+        }
+        if (tab_nav.classList.contains('hidden')) { tab_nav.classList.remove('hidden'); }
+        if (!tab_nav.classList.contains('visible')) { tab_nav.classList.add('visible'); }
+        // tab_nav.style.display = '';
         if (tab_nav.length == 0) { return; }
-        var btn_list = tab_nav[0].getElementsByTagName('button')
+        var btn_list = tab_nav.getElementsByTagName('button')
         if (btn_list.length == 0) { return; }
         // 获取页面宽度
         var page_width = document.documentElement.clientWidth;
@@ -938,19 +953,36 @@ function gpt_academic_gradio_saveload(
     }
 }

+function generateUUID() {
+    // Generate a random number and convert it to a hexadecimal string
+    function randomHexDigit() {
+        return Math.floor((1 + Math.random()) * 0x10000).toString(16).slice(1);
+    }
+    // Construct the UUID using the randomHexDigit function
+    return (
+        randomHexDigit() + randomHexDigit() + '-' +
+        randomHexDigit() + '-' +
+        '4' + randomHexDigit().slice(0, 3) + '-' + // Version 4 UUID
+        ((Math.floor(Math.random() * 4) + 8).toString(16)) + randomHexDigit().slice(0, 3) + '-' +
+        randomHexDigit() + randomHexDigit() + randomHexDigit()
+    );
+}
+
 function update_conversation_metadata() {
     // Create a conversation UUID and timestamp
-    const conversationId = crypto.randomUUID();
-    const timestamp = new Date().toISOString();
-    const conversationData = {
-        id: conversationId,
-        timestamp: timestamp
-    };
-    // Save to cookie
-    setCookie("conversation_metadata", JSON.stringify(conversationData), 2);
-    // read from cookie
-    let conversation_metadata = getCookie("conversation_metadata");
-    // console.log("conversation_metadata", conversation_metadata);
+    try {
+        const conversationId = generateUUID();
+        console.log('Create conversation ID:', conversationId);
+        const timestamp = new Date().toISOString();
+        const conversationMetaData = {
+            id: conversationId,
+            timestamp: timestamp
+        };
+        localStorage.setItem("conversation_metadata", JSON.stringify(conversationMetaData));
+    } catch (e) {
+        console.error('Error in updating conversation metadata:', e);
+    }
 }
@@ -966,20 +998,26 @@ function generatePreview(conversation, timestamp, maxLength = 100) {
 }

 async function save_conversation_history() {
+    // 505030475
     let chatbot = await get_data_from_gradio_component('gpt-chatbot');
     let history = await get_data_from_gradio_component('history-ng');
-    let conversation_metadata = getCookie("conversation_metadata");
-    conversation_metadata = JSON.parse(conversation_metadata);
-    // console.log("conversation_metadata", conversation_metadata);
-    let conversation = {
-        timestamp: conversation_metadata.timestamp,
-        id: conversation_metadata.id,
-        metadata: conversation_metadata,
-        conversation: chatbot,
-        history: history,
-        preview: generatePreview(JSON.parse(history), conversation_metadata.timestamp)
-    };
+    let conversation = {};
+    let conversation_metadata = localStorage.getItem("conversation_metadata");
+    try {
+        conversation_metadata = JSON.parse(conversation_metadata);
+        conversation = {
+            timestamp: conversation_metadata.timestamp,
+            id: conversation_metadata.id,
+            metadata: conversation_metadata,
+            conversation: chatbot,
+            history: history,
+            preview: generatePreview(JSON.parse(history), conversation_metadata.timestamp)
+        };
+    } catch (e) {
+        // console.error('Conversation metadata parse error, recreate conversation metadata');
+        update_conversation_metadata();
+        return;
+    }

     // Get existing conversation history from local storage
     let conversation_history = [];
@@ -1010,6 +1048,13 @@ async function save_conversation_history() {
         return timeB - timeA;
     });

+    const max_chat_preserve = 10;
+    if (conversation_history.length >= max_chat_preserve + 1) {
+        toast_push('对话时间线记录已满，正在移除最早的对话记录。您也可以点击左侧的记录点进行手动清理。', 3000);
+        conversation_history = conversation_history.slice(0, max_chat_preserve);
+    }
+
     // Save back to local storage
     try {
         localStorage.setItem('conversation_history', JSON.stringify(conversation_history));
@@ -1024,61 +1069,35 @@ async function save_conversation_history() {
     }
 }

+save_conversation_history_slow_down = do_something_but_not_too_frequently(300, save_conversation_history);
+
 function restore_chat_from_local_storage(event) {
     let conversation = event.detail;
     push_data_to_gradio_component(conversation.conversation, "gpt-chatbot", "obj");
     push_data_to_gradio_component(conversation.history, "history-ng", "obj");
-    // console.log("restore_chat_from_local_storage", conversation);
-    // Create a conversation UUID and timestamp
     const conversationId = conversation.id;
     const timestamp = conversation.timestamp;
     const conversationData = {
         id: conversationId,
         timestamp: timestamp
     };
-    // Save to cookie
-    setCookie("conversation_metadata", JSON.stringify(conversationData), 2);
-    // read from cookie
-    let conversation_metadata = getCookie("conversation_metadata");
+    localStorage.setItem("conversation_metadata", JSON.stringify(conversationData));
 }

-function clear_conversation(a, b, c) {
+async function clear_conversation(a, b, c) {
+    await save_conversation_history();
     update_conversation_metadata();
     let stopButton = document.getElementById("elem_stop");
     stopButton.click();
-    // console.log("clear_conversation");
     return reset_conversation(a, b);
 }

 function reset_conversation(a, b) {
-    // console.log("js_code_reset");
-    a = btoa(unescape(encodeURIComponent(JSON.stringify(a))));
-    setCookie("js_previous_chat_cookie", a, 1);
-    b = btoa(unescape(encodeURIComponent(JSON.stringify(b))));
-    setCookie("js_previous_history_cookie", b, 1);
-    // gen_restore_btn();
     return [[], [], "已重置"];
 }

-// clear -> 将 history 缓存至 history_cache -> 点击复原 -> restore_previous_chat() -> 触发elem_update_history -> 读取 history_cache
-function restore_previous_chat() {
-    // console.log("restore_previous_chat");
-    let chat = getCookie("js_previous_chat_cookie");
-    chat = JSON.parse(decodeURIComponent(escape(atob(chat))));
-    push_data_to_gradio_component(chat, "gpt-chatbot", "obj");
-    let history = getCookie("js_previous_history_cookie");
-    history = JSON.parse(decodeURIComponent(escape(atob(history))));
-    push_data_to_gradio_component(history, "history-ng", "obj");
-    // document.querySelector("#elem_update_history").click(); // in order to call set_history_gr_state, and send history state to server
-}

 async function on_plugin_exe_complete(fn_name) {
     // console.log(fn_name);
     if (fn_name === "保存当前的对话") {
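`save_conversation_history_slow_down` wraps the save in `do_something_but_not_too_frequently(300, ...)` so every streaming UI tick doesn't hit localStorage. A Python analogue of that pattern, illustrative only (the project's helper is JavaScript; this reimplementation coalesces bursts on the trailing edge):

```python
import threading
import time

# Illustrative analogue of do_something_but_not_too_frequently(ms, fn):
# rapid calls are coalesced so fn runs once per quiet interval.
def rate_limited(interval_ms: int, fn):
    state = {"timer": None}
    lock = threading.Lock()

    def wrapper(*args, **kwargs):
        with lock:
            if state["timer"] is not None:
                state["timer"].cancel()  # drop the pending call
            state["timer"] = threading.Timer(interval_ms / 1000, fn, args, kwargs)
            state["timer"].start()
    return wrapper

save_slow = rate_limited(300, lambda: print("saved"))
for _ in range(10):
    save_slow()      # ten bursts collapse into a single save
time.sleep(0.5)      # prints "saved" exactly once
```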


@@ -567,7 +567,6 @@ ul:not(.options) {
     border-radius: var(--radius-xl) !important;
     border: none;
     padding: var(--spacing-xl) !important;
-    font-size: 15px !important;
     line-height: var(--line-md) !important;
     min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
     min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));


@@ -10,6 +10,14 @@ theme_dir = os.path.dirname(__file__)
 def adjust_theme():
     try:
         set_theme = gr.themes.Soft(
+            font=[
+                "Helvetica",
+                "Microsoft YaHei",
+                "ui-sans-serif",
+                "sans-serif",
+                "system-ui",
+            ],
+            font_mono=["ui-monospace", "Consolas", "monospace"],
             primary_hue=gr.themes.Color(
                 c50="#EBFAF2",
                 c100="#CFF3E1",


@@ -1,6 +1,7 @@
 import gradio as gr
+from toolbox import get_conf

-def define_gui_toolbar(AVAIL_LLM_MODELS, LLM_MODEL, INIT_SYS_PROMPT, THEME, AVAIL_THEMES, ADD_WAIFU, help_menu_description, js_code_for_toggle_darkmode):
+def define_gui_toolbar(AVAIL_LLM_MODELS, LLM_MODEL, INIT_SYS_PROMPT, THEME, AVAIL_THEMES, AVAIL_FONTS, ADD_WAIFU, help_menu_description, js_code_for_toggle_darkmode):
     with gr.Floating(init_x="0%", init_y="0%", visible=True, width=None, drag="forbidden", elem_id="tooltip"):
         with gr.Row():
             with gr.Tab("上传文件", elem_id="interact-panel"):
@@ -9,12 +10,12 @@ def define_gui_toolbar(AVAIL_LLM_MODELS, LLM_MODEL, INIT_SYS_PROMPT, THEME, AVAI
             with gr.Tab("更换模型", elem_id="interact-panel"):
                 md_dropdown = gr.Dropdown(AVAIL_LLM_MODELS, value=LLM_MODEL, elem_id="elem_model_sel", label="更换LLM模型/请求源").style(container=False)
-                top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.01, interactive=True, label="Top-p (nucleus sampling)",)
+                top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.01, interactive=True, label="Top-p (nucleus sampling)", elem_id="elem_top_p")
                 temperature = gr.Slider(minimum=-0, maximum=2.0, value=1.0, step=0.01, interactive=True, label="Temperature", elem_id="elem_temperature")
-                max_length_sl = gr.Slider(minimum=256, maximum=1024*32, value=4096, step=128, interactive=True, label="Local LLM MaxLength",)
+                max_length_sl = gr.Slider(minimum=256, maximum=1024*32, value=4096, step=128, interactive=True, label="Local LLM MaxLength", elem_id="elem_max_length_sl")
                 system_prompt = gr.Textbox(show_label=True, lines=2, placeholder=f"System Prompt", label="System prompt", value=INIT_SYS_PROMPT, elem_id="elem_prompt")

                 temperature.change(None, inputs=[temperature], outputs=None,
-                                   _js="""(temperature)=>gpt_academic_gradio_saveload("save", "elem_prompt", "js_temperature_cookie", temperature)""")
+                                   _js="""(temperature)=>gpt_academic_gradio_saveload("save", "elem_temperature", "js_temperature_cookie", temperature)""")
                 system_prompt.change(None, inputs=[system_prompt], outputs=None,
                                      _js="""(system_prompt)=>gpt_academic_gradio_saveload("save", "elem_prompt", "js_system_prompt_cookie", system_prompt)""")
                 md_dropdown.change(None, inputs=[md_dropdown], outputs=None,
@@ -22,6 +23,8 @@ def define_gui_toolbar(AVAIL_LLM_MODELS, LLM_MODEL, INIT_SYS_PROMPT, THEME, AVAI
             with gr.Tab("界面外观", elem_id="interact-panel"):
                 theme_dropdown = gr.Dropdown(AVAIL_THEMES, value=THEME, label="更换UI主题").style(container=False)
+                fontfamily_dropdown = gr.Dropdown(AVAIL_FONTS, value=get_conf("FONT"), elem_id="elem_fontfamily", label="更换字体类型").style(container=False)
+                fontsize_slider = gr.Slider(minimum=5, maximum=25, value=15, step=1, interactive=True, label="字体大小(默认15)", elem_id="elem_fontsize")
                 checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "浮动输入区", "输入清除键", "插件参数区"], value=["基础功能区", "函数插件区"], label="显示/隐藏功能区", elem_id='cbs').style(container=False)
                 opt = ["自定义菜单"]
                 value=[]
@@ -31,7 +34,10 @@ def define_gui_toolbar(AVAIL_LLM_MODELS, LLM_MODEL, INIT_SYS_PROMPT, THEME, AVAI
                 dark_mode_btn.click(None, None, None, _js=js_code_for_toggle_darkmode)
                 open_new_tab = gr.Button("打开新对话", variant="secondary").style(size="sm")
                 open_new_tab.click(None, None, None, _js=f"""()=>duplicate_in_new_window()""")
+                fontfamily_dropdown.select(None, inputs=[fontfamily_dropdown], outputs=None,
+                                           _js="""(fontfamily)=>{gpt_academic_gradio_saveload("save", "elem_fontfamily", "js_fontfamily", fontfamily); gpt_academic_change_chatbot_font(fontfamily, null, null);}""")
+                fontsize_slider.change(None, inputs=[fontsize_slider], outputs=None,
+                                       _js="""(fontsize)=>{gpt_academic_gradio_saveload("save", "elem_fontsize", "js_fontsize", fontsize); gpt_academic_change_chatbot_font(null, fontsize, null);}""")
             with gr.Tab("帮助", elem_id="interact-panel"):
                 gr.Markdown(help_menu_description)


@@ -1,5 +1,136 @@
function remove_legacy_cookie() {
setCookie("web_cookie_cache", "", -1);
setCookie("js_previous_chat_cookie", "", -1);
setCookie("js_previous_history_cookie", "", -1);
}
function processFontFamily(fontfamily) {
// 检查是否包含括号
if (fontfamily.includes('(')) {
// 分割字符串
const parts = fontfamily.split('(');
const fontNamePart = parts[1].split(')')[0].trim(); // 获取括号内的部分
// 检查是否包含 @
if (fontNamePart.includes('@')) {
const [fontName, fontUrl] = fontNamePart.split('@').map(part => part.trim());
return { fontName, fontUrl };
} else {
return { fontName: fontNamePart, fontUrl: null };
}
} else {
return { fontName: fontfamily, fontUrl: null };
}
}
// 检查字体是否存在
function checkFontAvailability(fontfamily) {
return new Promise((resolve) => {
const canvas = document.createElement('canvas');
const context = canvas.getContext('2d');
// 设置两个不同的字体进行比较
const testText = 'abcdefghijklmnopqrstuvwxyz0123456789';
context.font = `16px ${fontfamily}, sans-serif`;
const widthWithFont = context.measureText(testText).width;
context.font = '16px sans-serif';
const widthWithFallback = context.measureText(testText).width;
// 如果宽度相同,说明字体不存在
resolve(widthWithFont !== widthWithFallback);
});
}
async function checkFontAvailabilityV2(fontfamily) {
fontName = fontfamily;
console.log('Checking font availability:', fontName);
if ('queryLocalFonts' in window) {
try {
const fonts = await window.queryLocalFonts();
const fontExists = fonts.some(font => font.family === fontName);
console.log(`Local Font "${fontName}" exists:`, fontExists);
return fontExists;
} catch (error) {
console.error('Error querying local fonts:', error);
return false;
}
} else {
console.error('queryLocalFonts is not supported in this browser.');
return false;
}
}
// 动态加载字体
function loadFont(fontfamily, fontUrl) {
return new Promise((resolve, reject) => {
// 使用 Google Fonts 或其他字体来源
const link = document.createElement('link');
link.rel = 'stylesheet';
link.href = fontUrl;
link.onload = () => {
toast_push(`字体 "${fontfamily}" 已成功加载`, 3000);
resolve();
};
link.onerror = (error) => {
reject(error);
};
document.head.appendChild(link);
});
}
function gpt_academic_change_chatbot_font(fontfamily, fontsize, fontcolor) {
const chatbot = document.querySelector('#gpt-chatbot');
// 检查元素是否存在
if (chatbot) {
if (fontfamily != null) {
// 更改字体
const result = processFontFamily(fontfamily);
if (result.fontName == "Theme-Default-Font") {
chatbot.style.fontFamily = result.fontName;
return;
}
// 检查字体是否存在
checkFontAvailability(result.fontName).then((isAvailable) => {
if (isAvailable) {
// 如果字体存在,直接应用
chatbot.style.fontFamily = result.fontName;
} else {
if (result.fontUrl == null) {
// toast_push('无法加载字体本地字体不存在且URL未提供', 3000);
// 直接把失效的字体放上去让系统自动fallback
chatbot.style.fontFamily = result.fontName;
return;
} else {
toast_push('正在下载字体', 3000);
// 如果字体不存在,尝试加载字体
loadFont(result.fontName, result.fontUrl).then(() => {
chatbot.style.fontFamily = result.fontName;
}).catch((error) => {
console.error(`无法加载字体 "${result.fontName}":`, error);
});
}
}
});
}
if (fontsize != null) {
// 修改字体大小
document.documentElement.style.setProperty(
'--gpt-academic-message-font-size',
`${fontsize}px`
);
}
if (fontcolor != null) {
// 更改字体颜色
chatbot.style.color = fontcolor;
}
} else {
console.error('#gpt-chatbot is missing');
}
}
 async function GptAcademicJavaScriptInit(dark, prompt, live2d, layout, tts) {
     // 第一部分，布局初始化
+    remove_legacy_cookie();
     audio_fn_init();
     minor_ui_adjustment();
     ButtonWithDropdown_init();
@@ -38,7 +169,7 @@ async function GptAcademicJavaScriptInit(dark, prompt, live2d, layout, tts) {
     }

     // 自动朗读
-    if (tts != "DISABLE"){
+    if (tts != "DISABLE") {
         enable_tts = true;
         if (getCookie("js_auto_read_cookie")) {
             auto_read_tts = getCookie("js_auto_read_cookie")
@@ -48,7 +179,11 @@ async function GptAcademicJavaScriptInit(dark, prompt, live2d, layout, tts) {
             }
         }
     }
+    // 字体
+    gpt_academic_gradio_saveload("load", "elem_fontfamily", "js_fontfamily", null, "str");
+    gpt_academic_change_chatbot_font(getCookie("js_fontfamily"), null, null);
+    gpt_academic_gradio_saveload("load", "elem_fontsize", "js_fontsize", null, "str");
+    gpt_academic_change_chatbot_font(null, getCookie("js_fontsize"), null);
     // SysPrompt 系统静默提示词
     gpt_academic_gradio_saveload("load", "elem_prompt", "js_system_prompt_cookie", null, "str");
     // Temperature 大模型温度参数
@@ -58,7 +193,7 @@ async function GptAcademicJavaScriptInit(dark, prompt, live2d, layout, tts) {
     const cached_model = getCookie("js_md_dropdown_cookie");
     var model_sel = await get_gradio_component("elem_model_sel");
     // determine whether the cached model is in the choices
-    if (model_sel.props.choices.includes(cached_model)){
+    if (model_sel.props.choices.includes(cached_model)) {
         // change dropdown
         gpt_academic_gradio_saveload("load", "elem_model_sel", "js_md_dropdown_cookie", null, "str");
         // 连锁修改chatbot的label


@@ -85,7 +85,8 @@ class WelcomeMessage {
         this.card_array = [];
         this.static_welcome_message_previous = [];
         this.reflesh_time_interval = 15 * 1000;
+        this.update_time_interval = 2 * 1000;
+        this.major_title = "欢迎使用GPT-Academic";

         const reflesh_render_status = () => {
             for (let index = 0; index < this.card_array.length; index++) {
@@ -99,16 +100,28 @@ class WelcomeMessage {
         // call update when page size change, call this.update when page size change
         window.addEventListener('resize', this.update.bind(this));
-        // add a loop to reflesh cards
+        this.startRefleshCards();
+        this.startAutoUpdate();
     }
     begin_render() {
         this.update();
     }

+    async startAutoUpdate() {
+        // sleep certain time
+        await new Promise(r => setTimeout(r, this.update_time_interval));
+        this.update();
+    }
+
     async startRefleshCards() {
+        // sleep certain time
         await new Promise(r => setTimeout(r, this.reflesh_time_interval));
-        await this.reflesh_cards();
+        // checkout visible status
         if (this.visible) {
+            // if visible, then reflesh cards
+            await this.reflesh_cards();
             setTimeout(() => {
                 this.startRefleshCards.call(this);
             }, 1);
@@ -129,6 +142,7 @@ class WelcomeMessage {
         // combine two lists
         this.static_welcome_message_previous = not_shown_previously.concat(already_shown_previously);
+        this.static_welcome_message_previous = this.static_welcome_message_previous.slice(0, this.max_welcome_card_num);

         (async () => {
             // 使用 for...of 循环来处理异步操作
@@ -145,8 +159,10 @@ class WelcomeMessage {
                     continue;
                 }
-                // 等待动画结束
-                card.addEventListener('transitionend', () => {
+                card.classList.add('hide');
+                const timeout = 100; // 与CSS中transition的时间保持一致(0.1s)
+                setTimeout(() => {
                     // 更新卡片信息
                     const message = this.static_welcome_message_previous[index];
                     const title = card.getElementsByClassName('welcome-card-title')[0];
@@ -158,16 +174,14 @@ class WelcomeMessage {
                     text.href = message.url;
                     content.textContent = message.content;
                     card.classList.remove('hide');
                     // 等待动画结束
-                    card.addEventListener('transitionend', () => {
-                        card.classList.remove('show');
-                    }, { once: true });
                     card.classList.add('show');
-                }, { once: true });
-                card.classList.add('hide');
+                    const timeout = 100; // 与CSS中transition的时间保持一致(0.1s)
+                    setTimeout(() => {
+                        card.classList.remove('show');
+                    }, timeout);
+                }, timeout);

                 // 等待 250 毫秒
                 await new Promise(r => setTimeout(r, 200));
@@ -193,36 +207,38 @@ class WelcomeMessage {
         return array;
     }

-    async update() {
-        // console.log('update')
+    async can_display() {
+        // update the card visibility
         const elem_chatbot = document.getElementById('gpt-chatbot');
         const chatbot_top = elem_chatbot.getBoundingClientRect().top;
         const welcome_card_container = document.getElementsByClassName('welcome-card-container')[0];
+        // detect if welcome card overflow
         let welcome_card_overflow = false;
         if (welcome_card_container) {
             const welcome_card_top = welcome_card_container.getBoundingClientRect().top;
             if (welcome_card_top < chatbot_top) {
                 welcome_card_overflow = true;
-                // console.log("welcome_card_overflow");
             }
         }
         var page_width = document.documentElement.clientWidth;
         const width_to_hide_welcome = 1200;
         if (!await this.isChatbotEmpty() || page_width < width_to_hide_welcome || welcome_card_overflow) {
-            if (this.visible) {
-                console.log("remove welcome");
-                this.removeWelcome(); this.visible = false; // this two lines must always be together
-                this.card_array = [];
-                this.static_welcome_message_previous = [];
-            }
+            // cannot display
+            return false;
+        }
+        return true;
+    }
+
+    async update() {
+        const can_display = await this.can_display();
+        if (can_display && !this.visible) {
+            this.showWelcome();
             return;
         }
-        if (this.visible) {
+        if (!can_display && this.visible) {
+            this.removeWelcome();
             return;
         }
-        console.log("show welcome");
-        this.showWelcome(); this.visible = true; // this two lines must always be together
-        this.startRefleshCards();
     }

     showCard(message) {
@@ -263,7 +279,7 @@ class WelcomeMessage {
     }

     async showWelcome() {
+        this.visible = true;
         // 首先，找到想要添加子元素的父元素
         const elem_chatbot = document.getElementById('gpt-chatbot');
@@ -274,7 +290,7 @@ class WelcomeMessage {
         // 创建主标题
         const major_title = document.createElement('div');
         major_title.classList.add('welcome-title');
-        major_title.textContent = "欢迎使用GPT-Academic";
+        major_title.textContent = this.major_title;
         welcome_card_container.appendChild(major_title)

         // 创建卡片
@@ -289,6 +305,16 @@ class WelcomeMessage {
         });
         elem_chatbot.appendChild(welcome_card_container);

+        const can_display = await this.can_display();
+        if (!can_display) {
+            // undo
+            this.visible = false;
+            this.card_array = [];
+            this.static_welcome_message_previous = [];
+            elem_chatbot.removeChild(welcome_card_container);
+            await new Promise(r => setTimeout(r, this.update_time_interval / 2));
+            return;
+        }

         // 添加显示动画
         requestAnimationFrame(() => {
@@ -297,15 +323,24 @@ class WelcomeMessage {
     }

     async removeWelcome() {
+        this.visible = false;
         // remove welcome-card-container
         const elem_chatbot = document.getElementById('gpt-chatbot');
         const welcome_card_container = document.getElementsByClassName('welcome-card-container')[0];
-        // 添加隐藏动画
+        // begin hide animation
         welcome_card_container.classList.add('hide');
-        // 等待动画结束后再移除元素
         welcome_card_container.addEventListener('transitionend', () => {
             elem_chatbot.removeChild(welcome_card_container);
+            this.card_array = [];
+            this.static_welcome_message_previous = [];
         }, { once: true });
+        // add a fail safe timeout
+        const timeout = 600; // 与 CSS 中 transition 的时间保持一致(1s)
+        setTimeout(() => {
+            if (welcome_card_container.parentNode) {
+                elem_chatbot.removeChild(welcome_card_container);
+            }
+        }, timeout);
     }

     async isChatbotEmpty() {

@@ -178,6 +178,7 @@ def update_ui(chatbot:ChatBotWithCookies, history:list, msg:str="正常", **kwar
     else:
         chatbot_gr = chatbot

+    history = [str(history_item) for history_item in history]  # ensure all items are string
     json_history = json.dumps(history, ensure_ascii=False)
     yield cookies, chatbot_gr, json_history, msg
@@ -498,6 +499,22 @@ def to_markdown_tabs(head: list, tabs: list, alignment=":---:", column=False, om
     return tabs_list


+def validate_file_size(files, max_size_mb=500):
+    """
+    验证文件大小是否在允许范围内。
+    :param files: 文件的完整路径的列表
+    :param max_size_mb: 最大文件大小，单位为MB，默认500MB
+    :return: True 如果文件大小有效，否则抛出异常
+    """
+    # 获取文件大小（字节）
+    total_size = 0
+    max_size_bytes = max_size_mb * 1024 * 1024
+    for file in files:
+        total_size += os.path.getsize(file.name)
+    if total_size > max_size_bytes:
+        raise ValueError(f"File size exceeds the allowed limit of {max_size_mb} MB. "
+                         f"Current size: {total_size / (1024 * 1024):.2f} MB")
+    return True
+

 def on_file_uploaded(
     request: gradio.Request, files:List[str], chatbot:ChatBotWithCookies,
def on_file_uploaded( def on_file_uploaded(
request: gradio.Request, files:List[str], chatbot:ChatBotWithCookies, request: gradio.Request, files:List[str], chatbot:ChatBotWithCookies,
@@ -509,6 +526,7 @@ def on_file_uploaded(
     if len(files) == 0:
         return chatbot, txt

+    validate_file_size(files, max_size_mb=500)
     # 创建工作路径
     user_name = default_user_name if not request.username else request.username
     time_tag = gen_time_str()
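Note that `validate_file_size` sums sizes across the whole upload batch rather than capping each file, so several medium files can trip the limit together. A quick illustration of that failure mode (file names and the stand-in class are hypothetical; the gradio file object exposes `.name`):

```python
import os

class _Upload:  # hypothetical stand-in for gradio's uploaded-file object
    def __init__(self, path): self.name = path

# two 600 KB files each pass a naive 1 MB per-file check,
# but their combined size exceeds a 1 MB batch cap
for path in ("a.bin", "b.bin"):
    with open(path, "wb") as fh:
        fh.write(b"\0" * (600 * 1024))

files = [_Upload("a.bin"), _Upload("b.bin")]
total = sum(os.path.getsize(f.name) for f in files)
print(total > 1 * 1024 * 1024)  # True -> validate_file_size(files, max_size_mb=1) raises
```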


@@ -1,5 +1,5 @@
 {
-    "version": 3.91,
+    "version": 3.93,
     "show_feature": true,
-    "new_feature": "优化前端并修复TTS的BUG <-> 添加时间线回溯功能 <-> 支持chatgpt-4o-latest <-> 增加RAG组件 <-> 升级多合一主提交键"
+    "new_feature": "支持deepseek-reason(r1) <-> 字体和字体大小自定义 <-> 优化前端并修复TTS的BUG <-> 添加时间线回溯功能 <-> 支持chatgpt-4o-latest <-> 增加RAG组件 <-> 升级多合一主提交键"
 }
} }