Compare commits

...

89 Commits

Author SHA1 Message Date
hongyi-zhao
573dc4d184 Add claude-3-5-sonnet-20240620 (#1907)
See https://docs.anthropic.com/en/docs/about-claude/models#model-names for model names.
2024-08-02 18:04:42 +08:00
binary-husky
da8b2d69ce update version 3.8 2024-08-02 10:02:04 +00:00
binary-husky
58e732c26f Merge branch 'frontier' 2024-08-02 09:50:40 +00:00
Menghuan1918
ca238daa8c Improve the web search plugin: add search modes and search enhancement (#1874)
* Change default to Mixed option

* Add option optimizer

* Add search optimizer prompts

* Enhanced Processing

* Finish search_optimizer part

* prompts bug fix

* Bug fix
2024-07-23 00:55:48 +08:00
jiangfy-ihep
60b3491513 add gpt-4o-mini (#1904)
Co-authored-by: Fayu Jiang <jiangfayu@hotmail.com>
2024-07-23 00:55:34 +08:00
binary-husky
c1175bfb7d add flip card animation 2024-07-22 04:53:59 +00:00
binary-husky
b705afd5ff welcome menu bug fix 2024-07-22 04:35:52 +00:00
binary-husky
dfcd28abce add width_to_hide_welcome 2024-07-22 03:34:35 +00:00
binary-husky
1edaa9e234 hide when too narrow 2024-07-21 15:04:38 +00:00
binary-husky
f0cd617ec2 minor css improve 2024-07-20 10:29:47 +00:00
binary-husky
0b08bb2cea update svg 2024-07-20 07:15:08 +00:00
Keldos
d1f8607ac8 Update submit button dropdown style (#1900) 2024-07-20 14:50:56 +08:00
binary-husky
7eb68a2086 tune 2024-07-17 17:16:34 +00:00
binary-husky
ee9e99036a Merge branch 'frontier' of github.com:binary-husky/chatgpt_academic into frontier 2024-07-17 17:14:49 +00:00
binary-husky
55e255220b update 2024-07-17 17:12:32 +00:00
lbykkkk
019cd26ae8 Merge branch 'frontier' of https://github.com/binary-husky/gpt_academic into frontier 2024-07-18 00:35:51 +08:00
lbykkkk
a5b21d5cc0 Revise content and unify the logo colors 2024-07-18 00:35:40 +08:00
binary-husky
ce940ff70f roll welcome msg 2024-07-17 16:34:24 +00:00
binary-husky
fc6a83c29f update 2024-07-17 15:44:08 +00:00
binary-husky
1d3212e367 reverse welcome msg 2024-07-17 15:43:41 +00:00
lbykkkk
8a835352a3 Update the welcome screen wording and logo 2024-07-17 19:49:07 +08:00
binary-husky
5456c9fa43 improve welcome UI 2024-07-16 16:23:07 +00:00
binary-husky
ea67054c30 update chuanhu theme 2024-07-16 16:07:46 +00:00
binary-husky
1084108df6 adding welcome page 2024-07-16 10:41:25 +00:00
binary-husky
40c9700a8d add welcome page 2024-07-15 15:47:24 +00:00
binary-husky
6da5623813 Reuse the submit button for multiple purposes 2024-07-15 04:23:43 +00:00
binary-husky
778c9cd9ec roll version 2024-07-15 03:29:56 +00:00
binary-husky
e290317146 proxy submit btn 2024-07-15 03:28:59 +00:00
binary-husky
85b92b7f07 move python comment agent to dropdown 2024-07-13 16:26:36 +00:00
binary-husky
ff899777ce improve source code comment plugin functionality 2024-07-13 16:20:17 +00:00
binary-husky
c1b8c773c3 stage compare source code comment 2024-07-13 15:28:53 +00:00
binary-husky
8747c48175 mt improvement 2024-07-12 08:26:40 +00:00
binary-husky
c0010c88bc implement auto comment 2024-07-12 07:36:40 +00:00
binary-husky
68838da8ad finish test 2024-07-12 04:19:07 +00:00
binary-husky
ca7de8fcdd version up 2024-07-10 02:00:36 +00:00
binary-husky
7ebc2d00e7 Merge branch 'master' into frontier 2024-07-09 03:19:35 +00:00
binary-husky
47fb81cfde Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2024-07-09 03:18:19 +00:00
binary-husky
83961c1002 optimize image generation fn 2024-07-09 03:18:14 +00:00
binary-husky
a8621333af js impl bug fix 2024-07-08 15:50:12 +00:00
binary-husky
f402ef8134 hide ask btn 2024-07-08 15:15:30 +00:00
binary-husky
65d0f486f1 change cache to lru_cache for lower python version 2024-07-07 16:02:05 +00:00
binary-husky
41f25a6a9b Merge branch 'bold_frontier' into frontier 2024-07-04 14:16:08 +00:00
binary-husky
4a6a032334 ignore 2024-07-04 14:14:49 +00:00
binary-husky
f945a7bd19 preserve theme selection 2024-07-04 14:11:51 +00:00
binary-husky
379dcb2fa7 minor gui bug fix 2024-07-04 13:31:21 +00:00
Menghuan1918
114192e025 Bug fix: can not chat with deepseek (#1879) 2024-07-04 20:28:53 +08:00
binary-husky
30c905917a unify plugin calling 2024-07-02 15:32:40 +00:00
binary-husky
0c6c357e9c revise qwen 2024-07-02 14:22:45 +00:00
binary-husky
9d11b17f25 Merge branch 'master' into frontier 2024-07-02 08:06:34 +00:00
binary-husky
1d9e9fa6a1 new page btn 2024-07-01 16:27:23 +00:00
Menghuan1918
6cd2d80dfd Bug fix: Some non-standard forms of error return are not caught (#1877) 2024-07-01 20:35:49 +08:00
binary-husky
18d3245fc9 ready next gradio version 2024-06-29 15:29:48 +00:00
hcy2206
194e665a3b Add support for the iFlytek Spark 4.0 LLM (#1875) 2024-06-29 23:20:04 +08:00
binary-husky
7e201c5028 move test file to correct position 2024-06-28 08:23:40 +00:00
binary-husky
6babcb4a9c Merge branch 'master' into frontier 2024-06-27 06:52:03 +00:00
binary-husky
00e5a31b50 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2024-06-27 06:50:06 +00:00
binary-husky
d8b9686eeb fix latex auto correct 2024-06-27 06:49:36 +00:00
binary-husky
b7b4e201cb fix latex auto correct 2024-06-27 06:49:10 +00:00
binary-husky
26e7677dc3 fix new api for taichu 2024-06-26 15:18:11 +00:00
Menghuan1918
25e06de1b6 Docker build bug fix (#1870) 2024-06-26 14:31:31 +08:00
binary-husky
5e64a50898 Merge branch 'master' into frontier 2024-06-25 11:43:40 +00:00
binary-husky
0ad571e6b5 prevent further stream when reset is clicked 2024-06-25 11:43:14 +00:00
binary-husky
60a42fb070 Merge branch 'master' into frontier 2024-06-25 11:14:32 +00:00
binary-husky
ddad5247fc upgrade searxng 2024-06-25 11:12:51 +00:00
binary-husky
c94d5054a2 move fn 2024-06-25 08:53:28 +00:00
binary-husky
ececfb9b6e test new dropdown js code 2024-06-25 08:34:50 +00:00
binary-husky
9f13c5cedf update default value of scroller_max_len 2024-06-25 05:34:55 +00:00
binary-husky
68b36042ce re-locate plugin 2024-06-25 05:32:20 +00:00
binary-husky
cac6c50d2f roll version 2024-06-19 12:56:23 +00:00
binary-husky
f884eb43cf Merge branch 'master' into frontier 2024-06-19 12:56:04 +00:00
binary-husky
d37383dd4e change arxiv cache dir path 2024-06-19 12:49:34 +00:00
binary-husky
dfae4e8081 optimize scrolling visual effect 2024-06-19 12:42:11 +00:00
binary-husky
15cc08505f resolve safe pickle err 2024-06-19 11:59:47 +00:00
iluem
c5a82f6ab7 Merge pull request from GHSA-3jrq-66fm-w7xr 2024-06-19 14:29:21 +08:00
binary-husky
768ed4514a minor formatting issue 2024-06-18 14:51:53 +00:00
binary-husky
9dfbff7fd0 Merge branch 'GHSA-3jrq-66fm-w7xr' into frontier 2024-06-18 10:19:10 +00:00
binary-husky
1e16485087 internet gpt minor bug fix 2024-06-16 15:16:24 +00:00
binary-husky
f3660d669f internet GPT upgrade 2024-06-16 14:10:38 +00:00
binary-husky
e6d1cb09cb Merge branch 'master' into frontier 2024-06-16 13:47:15 +00:00
Yuki
cdadd38cf7 feat: block access to openapi references while running under fastapi (#1849)
- block fastapi openapi reference(swagger and redoc) routes
2024-06-10 22:26:46 +08:00
binary-husky
ba484c55a0 Merge branch 'master' into frontier 2024-06-10 14:19:26 +00:00
binary-husky
737101b81d remove debug msg 2024-06-07 17:00:05 +00:00
binary-husky
612caa2f5f revise 2024-06-07 16:50:27 +00:00
binary-husky
85dbe4a4bf pdf processing improvement 2024-06-07 15:53:08 +00:00
binary-husky
2262a4d80a taichu model fix 2024-06-06 09:35:05 +00:00
binary-husky
b456ff02ab add note 2024-06-06 09:14:32 +00:00
binary-husky
24a21ae320 Add the Zidong Taichu (紫东太初) LLM 2024-06-06 09:05:06 +00:00
binary-husky
3d5790cc2c resolve fallback to non-multimodal problem 2024-06-06 08:00:30 +00:00
binary-husky
7de6015800 multimodal support for gpt-4o etc 2024-06-06 07:36:37 +00:00
81 changed files with 3582 additions and 892 deletions

8
.gitignore vendored
View File

@@ -131,6 +131,9 @@ dmypy.json
# Pyre type checker
.pyre/
# macOS files
.DS_Store
.vscode
.idea
@@ -153,6 +156,7 @@ media
flagged
request_llms/ChatGLM-6b-onnx-u8s8
.pre-commit-config.yaml
themes/common.js.min.*.js
test*
test.*
temp.*
objdump*
*.min.*.js

View File

@@ -1,33 +1,44 @@
def check_proxy(proxies):
def check_proxy(proxies, return_ip=False):
import requests
proxies_https = proxies['https'] if proxies is not None else ''
ip = None
try:
response = requests.get("https://ipapi.co/json/", proxies=proxies, timeout=4)
data = response.json()
if 'country_name' in data:
country = data['country_name']
result = f"代理配置 {proxies_https}, 代理所在地:{country}"
if 'ip' in data: ip = data['ip']
elif 'error' in data:
alternative = _check_with_backup_source(proxies)
alternative, ip = _check_with_backup_source(proxies)
if alternative is None:
result = f"代理配置 {proxies_https}, 代理所在地未知IP查询频率受限"
else:
result = f"代理配置 {proxies_https}, 代理所在地:{alternative}"
else:
result = f"代理配置 {proxies_https}, 代理数据解析失败:{data}"
if not return_ip:
print(result)
return result
else:
return ip
except:
result = f"代理配置 {proxies_https}, 代理所在地查询超时,代理可能无效"
if not return_ip:
print(result)
return result
else:
return ip
def _check_with_backup_source(proxies):
import random, string, requests
random_string = ''.join(random.choices(string.ascii_letters + string.digits, k=32))
try: return requests.get(f"http://{random_string}.edns.ip-api.com/json", proxies=proxies, timeout=4).json()['dns']['geo']
except: return None
try:
res_json = requests.get(f"http://{random_string}.edns.ip-api.com/json", proxies=proxies, timeout=4).json()
return res_json['dns']['geo'], res_json['dns']['ip']
except:
return None, None
def backup_and_download(current_version, remote_version):
"""

View File

@@ -33,7 +33,7 @@ else:
# [step 3]>> 模型选择是 (注意: LLM_MODEL是默认选中的模型, 它*必须*被包含在AVAIL_LLM_MODELS列表中 )
LLM_MODEL = "gpt-3.5-turbo-16k" # 可选 ↓↓↓
AVAIL_LLM_MODELS = ["gpt-4-1106-preview", "gpt-4-turbo-preview", "gpt-4-vision-preview",
"gpt-4o", "gpt-4-turbo", "gpt-4-turbo-2024-04-09",
"gpt-4o", "gpt-4o-mini", "gpt-4-turbo", "gpt-4-turbo-2024-04-09",
"gpt-3.5-turbo-1106", "gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5",
"gpt-4", "gpt-4-32k", "azure-gpt-4", "glm-4", "glm-4v", "glm-3-turbo",
"gemini-pro", "chatglm3"
@@ -43,7 +43,7 @@ AVAIL_LLM_MODELS = ["gpt-4-1106-preview", "gpt-4-turbo-preview", "gpt-4-vision-p
# AVAIL_LLM_MODELS = [
# "glm-4-0520", "glm-4-air", "glm-4-airx", "glm-4-flash",
# "qianfan", "deepseekcoder",
# "spark", "sparkv2", "sparkv3", "sparkv3.5",
# "spark", "sparkv2", "sparkv3", "sparkv3.5", "sparkv4",
# "qwen-turbo", "qwen-plus", "qwen-max", "qwen-local",
# "moonshot-v1-128k", "moonshot-v1-32k", "moonshot-v1-8k",
# "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-turbo-0125", "gpt-4o-2024-05-13"
@@ -230,9 +230,15 @@ MOONSHOT_API_KEY = ""
# 零一万物(Yi Model) API KEY
YIMODEL_API_KEY = ""
# 深度求索(DeepSeek) API KEY默认请求地址为"https://api.deepseek.com/v1/chat/completions"
DEEPSEEK_API_KEY = ""
# 紫东太初大模型 https://ai-maas.wair.ac.cn
TAICHU_API_KEY = ""
# Mathpix 拥有执行PDF的OCR功能但是需要注册账号
MATHPIX_APPID = ""
MATHPIX_APPKEY = ""
@@ -263,6 +269,10 @@ GROBID_URLS = [
]
# Searxng互联网检索服务
SEARXNG_URL = "https://cloud-1.agent-matrix.com/"
# 是否允许通过自然语言描述修改本页的配置,该功能具有一定的危险性,默认关闭
ALLOW_RESET_CONFIG = False
@@ -271,23 +281,23 @@ ALLOW_RESET_CONFIG = False
AUTOGEN_USE_DOCKER = False
# 临时的上传文件夹位置,请修改
# 临时的上传文件夹位置,请尽量不要修改
PATH_PRIVATE_UPLOAD = "private_upload"
# 日志文件夹的位置,请修改
# 日志文件夹的位置,请尽量不要修改
PATH_LOGGING = "gpt_log"
# 除了连接OpenAI之外还有哪些场合允许使用代理请勿修改
# 存储翻译好的arxiv论文的路径请尽量不要修改
ARXIV_CACHE_DIR = "gpt_log/arxiv_cache"
# 除了连接OpenAI之外还有哪些场合允许使用代理请尽量不要修改
WHEN_TO_USE_PROXY = ["Download_LLM", "Download_Gradio_Theme", "Connect_Grobid",
"Warmup_Modules", "Nougat_Download", "AutoGen"]
# *实验性功能*: 自动检测并屏蔽失效的KEY请勿使用
BLOCK_INVALID_APIKEY = False
# 启用插件热加载
PLUGIN_HOT_RELOAD = False
@@ -384,6 +394,9 @@ NUM_CUSTOM_BASIC_BTN = 4
插件在线服务配置依赖关系示意图
├── 互联网检索
│ └── SEARXNG_URL
├── 语音功能
│ ├── ENABLE_AUDIO
│ ├── ALIYUN_TOKEN
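The new configuration keys are consumed through `get_conf`, as the diffs further down show; a short sketch:
```
from toolbox import get_conf

searxng_url = get_conf("SEARXNG_URL")      # SearXNG search endpoint added above
arxiv_cache = get_conf("ARXIV_CACHE_DIR")  # arxiv translation cache path, now configurable
```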

View File

@@ -5,21 +5,21 @@ from toolbox import trimmed_format_exc
def get_crazy_functions():
from crazy_functions.读文章写摘要 import 读文章写摘要
from crazy_functions.生成函数注释 import 批量生成函数注释
from crazy_functions.解析项目源代码 import 解析项目本身
from crazy_functions.解析项目源代码 import 解析一个Python项目
from crazy_functions.解析项目源代码 import 解析一个Matlab项目
from crazy_functions.解析项目源代码 import 解析一个C项目的头文件
from crazy_functions.解析项目源代码 import 解析一个C项目
from crazy_functions.解析项目源代码 import 解析一个Golang项目
from crazy_functions.解析项目源代码 import 解析一个Rust项目
from crazy_functions.解析项目源代码 import 解析一个Java项目
from crazy_functions.解析项目源代码 import 解析一个前端项目
from crazy_functions.SourceCode_Analyse import 解析项目本身
from crazy_functions.SourceCode_Analyse import 解析一个Python项目
from crazy_functions.SourceCode_Analyse import 解析一个Matlab项目
from crazy_functions.SourceCode_Analyse import 解析一个C项目的头文件
from crazy_functions.SourceCode_Analyse import 解析一个C项目
from crazy_functions.SourceCode_Analyse import 解析一个Golang项目
from crazy_functions.SourceCode_Analyse import 解析一个Rust项目
from crazy_functions.SourceCode_Analyse import 解析一个Java项目
from crazy_functions.SourceCode_Analyse import 解析一个前端项目
from crazy_functions.高级功能函数模板 import 高阶功能模板函数
from crazy_functions.高级功能函数模板 import Demo_Wrap
from crazy_functions.Latex全文润色 import Latex英文润色
from crazy_functions.询问多个大语言模型 import 同时问询
from crazy_functions.解析项目源代码 import 解析一个Lua项目
from crazy_functions.解析项目源代码 import 解析一个CSharp项目
from crazy_functions.SourceCode_Analyse import 解析一个Lua项目
from crazy_functions.SourceCode_Analyse import 解析一个CSharp项目
from crazy_functions.总结word文档 import 总结word文档
from crazy_functions.解析JupyterNotebook import 解析ipynb文件
from crazy_functions.Conversation_To_File import 载入对话历史存档
@@ -43,13 +43,18 @@ def get_crazy_functions():
from crazy_functions.Latex_Function import PDF翻译中文并重新编译PDF
from crazy_functions.Latex_Function_Wrap import Arxiv_Localize
from crazy_functions.Latex_Function_Wrap import PDF_Localize
from crazy_functions.Internet_GPT import 连接网络回答问题
from crazy_functions.Internet_GPT_Wrap import NetworkGPT_Wrap
from crazy_functions.Image_Generate import 图片生成_DALLE2, 图片生成_DALLE3, 图片修改_DALLE2
from crazy_functions.Image_Generate_Wrap import ImageGen_Wrap
from crazy_functions.SourceCode_Comment import 注释Python项目
function_plugins = {
"虚空终端": {
"Group": "对话|编程|学术|智能体",
"Color": "stop",
"AsButton": True,
"Info": "使用自然语言实现您的想法",
"Function": HotReload(虚空终端),
},
"解析整个Python项目": {
@@ -59,6 +64,13 @@ def get_crazy_functions():
"Info": "解析一个Python项目的所有源文件(.py) | 输入参数为路径",
"Function": HotReload(解析一个Python项目),
},
"注释Python项目": {
"Group": "编程",
"Color": "stop",
"AsButton": False,
"Info": "上传一系列python源文件(或者压缩包), 为这些代码添加docstring | 输入参数为路径",
"Function": HotReload(注释Python项目),
},
"载入对话历史存档(先上传存档或输入路径)": {
"Group": "对话",
"Color": "stop",
@@ -87,10 +99,18 @@ def get_crazy_functions():
"Function": None,
"Class": Mermaid_Gen
},
"批量总结Word文档": {
"Arxiv论文翻译": {
"Group": "学术",
"Color": "stop",
"AsButton": True,
"Info": "Arixv论文精细翻译 | 输入参数arxiv论文的ID比如1812.10695",
"Function": HotReload(Latex翻译中文并重新编译PDF), # 当注册Class后Function旧接口仅会在“虚空终端”中起作用
"Class": Arxiv_Localize, # 新一代插件需要注册Class
},
"批量总结Word文档": {
"Group": "学术",
"Color": "stop",
"AsButton": False,
"Info": "批量总结word文档 | 输入参数为路径",
"Function": HotReload(总结word文档),
},
@@ -196,6 +216,7 @@ def get_crazy_functions():
},
"保存当前的对话": {
"Group": "对话",
"Color": "stop",
"AsButton": True,
"Info": "保存当前的对话 | 不需要输入参数",
"Function": HotReload(对话历史存档), # 当注册Class后Function旧接口仅会在“虚空终端”中起作用
@@ -203,13 +224,23 @@ def get_crazy_functions():
},
"[多线程Demo]解析此项目本身(源码自译解)": {
"Group": "对话|编程",
"Color": "stop",
"AsButton": False, # 加入下拉菜单中
"Info": "多线程解析并翻译此项目的源码 | 不需要输入参数",
"Function": HotReload(解析项目本身),
},
"查互联网后回答": {
"Group": "对话",
"Color": "stop",
"AsButton": True, # 加入下拉菜单中
# "Info": "连接网络回答问题(需要访问谷歌)| 输入参数是一个问题",
"Function": HotReload(连接网络回答问题),
"Class": NetworkGPT_Wrap # 新一代插件需要注册Class
},
"历史上的今天": {
"Group": "对话",
"AsButton": True,
"Color": "stop",
"AsButton": False,
"Info": "查看历史上的今天事件 (这是一个面向开发者的插件Demo) | 不需要输入参数",
"Function": None,
"Class": Demo_Wrap, # 新一代插件需要注册Class
@@ -303,7 +334,7 @@ def get_crazy_functions():
"ArgsReminder": "如果有必要, 请在此处追加更细致的矫错指令(使用英文)。",
"Function": HotReload(Latex英文纠错加PDF对比),
},
"Arxiv论文精细翻译输入arxivID[需Latex]": {
"📚Arxiv论文精细翻译输入arxivID[需Latex]": {
"Group": "学术",
"Color": "stop",
"AsButton": False,
@@ -315,7 +346,7 @@ def get_crazy_functions():
"Function": HotReload(Latex翻译中文并重新编译PDF), # 当注册Class后Function旧接口仅会在“虚空终端”中起作用
"Class": Arxiv_Localize, # 新一代插件需要注册Class
},
"本地Latex论文精细翻译上传Latex项目[需Latex]": {
"📚本地Latex论文精细翻译上传Latex项目[需Latex]": {
"Group": "学术",
"Color": "stop",
"AsButton": False,
@@ -340,6 +371,39 @@ def get_crazy_functions():
}
}
function_plugins.update(
{
"🎨图片生成DALLE2/DALLE3, 使用前切换到GPT系列模型": {
"Group": "对话",
"Color": "stop",
"AsButton": False,
"Info": "使用 DALLE2/DALLE3 生成图片 | 输入参数字符串,提供图像的内容",
"Function": HotReload(图片生成_DALLE2), # 当注册Class后Function旧接口仅会在“虚空终端”中起作用
"Class": ImageGen_Wrap # 新一代插件需要注册Class
},
}
)
function_plugins.update(
{
"🎨图片修改_DALLE2 使用前请切换模型到GPT系列": {
"Group": "对话",
"Color": "stop",
"AsButton": False,
"AdvancedArgs": False, # 调用时唤起高级参数输入区默认False
# "Info": "使用DALLE2修改图片 | 输入参数字符串,提供图像的内容",
"Function": HotReload(图片修改_DALLE2),
},
}
)
# -=--=- 尚未充分测试的实验性插件 & 需要额外依赖的插件 -=--=-
try:
@@ -360,39 +424,39 @@ def get_crazy_functions():
print(trimmed_format_exc())
print("Load function plugin failed")
try:
from crazy_functions.联网的ChatGPT import 连接网络回答问题
# try:
# from crazy_functions.联网的ChatGPT import 连接网络回答问题
function_plugins.update(
{
"连接网络回答问题(输入问题后点击该插件,需要访问谷歌)": {
"Group": "对话",
"Color": "stop",
"AsButton": False, # 加入下拉菜单中
# "Info": "连接网络回答问题(需要访问谷歌)| 输入参数是一个问题",
"Function": HotReload(连接网络回答问题),
}
}
)
from crazy_functions.联网的ChatGPT_bing版 import 连接bing搜索回答问题
# function_plugins.update(
# {
# "连接网络回答问题(输入问题后点击该插件,需要访问谷歌)": {
# "Group": "对话",
# "Color": "stop",
# "AsButton": False, # 加入下拉菜单中
# # "Info": "连接网络回答问题(需要访问谷歌)| 输入参数是一个问题",
# "Function": HotReload(连接网络回答问题),
# }
# }
# )
# from crazy_functions.联网的ChatGPT_bing版 import 连接bing搜索回答问题
function_plugins.update(
{
"连接网络回答问题中文Bing版输入问题后点击该插件": {
"Group": "对话",
"Color": "stop",
"AsButton": False, # 加入下拉菜单中
"Info": "连接网络回答问题需要访问中文Bing| 输入参数是一个问题",
"Function": HotReload(连接bing搜索回答问题),
}
}
)
except:
print(trimmed_format_exc())
print("Load function plugin failed")
# function_plugins.update(
# {
# "连接网络回答问题中文Bing版输入问题后点击该插件": {
# "Group": "对话",
# "Color": "stop",
# "AsButton": False, # 加入下拉菜单中
# "Info": "连接网络回答问题需要访问中文Bing| 输入参数是一个问题",
# "Function": HotReload(连接bing搜索回答问题),
# }
# }
# )
# except:
# print(trimmed_format_exc())
# print("Load function plugin failed")
try:
from crazy_functions.解析项目源代码 import 解析任意code项目
from crazy_functions.SourceCode_Analyse import 解析任意code项目
function_plugins.update(
{
@@ -429,50 +493,7 @@ def get_crazy_functions():
print(trimmed_format_exc())
print("Load function plugin failed")
try:
from crazy_functions.图片生成 import 图片生成_DALLE2, 图片生成_DALLE3, 图片修改_DALLE2
function_plugins.update(
{
"图片生成_DALLE2 先切换模型到gpt-*": {
"Group": "对话",
"Color": "stop",
"AsButton": False,
"AdvancedArgs": True, # 调用时唤起高级参数输入区默认False
"ArgsReminder": "在这里输入分辨率, 如1024x1024默认支持 256x256, 512x512, 1024x1024", # 高级参数输入区的显示提示
"Info": "使用DALLE2生成图片 | 输入参数字符串,提供图像的内容",
"Function": HotReload(图片生成_DALLE2),
},
}
)
function_plugins.update(
{
"图片生成_DALLE3 先切换模型到gpt-*": {
"Group": "对话",
"Color": "stop",
"AsButton": False,
"AdvancedArgs": True, # 调用时唤起高级参数输入区默认False
"ArgsReminder": "在这里输入自定义参数「分辨率-质量(可选)-风格(可选)」, 参数示例「1024x1024-hd-vivid」 || 分辨率支持 「1024x1024」(默认) /「1792x1024」/「1024x1792」 || 质量支持 「-standard」(默认) /「-hd」 || 风格支持 「-vivid」(默认) /「-natural」", # 高级参数输入区的显示提示
"Info": "使用DALLE3生成图片 | 输入参数字符串,提供图像的内容",
"Function": HotReload(图片生成_DALLE3),
},
}
)
function_plugins.update(
{
"图片修改_DALLE2 先切换模型到gpt-*": {
"Group": "对话",
"Color": "stop",
"AsButton": False,
"AdvancedArgs": False, # 调用时唤起高级参数输入区默认False
# "Info": "使用DALLE2修改图片 | 输入参数字符串,提供图像的内容",
"Function": HotReload(图片修改_DALLE2),
},
}
)
except:
print(trimmed_format_exc())
print("Load function plugin failed")
try:
from crazy_functions.总结音视频 import 总结音视频
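For context, a minimal sketch of the registration pattern this file converges on, with all names taken from the diff above; the try/except guard mirrors how the file wraps optional plugins:
```
try:
    from crazy_functions.Internet_GPT import 连接网络回答问题
    from crazy_functions.Internet_GPT_Wrap import NetworkGPT_Wrap
    function_plugins.update({
        "查互联网后回答": {
            "Group": "对话",
            "Color": "stop",
            "AsButton": True,
            "Function": HotReload(连接网络回答问题),  # legacy entry point, still used by the 虚空终端 plugin
            "Class": NetworkGPT_Wrap,  # new-generation plugins also register a Class
        }
    })
except Exception:
    print(trimmed_format_exc())
    print("Load function plugin failed")
```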

View File

@@ -218,5 +218,3 @@ def 删除所有本地对话历史记录(txt, llm_kwargs, plugin_kwargs, chatbot
chatbot.append([f"删除所有历史对话文件", f"已删除<br/>{local_history}"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return

View File

@@ -108,7 +108,7 @@ def 图片生成_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys
chatbot.append((prompt, "[Local Message] 图像生成提示为空白,请在“输入区”输入图像生成提示。"))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 界面更新
return
chatbot.append(("您正在调用“图像生成”插件。", "[Local Message] 生成图像, 请先把模型切换至gpt-*。如果中文Prompt效果不理想, 请尝试英文Prompt。正在处理中 ....."))
chatbot.append(("您正在调用“图像生成”插件。", "[Local Message] 生成图像, 使用前请切换模型到GPT系列。如果中文Prompt效果不理想, 请尝试英文Prompt。正在处理中 ....."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 由于请求gpt需要一段时间,我们先及时地做一次界面更新
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
resolution = plugin_kwargs.get("advanced_arg", '1024x1024')
@@ -129,7 +129,7 @@ def 图片生成_DALLE3(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys
chatbot.append((prompt, "[Local Message] 图像生成提示为空白,请在“输入区”输入图像生成提示。"))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 界面更新
return
chatbot.append(("您正在调用“图像生成”插件。", "[Local Message] 生成图像, 请先把模型切换至gpt-*。如果中文Prompt效果不理想, 请尝试英文Prompt。正在处理中 ....."))
chatbot.append(("您正在调用“图像生成”插件。", "[Local Message] 生成图像, 使用前请切换模型到GPT系列。如果中文Prompt效果不理想, 请尝试英文Prompt。正在处理中 ....."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 由于请求gpt需要一段时间,我们先及时地做一次界面更新
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
resolution_arg = plugin_kwargs.get("advanced_arg", '1024x1024-standard-vivid').lower()
@@ -166,7 +166,7 @@ class ImageEditState(GptAcademicState):
return confirm, file
def lock_plugin(self, chatbot):
chatbot._cookies['lock_plugin'] = 'crazy_functions.图片生成->图片修改_DALLE2'
chatbot._cookies['lock_plugin'] = 'crazy_functions.Image_Generate->图片修改_DALLE2'
self.dump_state(chatbot)
def unlock_plugin(self, chatbot):

View File

@@ -0,0 +1,56 @@
from toolbox import get_conf, update_ui
from crazy_functions.Image_Generate import 图片生成_DALLE2, 图片生成_DALLE3, 图片修改_DALLE2
from crazy_functions.plugin_template.plugin_class_template import GptAcademicPluginTemplate, ArgProperty
class ImageGen_Wrap(GptAcademicPluginTemplate):
def __init__(self):
"""
请注意`execute`会执行在不同的线程中,因此您在定义和使用类变量时,应当慎之又慎!
"""
pass
def define_arg_selection_menu(self):
"""
定义插件的二级选项菜单
第一个参数,名称`main_input`,参数`type`声明这是一个文本框,文本框上方显示`title`,文本框内部显示`description``default_value`为默认值;
第二个参数,名称`advanced_arg`,参数`type`声明这是一个文本框,文本框上方显示`title`,文本框内部显示`description``default_value`为默认值;
"""
gui_definition = {
"main_input":
ArgProperty(title="输入图片描述", description="需要生成图像的文本描述,尽量使用英文", default_value="", type="string").model_dump_json(), # 主输入,自动从输入框同步
"model_name":
ArgProperty(title="模型", options=["DALLE2", "DALLE3"], default_value="DALLE3", description="", type="dropdown").model_dump_json(),
"resolution":
ArgProperty(title="分辨率", options=["256x256(限DALLE2)", "512x512(限DALLE2)", "1024x1024", "1792x1024(限DALLE3)", "1024x1792(限DALLE3)"], default_value="1024x1024", description="", type="dropdown").model_dump_json(),
"quality (仅DALLE3生效)":
ArgProperty(title="质量", options=["standard", "hd"], default_value="standard", description="", type="dropdown").model_dump_json(),
"style (仅DALLE3生效)":
ArgProperty(title="风格", options=["vivid", "natural"], default_value="vivid", description="", type="dropdown").model_dump_json(),
}
return gui_definition
def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
"""
执行插件
"""
# 分辨率
resolution = plugin_kwargs["resolution"].replace("(限DALLE2)", "").replace("(限DALLE3)", "")
if plugin_kwargs["model_name"] == "DALLE2":
plugin_kwargs["advanced_arg"] = resolution
yield from 图片生成_DALLE2(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
elif plugin_kwargs["model_name"] == "DALLE3":
quality = plugin_kwargs["quality (仅DALLE3生效)"]
style = plugin_kwargs["style (仅DALLE3生效)"]
plugin_kwargs["advanced_arg"] = f"{resolution}-{quality}-{style}"
yield from 图片生成_DALLE3(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
else:
chatbot.append([None, "抱歉,找不到该模型"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

View File

@@ -1,27 +1,144 @@
from toolbox import CatchException, update_ui
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive, input_clipping
import requests
import random
import time
import re
import json
from bs4 import BeautifulSoup
from request_llms.bridge_all import model_info
import urllib.request
from functools import lru_cache
from itertools import zip_longest
from check_proxy import check_proxy
from toolbox import CatchException, update_ui, get_conf
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive, input_clipping
from request_llms.bridge_all import model_info
from request_llms.bridge_all import predict_no_ui_long_connection
from crazy_functions.prompts.internet import SearchOptimizerPrompt, SearchAcademicOptimizerPrompt
def search_optimizer(
query,
proxies,
history,
llm_kwargs,
optimizer=1,
categories="general",
searxng_url=None,
engines=None,
):
# ------------- < 第1步尝试进行搜索优化 > -------------
# * 增强优化,会尝试结合历史记录进行搜索优化
if optimizer == 2:
his = " "
if len(history) == 0:
pass
else:
for i, h in enumerate(history):
if i % 2 == 0:
his += f"Q: {h}\n"
else:
his += f"A: {h}\n"
if categories == "general":
sys_prompt = SearchOptimizerPrompt.format(query=query, history=his, num=4)
elif categories == "science":
sys_prompt = SearchAcademicOptimizerPrompt.format(query=query, history=his, num=4)
else:
his = " "
if categories == "general":
sys_prompt = SearchOptimizerPrompt.format(query=query, history=his, num=3)
elif categories == "science":
sys_prompt = SearchAcademicOptimizerPrompt.format(query=query, history=his, num=3)
mutable = ["", time.time(), ""]
llm_kwargs["temperature"] = 0.8
try:
querys_json = predict_no_ui_long_connection(
inputs=query,
llm_kwargs=llm_kwargs,
history=[],
sys_prompt=sys_prompt,
observe_window=mutable,
)
except Exception:
querys_json = "1234"
#* 尝试解码优化后的搜索结果
querys_json = re.sub(r"```json|```", "", querys_json)
try:
querys = json.loads(querys_json)
except Exception:
#* 如果解码失败,降低温度再试一次
try:
llm_kwargs["temperature"] = 0.4
querys_json = predict_no_ui_long_connection(
inputs=query,
llm_kwargs=llm_kwargs,
history=[],
sys_prompt=sys_prompt,
observe_window=mutable,
)
querys_json = re.sub(r"```json|```", "", querys_json)
querys = json.loads(querys_json)
except Exception:
#* 如果再次失败,直接返回原始问题
querys = [query]
links = []
success = 0
Exceptions = ""
for q in querys:
try:
link = searxng_request(q, proxies, categories, searxng_url, engines=engines)
if len(link) > 0:
links.append(link[:-5])
success += 1
except Exception as e:
Exceptions = e
if success == 0:
raise ValueError(f"在线搜索失败!\n{Exceptions}")
# * 清洗搜索结果,依次放入每组第一,第二个搜索结果,并清洗重复的搜索结果
seen_links = set()
result = []
for tuple in zip_longest(*links, fillvalue=None):
for item in tuple:
if item is not None:
link = item["link"]
if link not in seen_links:
seen_links.add(link)
result.append(item)
return result
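To illustrate the interleave-and-dedup step above: zip_longest(*links) visits every query's first hit before any second hit, and the seen set drops repeated URLs. A minimal self-contained sketch:
```
from itertools import zip_longest

links = [[{"link": "a"}, {"link": "b"}], [{"link": "a"}, {"link": "c"}]]
seen, merged = set(), []
for group in zip_longest(*links, fillvalue=None):
    for item in group:
        if item is not None and item["link"] not in seen:
            seen.add(item["link"])
            merged.append(item)
# merged -> "a", "b", "c": first hits interleaved, the duplicate "a" removed
```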
@lru_cache
def get_auth_ip():
try:
external_ip = urllib.request.urlopen('https://v4.ident.me/').read().decode('utf8')
return external_ip
except:
return '114.114.114.114'
ip = check_proxy(None, return_ip=True)
if ip is None:
return '114.114.114.' + str(random.randint(1, 10))
return ip
def searxng_request(query, proxies):
url = 'https://cloud-1.agent-matrix.com/' # 请替换为实际的API URL
def searxng_request(query, proxies, categories='general', searxng_url=None, engines=None):
if searxng_url is None:
url = get_conf("SEARXNG_URL")
else:
url = searxng_url
if engines == "Mixed":
engines = None
if categories == 'general':
params = {
'q': query, # 搜索查询
'format': 'json', # 输出格式为JSON
'language': 'zh', # 搜索语言
'engines': engines,
}
elif categories == 'science':
params = {
'q': query, # 搜索查询
'format': 'json', # 输出格式为JSON
'language': 'zh', # 搜索语言
'categories': 'science'
}
else:
raise ValueError('不支持的检索类型')
headers = {
'Accept-Language': 'zh-CN,zh;q=0.9',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36',
@@ -29,19 +146,24 @@ def searxng_request(query, proxies):
'X-Real-IP': get_auth_ip()
}
results = []
response = requests.post(url, params=params, headers=headers, proxies=proxies)
response = requests.post(url, params=params, headers=headers, proxies=proxies, timeout=30)
if response.status_code == 200:
json_result = response.json()
for result in json_result['results']:
item = {
"title": result["title"],
"content": result["content"],
"title": result.get("title", ""),
"source": result.get("engines", "unknown"),
"content": result.get("content", ""),
"link": result["url"],
}
results.append(item)
return results
else:
raise ValueError("搜索失败,状态码: " + str(response.status_code) + '\t' + response.content.decode('utf-8'))
if response.status_code == 429:
raise ValueError("Searxng在线搜索服务当前使用人数太多请稍后。")
else:
raise ValueError("在线搜索失败,状态码: " + str(response.status_code) + '\t' + response.content.decode('utf-8'))
def scrape_text(url, proxies) -> str:
"""Scrape text from a webpage
@@ -70,46 +192,52 @@ def scrape_text(url, proxies) -> str:
text = "\n".join(chunk for chunk in chunks if chunk)
return text
@CatchException
def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
"""
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
llm_kwargs gpt模型参数如温度和top_p等一般原样传递下去就行
plugin_kwargs 插件模型的参数,暂时没有用武之地
chatbot 聊天显示框的句柄,用于显示给用户
history 聊天历史,前情提要
system_prompt 给gpt的静默提醒
user_request 当前用户的请求信息IP地址等
"""
optimizer_history = history[:-8]
history = [] # 清空历史,以免输入溢出
chatbot.append((f"请结合互联网信息回答以下问题:{txt}",
"[Local Message] 请注意,您正在调用一个[函数插件]的模板该模板可以实现ChatGPT联网信息综合。该函数面向希望实现更多有趣功能的开发者它可以作为创建新功能函数的模板。您若希望分享新的功能模组请不吝PR"))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间我们先及时地做一次界面更新
chatbot.append((f"请结合互联网信息回答以下问题:{txt}", "检索中..."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# ------------- < 第1步爬取搜索引擎的结果 > -------------
from toolbox import get_conf
proxies = get_conf('proxies')
urls = searxng_request(txt, proxies)
categories = plugin_kwargs.get('categories', 'general')
searxng_url = plugin_kwargs.get('searxng_url', None)
engines = plugin_kwargs.get('engine', None)
optimizer = plugin_kwargs.get('optimizer', "关闭")
if optimizer == "关闭":
urls = searxng_request(txt, proxies, categories, searxng_url, engines=engines)
else:
urls = search_optimizer(txt, proxies, optimizer_history, llm_kwargs, optimizer, categories, searxng_url, engines)
history = []
if len(urls) == 0:
chatbot.append((f"结论:{txt}",
"[Local Message] 受到google限制无法从google获取信息"))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间我们先及时地做一次界面更新
"[Local Message] 受到限制无法从searxng获取信息请尝试更换搜索引擎。"))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# ------------- < 第2步依次访问网页 > -------------
max_search_result = 5 # 最多收纳多少个网页的结果
if optimizer == "开启(增强)":
max_search_result = 8
chatbot.append(["联网检索中 ...", None])
for index, url in enumerate(urls[:max_search_result]):
res = scrape_text(url['link'], proxies)
history.extend([f"{index}份搜索结果:", res])
chatbot.append([f"{index}份搜索结果:", res[:500]+"......"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间我们先及时地做一次界面更新
prefix = f"{index}份搜索结果 [源自{url['source'][0]}搜索] {url['title'][:25]}"
history.extend([prefix, res])
res_squeeze = res.replace('\n', '...')
chatbot[-1] = [prefix + "\n\n" + res_squeeze[:500] + "......", None]
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# ------------- < 第3步ChatGPT综合 > -------------
if (optimizer != "开启(增强)"):
i_say = f"从以上搜索结果中抽取信息,然后回答问题:{txt}"
i_say, history = input_clipping( # 裁剪输入从最长的条目开始裁剪防止爆token
inputs=i_say,
history=history,
max_token_limit=model_info[llm_kwargs['llm_model']]['max_token']*3//4
max_token_limit=min(model_info[llm_kwargs['llm_model']]['max_token']*3//4, 8192)
)
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=i_say, inputs_show_user=i_say,
@@ -120,3 +248,31 @@ def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
history.append(i_say);history.append(gpt_say)
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
#* 或者使用搜索优化器,这样可以保证后续问答能读取到有效的历史记录
else:
i_say = f"从以上搜索结果中抽取与问题:{txt} 相关的信息:"
i_say, history = input_clipping( # 裁剪输入从最长的条目开始裁剪防止爆token
inputs=i_say,
history=history,
max_token_limit=min(model_info[llm_kwargs['llm_model']]['max_token']*3//4, 8192)
)
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=i_say, inputs_show_user=i_say,
llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
sys_prompt="请从给定的若干条搜索结果中抽取信息,对最相关的三个搜索结果进行总结"
)
chatbot[-1] = (i_say, gpt_say)
history = []
history.append(i_say);history.append(gpt_say)
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
# ------------- < 第4步根据综合回答问题 > -------------
i_say = f"请根据以上搜索结果回答问题:{txt}"
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=i_say, inputs_show_user=i_say,
llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
sys_prompt="请根据给定的若干条搜索结果回答问题"
)
chatbot[-1] = (i_say, gpt_say)
history.append(i_say);history.append(gpt_say)
yield from update_ui(chatbot=chatbot, history=history)

View File

@@ -0,0 +1,45 @@
from toolbox import get_conf
from crazy_functions.Internet_GPT import 连接网络回答问题
from crazy_functions.plugin_template.plugin_class_template import GptAcademicPluginTemplate, ArgProperty
class NetworkGPT_Wrap(GptAcademicPluginTemplate):
def __init__(self):
"""
请注意`execute`会执行在不同的线程中,因此您在定义和使用类变量时,应当慎之又慎!
"""
pass
def define_arg_selection_menu(self):
"""
定义插件的二级选项菜单
第一个参数,名称`main_input`,参数`type`声明这是一个文本框,文本框上方显示`title`,文本框内部显示`description``default_value`为默认值;
第二个参数,名称`advanced_arg`,参数`type`声明这是一个文本框,文本框上方显示`title`,文本框内部显示`description``default_value`为默认值;
第三个参数,名称`allow_cache`,参数`type`声明这是一个下拉菜单,下拉菜单上方显示`title`+`description`,下拉菜单的选项为`options``default_value`为下拉菜单默认值;
"""
gui_definition = {
"main_input":
ArgProperty(title="输入问题", description="待通过互联网检索的问题,会自动读取输入框内容", default_value="", type="string").model_dump_json(), # 主输入,自动从输入框同步
"categories":
ArgProperty(title="搜索分类", options=["网页", "学术论文"], default_value="网页", description="", type="dropdown").model_dump_json(),
"engine":
ArgProperty(title="选择搜索引擎", options=["Mixed", "bing", "google", "duckduckgo"], default_value="google", description="", type="dropdown").model_dump_json(),
"optimizer":
ArgProperty(title="搜索优化", options=["关闭", "开启", "开启(增强)"], default_value="关闭", description="是否使用搜索增强。注意这可能会消耗较多token", type="dropdown").model_dump_json(),
"searxng_url":
ArgProperty(title="Searxng服务地址", description="输入Searxng的地址", default_value=get_conf("SEARXNG_URL"), type="string").model_dump_json(), # 主输入,自动从输入框同步
}
return gui_definition
def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
"""
执行插件
"""
if plugin_kwargs["categories"] == "网页": plugin_kwargs["categories"] = "general"
if plugin_kwargs["categories"] == "学术论文": plugin_kwargs["categories"] = "science"
yield from 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)

View File

@@ -4,7 +4,7 @@ from functools import partial
import glob, os, requests, time, json, tarfile
pj = os.path.join
ARXIV_CACHE_DIR = os.path.expanduser(f"~/arxiv_cache/")
ARXIV_CACHE_DIR = get_conf("ARXIV_CACHE_DIR")
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- 工具函数 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
@@ -233,7 +233,7 @@ def pdf2tex_project(pdf_file_path, plugin_kwargs):
def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
# <-------------- information about this plugin ------------->
chatbot.append(["函数插件功能?",
"对整个Latex项目进行纠错, 用latex编译为PDF对修正处做高亮。函数插件贡献者: Binary-Husky。注意事项: 目前仅支持GPT3.5/GPT4其他模型转化效果未知。目前对机器学习类文献转化效果最好其他类型文献转化效果未知。仅在Windows系统进行了测试其他操作系统表现未知。"])
"对整个Latex项目进行纠错, 用latex编译为PDF对修正处做高亮。函数插件贡献者: Binary-Husky。注意事项: 目前对机器学习类文献转化效果最好其他类型文献转化效果未知。仅在Windows系统进行了测试其他操作系统表现未知。"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# <-------------- more requirements ------------->
@@ -312,7 +312,7 @@ def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot,
# <-------------- information about this plugin ------------->
chatbot.append([
"函数插件功能?",
"对整个Latex项目进行翻译, 生成中文PDF。函数插件贡献者: Binary-Husky。注意事项: 此插件Windows支持最佳Linux下必须使用Docker安装详见项目主README.md。目前仅支持GPT3.5/GPT4其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。"])
"对整个Latex项目进行翻译, 生成中文PDF。函数插件贡献者: Binary-Husky。注意事项: 此插件Windows支持最佳Linux下必须使用Docker安装详见项目主README.md。目前对机器学习类文献转化效果最好其他类型文献转化效果未知。"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# <-------------- more requirements ------------->
@@ -408,7 +408,7 @@ def PDF翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, h
# <-------------- information about this plugin ------------->
chatbot.append([
"函数插件功能?",
"将PDF转换为Latex项目翻译为中文后重新编译为PDF。函数插件贡献者: Marroh。注意事项: 此插件Windows支持最佳Linux下必须使用Docker安装详见项目主README.md。目前仅支持GPT3.5/GPT4其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。"])
"将PDF转换为Latex项目翻译为中文后重新编译为PDF。函数插件贡献者: Marroh。注意事项: 此插件Windows支持最佳Linux下必须使用Docker安装详见项目主README.md。目前对机器学习类文献转化效果最好其他类型文献转化效果未知。"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# <-------------- more requirements ------------->

View File

@@ -1,4 +1,4 @@
from toolbox import update_ui, promote_file_to_downloadzone, disable_auto_promotion
from toolbox import update_ui, promote_file_to_downloadzone
from toolbox import CatchException, report_exception, write_history_to_file
from shared_utils.fastapi_server import validate_path_safety
from crazy_functions.crazy_utils import input_clipping
@@ -7,7 +7,6 @@ def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
import os, copy
from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
disable_auto_promotion(chatbot=chatbot)
summary_batch_isolation = True
inputs_array = []
@@ -24,7 +23,7 @@ def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
file_content = f.read()
prefix = "接下来请你逐文件分析下面的工程" if index==0 else ""
i_say = prefix + f'请对下面的程序文件做一个概述文件名是{os.path.relpath(fp, project_folder)},文件代码是 ```{file_content}```'
i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的程序文件做一个概述: {fp}'
i_say_show_user = prefix + f'[{index+1}/{len(file_manifest)}] 请对下面的程序文件做一个概述: {fp}'
# 装载请求内容
inputs_array.append(i_say)
inputs_show_user_array.append(i_say_show_user)

View File

@@ -0,0 +1,138 @@
import os, copy, time
from toolbox import CatchException, report_exception, update_ui, zip_result, promote_file_to_downloadzone, update_ui_lastest_msg, get_conf, generate_file_link
from shared_utils.fastapi_server import validate_path_safety
from crazy_functions.crazy_utils import input_clipping
from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from crazy_functions.agent_fns.python_comment_agent import PythonCodeComment
from crazy_functions.diagram_fns.file_tree import FileNode
from shared_utils.advanced_markdown_format import markdown_convertion_for_file
def 注释源代码(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
summary_batch_isolation = True
inputs_array = []
inputs_show_user_array = []
history_array = []
sys_prompt_array = []
assert len(file_manifest) <= 512, "源文件太多超过512个, 请缩减输入文件的数量。或者您也可以选择删除此行警告并修改代码拆分file_manifest列表从而实现分批次处理。"
# 建立文件树
file_tree_struct = FileNode("root", build_manifest=True)
for file_path in file_manifest:
file_tree_struct.add_file(file_path, file_path)
# <第一步,逐个文件分析,多线程>
for index, fp in enumerate(file_manifest):
# 读取文件
with open(fp, 'r', encoding='utf-8', errors='replace') as f:
file_content = f.read()
prefix = ""
i_say = prefix + f'Please conclude the following source code at {os.path.relpath(fp, project_folder)} with only one sentence, the code is:\n```{file_content}```'
i_say_show_user = prefix + f'[{index+1}/{len(file_manifest)}] 请用一句话对下面的程序文件做一个整体概述: {fp}'
# 装载请求内容
MAX_TOKEN_SINGLE_FILE = 2560
i_say, _ = input_clipping(inputs=i_say, history=[], max_token_limit=MAX_TOKEN_SINGLE_FILE)
inputs_array.append(i_say)
inputs_show_user_array.append(i_say_show_user)
history_array.append([])
sys_prompt_array.append("You are a software architecture analyst analyzing a source code project. Do not dig into details, tell me what the code is doing in general. Your answer must be short, simple and clear.")
# 文件读取完成,对每一个源代码文件,生成一个请求线程,发送到大模型进行分析
gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
inputs_array = inputs_array,
inputs_show_user_array = inputs_show_user_array,
history_array = history_array,
sys_prompt_array = sys_prompt_array,
llm_kwargs = llm_kwargs,
chatbot = chatbot,
show_user_at_complete = True
)
# <第二步,逐个文件分析,生成带注释文件>
from concurrent.futures import ThreadPoolExecutor
executor = ThreadPoolExecutor(max_workers=get_conf('DEFAULT_WORKER_NUM'))
def _task_multi_threading(i_say, gpt_say, fp, file_tree_struct):
pcc = PythonCodeComment(llm_kwargs, language='English')
pcc.read_file(path=fp, brief=gpt_say)
revised_path, revised_content = pcc.begin_comment_source_code(None, None)
file_tree_struct.manifest[fp].revised_path = revised_path
file_tree_struct.manifest[fp].revised_content = revised_content
# <将结果写回源文件>
with open(fp, 'w', encoding='utf-8') as f:
f.write(file_tree_struct.manifest[fp].revised_content)
# <生成对比html>
with open("crazy_functions/agent_fns/python_comment_compare.html", 'r', encoding='utf-8') as f:
html_template = f.read()
warp = lambda x: "```python\n\n" + x + "\n\n```"
from themes.theme import advanced_css
html_template = html_template.replace("ADVANCED_CSS", advanced_css)
html_template = html_template.replace("REPLACE_CODE_FILE_LEFT", pcc.get_markdown_block_in_html(markdown_convertion_for_file(warp(pcc.original_content))))
html_template = html_template.replace("REPLACE_CODE_FILE_RIGHT", pcc.get_markdown_block_in_html(markdown_convertion_for_file(warp(revised_content))))
compare_html_path = fp + '.compare.html'
file_tree_struct.manifest[fp].compare_html = compare_html_path
with open(compare_html_path, 'w', encoding='utf-8') as f:
f.write(html_template)
print('done 1')
chatbot.append([None, f"正在处理:"])
futures = []
for i_say, gpt_say, fp in zip(gpt_response_collection[0::2], gpt_response_collection[1::2], file_manifest):
future = executor.submit(_task_multi_threading, i_say, gpt_say, fp, file_tree_struct)
futures.append(future)
cnt = 0
while True:
cnt += 1
time.sleep(3)
worker_done = [h.done() for h in futures]
remain = len(worker_done) - sum(worker_done)
# <展示已经完成的部分>
preview_html_list = []
for done, fp in zip(worker_done, file_manifest):
if not done: continue
preview_html_list.append(file_tree_struct.manifest[fp].compare_html)
file_links = generate_file_link(preview_html_list)
yield from update_ui_lastest_msg(
f"剩余源文件数量: {remain}.\n\n" +
f"已完成的文件: {sum(worker_done)}.\n\n" +
file_links +
"\n\n" +
''.join(['.']*(cnt % 10 + 1)
), chatbot=chatbot, history=history, delay=0)
yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面
if all(worker_done):
executor.shutdown()
break
# <第四步,压缩结果>
zip_res = zip_result(project_folder)
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
# <END>
chatbot.append((None, "所有源文件均已处理完毕。"))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
@CatchException
def 注释Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
history = [] # 清空历史,以免输入溢出
import glob, os
if os.path.exists(txt):
project_folder = txt
validate_path_safety(project_folder, chatbot.get_user())
else:
if txt == "": txt = '空空如也的输入栏'
report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.py', recursive=True)]
if len(file_manifest) == 0:
report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何python文件: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
yield from 注释源代码(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)

View File

@@ -0,0 +1,391 @@
from toolbox import CatchException, update_ui
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from request_llms.bridge_all import predict_no_ui_long_connection
import datetime
import re
import os
from textwrap import dedent
# TODO: 解决缩进问题
find_function_end_prompt = '''
Below is a page of code that you need to read. This page may not be complete yet; your job is to split this page into separate functions, class functions, etc.
- Provide the line number where the first visible function ends.
- Provide the line number where the next visible function begins.
- If there are no other functions in this page, you should simply return the line number of the last line.
- Only focus on functions declared by `def` keyword. Ignore inline functions. Ignore function calls.
------------------ Example ------------------
INPUT:
```
L0000 |import sys
L0001 |import re
L0002 |
L0003 |def trimmed_format_exc():
L0004 | import os
L0005 | import traceback
L0006 | str = traceback.format_exc()
L0007 | current_path = os.getcwd()
L0008 | replace_path = "."
L0009 | return str.replace(current_path, replace_path)
L0010 |
L0011 |
L0012 |def trimmed_format_exc_markdown():
L0013 | ...
L0014 | ...
```
OUTPUT:
```
<first_function_end_at>L0009</first_function_end_at>
<next_function_begin_from>L0012</next_function_begin_from>
```
------------------ End of Example ------------------
------------------ the real INPUT you need to process NOW ------------------
```
{THE_TAGGED_CODE}
```
'''
revise_funtion_prompt = '''
You need to read the following code, and revise the source code ({FILE_BASENAME}) according to the following instructions:
1. You should analyze the purpose of the functions (if there are any).
2. You need to add docstring for the provided functions (if there are any).
Be aware:
1. You must NOT modify the indent of code.
2. You are NOT authorized to change or translate non-comment code, and you are NOT authorized to add empty lines or toggle quotation marks either.
3. Use {LANG} to add comments and docstrings. Do NOT translate Chinese that is already in the code.
------------------ Example ------------------
INPUT:
```
L0000 |
L0001 |def zip_result(folder):
L0002 | t = gen_time_str()
L0003 | zip_folder(folder, get_log_folder(), f"result.zip")
L0004 | return os.path.join(get_log_folder(), f"result.zip")
L0005 |
L0006 |
```
OUTPUT:
<instruction_1_purpose>
This function compresses a given folder, and return the path of the resulting `zip` file.
</instruction_1_purpose>
<instruction_2_revised_code>
```
def zip_result(folder):
"""
Compresses the specified folder into a zip file and stores it in the log folder.
Args:
folder (str): The path to the folder that needs to be compressed.
Returns:
str: The path to the created zip file in the log folder.
"""
t = gen_time_str()
zip_folder(folder, get_log_folder(), f"result.zip") # ⭐ Execute the zipping of folder
return os.path.join(get_log_folder(), f"result.zip")
```
</instruction_2_revised_code>
------------------ End of Example ------------------
------------------ the real INPUT you need to process NOW ({FILE_BASENAME}) ------------------
```
{THE_CODE}
```
{INDENT_REMINDER}
{BRIEF_REMINDER}
{HINT_REMINDER}
'''
class PythonCodeComment():
def __init__(self, llm_kwargs, language) -> None:
self.original_content = ""
self.full_context = []
self.full_context_with_line_no = []
self.current_page_start = 0
self.page_limit = 100 # 100 lines of code each page
self.ignore_limit = 20
self.llm_kwargs = llm_kwargs
self.language = language
self.path = None
self.file_basename = None
self.file_brief = ""
def generate_tagged_code_from_full_context(self):
for i, code in enumerate(self.full_context):
number = i
padded_number = f"{number:04}"
result = f"L{padded_number}"
self.full_context_with_line_no.append(f"{result} | {code}")
return self.full_context_with_line_no
def read_file(self, path, brief):
with open(path, 'r', encoding='utf8') as f:
self.full_context = f.readlines()
self.original_content = ''.join(self.full_context)
self.file_basename = os.path.basename(path)
self.file_brief = brief
self.full_context_with_line_no = self.generate_tagged_code_from_full_context()
self.path = path
def find_next_function_begin(self, tagged_code:list, begin_and_end):
begin, end = begin_and_end
THE_TAGGED_CODE = ''.join(tagged_code)
self.llm_kwargs['temperature'] = 0
result = predict_no_ui_long_connection(
inputs=find_function_end_prompt.format(THE_TAGGED_CODE=THE_TAGGED_CODE),
llm_kwargs=self.llm_kwargs,
history=[],
sys_prompt="",
observe_window=[],
console_slience=True
)
def extract_number(text):
# 使用正则表达式匹配模式
match = re.search(r'<next_function_begin_from>L(\d+)</next_function_begin_from>', text)
if match:
# 提取匹配的数字部分并转换为整数
return int(match.group(1))
return None
line_no = extract_number(result)
if line_no is not None:
return line_no
else:
return end
def _get_next_window(self):
#
current_page_start = self.current_page_start
if self.current_page_start == len(self.full_context) + 1:
raise StopIteration
# 如果剩余的行数非常少,一鼓作气处理掉
if len(self.full_context) - self.current_page_start < self.ignore_limit:
future_page_start = len(self.full_context) + 1
self.current_page_start = future_page_start
return current_page_start, future_page_start
tagged_code = self.full_context_with_line_no[ self.current_page_start: self.current_page_start + self.page_limit]
line_no = self.find_next_function_begin(tagged_code, [self.current_page_start, self.current_page_start + self.page_limit])
if line_no > len(self.full_context) - 5:
line_no = len(self.full_context) + 1
future_page_start = line_no
self.current_page_start = future_page_start
# ! consider eof
return current_page_start, future_page_start
def dedent(self, text):
"""Remove any common leading whitespace from every line in `text`.
"""
# Look for the longest leading string of spaces and tabs common to
# all lines.
margin = None
_whitespace_only_re = re.compile('^[ \t]+$', re.MULTILINE)
_leading_whitespace_re = re.compile('(^[ \t]*)(?:[^ \t\n])', re.MULTILINE)
text = _whitespace_only_re.sub('', text)
indents = _leading_whitespace_re.findall(text)
for indent in indents:
if margin is None:
margin = indent
# Current line more deeply indented than previous winner:
# no change (previous winner is still on top).
elif indent.startswith(margin):
pass
# Current line consistent with and no deeper than previous winner:
# it's the new winner.
elif margin.startswith(indent):
margin = indent
# Find the largest common whitespace between current line and previous
# winner.
else:
for i, (x, y) in enumerate(zip(margin, indent)):
if x != y:
margin = margin[:i]
break
# sanity check (testing/debugging only)
if 0 and margin:
for line in text.split("\n"):
assert not line or line.startswith(margin), \
"line = %r, margin = %r" % (line, margin)
if margin:
text = re.sub(r'(?m)^' + margin, '', text)
return text, len(margin)
else:
return text, 0
def get_next_batch(self):
current_page_start, future_page_start = self._get_next_window()
return ''.join(self.full_context[current_page_start: future_page_start]), current_page_start, future_page_start
def tag_code(self, fn, hint):
code = fn
_, n_indent = self.dedent(code)
indent_reminder = "" if n_indent == 0 else "(Reminder: as you can see, this piece of code has indent made up with {n_indent} whitespace, please preseve them in the OUTPUT.)"
brief_reminder = "" if self.file_brief == "" else f"({self.file_basename} abstract: {self.file_brief})"
hint_reminder = "" if hint is None else f"(Reminder: do not ignore or modify code such as `{hint}`, provide complete code in the OUTPUT.)"
self.llm_kwargs['temperature'] = 0
result = predict_no_ui_long_connection(
inputs=revise_funtion_prompt.format(
LANG=self.language,
FILE_BASENAME=self.file_basename,
THE_CODE=code,
INDENT_REMINDER=indent_reminder,
BRIEF_REMINDER=brief_reminder,
HINT_REMINDER=hint_reminder
),
llm_kwargs=self.llm_kwargs,
history=[],
sys_prompt="",
observe_window=[],
console_slience=True
)
def get_code_block(reply):
import re
pattern = r"```([\s\S]*?)```" # regex pattern to match code blocks
matches = re.findall(pattern, reply) # find all code blocks in text
if len(matches) == 1:
code = matches[0]
if code.startswith('python'): code = code[len('python'):] # drop the fence language tag; str.strip('python') would also eat letters from the code itself
return code
return None
code_block = get_code_block(result)
if code_block is not None:
code_block = self.sync_and_patch(original=code, revised=code_block)
return code_block
else:
return code
def get_markdown_block_in_html(self, html):
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
found_list = soup.find_all("div", class_="markdown-body")
if found_list:
res = found_list[0]
return res.prettify()
else:
return None
def sync_and_patch(self, original, revised):
"""Ensure the number of pre-string empty lines in revised matches those in original."""
def count_leading_empty_lines(s, reverse=False):
"""Count the number of leading empty lines in a string."""
lines = s.split('\n')
if reverse: lines = list(reversed(lines))
count = 0
for line in lines:
if line.strip() == '':
count += 1
else:
break
return count
original_empty_lines = count_leading_empty_lines(original)
revised_empty_lines = count_leading_empty_lines(revised)
if original_empty_lines > revised_empty_lines:
additional_lines = '\n' * (original_empty_lines - revised_empty_lines)
revised = additional_lines + revised
elif original_empty_lines < revised_empty_lines:
lines = revised.split('\n')
revised = '\n'.join(lines[revised_empty_lines - original_empty_lines:])
original_empty_lines = count_leading_empty_lines(original, reverse=True)
revised_empty_lines = count_leading_empty_lines(revised, reverse=True)
if original_empty_lines > revised_empty_lines:
additional_lines = '\n' * (original_empty_lines - revised_empty_lines)
revised = revised + additional_lines
elif original_empty_lines < revised_empty_lines:
lines = revised.split('\n')
revised = '\n'.join(lines[:-(revised_empty_lines - original_empty_lines)])
return revised
def begin_comment_source_code(self, chatbot=None, history=None):
# from toolbox import update_ui_lastest_msg
assert self.path is not None
assert '.py' in self.path # must be python source code
# write_target = self.path + '.revised.py'
write_content = ""
# with open(self.path + '.revised.py', 'w+', encoding='utf8') as f:
while True:
try:
# yield from update_ui_lastest_msg(f"({self.file_basename}) 正在读取下一段代码片段:\n", chatbot=chatbot, history=history, delay=0)
next_batch, line_no_start, line_no_end = self.get_next_batch()
# yield from update_ui_lastest_msg(f"({self.file_basename}) 处理代码片段:\n\n{next_batch}", chatbot=chatbot, history=history, delay=0)
hint = None
MAX_ATTEMPT = 2
for attempt in range(MAX_ATTEMPT):
result = self.tag_code(next_batch, hint)
try:
successful, hint = self.verify_successful(next_batch, result)
except Exception as e:
print('ignored exception:\n' + str(e))
break
if successful:
break
if attempt == MAX_ATTEMPT - 1:
# cannot deal with this, give up
result = next_batch
break
# f.write(result)
write_content += result
except StopIteration:
next_batch, line_no_start, line_no_end = [], -1, -1
return None, write_content
def verify_successful(self, original, revised):
""" Determine whether the revised code contains every line that already exists
"""
from crazy_functions.ast_fns.comment_remove import remove_python_comments
original = remove_python_comments(original)
original_lines = original.split('\n')
revised_lines = revised.split('\n')
for l in original_lines:
l = l.strip()
if '\'' in l or '\"' in l: continue # ast sometimes toggle " to '
found = False
for lt in revised_lines:
if l in lt:
found = True
break
if not found:
return False, l
return True, None
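A hedged driver sketch for the class above, mirroring how SourceCode_Comment.py uses it (the file path and brief are hypothetical):
```
pcc = PythonCodeComment(llm_kwargs, language='English')
pcc.read_file(path='some_module.py', brief='utility helpers for logging')  # hypothetical inputs
_, revised_content = pcc.begin_comment_source_code(None, None)
with open('some_module.py', 'w', encoding='utf-8') as f:
    f.write(revised_content)  # write the commented source back, as the plugin does
```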

View File

@@ -0,0 +1,45 @@
<!DOCTYPE html>
<html lang="zh-CN">
<head>
<style>ADVANCED_CSS</style>
<meta charset="UTF-8">
<title>源文件对比</title>
<style>
body {
font-family: Arial, sans-serif;
display: flex;
justify-content: center;
align-items: center;
height: 100vh;
margin: 0;
}
.container {
display: flex;
width: 95%;
height: -webkit-fill-available;
}
.code-container {
flex: 1;
margin: 0px;
padding: 0px;
border: 1px solid #ccc;
background-color: #f9f9f9;
overflow: auto;
}
pre {
white-space: pre-wrap;
word-wrap: break-word;
}
</style>
</head>
<body>
<div class="container">
<div class="code-container">
REPLACE_CODE_FILE_LEFT
</div>
<div class="code-container">
REPLACE_CODE_FILE_RIGHT
</div>
</div>
</body>
</html>

View File

@@ -0,0 +1,46 @@
import ast
class CommentRemover(ast.NodeTransformer):
def visit_FunctionDef(self, node):
# 移除函数的文档字符串
if (node.body and isinstance(node.body[0], ast.Expr) and
isinstance(node.body[0].value, ast.Str)):
node.body = node.body[1:]
self.generic_visit(node)
return node
def visit_ClassDef(self, node):
# 移除类的文档字符串
if (node.body and isinstance(node.body[0], ast.Expr) and
isinstance(node.body[0].value, ast.Str)):
node.body = node.body[1:]
self.generic_visit(node)
return node
def visit_Module(self, node):
# 移除模块的文档字符串
if (node.body and isinstance(node.body[0], ast.Expr) and
isinstance(node.body[0].value, ast.Str)):
node.body = node.body[1:]
self.generic_visit(node)
return node
def remove_python_comments(source_code):
# 解析源代码为 AST
tree = ast.parse(source_code)
# 移除注释
transformer = CommentRemover()
tree = transformer.visit(tree)
# 将处理后的 AST 转换回源代码
return ast.unparse(tree)
# 示例使用
if __name__ == "__main__":
with open("source.py", "r", encoding="utf-8") as f:
source_code = f.read()
cleaned_code = remove_python_comments(source_code)
with open("cleaned_source.py", "w", encoding="utf-8") as f:
f.write(cleaned_code)
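Two version caveats for this file: ast.unparse only exists on Python 3.9+, and ast.Str is deprecated since Python 3.8 (modern code matches ast.Constant holding a str). A version-tolerant docstring check might look like this sketch:

import ast

def is_docstring_stmt(stmt):
    # ast.Constant covers Python 3.8+; isinstance(value, str) excludes other constants
    return (isinstance(stmt, ast.Expr)
            and isinstance(stmt.value, ast.Constant)
            and isinstance(stmt.value.value, str))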

View File

@@ -1,9 +1,20 @@
from toolbox import update_ui, get_conf, trimmed_format_exc, get_max_token, Singleton
from shared_utils.char_visual_effect import scolling_visual_effect
import threading
import os
import logging
def input_clipping(inputs, history, max_token_limit):
"""
当输入文本 + 历史文本超出最大限制时,采取措施丢弃一部分文本。
输入:
- inputs 本次请求
- history 历史上下文
- max_token_limit 最大token限制
输出:
- inputs 本次请求经过clip
- history 历史上下文经过clip
"""
import numpy as np
from request_llms.bridge_all import model_info
enc = model_info["gpt-3.5-turbo"]['tokenizer']
@@ -158,7 +169,7 @@ def can_multi_process(llm) -> bool:
def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
inputs_array, inputs_show_user_array, llm_kwargs,
chatbot, history_array, sys_prompt_array,
refresh_interval=0.2, max_workers=-1, scroller_max_len=30,
refresh_interval=0.2, max_workers=-1, scroller_max_len=75,
handle_token_exceed=True, show_user_at_complete=False,
retry_times_at_unknown_error=2,
):
@@ -283,6 +294,8 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
futures = [executor.submit(_req_gpt, index, inputs, history, sys_prompt) for index, inputs, history, sys_prompt in zip(
range(len(inputs_array)), inputs_array, history_array, sys_prompt_array)]
cnt = 0
while True:
# yield一次以刷新前端页面
time.sleep(refresh_interval)
@@ -295,8 +308,7 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
mutable[thread_index][1] = time.time()
# 在前端打印些好玩的东西
for thread_index, _ in enumerate(worker_done):
print_something_really_funny = "[ ...`"+mutable[thread_index][0][-scroller_max_len:].\
replace('\n', '').replace('`', '.').replace(' ', '.').replace('<br/>', '.....').replace('$', '.')+"`... ]"
print_something_really_funny = f"[ ...`{scolling_visual_effect(mutable[thread_index][0], scroller_max_len)}`... ]"
observe_win.append(print_something_really_funny)
# 在前端打印些好玩的东西
stat_str = ''.join([f'`{mutable[thread_index][2]}`: {obs}\n\n'

View File

@@ -2,7 +2,7 @@ import os
from textwrap import indent
class FileNode:
def __init__(self, name):
def __init__(self, name, build_manifest=False):
self.name = name
self.children = []
self.is_leaf = False
@@ -10,6 +10,8 @@ class FileNode:
self.parenting_ship = []
self.comment = ""
self.comment_maxlen_show = 50
self.build_manifest = build_manifest
self.manifest = {}
@staticmethod
def add_linebreaks_at_spaces(string, interval=10):
@@ -29,6 +31,7 @@ class FileNode:
level = 1
if directory_names == "":
new_node = FileNode(file_name)
self.manifest[file_path] = new_node
current_node.children.append(new_node)
new_node.is_leaf = True
new_node.comment = self.sanitize_comment(file_comment)
@@ -50,6 +53,7 @@ class FileNode:
new_node.level = level - 1
current_node = new_node
term = FileNode(file_name)
self.manifest[file_path] = term
term.level = level
term.comment = self.sanitize_comment(file_comment)
term.is_leaf = True

View File

@@ -92,7 +92,7 @@ class MiniGame_ResumeStory(GptAcademicGameBaseState):
def generate_story_image(self, story_paragraph):
try:
from crazy_functions.图片生成 import gen_image
from crazy_functions.Image_Generate import gen_image
prompt_ = predict_no_ui_long_connection(inputs=story_paragraph, llm_kwargs=self.llm_kwargs, history=[], sys_prompt='你需要根据用户给出的小说段落进行简短的环境描写。要求80字以内。')
image_url, image_path = gen_image(self.llm_kwargs, prompt_, '512x512', model="dall-e-2", quality='standard', style='natural')
return f'<br/><div align="center"><img src="file={image_path}"></div>'

View File

@@ -4,20 +4,28 @@ import pickle
class SafeUnpickler(pickle.Unpickler):
def get_safe_classes(self):
from .latex_actions import LatexPaperFileGroup, LatexPaperSplit
from crazy_functions.latex_fns.latex_actions import LatexPaperFileGroup, LatexPaperSplit
from crazy_functions.latex_fns.latex_toolbox import LinkedListNode
# 定义允许的安全类
safe_classes = {
# 在这里添加其他安全的类
'LatexPaperFileGroup': LatexPaperFileGroup,
'LatexPaperSplit' : LatexPaperSplit,
'LatexPaperSplit': LatexPaperSplit,
'LinkedListNode': LinkedListNode,
}
return safe_classes
def find_class(self, module, name):
# 只允许特定的类进行反序列化
self.safe_classes = self.get_safe_classes()
if f'{module}.{name}' in self.safe_classes:
return self.safe_classes[f'{module}.{name}']
match_class_name = None
for class_name in self.safe_classes.keys():
if (class_name in f'{module}.{name}'):
match_class_name = class_name
if module == 'numpy' or module.startswith('numpy.'):
return super().find_class(module, name)
if match_class_name is not None:
return self.safe_classes[match_class_name]
# 如果尝试加载未授权的类,则抛出异常
raise pickle.UnpicklingError(f"Attempted to deserialize unauthorized class '{name}' from module '{module}'")

View File

@@ -104,6 +104,8 @@ def 解析PDF_DOC2X_单文件(fp, project_folder, llm_kwargs, plugin_kwargs, cha
z_decoded = z_decoded[len("data: "):]
decoded_json = json.loads(z_decoded)
res_json.append(decoded_json)
if 'limit exceeded' in decoded_json.get('status', ''):
raise RuntimeError("Doc2x API 页数受限,请联系 Doc2x 方面,并更换新的 API 秘钥。")
else:
raise RuntimeError(format("[ERROR] status code: %d, body: %s" % (res.status_code, res.text)))
uuid = res_json[0]['uuid']
@@ -161,8 +163,8 @@ def 解析PDF_DOC2X_单文件(fp, project_folder, llm_kwargs, plugin_kwargs, cha
from shared_utils.advanced_markdown_format import markdown_convertion_for_file
with open(generated_fp, "r", encoding="utf-8") as f:
md = f.read()
# Markdown中使用不标准的表格需要在表格前加上一个emoji以便公式渲染
md = re.sub(r'^<table>', r'😃<table>', md, flags=re.MULTILINE)
# # Markdown中使用不标准的表格需要在表格前加上一个emoji以便公式渲染
# md = re.sub(r'^<table>', r'.<table>', md, flags=re.MULTILINE)
html = markdown_convertion_for_file(md)
with open(preview_fp, "w", encoding="utf-8") as f: f.write(html)
chatbot.append([None, f"生成在线预览:{generate_file_link([preview_fp])}"])
@@ -182,7 +184,7 @@ def 解析PDF_DOC2X_单文件(fp, project_folder, llm_kwargs, plugin_kwargs, cha
with open(generated_fp, 'r', encoding='utf8') as f: content = f.read()
content = content.replace('```markdown', '\n').replace('```', '\n')
# Markdown中使用不标准的表格需要在表格前加上一个emoji以便公式渲染
content = re.sub(r'^<table>', r'😃<table>', content, flags=re.MULTILINE)
# content = re.sub(r'^<table>', r'.<table>', content, flags=re.MULTILINE)
with open(generated_fp, 'w', encoding='utf8') as f: f.write(content)
# 生成在线预览html
file_name = '在线预览翻译' + gen_time_str() + '.html'

View File

@@ -0,0 +1,87 @@
SearchOptimizerPrompt="""作为一个网页搜索助手,你的任务是结合历史记录,从不同角度,为“原问题”生成多个不同版本的“检索词”,从而提高网页检索的精度。生成的问题要求指向对象清晰明确,并与“原问题语言相同”。例如:
历史记录:
"
Q: 对话背景。
A: 当前对话是关于 Nginx 的介绍和在Ubuntu上的使用等。
"
原问题: 怎么下载
检索词: ["Nginx 下载","Ubuntu Nginx","Ubuntu安装Nginx"]
----------------
历史记录:
"
Q: 对话背景。
A: 当前对话是关于 Nginx 的介绍和使用等。
Q: 报错 "no connection"
A: 报错"no connection"可能是因为……
"
原问题: 怎么解决
检索词: ["Nginx报错"no connection" 解决","Nginx'no connection'报错 原因","Nginx提示'no connection'"]
----------------
历史记录:
"
"
原问题: 你知道 Python 么?
检索词: ["Python","Python 使用教程。","Python 特点和优势"]
----------------
历史记录:
"
Q: 列出Java的三种特点
A: 1. Java 是一种编译型语言。
2. Java 是一种面向对象的编程语言。
3. Java 是一种跨平台的编程语言。
"
原问题: 介绍下第2点。
检索词: ["Java 面向对象特点","Java 面向对象编程优势。","Java 面向对象编程"]
----------------
现在有历史记录:
"
{history}
"
及其原问题: {query}
直接给出最多{num}个检索词,必须以json形式给出,不得有多余字符:
"""
SearchAcademicOptimizerPrompt="""作为一个学术论文搜索助手,你的任务是结合历史记录,从不同角度,为“原问题”生成多个不同版本的“检索词”,从而提高学术论文检索的精度。生成的问题要求指向对象清晰明确,并与“原问题语言相同”。例如:
历史记录:
"
Q: 对话背景。
A: 当前对话是关于深度学习的介绍和在图像识别中的应用等。
"
原问题: 怎么下载相关论文
检索词: ["深度学习 图像识别 论文下载","图像识别 深度学习 研究论文","深度学习 图像识别 论文资源","Deep Learning Image Recognition Paper Download","Image Recognition Deep Learning Research Paper"]
----------------
历史记录:
"
Q: 对话背景。
A: 当前对话是关于深度学习的介绍和应用等。
Q: 报错 "模型不收敛"
A: 报错"模型不收敛"可能是因为……
"
原问题: 怎么解决
检索词: ["深度学习 模型不收敛 解决方案 论文","深度学习 模型不收敛 原因 研究","深度学习 模型不收敛 论文","Deep Learning Model Convergence Issue Solution Paper","Deep Learning Model Convergence Problem Research"]
----------------
历史记录:
"
"
原问题: 你知道 GAN 么?
检索词: ["生成对抗网络 论文","GAN 使用教程 论文","GAN 特点和优势 研究","Generative Adversarial Network Paper","GAN Usage Tutorial Paper"]
----------------
历史记录:
"
Q: 列出机器学习的三种应用?
A: 1. 机器学习在图像识别中的应用。
2. 机器学习在自然语言处理中的应用。
3. 机器学习在推荐系统中的应用。
"
原问题: 介绍下第2点。
检索词: ["机器学习 自然语言处理 应用 论文","机器学习 自然语言处理 研究","机器学习 NLP 应用 论文","Machine Learning Natural Language Processing Application Paper","Machine Learning NLP Research"]
----------------
现在有历史记录:
"
{history}
"
及其原问题: {query}
直接给出最多{num}个检索词,必须以json形式给出,不得有多余字符:
"""

View File

@@ -77,7 +77,7 @@ def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbo
prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else ""
i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```'
i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}'
i_say_show_user = prefix + f'[{index+1}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}'
chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

View File

@@ -12,7 +12,7 @@ def 生成函数注释(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
file_content = f.read()
i_say = f'请对下面的程序文件做一个概述,并对文件中的所有函数生成注释,使用markdown表格输出结果,文件名是{os.path.relpath(fp, project_folder)},文件内容是 ```{file_content}```'
i_say_show_user = f'[{index}/{len(file_manifest)}] 请对下面的程序文件做一个概述,并对文件中的所有函数生成注释: {os.path.abspath(fp)}'
i_say_show_user = f'[{index+1}/{len(file_manifest)}] 请对下面的程序文件做一个概述,并对文件中的所有函数生成注释: {os.path.abspath(fp)}'
chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

View File

@@ -13,7 +13,7 @@ def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbo
prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else ""
i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```'
i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}'
i_say_show_user = prefix + f'[{index+1}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}'
chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

View File

@@ -3,6 +3,9 @@
# 从NVIDIA源,从而支持显卡(检查宿主的nvidia-smi中的cuda版本必须>=11.3)
FROM fuqingxu/11.3.1-runtime-ubuntu20.04-with-texlive:latest
# edge-tts需要的依赖,某些pip包所需的依赖
RUN apt update && apt install ffmpeg build-essential -y
# use python3 as the system default python
WORKDIR /gpt
RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.8
@@ -28,8 +31,6 @@ RUN python3 -m pip install -r request_llms/requirements_chatglm.txt
RUN python3 -m pip install -r request_llms/requirements_newbing.txt
RUN python3 -m pip install nougat-ocr
# edge-tts需要的依赖
RUN apt update && apt install ffmpeg -y
# 预热Tiktoken模块
RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'

View File

@@ -5,6 +5,9 @@
# 从NVIDIA源,从而支持显卡(检查宿主的nvidia-smi中的cuda版本必须>=11.3)
FROM fuqingxu/11.3.1-runtime-ubuntu20.04-with-texlive:latest
# edge-tts需要的依赖,某些pip包所需的依赖
RUN apt update && apt install ffmpeg build-essential -y
# use python3 as the system default python
WORKDIR /gpt
RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.8
@@ -36,8 +39,6 @@ RUN python3 -m pip install -r request_llms/requirements_chatglm.txt
RUN python3 -m pip install -r request_llms/requirements_newbing.txt
RUN python3 -m pip install nougat-ocr
# edge-tts需要的依赖
RUN apt update && apt install ffmpeg -y
# 预热Tiktoken模块
RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'

View File

@@ -5,6 +5,8 @@ RUN apt-get update
RUN apt-get install -y curl proxychains curl gcc
RUN apt-get install -y git python python3 python-dev python3-dev --fix-missing
# edge-tts需要的依赖,某些pip包所需的依赖
RUN apt update && apt install ffmpeg build-essential -y
# use python3 as the system default python
RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.8
@@ -21,8 +23,6 @@ RUN python3 -m pip install -r request_llms/requirements_qwen.txt
RUN python3 -m pip install -r request_llms/requirements_chatglm.txt
RUN python3 -m pip install -r request_llms/requirements_newbing.txt
# edge-tts需要的依赖
RUN apt update && apt install ffmpeg -y
# 预热Tiktoken模块
RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'

main.py (124 changed lines)
View File

@@ -24,13 +24,28 @@ def enable_log(PATH_LOGGING):
logging.getLogger("httpx").setLevel(logging.WARNING)
print(f"所有对话记录将自动保存在本地目录{log_dir}, 请注意自我隐私保护哦!")
def encode_plugin_info(k, plugin)->str:
import copy
from themes.theme import to_cookie_str
plugin_ = copy.copy(plugin)
plugin_.pop("Function", None)
plugin_.pop("Class", None)
plugin_.pop("Button", None)
plugin_["Info"] = plugin.get("Info", k)
if plugin.get("AdvancedArgs", False):
plugin_["Label"] = f"插件[{k}]的高级参数说明:" + plugin.get("ArgsReminder", f"没有提供高级参数功能说明")
else:
plugin_["Label"] = f"插件[{k}]不需要高级参数。"
return to_cookie_str(plugin_)
def main():
import gradio as gr
if gr.__version__ not in ['3.32.9', '3.32.10']:
if gr.__version__ not in ['3.32.9', '3.32.10', '3.32.11']:
raise ModuleNotFoundError("使用项目内置Gradio获取最优体验! 请运行 `pip install -r requirements.txt` 指令安装内置Gradio及其他依赖, 详情信息见requirements.txt.")
from request_llms.bridge_all import predict
from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf, ArgsGeneralWrapper, DummyWith
# 建议您复制一个config_private.py放自己的秘密, 如API和代理网址
# 读取配置
proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION = get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION')
CHATBOT_HEIGHT, LAYOUT, AVAIL_LLM_MODELS, AUTO_CLEAR_TXT = get_conf('CHATBOT_HEIGHT', 'LAYOUT', 'AVAIL_LLM_MODELS', 'AUTO_CLEAR_TXT')
ENABLE_AUDIO, AUTO_CLEAR_TXT, PATH_LOGGING, AVAIL_THEMES, THEME, ADD_WAIFU = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT', 'PATH_LOGGING', 'AVAIL_THEMES', 'THEME', 'ADD_WAIFU')
@@ -42,7 +57,7 @@ def main():
PORT = find_free_port() if WEB_PORT <= 0 else WEB_PORT
from check_proxy import get_current_version
from themes.theme import adjust_theme, advanced_css, theme_declaration, js_code_clear, js_code_reset, js_code_show_or_hide, js_code_show_or_hide_group2
from themes.theme import js_code_for_css_changing, js_code_for_toggle_darkmode, js_code_for_persistent_cookie_init
from themes.theme import js_code_for_toggle_darkmode, js_code_for_persistent_cookie_init
from themes.theme import load_dynamic_theme, to_cookie_str, from_cookie_str, assign_user_uuid
title_html = f"<h1 align=\"center\">GPT 学术优化 {get_current_version()}</h1>{theme_declaration}"
@@ -70,6 +85,7 @@ def main():
from check_proxy import check_proxy, auto_update, warm_up_modules
proxy_info = check_proxy(proxies)
# 切换布局
gr_L1 = lambda: gr.Row().style()
gr_L2 = lambda scale, elem_id: gr.Column(scale=scale, elem_id=elem_id, min_width=400)
if LAYOUT == "TOP-DOWN":
@@ -84,7 +100,7 @@ def main():
with gr.Blocks(title="GPT 学术优化", theme=set_theme, analytics_enabled=False, css=advanced_css) as app_block:
gr.HTML(title_html)
secret_css = gr.Textbox(visible=False, elem_id="secret_css")
register_advanced_plugin_init_code_arr = ""
register_advanced_plugin_init_arr = ""
cookies, web_cookie_cache = make_cookie_cache() # 定义 后端statecookies、前端web_cookie_cache两兄弟
with gr_L1():
@@ -96,8 +112,18 @@ def main():
with gr.Accordion("输入区", open=True, elem_id="input-panel") as area_input_primary:
with gr.Row():
txt = gr.Textbox(show_label=False, placeholder="Input question here.", elem_id='user_input_main').style(container=False)
with gr.Row():
submitBtn = gr.Button("提交", elem_id="elem_submit", variant="primary")
with gr.Row(elem_id="gpt-submit-row"):
multiplex_submit_btn = gr.Button("提交", elem_id="elem_submit_visible", variant="primary")
multiplex_sel = gr.Dropdown(
choices=[
"常规对话",
"多模型对话",
# "智能上下文",
# "智能召回 RAG",
], value="常规对话",
interactive=True, label='', show_label=False,
elem_classes='normal_mut_select', elem_id="gpt-submit-dropdown").style(container=False)
submit_btn = gr.Button("提交", elem_id="elem_submit", variant="primary", visible=False)
with gr.Row():
resetBtn = gr.Button("重置", elem_id="elem_reset", variant="secondary"); resetBtn.style(size="sm")
stopBtn = gr.Button("停止", elem_id="elem_stop", variant="secondary"); stopBtn.style(size="sm")
@@ -106,7 +132,7 @@ def main():
with gr.Row():
audio_mic = gr.Audio(source="microphone", type="numpy", elem_id="elem_audio", streaming=True, show_label=False).style(container=False)
with gr.Row():
status = gr.Markdown(f"Tip: 按Enter提交, 按Shift+Enter换行。当前模型: {LLM_MODEL} \n {proxy_info}", elem_id="state-panel")
status = gr.Markdown(f"Tip: 按Enter提交, 按Shift+Enter换行。支持将文件直接粘贴到输入区。", elem_id="state-panel")
with gr.Accordion("基础功能区", open=True, elem_id="basic-panel") as area_basic_fn:
with gr.Row():
@@ -127,13 +153,15 @@ def main():
plugin_group_sel = gr.Dropdown(choices=all_plugin_groups, label='', show_label=False, value=DEFAULT_FN_GROUPS,
multiselect=True, interactive=True, elem_classes='normal_mut_select').style(container=False)
with gr.Row():
for k, plugin in plugins.items():
for index, (k, plugin) in enumerate(plugins.items()):
if not plugin.get("AsButton", True): continue
visible = True if match_group(plugin['Group'], DEFAULT_FN_GROUPS) else False
variant = plugins[k]["Color"] if "Color" in plugin else "secondary"
info = plugins[k].get("Info", k)
btn_elem_id = f"plugin_btn_{index}"
plugin['Button'] = plugins[k]['Button'] = gr.Button(k, variant=variant,
visible=visible, info_str=f'函数插件区: {info}').style(size="sm")
visible=visible, info_str=f'函数插件区: {info}', elem_id=btn_elem_id).style(size="sm")
plugin['ButtonElemId'] = btn_elem_id
with gr.Row():
with gr.Accordion("更多函数插件", open=True):
dropdown_fn_list = []
@@ -142,24 +170,27 @@ def main():
if not plugin.get("AsButton", True): dropdown_fn_list.append(k) # 排除已经是按钮的插件
elif plugin.get('AdvancedArgs', False): dropdown_fn_list.append(k) # 对于需要高级参数的插件,亦在下拉菜单中显示
with gr.Row():
dropdown = gr.Dropdown(dropdown_fn_list, value=r"点击这里搜索插件列表", label="", show_label=False).style(container=False)
dropdown = gr.Dropdown(dropdown_fn_list, value=r"点击这里输入「关键词」搜索插件", label="", show_label=False).style(container=False)
with gr.Row():
plugin_advanced_arg = gr.Textbox(show_label=True, label="高级参数输入区", visible=False, elem_id="advance_arg_input_legacy",
placeholder="这里是特殊函数插件的高级参数输入区").style(container=False)
with gr.Row():
switchy_bt = gr.Button(r"请先从插件列表中选择", variant="secondary").style(size="sm")
switchy_bt = gr.Button(r"请先从插件列表中选择", variant="secondary", elem_id="elem_switchy_bt").style(size="sm")
with gr.Row():
with gr.Accordion("点击展开“文件下载区”。", open=False) as area_file_up:
file_upload = gr.Files(label="任何文件, 推荐上传压缩文件(zip, tar)", file_count="multiple", elem_id="elem_upload")
# 左上角工具栏定义
from themes.gui_toolbar import define_gui_toolbar
checkboxes, checkboxes_2, max_length_sl, theme_dropdown, system_prompt, file_upload_2, md_dropdown, top_p, temperature = \
define_gui_toolbar(AVAIL_LLM_MODELS, LLM_MODEL, INIT_SYS_PROMPT, THEME, AVAIL_THEMES, ADD_WAIFU, help_menu_description, js_code_for_toggle_darkmode)
# 浮动菜单定义
from themes.gui_floating_menu import define_gui_floating_menu
area_input_secondary, txt2, area_customize, submitBtn2, resetBtn2, clearBtn2, stopBtn2 = \
area_input_secondary, txt2, area_customize, _, resetBtn2, clearBtn2, stopBtn2 = \
define_gui_floating_menu(customize_btns, functional, predefined_btns, cookies, web_cookie_cache)
# 插件二级菜单的实现
from themes.gui_advanced_plugin_class import define_gui_advanced_plugin_class
new_plugin_callback, route_switchy_bt_with_arg, usr_confirmed_arg = \
define_gui_advanced_plugin_class(plugins)
@@ -188,11 +219,15 @@ def main():
input_combo_order = ["cookies", "max_length_sl", "md_dropdown", "txt", "txt2", "top_p", "temperature", "chatbot", "history", "system_prompt", "plugin_advanced_arg"]
output_combo = [cookies, chatbot, history, status]
predict_args = dict(fn=ArgsGeneralWrapper(predict), inputs=[*input_combo, gr.State(True)], outputs=output_combo)
# 提交按钮、重置按钮
cancel_handles.append(txt.submit(**predict_args))
cancel_handles.append(txt2.submit(**predict_args))
cancel_handles.append(submitBtn.click(**predict_args))
cancel_handles.append(submitBtn2.click(**predict_args))
multiplex_submit_btn.click(
None, [multiplex_sel], None, _js="""(multiplex_sel)=>multiplex_function_begin(multiplex_sel)""")
txt.submit(
None, [multiplex_sel], None, _js="""(multiplex_sel)=>multiplex_function_begin(multiplex_sel)""")
multiplex_sel.select(
None, [multiplex_sel], None, _js=f"""(multiplex_sel)=>run_multiplex_shift(multiplex_sel)""")
cancel_handles.append(submit_btn.click(**predict_args))
resetBtn.click(None, None, [chatbot, history, status], _js=js_code_reset) # 先在前端快速清除chatbot&status
resetBtn2.click(None, None, [chatbot, history, status], _js=js_code_reset) # 先在前端快速清除chatbot&status
reset_server_side_args = (lambda history: ([], [], "已重置", json.dumps(history)), [history], [chatbot, history, status, history_cache])
@@ -201,10 +236,7 @@ def main():
clearBtn.click(None, None, [txt, txt2], _js=js_code_clear)
clearBtn2.click(None, None, [txt, txt2], _js=js_code_clear)
if AUTO_CLEAR_TXT:
submitBtn.click(None, None, [txt, txt2], _js=js_code_clear)
submitBtn2.click(None, None, [txt, txt2], _js=js_code_clear)
txt.submit(None, None, [txt, txt2], _js=js_code_clear)
txt2.submit(None, None, [txt, txt2], _js=js_code_clear)
submit_btn.click(None, None, [txt, txt2], _js=js_code_clear)
# 基础功能区的回调函数注册
for k in functional:
if ("Visible" in functional[k]) and (not functional[k]["Visible"]): continue
@@ -218,34 +250,26 @@ def main():
file_upload_2.upload(on_file_uploaded, [file_upload_2, chatbot, txt, txt2, checkboxes, cookies], [chatbot, txt, txt2, cookies]).then(None, None, None, _js=r"()=>{toast_push('上传完毕 ...'); cancel_loading_status();}")
# 函数插件-固定按钮区
for k in plugins:
register_advanced_plugin_init_arr += f"""register_plugin_init("{k}","{encode_plugin_info(k, plugins[k])}");"""
if plugins[k].get("Class", None):
plugins[k]["JsMenu"] = plugins[k]["Class"]().get_js_code_for_generating_menu(k)
register_advanced_plugin_init_code_arr += """register_advanced_plugin_init_code("{k}","{gui_js}");""".format(k=k, gui_js=plugins[k]["JsMenu"])
register_advanced_plugin_init_arr += """register_advanced_plugin_init_code("{k}","{gui_js}");""".format(k=k, gui_js=plugins[k]["JsMenu"])
if not plugins[k].get("AsButton", True): continue
if plugins[k].get("Class", None) is None:
assert plugins[k].get("Function", None) is not None
click_handle = plugins[k]["Button"].click(ArgsGeneralWrapper(plugins[k]["Function"]), [*input_combo], output_combo)
click_handle.then(on_report_generated, [cookies, file_upload, chatbot], [cookies, file_upload, chatbot]).then(None, [plugins[k]["Button"]], None, _js=r"(fn)=>on_plugin_exe_complete(fn)")
cancel_handles.append(click_handle)
click_handle = plugins[k]["Button"].click(None, inputs=[], outputs=None, _js=f"""()=>run_classic_plugin_via_id("{plugins[k]["ButtonElemId"]}")""")
else:
click_handle = plugins[k]["Button"].click(None, inputs=[], outputs=None, _js=f"""()=>run_advanced_plugin_launch_code("{k}")""")
# 函数插件-下拉菜单与随变按钮的互动
def on_dropdown_changed(k):
variant = plugins[k]["Color"] if "Color" in plugins[k] else "secondary"
info = plugins[k].get("Info", k)
ret = {switchy_bt: gr.update(value=k, variant=variant, info_str=f'函数插件区: {info}')}
if plugins[k].get("AdvancedArgs", False): # 是否唤起高级插件参数区
ret.update({plugin_advanced_arg: gr.update(visible=True, label=f"插件[{k}]的高级参数说明:" + plugins[k].get("ArgsReminder", [f"没有提供高级参数功能说明"]))})
else:
ret.update({plugin_advanced_arg: gr.update(visible=False, label=f"插件[{k}]不需要高级参数。")})
return ret
dropdown.select(on_dropdown_changed, [dropdown], [switchy_bt, plugin_advanced_arg] )
# 函数插件-下拉菜单与随变按钮的互动(新版-更流畅)
dropdown.select(None, [dropdown], None, _js=f"""(dropdown)=>run_dropdown_shift(dropdown)""")
# 模型切换时的回调
def on_md_dropdown_changed(k):
return {chatbot: gr.update(label="当前模型:"+k)}
md_dropdown.select(on_md_dropdown_changed, [md_dropdown], [chatbot] )
md_dropdown.select(on_md_dropdown_changed, [md_dropdown], [chatbot])
# 主题修改
def on_theme_dropdown_changed(theme, secret_css):
adjust_theme, css_part1, _, adjust_dynamic_theme = load_dynamic_theme(theme)
if adjust_dynamic_theme:
@@ -253,15 +277,8 @@ def main():
else:
css_part2 = adjust_theme()._get_theme_css()
return css_part2 + css_part1
theme_handle = theme_dropdown.select(on_theme_dropdown_changed, [theme_dropdown, secret_css], [secret_css])
theme_handle.then(
None,
[secret_css],
None,
_js=js_code_for_css_changing
)
theme_handle = theme_dropdown.select(on_theme_dropdown_changed, [theme_dropdown, secret_css], [secret_css]) # , _js="""change_theme_prepare""")
theme_handle.then(None, [theme_dropdown, secret_css], None, _js="""change_theme""")
switchy_bt.click(None, [switchy_bt], None, _js="(switchy_bt)=>on_flex_button_click(switchy_bt)")
# 随变按钮的回调函数注册
@@ -276,9 +293,10 @@ def main():
click_handle_ng.then(on_report_generated, [cookies, file_upload, chatbot], [cookies, file_upload, chatbot]).then(None, [switchy_bt], None, _js=r"(fn)=>on_plugin_exe_complete(fn)")
cancel_handles.append(click_handle_ng)
# 新一代插件的高级参数区确认按钮(隐藏)
click_handle_ng = new_plugin_callback.click(route_switchy_bt_with_arg, [
gr.State(["new_plugin_callback", "usr_confirmed_arg"] + input_combo_order),
new_plugin_callback, usr_confirmed_arg, *input_combo
click_handle_ng = new_plugin_callback.click(route_switchy_bt_with_arg,
[
gr.State(["new_plugin_callback", "usr_confirmed_arg"] + input_combo_order), # 第一个参数: 指定了后续参数的名称
new_plugin_callback, usr_confirmed_arg, *input_combo # 后续参数: 真正的参数
], output_combo)
click_handle_ng.then(on_report_generated, [cookies, file_upload, chatbot], [cookies, file_upload, chatbot]).then(None, [switchy_bt], None, _js=r"(fn)=>on_plugin_exe_complete(fn)")
cancel_handles.append(click_handle_ng)
@@ -298,6 +316,8 @@ def main():
elif match_group(plugin['Group'], group_list): fns_list.append(k) # 刷新下拉列表
return [*btn_list, gr.Dropdown.update(choices=fns_list)]
plugin_group_sel.select(fn=on_group_change, inputs=[plugin_group_sel], outputs=[*[plugin['Button'] for name, plugin in plugins_as_btn.items()], dropdown])
# 是否启动语音输入功能
if ENABLE_AUDIO:
from crazy_functions.live_audio.audio_io import RealtimeAudioDistribution
rad = RealtimeAudioDistribution()
@@ -305,18 +325,18 @@ def main():
rad.feed(cookies['uuid'].hex, audio)
audio_mic.stream(deal_audio, inputs=[audio_mic, cookies])
# 生成当前浏览器窗口的uuid(刷新后失效)
app_block.load(assign_user_uuid, inputs=[cookies], outputs=[cookies])
# 初始化(前端)
from shared_utils.cookie_manager import load_web_cookie_cache__fn_builder
load_web_cookie_cache = load_web_cookie_cache__fn_builder(customize_btns, cookies, predefined_btns)
app_block.load(load_web_cookie_cache, inputs = [web_cookie_cache, cookies],
outputs = [web_cookie_cache, cookies, *customize_btns.values(), *predefined_btns.values()], _js=js_code_for_persistent_cookie_init)
app_block.load(None, inputs=[], outputs=None, _js=f"""()=>GptAcademicJavaScriptInit("{DARK_MODE}","{INIT_SYS_PROMPT}","{ADD_WAIFU}","{LAYOUT}","{TTS_TYPE}")""") # 配置暗色主题或亮色主题
app_block.load(None, inputs=[], outputs=None, _js="""()=>{REP}""".replace("REP", register_advanced_plugin_init_code_arr))
app_block.load(None, inputs=[], outputs=None, _js="""()=>{REP}""".replace("REP", register_advanced_plugin_init_arr))
# gradio的inbrowser触发不太稳定回滚代码到原始的浏览器打开函数
# Gradio的inbrowser触发不太稳定回滚代码到原始的浏览器打开函数
def run_delayed_tasks():
import threading, webbrowser, time
print(f"如果浏览器没有自动打开请复制并转到以下URL")

View File

@@ -34,6 +34,9 @@ from .bridge_google_gemini import predict_no_ui_long_connection as genai_noui
from .bridge_zhipu import predict_no_ui_long_connection as zhipu_noui
from .bridge_zhipu import predict as zhipu_ui
from .bridge_taichu import predict_no_ui_long_connection as taichu_noui
from .bridge_taichu import predict as taichu_ui
from .bridge_cohere import predict as cohere_ui
from .bridge_cohere import predict_no_ui_long_connection as cohere_noui
@@ -116,6 +119,15 @@ model_info = {
"token_cnt": get_token_num_gpt35,
},
"taichu": {
"fn_with_ui": taichu_ui,
"fn_without_ui": taichu_noui,
"endpoint": openai_endpoint,
"max_token": 4096,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"gpt-3.5-turbo-16k": {
"fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui,
@@ -183,6 +195,17 @@ model_info = {
"fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui,
"endpoint": openai_endpoint,
"has_multimodal_capacity": True,
"max_token": 128000,
"tokenizer": tokenizer_gpt4,
"token_cnt": get_token_num_gpt4,
},
"gpt-4o-mini": {
"fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui,
"endpoint": openai_endpoint,
"has_multimodal_capacity": True,
"max_token": 128000,
"tokenizer": tokenizer_gpt4,
"token_cnt": get_token_num_gpt4,
@@ -191,6 +214,7 @@ model_info = {
"gpt-4o-2024-05-13": {
"fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui,
"has_multimodal_capacity": True,
"endpoint": openai_endpoint,
"max_token": 128000,
"tokenizer": tokenizer_gpt4,
@@ -227,6 +251,7 @@ model_info = {
"gpt-4-turbo": {
"fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui,
"has_multimodal_capacity": True,
"endpoint": openai_endpoint,
"max_token": 128000,
"tokenizer": tokenizer_gpt4,
@@ -236,13 +261,13 @@ model_info = {
"gpt-4-turbo-2024-04-09": {
"fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui,
"has_multimodal_capacity": True,
"endpoint": openai_endpoint,
"max_token": 128000,
"tokenizer": tokenizer_gpt4,
"token_cnt": get_token_num_gpt4,
},
"gpt-3.5-random": {
"fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui,
@@ -459,7 +484,7 @@ for model in AVAIL_LLM_MODELS:
# -=-=-=-=-=-=- 以下部分是新加入的模型,可能附带额外依赖 -=-=-=-=-=-=-
# claude家族
claude_models = ["claude-instant-1.2","claude-2.0","claude-2.1","claude-3-haiku-20240307","claude-3-sonnet-20240229","claude-3-opus-20240229"]
claude_models = ["claude-instant-1.2","claude-2.0","claude-2.1","claude-3-haiku-20240307","claude-3-sonnet-20240229","claude-3-opus-20240229","claude-3-5-sonnet-20240620"]
if any(item in claude_models for item in AVAIL_LLM_MODELS):
from .bridge_claude import predict_no_ui_long_connection as claude_noui
from .bridge_claude import predict as claude_ui
@@ -523,6 +548,16 @@ if any(item in claude_models for item in AVAIL_LLM_MODELS):
"token_cnt": get_token_num_gpt35,
},
})
model_info.update({
"claude-3-5-sonnet-20240620": {
"fn_with_ui": claude_ui,
"fn_without_ui": claude_noui,
"endpoint": claude_endpoint,
"max_token": 200000,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
})
if "jittorllms_rwkv" in AVAIL_LLM_MODELS:
from .bridge_jittorllms_rwkv import predict_no_ui_long_connection as rwkv_noui
from .bridge_jittorllms_rwkv import predict as rwkv_ui
@@ -844,6 +879,15 @@ if "sparkv3" in AVAIL_LLM_MODELS or "sparkv3.5" in AVAIL_LLM_MODELS: # 讯飞
"max_token": 4096,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"sparkv4":{
"fn_with_ui": spark_ui,
"fn_without_ui": spark_noui,
"can_multi_thread": True,
"endpoint": None,
"max_token": 4096,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
}
})
except:
@@ -932,12 +976,13 @@ for model in [m for m in AVAIL_LLM_MODELS if m.startswith("one-api-")]:
# "mixtral-8x7b" 是模型名(必要)
# "(max_token=6666)" 是配置(非必要)
try:
_, max_token_tmp = read_one_api_model_name(model)
origin_model_name, max_token_tmp = read_one_api_model_name(model)
# 如果是已知模型,则尝试获取其信息
original_model_info = model_info.get(origin_model_name.replace("one-api-", "", 1), None)
except:
print(f"one-api模型 {model} 的 max_token 配置不是整数,请检查配置文件。")
continue
model_info.update({
model: {
this_model_info = {
"fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui,
"can_multi_thread": True,
@@ -945,8 +990,17 @@ for model in [m for m in AVAIL_LLM_MODELS if m.startswith("one-api-")]:
"max_token": max_token_tmp,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
})
}
# 同步已知模型的其他信息
attribute = "has_multimodal_capacity"
if original_model_info is not None and original_model_info.get(attribute, None) is not None: this_model_info.update({attribute: original_model_info.get(attribute, None)})
# attribute = "attribute2"
# if original_model_info is not None and original_model_info.get(attribute, None) is not None: this_model_info.update({attribute: original_model_info.get(attribute, None)})
# attribute = "attribute3"
# if original_model_info is not None and original_model_info.get(attribute, None) is not None: this_model_info.update({attribute: original_model_info.get(attribute, None)})
model_info.update({model: this_model_info})
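# Sketch of the parsing contract implied above (the real helper lives in toolbox):
#   read_one_api_model_name("one-api-mixtral-8x7b(max_token=6666)")
#     -> ("one-api-mixtral-8x7b", 6666)
# origin_model_name.replace("one-api-", "", 1) -> "mixtral-8x7b", which is then used to
# look up a known entry in model_info and copy attributes such as has_multimodal_capacity.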
# -=-=-=-=-=-=- vllm 对齐支持 -=-=-=-=-=-=-
for model in [m for m in AVAIL_LLM_MODELS if m.startswith("vllm-")]:
# 为了更灵活地接入vllm多模型管理界面设计了此接口例子AVAIL_LLM_MODELS = ["vllm-/home/hmp/llm/cache/Qwen1___5-32B-Chat(max_token=6666)"]

View File

@@ -1,5 +1,3 @@
# 借鉴了 https://github.com/GaiZhenbiao/ChuanhuChatGPT 项目
"""
该文件中主要包含三个函数
@@ -11,19 +9,19 @@
"""
import json
import os
import re
import time
import gradio as gr
import logging
import traceback
import requests
import importlib
import random
# config_private.py放自己的秘密,如API和代理网址
# 读取时首先看是否存在私密的config_private配置文件(不受git管控),如果有,则覆盖原config文件
from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history
from toolbox import trimmed_format_exc, is_the_upload_folder, read_one_api_model_name, log_chat
from toolbox import ChatBotWithCookies
from toolbox import ChatBotWithCookies, have_any_recent_upload_image_files, encode_image
proxies, TIMEOUT_SECONDS, MAX_RETRY, API_ORG, AZURE_CFG_ARRAY = \
get_conf('proxies', 'TIMEOUT_SECONDS', 'MAX_RETRY', 'API_ORG', 'AZURE_CFG_ARRAY')
@@ -41,6 +39,57 @@ def get_full_error(chunk, stream_response):
break
return chunk
def make_multimodal_input(inputs, image_paths):
image_base64_array = []
for image_path in image_paths:
path = os.path.abspath(image_path)
base64 = encode_image(path)
inputs = inputs + f'<br/><br/><div align="center"><img src="file={path}" base64="{base64}"></div>'
image_base64_array.append(base64)
return inputs, image_base64_array
def reverse_base64_from_input(inputs):
# 定义一个正则表达式来匹配 Base64 字符串(假设格式为 base64="<Base64编码>")
# pattern = re.compile(r'base64="([^"]+)"></div>')
pattern = re.compile(r'<br/><br/><div align="center"><img[^<>]+base64="([^"]+)"></div>')
# 使用 findall 方法查找所有匹配的 Base64 字符串
base64_strings = pattern.findall(inputs)
# 返回反转后的 Base64 字符串列表
return base64_strings
def contain_base64(inputs):
base64_strings = reverse_base64_from_input(inputs)
return len(base64_strings) > 0
def append_image_if_contain_base64(inputs):
if not contain_base64(inputs):
return inputs
else:
image_base64_array = reverse_base64_from_input(inputs)
pattern = re.compile(r'<br/><br/><div align="center"><img[^><]+></div>')
inputs = re.sub(pattern, '', inputs)
res = []
res.append({
"type": "text",
"text": inputs
})
for image_base64 in image_base64_array:
res.append({
"type": "image_url",
"image_url": {
"url": f"data:image/jpeg;base64,{image_base64}"
}
})
return res
def remove_image_if_contain_base64(inputs):
if not contain_base64(inputs):
return inputs
else:
pattern = re.compile(r'<br/><br/><div align="center"><img[^><]+></div>')
inputs = re.sub(pattern, '', inputs)
return inputs
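# Round-trip illustration of the helpers above (hypothetical values):
#   inputs, arr = make_multimodal_input("描述这张图", ["cat.jpg"])
#     -> inputs gains '<br/><br/><div align="center"><img src="file=..." base64="..."></div>'
#   reverse_base64_from_input(inputs)      -> arr (the embedded base64 strings, recovered)
#   remove_image_if_contain_base64(inputs) -> "描述这张图" (the <img> block stripped)
#   append_image_if_contain_base64(inputs) -> [{"type": "text", ...}, {"type": "image_url", ...}]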
def decode_chunk(chunk):
# 提前读取一些信息 (用于判断异常)
chunk_decoded = chunk.decode()
@@ -159,6 +208,7 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
chatbot 为WebUI中显示的对话列表,修改它然后yield出去,可以直接修改对话界面内容
additional_fn代表点击的哪个按钮,按钮见functional.py
"""
from .bridge_all import model_info
if is_any_api_key(inputs):
chatbot._cookies['api_key'] = inputs
chatbot.append(("输入已识别为openai的api_key", what_keys(inputs)))
@@ -174,7 +224,17 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
from core_functional import handle_core_functionality
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
chatbot.append((inputs, ""))
# 多模态模型
has_multimodal_capacity = model_info[llm_kwargs['llm_model']].get('has_multimodal_capacity', False)
if has_multimodal_capacity:
has_recent_image_upload, image_paths = have_any_recent_upload_image_files(chatbot, pop=True)
else:
has_recent_image_upload, image_paths = False, []
if has_recent_image_upload:
_inputs, image_base64_array = make_multimodal_input(inputs, image_paths)
else:
_inputs, image_base64_array = inputs, []
chatbot.append((_inputs, ""))
yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面
# check mis-behavior
@@ -184,7 +244,7 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
time.sleep(2)
try:
headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt, stream)
headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt, image_base64_array, has_multimodal_capacity, stream)
except RuntimeError as e:
chatbot[-1] = (inputs, f"您提供的api-key不满足要求不包含任何可用于{llm_kwargs['llm_model']}的api-key。您可能选择了错误的模型或请求源。")
yield from update_ui(chatbot=chatbot, history=history, msg="api-key不满足要求") # 刷新界面
@@ -192,7 +252,6 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
# 检查endpoint是否合法
try:
from .bridge_all import model_info
endpoint = verify_endpoint(model_info[llm_kwargs['llm_model']]['endpoint'])
except:
tb_str = '```\n' + trimmed_format_exc() + '```'
@@ -200,7 +259,11 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
yield from update_ui(chatbot=chatbot, history=history, msg="Endpoint不满足要求") # 刷新界面
return
history.append(inputs); history.append("")
# 加入历史
if has_recent_image_upload:
history.extend([_inputs, ""])
else:
history.extend([inputs, ""])
retry = 0
while True:
@@ -314,7 +377,7 @@ def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk_decoded)}")
return chatbot, history
def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
def generate_payload(inputs:str, llm_kwargs:dict, history:list, system_prompt:str, image_base64_array:list=[], has_multimodal_capacity:bool=False, stream:bool=True):
"""
整合所有信息,选择LLM模型,生成http请求,为发送请求做准备
"""
@@ -337,17 +400,27 @@ def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
azure_api_key_unshared = AZURE_CFG_ARRAY[llm_kwargs['llm_model']]["AZURE_API_KEY"]
headers.update({"api-key": azure_api_key_unshared})
conversation_cnt = len(history) // 2
if has_multimodal_capacity:
# 当以下条件满足时,启用多模态能力:
# 1. 模型本身是多模态模型(has_multimodal_capacity)
# 2. 输入包含图像(len(image_base64_array) > 0)
# 3. 或历史输入包含图像(any([contain_base64(h) for h in history]))
enable_multimodal_capacity = (len(image_base64_array) > 0) or any([contain_base64(h) for h in history])
else:
enable_multimodal_capacity = False
if not enable_multimodal_capacity:
# 不使用多模态能力
conversation_cnt = len(history) // 2
messages = [{"role": "system", "content": system_prompt}]
if conversation_cnt:
for index in range(0, 2*conversation_cnt, 2):
what_i_have_asked = {}
what_i_have_asked["role"] = "user"
what_i_have_asked["content"] = history[index]
what_i_have_asked["content"] = remove_image_if_contain_base64(history[index])
what_gpt_answer = {}
what_gpt_answer["role"] = "assistant"
what_gpt_answer["content"] = history[index+1]
what_gpt_answer["content"] = remove_image_if_contain_base64(history[index+1])
if what_i_have_asked["content"] != "":
if what_gpt_answer["content"] == "": continue
if what_gpt_answer["content"] == timeout_bot_msg: continue
@@ -355,11 +428,46 @@ def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
messages.append(what_gpt_answer)
else:
messages[-1]['content'] = what_gpt_answer['content']
what_i_ask_now = {}
what_i_ask_now["role"] = "user"
what_i_ask_now["content"] = inputs
messages.append(what_i_ask_now)
else:
# 多模态能力
conversation_cnt = len(history) // 2
messages = [{"role": "system", "content": system_prompt}]
if conversation_cnt:
for index in range(0, 2*conversation_cnt, 2):
what_i_have_asked = {}
what_i_have_asked["role"] = "user"
what_i_have_asked["content"] = append_image_if_contain_base64(history[index])
what_gpt_answer = {}
what_gpt_answer["role"] = "assistant"
what_gpt_answer["content"] = append_image_if_contain_base64(history[index+1])
if what_i_have_asked["content"] != "":
if what_gpt_answer["content"] == "": continue
if what_gpt_answer["content"] == timeout_bot_msg: continue
messages.append(what_i_have_asked)
messages.append(what_gpt_answer)
else:
messages[-1]['content'] = what_gpt_answer['content']
what_i_ask_now = {}
what_i_ask_now["role"] = "user"
what_i_ask_now["content"] = []
what_i_ask_now["content"].append({
"type": "text",
"text": inputs
})
for image_base64 in image_base64_array:
what_i_ask_now["content"].append({
"type": "image_url",
"image_url": {
"url": f"data:image/jpeg;base64,{image_base64}"
}
})
messages.append(what_i_ask_now)
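# Example of the multimodal user message assembled above (OpenAI vision-style content list):
# what_i_ask_now = {
#     "role": "user",
#     "content": [
#         {"type": "text", "text": "这张图里有什么?"},
#         {"type": "image_url", "image_url": {"url": "data:image/jpeg;base64,/9j/4AAQ..."}},
#     ],
# }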
model = llm_kwargs['llm_model']
if llm_kwargs['llm_model'].startswith('api2d-'):
model = llm_kwargs['llm_model'][len('api2d-'):]
@@ -388,10 +496,10 @@ def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
"n": 1,
"stream": stream,
}
try:
print(f" {llm_kwargs['llm_model']} : {conversation_cnt} : {inputs[:100]} ..........")
except:
print('输入中可能存在乱码。')
# try:
# print(f" {llm_kwargs['llm_model']} : {conversation_cnt} : {inputs[:100]} ..........")
# except:
# print('输入中可能存在乱码。')
return headers,payload

View File

@@ -27,10 +27,8 @@ timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check
def report_invalid_key(key):
if get_conf("BLOCK_INVALID_APIKEY"):
# 实验性功能:自动检测并屏蔽失效的KEY,请勿使用
from request_llms.key_manager import ApiKeyManager
api_key = ApiKeyManager().add_key_to_blacklist(key)
# 弃用功能
return
def get_full_error(chunk, stream_response):
"""

View File

@@ -17,7 +17,7 @@ import json
import requests
from toolbox import get_conf, update_ui, trimmed_format_exc, encode_image, every_image_file_in_path, log_chat
picture_system_prompt = "\n当回复图像时,必须说明正在回复哪张图像。所有图像仅在最后一个问题中提供,即使它们在历史记录中被提及。请使用'这是第X张图像:'的格式来指明您正在描述的是哪张图像。"
Claude_3_Models = ["claude-3-haiku-20240307", "claude-3-sonnet-20240229", "claude-3-opus-20240229"]
Claude_3_Models = ["claude-3-haiku-20240307", "claude-3-sonnet-20240229", "claude-3-opus-20240229", "claude-3-5-sonnet-20240620"]
# config_private.py放自己的秘密,如API和代理网址
# 读取时首先看是否存在私密的config_private配置文件(不受git管控),如果有,则覆盖原config文件

View File

@@ -1,7 +1,7 @@
import time
import os
from toolbox import update_ui, get_conf, update_ui_lastest_msg
from toolbox import check_packages, report_exception
from toolbox import check_packages, report_exception, log_chat
model_name = 'Qwen'
@@ -59,6 +59,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
chatbot[-1] = (inputs, response)
yield from update_ui(chatbot=chatbot, history=history)
log_chat(llm_model=llm_kwargs["llm_model"], input_str=inputs, output_str=response)
# 总结输出
if response == f"[Local Message] 等待{model_name}响应中 ...":
response = f"[Local Message] {model_name}响应异常 ..."

View File

@@ -0,0 +1,72 @@
import time
import os
from toolbox import update_ui, get_conf, update_ui_lastest_msg, log_chat
from toolbox import check_packages, report_exception, have_any_recent_upload_image_files
from toolbox import ChatBotWithCookies
# model_name = 'Taichu-2.0'
# taichu_default_model = 'taichu_llm'
def validate_key():
TAICHU_API_KEY = get_conf("TAICHU_API_KEY")
if TAICHU_API_KEY == '': return False
return True
def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
observe_window:list=[], console_slience:bool=False):
"""
⭐多线程方法
函数的说明请见 request_llms/bridge_all.py
"""
watch_dog_patience = 5
response = ""
# if llm_kwargs["llm_model"] == "taichu":
# llm_kwargs["llm_model"] = "taichu"
if validate_key() is False:
raise RuntimeError('请配置 TAICHU_API_KEY')
# 开始接收回复
from .com_taichu import TaichuChatInit
taichu_chat_init = TaichuChatInit()
for chunk, response in taichu_chat_init.generate_chat(inputs, llm_kwargs, history, sys_prompt):
if len(observe_window) >= 1:
observe_window[0] = response
if len(observe_window) >= 2:
if (time.time() - observe_window[1]) > watch_dog_patience:
raise RuntimeError("程序终止。")
return response
def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWithCookies,
history:list=[], system_prompt:str='', stream:bool=True, additional_fn:str=None):
"""
⭐单线程方法
函数的说明请见 request_llms/bridge_all.py
"""
chatbot.append([inputs, ""])
yield from update_ui(chatbot=chatbot, history=history)
if validate_key() is False:
yield from update_ui_lastest_msg(lastmsg="[Local Message] 请配置TAICHU_API_KEY", chatbot=chatbot, history=history, delay=0)
return
if additional_fn is not None:
from core_functional import handle_core_functionality
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
chatbot[-1] = [inputs, ""]
yield from update_ui(chatbot=chatbot, history=history)
# if llm_kwargs["llm_model"] == "taichu":
# llm_kwargs["llm_model"] = taichu_default_model
# 开始接收回复
from .com_taichu import TaichuChatInit
taichu_chat_init = TaichuChatInit()
for chunk, response in taichu_chat_init.generate_chat(inputs, llm_kwargs, history, system_prompt):
chatbot[-1] = [inputs, response]
yield from update_ui(chatbot=chatbot, history=history)
history.extend([inputs, response])
log_chat(llm_model=llm_kwargs["llm_model"], input_str=inputs, output_str=response)
yield from update_ui(chatbot=chatbot, history=history)

View File

@@ -65,8 +65,12 @@ class QwenRequestInstance():
self.result_buf += f"[Local Message] 请求错误:状态码:{response.status_code},错误码:{response.code},消息:{response.message}"
yield self.result_buf
break
logging.info(f'[raw_input] {inputs}')
logging.info(f'[response] {self.result_buf}')
# 耗尽generator避免报错
while True:
try: next(responses)
except: break
return self.result_buf

View File

@@ -67,6 +67,7 @@ class SparkRequestInstance():
self.gpt_url_v3 = "ws://spark-api.xf-yun.com/v3.1/chat"
self.gpt_url_v35 = "wss://spark-api.xf-yun.com/v3.5/chat"
self.gpt_url_img = "wss://spark-api.cn-huabei-1.xf-yun.com/v2.1/image"
self.gpt_url_v4 = "wss://spark-api.xf-yun.com/v4.0/chat"
self.time_to_yield_event = threading.Event()
self.time_to_exit_event = threading.Event()
@@ -94,6 +95,8 @@ class SparkRequestInstance():
gpt_url = self.gpt_url_v3
elif llm_kwargs['llm_model'] == 'sparkv3.5':
gpt_url = self.gpt_url_v35
elif llm_kwargs['llm_model'] == 'sparkv4':
gpt_url = self.gpt_url_v4
else:
gpt_url = self.gpt_url
file_manifest = []
@@ -194,6 +197,7 @@ def gen_params(appid, inputs, llm_kwargs, history, system_prompt, file_manifest)
"sparkv2": "generalv2",
"sparkv3": "generalv3",
"sparkv3.5": "generalv3.5",
"sparkv4": "4.0Ultra"
}
domains_select = domains[llm_kwargs['llm_model']]
if file_manifest: domains_select = 'image'

View File

@@ -0,0 +1,56 @@
# encoding: utf-8
# @Time : 2024/1/22
# @Author : Kilig947 & binary husky
# @Descr : 接入太初(Taichu)大模型API
from toolbox import get_conf
from toolbox import get_conf, encode_image, get_pictures_list
import logging, os, requests
import json
class TaichuChatInit:
def __init__(self): ...
def __conversation_user(self, user_input: str, llm_kwargs:dict):
return {"role": "user", "content": user_input}
def __conversation_history(self, history:list, llm_kwargs:dict):
messages = []
conversation_cnt = len(history) // 2
if conversation_cnt:
for index in range(0, 2 * conversation_cnt, 2):
what_i_have_asked = self.__conversation_user(history[index], llm_kwargs)
what_gpt_answer = {
"role": "assistant",
"content": history[index + 1]
}
messages.append(what_i_have_asked)
messages.append(what_gpt_answer)
return messages
def generate_chat(self, inputs:str, llm_kwargs:dict, history:list, system_prompt:str):
TAICHU_API_KEY = get_conf("TAICHU_API_KEY")
params = {
'api_key': TAICHU_API_KEY,
'model_code': 'taichu_llm',
'question': '\n\n'.join(history) + inputs,
'prefix': system_prompt,
'temperature': llm_kwargs.get('temperature', 0.95),
'stream_format': 'json'
}
api = 'https://ai-maas.wair.ac.cn/maas/v1/model_api/invoke'
response = requests.post(api, json=params, stream=True)
results = ""
if response.status_code == 200:
response.encoding = 'utf-8'
for line in response.iter_lines(decode_unicode=True):
try: delta = json.loads(line)['data']['content']
except: delta = json.loads(line)['choices'][0]['text']
results += delta
yield delta, results
else:
raise ValueError
if __name__ == '__main__':
taichu = TaichuChatInit()
for delta, results in taichu.generate_chat('你好', {'llm_model': 'taichu'}, [], '你是WPSAi'):
    print(delta, end='')

View File

@@ -44,7 +44,8 @@ def decode_chunk(chunk):
try:
chunk = json.loads(chunk[6:])
except:
finish_reason = "JSON_ERROR"
respose = ""
finish_reason = chunk
# 错误处理部分
if "error" in chunk:
respose = "API_ERROR"
@@ -199,10 +200,13 @@ def get_predict_function(
stream_response = response.iter_lines()
result = ""
finish_reason = ""
while True:
try:
chunk = next(stream_response)
except StopIteration:
if result == "":
raise RuntimeError(f"获得空的回复,可能原因:{finish_reason}")
break
except requests.exceptions.ConnectionError:
chunk = next(stream_response) # 失败了,重试一次?再失败就没办法了。
@@ -350,6 +354,10 @@ def get_predict_function(
response_text, finish_reason = decode_chunk(chunk)
# 返回的数据流第一次为空,继续等待
if response_text == "" and finish_reason != "False":
status_text = f"finish_reason: {finish_reason}"
yield from update_ui(
chatbot=chatbot, history=history, msg=status_text
)
continue
if chunk:
try:

View File

@@ -46,6 +46,16 @@ code_highlight_configs_block_mermaid = {
},
}
mathpatterns = {
r"(?<!\\|\$)(\$)([^\$]+)(\$)": {"allow_multi_lines": False}, #  $...$
r"(?<!\\)(\$\$)([^\$]+)(\$\$)": {"allow_multi_lines": True}, # $$...$$
r"(?<!\\)(\\\[)(.+?)(\\\])": {"allow_multi_lines": False}, # \[...\]
r'(?<!\\)(\\\()(.+?)(\\\))': {'allow_multi_lines': False}, # \(...\)
# r'(?<!\\)(\\begin{([a-z]+?\*?)})(.+?)(\\end{\2})': {'allow_multi_lines': True}, # \begin...\end
# r'(?<!\\)(\$`)([^`]+)(`\$)': {'allow_multi_lines': False}, # $`...`$
}
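# A quick sketch of what these patterns capture (group 2 is the formula body; re.DOTALL
# is added only when allow_multi_lines is true, mirroring the is_equation logic below):
#   sample = r"inline $a+b$ and display $$\int_0^1 x\,dx$$"
#   re.findall(r"(?<!\\|\$)(\$)([^\$]+)(\$)", sample, re.ASCII)              -> captures 'a+b'
#   re.findall(r"(?<!\\)(\$\$)([^\$]+)(\$\$)", sample, re.ASCII | re.DOTALL) -> captures '\int_0^1 x\,dx'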
def tex2mathml_catch_exception(content, *args, **kwargs):
try:
content = tex2mathml(content, *args, **kwargs)
@@ -96,14 +106,7 @@ def is_equation(txt):
return False
if "$" not in txt and "\\[" not in txt:
return False
mathpatterns = {
r"(?<!\\|\$)(\$)([^\$]+)(\$)": {"allow_multi_lines": False}, #  $...$
r"(?<!\\)(\$\$)([^\$]+)(\$\$)": {"allow_multi_lines": True}, # $$...$$
r"(?<!\\)(\\\[)(.+?)(\\\])": {"allow_multi_lines": False}, # \[...\]
# r'(?<!\\)(\\\()(.+?)(\\\))': {'allow_multi_lines': False}, # \(...\)
# r'(?<!\\)(\\begin{([a-z]+?\*?)})(.+?)(\\end{\2})': {'allow_multi_lines': True}, # \begin...\end
# r'(?<!\\)(\$`)([^`]+)(`\$)': {'allow_multi_lines': False}, # $`...`$
}
matches = []
for pattern, property in mathpatterns.items():
flags = re.ASCII | re.DOTALL if property["allow_multi_lines"] else re.ASCII
@@ -207,13 +210,68 @@ def fix_code_segment_indent(txt):
return txt
def fix_dollar_sticking_bug(txt):
"""
修复不标准的dollar公式符号的问题
"""
txt_result = ""
single_stack_height = 0
double_stack_height = 0
while True:
while True:
index = txt.find('$')
if index == -1:
txt_result += txt
return txt_result
if single_stack_height > 0:
if txt[:(index+1)].find('\n') > 0 or txt[:(index+1)].find('<td>') > 0 or txt[:(index+1)].find('</td>') > 0:
print('公式之中出现了异常 (Unexpected element in equation)')
single_stack_height = 0
txt_result += ' $'
continue
if double_stack_height > 0:
if txt[:(index+1)].find('\n\n') > 0:
print('公式之中出现了异常 (Unexpected element in equation)')
double_stack_height = 0
txt_result += '$$'
continue
is_double = (txt[index+1] == '$')
if is_double:
if single_stack_height != 0:
# add a padding
txt = txt[:(index+1)] + " " + txt[(index+1):]
continue
if double_stack_height == 0:
double_stack_height = 1
else:
double_stack_height = 0
txt_result += txt[:(index+2)]
txt = txt[(index+2):]
else:
if double_stack_height != 0:
# print(txt[:(index)])
print('发现异常嵌套公式')
if single_stack_height == 0:
single_stack_height = 1
else:
single_stack_height = 0
# print(txt[:(index)])
txt_result += txt[:(index+1)]
txt = txt[(index+1):]
break
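# Behavior sketch (hypothetical input): given "price is $5 and\nformula $a+b$", the '$'
# left open across the newline is detected as a stray symbol ('\n', '<td>' or '</td>'
# inside a single-dollar span), the stack is reset and ' $' is emitted as plain text, so
# one unpaired '$' can no longer swallow the rest of the document as a giant equation.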
def markdown_convertion_for_file(txt):
"""
将Markdown格式的文本转换为HTML格式。如果包含数学公式,则先将公式转换为HTML格式。
"""
from themes.theme import advanced_css
pre = f"""
<!DOCTYPE html><head><meta charset="utf-8"><title>PDF文档翻译</title><style>{advanced_css}</style></head>
<!DOCTYPE html><head><meta charset="utf-8"><title>GPT-Academic输出文档</title><style>{advanced_css}</style></head>
<body>
<div class="test_temp1" style="width:10%; height: 500px; float:left;"></div>
<div class="test_temp2" style="width:80%;padding: 40px;float:left;padding-left: 20px;padding-right: 20px;box-shadow: rgba(0, 0, 0, 0.2) 0px 0px 8px 8px;border-radius: 10px;">
@@ -232,10 +290,10 @@ def markdown_convertion_for_file(txt):
find_equation_pattern = r'<script type="math/tex(?:.*?)>(.*?)</script>'
txt = fix_markdown_indent(txt)
convert_stage_1 = fix_dollar_sticking_bug(txt)
# convert everything to html format
split = markdown.markdown(text="---")
convert_stage_1 = markdown.markdown(
text=txt,
convert_stage_2 = markdown.markdown(
text=convert_stage_1,
extensions=[
"sane_lists",
"tables",
@@ -245,14 +303,24 @@ def markdown_convertion_for_file(txt):
],
extension_configs={**markdown_extension_configs, **code_highlight_configs},
)
convert_stage_1 = markdown_bug_hunt(convert_stage_1)
def repl_fn(match):
content = match.group(2)
return f'<script type="math/tex">{content}</script>'
pattern = "|".join([pattern for pattern, property in mathpatterns.items() if not property["allow_multi_lines"]])
pattern = re.compile(pattern, flags=re.ASCII)
convert_stage_3 = pattern.sub(repl_fn, convert_stage_2)
convert_stage_4 = markdown_bug_hunt(convert_stage_3)
# 2. convert to rendered equation
convert_stage_2_2, n = re.subn(
find_equation_pattern, replace_math_render, convert_stage_1, flags=re.DOTALL
convert_stage_5, n = re.subn(
find_equation_pattern, replace_math_render, convert_stage_4, flags=re.DOTALL
)
# cat them together
return pre + convert_stage_2_2 + suf
return pre + convert_stage_5 + suf
@lru_cache(maxsize=128) # 使用 lru缓存 加快转换速度
def markdown_convertion(txt):

View File

@@ -0,0 +1,25 @@
def is_full_width_char(ch):
"""判断给定的单个字符是否是全角字符"""
if '\u4e00' <= ch <= '\u9fff':
return True # 中文字符
if '\uff01' <= ch <= '\uff5e':
return True # 全角符号
if '\u3000' <= ch <= '\u303f':
return True # CJK标点符号
return False
def scolling_visual_effect(text, scroller_max_len):
text = text.\
replace('\n', '').replace('`', '.').replace(' ', '.').replace('<br/>', '.....').replace('$', '.')
place_take_cnt = 0
pointer = len(text) - 1
if len(text) < scroller_max_len:
return text
while place_take_cnt < scroller_max_len and pointer > 0:
if is_full_width_char(text[pointer]): place_take_cnt += 2
else: place_take_cnt += 1
pointer -= 1
return text[pointer:]
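# Usage sketch: the scroller keeps roughly the last `scroller_max_len` columns of text,
# where a full-width (CJK) character occupies two columns:
#   scolling_visual_effect("abcdefghij", scroller_max_len=6)        # tail of ~6 half-width chars
#   scolling_visual_effect("测试一二三四五六", scroller_max_len=6)    # tail of ~3 full-width chars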

View File

@@ -57,12 +57,12 @@ def validate_path_safety(path_or_url, user):
sensitive_path = PATH_LOGGING
elif path_or_url.startswith(PATH_PRIVATE_UPLOAD): # 用户的上传目录(按用户划分)
sensitive_path = PATH_PRIVATE_UPLOAD
elif path_or_url.startswith('tests'): # 一个常用的测试目录
elif path_or_url.startswith('tests') or path_or_url.startswith('build'): # 常用的测试目录与构建目录
return True
else:
raise FriendlyException(f"输入文件的路径 ({path_or_url}) 存在,但位置非法。请将文件上传后再执行该任务。") # return False
if sensitive_path:
allowed_users = [user, 'autogen', default_user_name] # three user path that can be accessed
allowed_users = [user, 'autogen', 'arxiv_cache', default_user_name] # four user paths that can be accessed
for user_allowed in allowed_users:
if f"{os.sep}".join(path_or_url.split(os.sep)[:2]) == os.path.join(sensitive_path, user_allowed):
return True
@@ -81,7 +81,7 @@ def _authorize_user(path_or_url, request, gradio_app):
if sensitive_path:
token = request.cookies.get("access-token") or request.cookies.get("access-token-unsecure")
user = gradio_app.tokens.get(token) # get user
allowed_users = [user, 'autogen', default_user_name] # three user path that can be accessed
allowed_users = [user, 'autogen', 'arxiv_cache', default_user_name] # four user paths that can be accessed
for user_allowed in allowed_users:
# exact match
if f"{os.sep}".join(path_or_url.split(os.sep)[:2]) == os.path.join(sensitive_path, user_allowed):
@@ -99,7 +99,7 @@ class Server(uvicorn.Server):
self.thread = threading.Thread(target=self.run, daemon=True)
self.thread.start()
while not self.started:
time.sleep(1e-3)
time.sleep(5e-2)
def close(self):
self.should_exit = True
@@ -159,6 +159,16 @@ def start_app(app_block, CONCURRENT_COUNT, AUTHENTICATION, PORT, SSL_KEYFILE, SS
return "越权访问!"
return await endpoint(path_or_url, request)
from fastapi import Request, status
from fastapi.responses import FileResponse, RedirectResponse
@gradio_app.get("/academic_logout")
async def logout():
response = RedirectResponse(url=CUSTOM_PATH, status_code=status.HTTP_302_FOUND)
response.delete_cookie('access-token')
response.delete_cookie('access-token-unsecure')
return response
# --- --- enable TTS (text-to-speech) functionality --- ---
TTS_TYPE = get_conf("TTS_TYPE")
if TTS_TYPE != "DISABLE":
# audio generation functionality
@@ -220,13 +230,22 @@ def start_app(app_block, CONCURRENT_COUNT, AUTHENTICATION, PORT, SSL_KEYFILE, SS
fastapi_app = FastAPI(lifespan=app_lifespan)
fastapi_app.mount(CUSTOM_PATH, gradio_app)
# --- --- favicon --- ---
# --- --- favicon and block fastapi api reference routes --- ---
from starlette.responses import JSONResponse
if CUSTOM_PATH != '/':
from fastapi.responses import FileResponse
@fastapi_app.get("/favicon.ico")
async def favicon():
return FileResponse(app_block.favicon_path)
@fastapi_app.middleware("http")
async def middleware(request: Request, call_next):
if request.scope['path'] in ["/docs", "/redoc", "/openapi.json"]:
return JSONResponse(status_code=404, content={"message": "Not Found"})
response = await call_next(request)
return response
# --- --- uvicorn.Config --- ---
ssl_keyfile = None if SSL_KEYFILE == "" else SSL_KEYFILE
ssl_certfile = None if SSL_CERTFILE == "" else SSL_CERTFILE
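Mapping empty strings to `None` matters here because uvicorn only enables TLS when both files are provided. A minimal sketch of how these values are consumed (assuming the `Server` subclass shown earlier wraps `uvicorn.Server`; the host value is hypothetical):

import uvicorn
config = uvicorn.Config(fastapi_app, host="0.0.0.0", port=PORT, ssl_keyfile=ssl_keyfile, ssl_certfile=ssl_certfile)
server = Server(config=config)
server.start()  # returns once the background thread reports started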

tests/init_test.py Normal file

@@ -0,0 +1,10 @@
def validate_path():
import os, sys
os.path.dirname(__file__)
root_dir_assume = os.path.abspath(os.path.dirname(__file__) + "/..")
os.chdir(root_dir_assume)
sys.path.append(root_dir_assume)
validate_path() # validate path so you can run from base directory
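Downstream tests can then rely on a single import for path setup (as the updated `tests/test_plugins.py` below does):

import init_test  # side effect: chdir to the repo root and extend sys.path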


@@ -0,0 +1,22 @@
"""
Tests the project's plugins. Usage: run `python tests/test_plugins.py` directly.
"""
import os, sys, importlib
def validate_path():
dir_name = os.path.dirname(__file__)
root_dir_assume = os.path.abspath(dir_name + "/..")
os.chdir(root_dir_assume)
sys.path.append(root_dir_assume)
validate_path() # validate path so you can run from base directory
if __name__ == "__main__":
plugin_test = importlib.import_module('test_utils').plugin_test
plugin_test(plugin='crazy_functions.Latex_Function->Latex翻译中文并重新编译PDF', main_input="2203.01927")


@@ -14,12 +14,13 @@ validate_path() # validate path so you can run from base directory
if "在线模型":
if __name__ == "__main__":
from request_llms.bridge_cohere import predict_no_ui_long_connection
from request_llms.bridge_taichu import predict_no_ui_long_connection
# from request_llms.bridge_cohere import predict_no_ui_long_connection
# from request_llms.bridge_spark import predict_no_ui_long_connection
# from request_llms.bridge_zhipu import predict_no_ui_long_connection
# from request_llms.bridge_chatglm3 import predict_no_ui_long_connection
llm_kwargs = {
"llm_model": "command-r-plus",
"llm_model": "taichu",
"max_length": 4096,
"top_p": 1,
"temperature": 1,


@@ -2,23 +2,16 @@
Tests the project's plugins. Usage: run `python tests/test_plugins.py` directly.
"""
import init_test
import os, sys
def validate_path():
dir_name = os.path.dirname(__file__)
root_dir_assume = os.path.abspath(dir_name + "/..")
os.chdir(root_dir_assume)
sys.path.append(root_dir_assume)
validate_path() # validate path so you can run from base directory
if __name__ == "__main__":
from tests.test_utils import plugin_test
from test_utils import plugin_test
plugin_test(plugin='crazy_functions.Internet_GPT->连接网络回答问题', main_input="谁是应急食品?")
plugin_test(plugin='crazy_functions.SourceCode_Comment->注释Python项目', main_input="build/test/python_comment")
# plugin_test(plugin='crazy_functions.Internet_GPT->连接网络回答问题', main_input="谁是应急食品?")
# plugin_test(plugin='crazy_functions.函数动态生成->函数动态生成', main_input='交换图像的蓝色通道和红色通道', advanced_arg={"file_path_arg": "./build/ants.jpg"})
@@ -39,9 +32,9 @@ if __name__ == "__main__":
# plugin_test(plugin='crazy_functions.命令行助手->命令行助手', main_input='查看当前的docker容器列表')
# plugin_test(plugin='crazy_functions.解析项目源代码->解析一个Python项目', main_input="crazy_functions/test_project/python/dqn")
# plugin_test(plugin='crazy_functions.SourceCode_Analyse->解析一个Python项目', main_input="crazy_functions/test_project/python/dqn")
# plugin_test(plugin='crazy_functions.解析项目源代码->解析一个C项目', main_input="crazy_functions/test_project/cpp/cppipc")
# plugin_test(plugin='crazy_functions.SourceCode_Analyse->解析一个C项目', main_input="crazy_functions/test_project/cpp/cppipc")
# plugin_test(plugin='crazy_functions.Latex全文润色->Latex英文润色', main_input="crazy_functions/test_project/latex/attention")


@@ -0,0 +1,342 @@
import init_test
from toolbox import CatchException, update_ui
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from request_llms.bridge_all import predict_no_ui_long_connection
import datetime
import re
from textwrap import dedent
# TODO: fix indentation handling
find_function_end_prompt = '''
Below is a page of code that you need to read. This page may not be complete yet; your job is to split this page into separate functions, class methods, etc.
- Provide the line number where the first visible function ends.
- Provide the line number where the next visible function begins.
- If there are no other functions in this page, simply return the line number of the last line.
- Only focus on functions declared with the `def` keyword. Ignore inline functions. Ignore function calls.
------------------ Example ------------------
INPUT:
```
L0000 |import sys
L0001 |import re
L0002 |
L0003 |def trimmed_format_exc():
L0004 | import os
L0005 | import traceback
L0006 | str = traceback.format_exc()
L0007 | current_path = os.getcwd()
L0008 | replace_path = "."
L0009 | return str.replace(current_path, replace_path)
L0010 |
L0011 |
L0012 |def trimmed_format_exc_markdown():
L0013 | ...
L0014 | ...
```
OUTPUT:
```
<first_function_end_at>L0009</first_function_end_at>
<next_function_begin_from>L0012</next_function_begin_from>
```
------------------ End of Example ------------------
------------------ the real INPUT you need to process NOW ------------------
```
{THE_TAGGED_CODE}
```
'''
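For reference, a small parser for the two tags this prompt requests (a sketch: the tag names come from the prompt above, the helper name is hypothetical, and `re` is imported at the top of this file):

def parse_function_boundaries(reply: str):
    # returns (line where the first function ends, line where the next begins); None for a missing tag
    end = re.search(r'<first_function_end_at>L(\d+)</first_function_end_at>', reply)
    begin = re.search(r'<next_function_begin_from>L(\d+)</next_function_begin_from>', reply)
    return (int(end.group(1)) if end else None,
            int(begin.group(1)) if begin else None)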
revise_funtion_prompt = '''
You need to read the following code and revise it according to the following instructions:
1. You should analyze the purpose of the functions (if there are any).
2. You need to add docstring for the provided functions (if there are any).
Be aware:
1. You must NOT modify the indent of code.
2. You are NOT authorized to change or translate non-comment code, and you are NOT authorized to add empty lines either.
3. Use English to add comments and docstrings. Do NOT translate Chinese that is already in the code.
------------------ Example ------------------
INPUT:
```
L0000 |
L0001 |def zip_result(folder):
L0002 | t = gen_time_str()
L0003 | zip_folder(folder, get_log_folder(), f"result.zip")
L0004 | return os.path.join(get_log_folder(), f"result.zip")
L0005 |
L0006 |
```
OUTPUT:
<instruction_1_purpose>
This function compresses a given folder and returns the path of the resulting `zip` file.
</instruction_1_purpose>
<instruction_2_revised_code>
```
def zip_result(folder):
"""
Compresses the specified folder into a zip file and stores it in the log folder.
Args:
folder (str): The path to the folder that needs to be compressed.
Returns:
str: The path to the created zip file in the log folder.
"""
t = gen_time_str()
zip_folder(folder, get_log_folder(), f"result.zip") # ⭐ Execute the zipping of folder
return os.path.join(get_log_folder(), f"result.zip")
```
</instruction_2_revised_code>
------------------ End of Example ------------------
------------------ the real INPUT you need to process NOW ------------------
```
{THE_CODE}
```
{INDENT_REMINDER}
'''
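Downstream, `get_code_block` (defined later in this file) pulls the fenced block out of the model reply; a stricter variant could first scope the search to the `<instruction_2_revised_code>` tag defined above (a sketch; `reply` is a hypothetical model response string):

m = re.search(r'<instruction_2_revised_code>([\s\S]*?)</instruction_2_revised_code>', reply)
revised_section = m.group(1) if m else reply  # fall back to the whole reply if the tag is missing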
class ContextWindowManager():
def __init__(self, llm_kwargs) -> None:
self.full_context = []
self.full_context_with_line_no = []
self.current_page_start = 0
self.page_limit = 100 # 100 lines of code each page
self.ignore_limit = 20
self.llm_kwargs = llm_kwargs
def generate_tagged_code_from_full_context(self):
for i, code in enumerate(self.full_context):
number = i
padded_number = f"{number:04}"
result = f"L{padded_number}"
self.full_context_with_line_no.append(f"{result} | {code}")
return self.full_context_with_line_no
def read_file(self, path):
with open(path, 'r', encoding='utf8') as f:
self.full_context = f.readlines()
self.full_context_with_line_no = self.generate_tagged_code_from_full_context()
def find_next_function_begin(self, tagged_code:list, begin_and_end):
begin, end = begin_and_end
THE_TAGGED_CODE = ''.join(tagged_code)
self.llm_kwargs['temperature'] = 0
result = predict_no_ui_long_connection(
inputs=find_function_end_prompt.format(THE_TAGGED_CODE=THE_TAGGED_CODE),
llm_kwargs=self.llm_kwargs,
history=[],
sys_prompt="",
observe_window=[],
console_slience=True
)
def extract_number(text):
# use a regular expression to match the tag produced by the prompt
match = re.search(r'<next_function_begin_from>L(\d+)</next_function_begin_from>', text)
if match:
# extract the matched digits and convert them to an integer
return int(match.group(1))
return None
line_no = extract_number(result)
if line_no is not None:
return line_no
raise RuntimeError("could not parse the next function boundary from the LLM response")
def _get_next_window(self):
#
current_page_start = self.current_page_start
if self.current_page_start == len(self.full_context) + 1:
raise StopIteration
# if only a few lines remain, process them all in one go
if len(self.full_context) - self.current_page_start < self.ignore_limit:
future_page_start = len(self.full_context) + 1
self.current_page_start = future_page_start
return current_page_start, future_page_start
tagged_code = self.full_context_with_line_no[ self.current_page_start: self.current_page_start + self.page_limit]
line_no = self.find_next_function_begin(tagged_code, [self.current_page_start, self.current_page_start + self.page_limit])
if line_no > len(self.full_context) - 5:
line_no = len(self.full_context) + 1
future_page_start = line_no
self.current_page_start = future_page_start
# ! consider eof
return current_page_start, future_page_start
def dedent(self, text):
"""Remove any common leading whitespace from every line in `text`.
"""
# Look for the longest leading string of spaces and tabs common to
# all lines.
margin = None
_whitespace_only_re = re.compile('^[ \t]+$', re.MULTILINE)
_leading_whitespace_re = re.compile('(^[ \t]*)(?:[^ \t\n])', re.MULTILINE)
text = _whitespace_only_re.sub('', text)
indents = _leading_whitespace_re.findall(text)
for indent in indents:
if margin is None:
margin = indent
# Current line more deeply indented than previous winner:
# no change (previous winner is still on top).
elif indent.startswith(margin):
pass
# Current line consistent with and no deeper than previous winner:
# it's the new winner.
elif margin.startswith(indent):
margin = indent
# Find the largest common whitespace between current line and previous
# winner.
else:
for i, (x, y) in enumerate(zip(margin, indent)):
if x != y:
margin = margin[:i]
break
# sanity check (testing/debugging only)
if 0 and margin:
for line in text.split("\n"):
assert not line or line.startswith(margin), \
"line = %r, margin = %r" % (line, margin)
if margin:
text = re.sub(r'(?m)^' + margin, '', text)
return text, len(margin) if margin else 0
def get_next_batch(self):
current_page_start, future_page_start = self._get_next_window()
return self.full_context[current_page_start: future_page_start], current_page_start, future_page_start
def tag_code(self, fn):
code = ''.join(fn)
_, n_indent = self.dedent(code)
indent_reminder = "" if n_indent == 0 else f"(Reminder: as you can see, this piece of code is indented with {n_indent} whitespace characters; please preserve them in the OUTPUT.)"
self.llm_kwargs['temperature'] = 0
result = predict_no_ui_long_connection(
inputs=revise_funtion_prompt.format(THE_CODE=code, INDENT_REMINDER=indent_reminder),
llm_kwargs=self.llm_kwargs,
history=[],
sys_prompt="",
observe_window=[],
console_slience=True
)
def get_code_block(reply):
import re
pattern = r"```([\s\S]*?)```" # regex pattern to match code blocks
matches = re.findall(pattern, reply) # find all code blocks in text
if len(matches) == 1:
code = matches[0]
# strip a leading "python" language tag; str.strip('python') would also eat those letters from the code itself
if code.startswith('python'): code = code[len('python'):]
return code
return None
code_block = get_code_block(result)
if code_block is not None:
code_block = self.sync_and_patch(original=code, revised=code_block)
return code_block
else:
return code
def sync_and_patch(self, original, revised):
"""Ensure the number of pre-string empty lines in revised matches those in original."""
def count_leading_empty_lines(s, reverse=False):
"""Count the number of leading empty lines in a string."""
lines = s.split('\n')
if reverse: lines = list(reversed(lines))
count = 0
for line in lines:
if line.strip() == '':
count += 1
else:
break
return count
original_empty_lines = count_leading_empty_lines(original)
revised_empty_lines = count_leading_empty_lines(revised)
if original_empty_lines > revised_empty_lines:
additional_lines = '\n' * (original_empty_lines - revised_empty_lines)
revised = additional_lines + revised
elif original_empty_lines < revised_empty_lines:
lines = revised.split('\n')
revised = '\n'.join(lines[revised_empty_lines - original_empty_lines:])
original_empty_lines = count_leading_empty_lines(original, reverse=True)
revised_empty_lines = count_leading_empty_lines(revised, reverse=True)
if original_empty_lines > revised_empty_lines:
additional_lines = '\n' * (original_empty_lines - revised_empty_lines)
revised = revised + additional_lines
elif original_empty_lines < revised_empty_lines:
lines = revised.split('\n')
revised = '\n'.join(lines[:-(revised_empty_lines - original_empty_lines)])
return revised
from toolbox import get_plugin_default_kwargs
llm_kwargs = get_plugin_default_kwargs()["llm_kwargs"]
cwm = ContextWindowManager(llm_kwargs)
cwm.read_file(path="./test.py")
output_buf = ""
with open('temp.py', 'w+', encoding='utf8') as f:
while True:
try:
next_batch, line_no_start, line_no_end = cwm.get_next_batch()
result = cwm.tag_code(next_batch)
f.write(result)
output_buf += result
except StopIteration:
next_batch, line_no_start, line_no_end = [], -1, -1
break
print('-------------------------------------------')
print(''.join(next_batch))
print('-------------------------------------------')
print(cwm)
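Two of the helpers above are easiest to understand from their edge behavior; a small sketch with hypothetical strings (`cwm` is the instance created above). `dedent` reports the common indent it strips, and `sync_and_patch` restores the leading/trailing blank lines that models tend to drop:

text, n_indent = cwm.dedent("    def f():\n        return 1\n")
print(n_indent)  # 4: four spaces of common indentation were removed
patched = cwm.sync_and_patch(original="\n\ndef f():\n    return 1\n\n", revised="def f():\n    return 1\n")
print(patched.startswith("\n\n"), patched.endswith("\n\n"))  # True True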

tests/test_safe_pickle.py Normal file

@@ -0,0 +1,17 @@
def validate_path():
import os, sys
os.path.dirname(__file__)
root_dir_assume = os.path.abspath(os.path.dirname(__file__) + "/..")
os.chdir(root_dir_assume)
sys.path.append(root_dir_assume)
validate_path() # validate path so you can run from base directory
from crazy_functions.latex_fns.latex_pickle_io import objdump, objload
from crazy_functions.latex_fns.latex_actions import LatexPaperFileGroup, LatexPaperSplit
pfg = LatexPaperFileGroup()
pfg.get_token_num = None
pfg.target = "target_elem"
x = objdump(pfg)
t = objload()
print(t.target)


@@ -0,0 +1,102 @@
def validate_path():
import os, sys
os.path.dirname(__file__)
root_dir_assume = os.path.abspath(os.path.dirname(__file__) + "/..")
os.chdir(root_dir_assume)
sys.path.append(root_dir_assume)
validate_path() # validate path so you can run from base directory
def write_chat_to_file(chatbot, history=None, file_name=None):
"""
Write the conversation history to a file in Markdown format. If no file name is given, one is generated from the current time.
"""
import os
import time
from themes.theme import advanced_css
# debug
import pickle
# def objdump(obj, file="objdump.tmp"):
# with open(file, "wb+") as f:
# pickle.dump(obj, f)
# return
def objload(file="objdump.tmp"):
import os
if not os.path.exists(file):
return
with open(file, "rb") as f:
return pickle.load(f)
# objdump((chatbot, history))
chatbot, history = objload()
with open("test.html", 'w', encoding='utf8') as f:
from textwrap import dedent
form = dedent("""
<!DOCTYPE html><head><meta charset="utf-8"><title>对话存档</title><style>{CSS}</style></head>
<body>
<div class="test_temp1" style="width:10%; height: 500px; float:left;"></div>
<div class="test_temp2" style="width:80%;padding: 40px;float:left;padding-left: 20px;padding-right: 20px;box-shadow: rgba(0, 0, 0, 0.2) 0px 0px 8px 8px;border-radius: 10px;">
<div class="chat-body" style="display: flex;justify-content: center;flex-direction: column;align-items: center;flex-wrap: nowrap;">
{CHAT_PREVIEW}
<div></div>
<div></div>
<div style="text-align: center;width:80%;padding: 0px;float:left;padding-left:20px;padding-right:20px;box-shadow: rgba(0, 0, 0, 0.05) 0px 0px 1px 2px;border-radius: 1px;">对话(原始数据)</div>
{HISTORY_PREVIEW}
</div>
</div>
<div class="test_temp3" style="width:10%; height: 500px; float:left;"></div>
</body>
""")
qa_from = dedent("""
<div class="QaBox" style="width:80%;padding: 20px;margin-bottom: 20px;box-shadow: rgb(0 255 159 / 50%) 0px 0px 1px 2px;border-radius: 4px;">
<div class="Question" style="border-radius: 2px;">{QUESTION}</div>
<hr color="blue" style="border-top: dotted 2px #ccc;">
<div class="Answer" style="border-radius: 2px;">{ANSWER}</div>
</div>
""")
history_from = dedent("""
<div class="historyBox" style="width:80%;padding: 0px;float:left;padding-left:20px;padding-right:20px;box-shadow: rgba(0, 0, 0, 0.05) 0px 0px 1px 2px;border-radius: 1px;">
<div class="entry" style="border-radius: 2px;">{ENTRY}</div>
</div>
""")
CHAT_PREVIEW_BUF = ""
for i, contents in enumerate(chatbot):
question, answer = contents[0], contents[1]
if question is None: question = ""
try: question = str(question)
except: question = ""
if answer is None: answer = ""
try: answer = str(answer)
except: answer = ""
CHAT_PREVIEW_BUF += qa_from.format(QUESTION=question, ANSWER=answer)
HISTORY_PREVIEW_BUF = ""
for h in history:
HISTORY_PREVIEW_BUF += history_from.format(ENTRY=h)
html_content = form.format(CHAT_PREVIEW=CHAT_PREVIEW_BUF, HISTORY_PREVIEW=HISTORY_PREVIEW_BUF, CSS=advanced_css)
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_content, 'lxml')
# extract QaBox entries
qa_box_list = []
qa_boxes = soup.find_all("div", class_="QaBox")
for box in qa_boxes:
question = box.find("div", class_="Question").get_text(strip=False)
answer = box.find("div", class_="Answer").get_text(strip=False)
qa_box_list.append({"Question": question, "Answer": answer})
# extract historyBox entries
history_box_list = []
history_boxes = soup.find_all("div", class_="historyBox")
for box in history_boxes:
entry = box.find("div", class_="entry").get_text(strip=False)
history_box_list.append(entry)
print('')
write_chat_to_file(None, None, None)

tests/test_searxng.py Normal file

@@ -0,0 +1,58 @@
def validate_path():
import os, sys
os.path.dirname(__file__)
root_dir_assume = os.path.abspath(os.path.dirname(__file__) + "/..")
os.chdir(root_dir_assume)
sys.path.append(root_dir_assume)
validate_path() # validate path so you can run from base directory
from toolbox import get_conf
import requests
def searxng_request(query, proxies, categories='general', searxng_url=None, engines=None):
url = searxng_url or 'http://localhost:50001/' # fall back to a local SearXNG instance
if engines is None:
engines = 'bing,'
if categories == 'general':
params = {
'q': query, # the search query
'format': 'json', # request JSON output
'language': 'zh', # search language
'engines': engines,
}
elif categories == 'science':
params = {
'q': query, # the search query
'format': 'json', # request JSON output
'language': 'zh', # search language
'categories': 'science'
}
else:
raise ValueError('不支持的检索类型')
headers = {
'Accept-Language': 'zh-CN,zh;q=0.9',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36',
'X-Forwarded-For': '112.112.112.112',
'X-Real-IP': '112.112.112.112'
}
results = []
response = requests.post(url, params=params, headers=headers, proxies=proxies, timeout=30)
if response.status_code == 200:
json_result = response.json()
for result in json_result['results']:
item = {
"title": result.get("title", ""),
"content": result.get("content", ""),
"link": result["url"],
}
print(result['engines'])
results.append(item)
return results
else:
if response.status_code == 429:
raise ValueError("Searxng在线搜索服务当前使用人数太多请稍后。")
else:
raise ValueError("在线搜索失败,状态码: " + str(response.status_code) + '\t' + response.content.decode('utf-8'))
res = searxng_request("vr environment", None, categories='science', searxng_url=None, engines=None)
print(res)
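Each element of `res` is one of the dicts assembled above, so a caller can consume the results uniformly regardless of category (illustrative only):

for item in res:
    print(item["title"], item["link"])  # each item also carries a "content" snippet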


@@ -142,3 +142,132 @@
border-top-width: 0;
}
.welcome-card-container {
text-align: center;
margin: 0 auto;
display: flex;
position: absolute;
width: inherit;
padding: 50px;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
flex-wrap: wrap;
justify-content: center;
transition: opacity 1s ease-in-out;
opacity: 0;
}
.welcome-card-container.show {
opacity: 1;
}
.welcome-card-container.hide {
opacity: 0;
}
.welcome-card {
border-radius: 10px;
box-shadow: 0px 0px 6px 3px #e5e7eb6b;
padding: 15px;
margin: 10px;
flex: 1 0 calc(30% - 5px);
transform: rotateY(0deg);
transition: transform 0.1s;
transform-style: preserve-3d;
}
.welcome-card.show {
transform: rotateY(0deg);
}
.welcome-card.hide {
transform: rotateY(90deg);
}
.welcome-title {
font-size: 40px;
padding: 20px;
margin: 10px;
flex: 0 0 calc(90%);
}
.welcome-card-title {
font-size: 20px;
margin: 2px;
flex: 0 0 calc(95%);
padding-bottom: 8px;
padding-top: 8px;
padding-right: 8px;
padding-left: 8px;
display: flex;
justify-content: center;
}
.welcome-svg {
padding-right: 10px;
}
.welcome-title-text {
text-wrap: nowrap;
}
.welcome-content {
text-wrap: balance;
height: 55px;
display: flex;
align-items: center;
}
#gpt-submit-row {
display: flex;
gap: 0 !important;
border-radius: var(--button-large-radius);
border: var(--button-border-width) solid var(--button-primary-border-color);
/* background: var(--button-primary-background-fill); */
background: var(--button-primary-background-fill-hover);
color: var(--button-primary-text-color);
box-shadow: var(--button-shadow);
transition: var(--button-transition);
}
#gpt-submit-row:hover {
border-color: var(--button-primary-border-color-hover);
/* background: var(--button-primary-background-fill-hover); */
/* color: var(--button-primary-text-color-hover); */
}
#gpt-submit-row button#elem_submit_visible {
border-top-right-radius: 0px;
border-bottom-right-radius: 0px;
box-shadow: none !important;
flex-grow: 1;
}
#gpt-submit-row #gpt-submit-dropdown {
border-top-left-radius: 0px;
border-bottom-left-radius: 0px;
border-left: 0.5px solid #FFFFFF88 !important;
display: flex;
overflow: unset !important;
max-width: 40px !important;
min-width: 40px !important;
}
#gpt-submit-row #gpt-submit-dropdown input {
pointer-events: none;
opacity: 0; /* hide the input box */
width: 0;
margin-inline: 0;
cursor: pointer;
}
#gpt-submit-row #gpt-submit-dropdown label {
display: flex;
width: 0;
}
#gpt-submit-row #gpt-submit-dropdown label div.wrap {
background: none;
box-shadow: none;
border: none;
}
#gpt-submit-row #gpt-submit-dropdown label div.wrap div.wrap-inner {
background: none;
padding-inline: 0;
height: 100%;
}
#gpt-submit-row #gpt-submit-dropdown svg.dropdown-arrow {
transform: scale(2) translate(4.5px, -0.3px);
}
#gpt-submit-row #gpt-submit-dropdown > *:hover {
cursor: context-menu;
}


@@ -1,3 +1,6 @@
// flags
enable_tts = false;
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
// Part 1: utility functions
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
@@ -793,6 +796,26 @@ function minor_ui_adjustment() {
}, 200); // run every 200 milliseconds
}
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
// adjustments for the submit button's dropdown
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
function ButtonWithDropdown_init() {
let submitButton = document.querySelector('button#elem_submit_visible');
let submitDropdown = document.querySelector('#gpt-submit-dropdown');
function updateDropdownWidth() {
if (submitButton) {
let setWidth = submitButton.clientWidth + submitDropdown.clientWidth;
let setLeft = -1 * submitButton.clientWidth;
document.getElementById('submit-dropdown-style')?.remove();
const styleElement = document.createElement('style');
styleElement.id = 'submit-dropdown-style';
styleElement.innerHTML = `#gpt-submit-dropdown ul.options { width: ${setWidth}px; left: ${setLeft}px; }`;
document.head.appendChild(styleElement);
}
}
window.addEventListener('resize', updateDropdownWidth);
updateDropdownWidth();
}
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
// Part 6: prevent scrolling
@@ -914,131 +937,6 @@ function gpt_academic_gradio_saveload(
}
}
enable_tts = false;
async function GptAcademicJavaScriptInit(dark, prompt, live2d, layout, tts) {
// Part 1: layout initialization
audio_fn_init();
minor_ui_adjustment();
chatbotIndicator = gradioApp().querySelector('#gpt-chatbot > div.wrap');
var chatbotObserver = new MutationObserver(() => {
chatbotContentChanged(1);
});
chatbotObserver.observe(chatbotIndicator, { attributes: true, childList: true, subtree: true });
if (layout === "LEFT-RIGHT") { chatbotAutoHeight(); }
if (layout === "LEFT-RIGHT") { limit_scroll_position(); }
// Part 2: read cookies and initialize the UI
let searchString = "";
let bool_value = "";
// darkmode: dark theme
if (getCookie("js_darkmode_cookie")) {
dark = getCookie("js_darkmode_cookie")
}
dark = dark == "True";
if (document.querySelectorAll('.dark').length) {
if (!dark) {
document.querySelectorAll('.dark').forEach(el => el.classList.remove('dark'));
}
} else {
if (dark) {
document.querySelector('body').classList.add('dark');
}
}
// automatic read-aloud (TTS)
if (tts != "DISABLE"){
enable_tts = true;
if (getCookie("js_auto_read_cookie")) {
auto_read_tts = getCookie("js_auto_read_cookie")
auto_read_tts = auto_read_tts == "True";
if (auto_read_tts) {
allow_auto_read_tts_flag = true;
}
}
}
// SysPrompt: the silent system prompt
gpt_academic_gradio_saveload("load", "elem_prompt", "js_system_prompt_cookie", null, "str");
// Temperature: the LLM temperature parameter
gpt_academic_gradio_saveload("load", "elem_temperature", "js_temperature_cookie", null, "float");
// md_dropdown: LLM model selection
if (getCookie("js_md_dropdown_cookie")) {
const cached_model = getCookie("js_md_dropdown_cookie");
var model_sel = await get_gradio_component("elem_model_sel");
// determine whether the cached model is in the choices
if (model_sel.props.choices.includes(cached_model)){
// change dropdown
gpt_academic_gradio_saveload("load", "elem_model_sel", "js_md_dropdown_cookie", null, "str");
// also update the chatbot label to match
push_data_to_gradio_component({
label: '当前模型:' + getCookie("js_md_dropdown_cookie"),
__type__: 'update'
}, "gpt-chatbot", "obj")
}
}
// clearButton: auto-clear button
if (getCookie("js_clearbtn_show_cookie")) {
// have cookie
bool_value = getCookie("js_clearbtn_show_cookie")
bool_value = bool_value == "True";
searchString = "输入清除键";
if (bool_value) {
// make btns appear
let clearButton = document.getElementById("elem_clear"); clearButton.style.display = "block";
let clearButton2 = document.getElementById("elem_clear2"); clearButton2.style.display = "block";
// deal with checkboxes
let arr_with_clear_btn = update_array(
await get_data_from_gradio_component('cbs'), "输入清除键", "add"
)
push_data_to_gradio_component(arr_with_clear_btn, "cbs", "no_conversion");
} else {
// make btns disappear
let clearButton = document.getElementById("elem_clear"); clearButton.style.display = "none";
let clearButton2 = document.getElementById("elem_clear2"); clearButton2.style.display = "none";
// deal with checkboxes
let arr_without_clear_btn = update_array(
await get_data_from_gradio_component('cbs'), "输入清除键", "remove"
)
push_data_to_gradio_component(arr_without_clear_btn, "cbs", "no_conversion");
}
}
// live2d display
if (getCookie("js_live2d_show_cookie")) {
// have cookie
searchString = "添加Live2D形象";
bool_value = getCookie("js_live2d_show_cookie");
bool_value = bool_value == "True";
if (bool_value) {
loadLive2D();
let arr_with_live2d = update_array(
await get_data_from_gradio_component('cbsc'), "添加Live2D形象", "add"
)
push_data_to_gradio_component(arr_with_live2d, "cbsc", "no_conversion");
} else {
try {
$('.waifu').hide();
let arr_without_live2d = update_array(
await get_data_from_gradio_component('cbsc'), "添加Live2D形象", "remove"
)
push_data_to_gradio_component(arr_without_live2d, "cbsc", "no_conversion");
} catch (error) {
}
}
} else {
// do not have cookie
if (live2d) {
loadLive2D();
} else {
}
}
}
function reset_conversation(a, b) {
// console.log("js_code_reset");
@@ -1172,364 +1070,6 @@ async function on_plugin_exe_complete(fn_name) {
}
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
// 第 8 部分: TTS语音生成函数
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
audio_debug = false;
class AudioPlayer {
constructor() {
this.audioCtx = new (window.AudioContext || window.webkitAudioContext)();
this.queue = [];
this.isPlaying = false;
this.currentSource = null; // holds the currently playing audio source
}
// convert a Base64-encoded string to an ArrayBuffer
base64ToArrayBuffer(base64) {
const binaryString = window.atob(base64);
const len = binaryString.length;
const bytes = new Uint8Array(len);
for (let i = 0; i < len; i++) {
bytes[i] = binaryString.charCodeAt(i);
}
return bytes.buffer;
}
// check the playback queue and play the next clip
checkQueue() {
if (!this.isPlaying && this.queue.length > 0) {
this.isPlaying = true;
const nextAudio = this.queue.shift();
this.play_wave(nextAudio);
}
}
// add audio to the playback queue
enqueueAudio(audio_buf_wave) {
if (allow_auto_read_tts_flag) {
this.queue.push(audio_buf_wave);
this.checkQueue();
}
}
// play audio
async play_wave(encodedAudio) {
//const audioData = this.base64ToArrayBuffer(encodedAudio);
const audioData = encodedAudio;
try {
const buffer = await this.audioCtx.decodeAudioData(audioData);
const source = this.audioCtx.createBufferSource();
source.buffer = buffer;
source.connect(this.audioCtx.destination);
source.onended = () => {
if (allow_auto_read_tts_flag) {
this.isPlaying = false;
this.currentSource = null; // clear the current source when playback ends
this.checkQueue();
}
};
this.currentSource = source; // remember the currently playing source
source.start();
} catch (e) {
console.log("Audio error!", e);
this.isPlaying = false;
this.currentSource = null; // clear the current source on error as well
this.checkQueue();
}
}
// stop audio playback immediately
stop() {
if (this.currentSource) {
this.queue = []; // clear the queue
this.currentSource.stop(); // stop the current source
this.currentSource = null; // drop the reference
this.isPlaying = false; // update the playback state
// closing the AudioContext would prevent playing audio again, so only the current source is stopped
// this.audioCtx.close(); // optional: close the audio context if needed
}
}
}
const audioPlayer = new AudioPlayer();
class FIFOLock {
constructor() {
this.queue = [];
this.currentTaskExecuting = false;
}
lock() {
let resolveLock;
const lock = new Promise(resolve => {
resolveLock = resolve;
});
this.queue.push(resolveLock);
if (!this.currentTaskExecuting) {
this._dequeueNext();
}
return lock;
}
_dequeueNext() {
if (this.queue.length === 0) {
this.currentTaskExecuting = false;
return;
}
this.currentTaskExecuting = true;
const resolveLock = this.queue.shift();
resolveLock();
}
unlock() {
this.currentTaskExecuting = false;
this._dequeueNext();
}
}
function delay(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
// Define the trigger function with delay parameter T in milliseconds
function trigger(T, fire) {
// Variable to keep track of the timer ID
let timeoutID = null;
// Variable to store the latest arguments
let lastArgs = null;
return function (...args) {
// Update lastArgs with the latest arguments
lastArgs = args;
// Clear the existing timer if the function is called again
if (timeoutID !== null) {
clearTimeout(timeoutID);
}
// Set a new timer that calls the `fire` function with the latest arguments after T milliseconds
timeoutID = setTimeout(() => {
fire(...lastArgs);
}, T);
};
}
prev_text = ""; // previous text, this is used to check chat changes
prev_text_already_pushed = ""; // previous text already pushed to audio, this is used to check where we should continue to play audio
prev_chatbot_index = -1;
const delay_live_text_update = trigger(3000, on_live_stream_terminate);
function on_live_stream_terminate(latest_text) {
// remove `prev_text_already_pushed` from `latest_text`
if (audio_debug) console.log("on_live_stream_terminate", latest_text);
remaining_text = latest_text.slice(prev_text_already_pushed.length);
if ((!isEmptyOrWhitespaceOnly(remaining_text)) && remaining_text.length != 0) {
prev_text_already_pushed = latest_text;
push_text_to_audio(remaining_text);
}
}
function is_continue_from_prev(text, prev_text) {
abl = 5
if (text.length < prev_text.length - abl) {
return false;
}
if (prev_text.length > 10) {
return text.startsWith(prev_text.slice(0, Math.min(prev_text.length - abl, 100)));
} else {
return text.startsWith(prev_text);
}
}
function isEmptyOrWhitespaceOnly(remaining_text) {
// Replace \n and 。 with empty strings
let textWithoutSpecifiedCharacters = remaining_text.replace(/[\n。]/g, '');
// Check if the remaining string is empty
return textWithoutSpecifiedCharacters.trim().length === 0;
}
function process_increased_text(remaining_text) {
// console.log('[is continue], remaining_text: ', remaining_text)
// remaining_text starts with \n or 。, then move these chars into prev_text_already_pushed
while (remaining_text.startsWith('\n') || remaining_text.startsWith('。')) {
prev_text_already_pushed = prev_text_already_pushed + remaining_text[0];
remaining_text = remaining_text.slice(1);
}
if (remaining_text.includes('\n') || remaining_text.includes('。')) { // determine remaining_text contain \n or 。
// new message begin!
index_of_last_sep = Math.max(remaining_text.lastIndexOf('\n'), remaining_text.lastIndexOf('。'));
// break the text into two parts
tobe_pushed = remaining_text.slice(0, index_of_last_sep + 1);
prev_text_already_pushed = prev_text_already_pushed + tobe_pushed;
// console.log('[is continue], push: ', tobe_pushed)
// console.log('[is continue], update prev_text_already_pushed: ', prev_text_already_pushed)
if (!isEmptyOrWhitespaceOnly(tobe_pushed)) {
// console.log('[is continue], remaining_text is empty')
push_text_to_audio(tobe_pushed);
}
}
}
function process_latest_text_output(text, chatbot_index) {
if (text.length == 0) {
prev_text = text;
prev_text_mask = text;
// console.log('empty text')
return;
}
if (text == prev_text) {
// console.log('[nothing changed]')
return;
}
var is_continue = is_continue_from_prev(text, prev_text_already_pushed);
if (chatbot_index == prev_chatbot_index && is_continue) {
// on_text_continue_grow
remaining_text = text.slice(prev_text_already_pushed.length);
process_increased_text(remaining_text);
delay_live_text_update(text); // in case of no \n or 。 in the text, this timer will finally commit
}
else if (chatbot_index == prev_chatbot_index && !is_continue) {
if (audio_debug) console.log('---------------------');
if (audio_debug) console.log('text twisting!');
if (audio_debug) console.log('[new message begin]', 'text', text, 'prev_text_already_pushed', prev_text_already_pushed);
if (audio_debug) console.log('---------------------');
prev_text_already_pushed = "";
delay_live_text_update(text); // in case of no \n or 。 in the text, this timer will finally commit
}
else {
// on_new_message_begin, we have to clear `prev_text_already_pushed`
if (audio_debug) console.log('---------------------');
if (audio_debug) console.log('new message begin!');
if (audio_debug) console.log('[new message begin]', 'text', text, 'prev_text_already_pushed', prev_text_already_pushed);
if (audio_debug) console.log('---------------------');
prev_text_already_pushed = "";
process_increased_text(text);
delay_live_text_update(text); // in case of no \n or 。 in the text, this timer will finally commit
}
prev_text = text;
prev_chatbot_index = chatbot_index;
}
const audio_push_lock = new FIFOLock();
async function push_text_to_audio(text) {
if (!allow_auto_read_tts_flag) {
return;
}
await audio_push_lock.lock();
var lines = text.split(/[\n。]/);
for (const audio_buf_text of lines) {
if (audio_buf_text) {
// Append '/vits' to the current URL to form the target endpoint
const url = `${window.location.href}vits`;
// Define the payload to be sent in the POST request
const payload = {
text: audio_buf_text, // Ensure 'audio_buf_text' is defined with valid data
text_language: "zh"
};
// Call the async postData function and log the response
post_text(url, payload, send_index);
send_index = send_index + 1;
if (audio_debug) console.log(send_index, audio_buf_text);
// sleep 2 seconds
if (allow_auto_read_tts_flag) {
await delay(3000);
}
}
}
audio_push_lock.unlock();
}
send_index = 0;
recv_index = 0;
to_be_processed = [];
async function UpdatePlayQueue(cnt, audio_buf_wave) {
if (cnt != recv_index) {
to_be_processed.push([cnt, audio_buf_wave]);
if (audio_debug) console.log('cache', cnt);
}
else {
if (audio_debug) console.log('processing', cnt);
recv_index = recv_index + 1;
if (audio_buf_wave) {
audioPlayer.enqueueAudio(audio_buf_wave);
}
// deal with other cached audio
while (true) {
find_any = false;
for (i = to_be_processed.length - 1; i >= 0; i--) {
if (to_be_processed[i][0] == recv_index) {
if (audio_debug) console.log('processing cached', recv_index);
if (to_be_processed[i][1]) {
audioPlayer.enqueueAudio(to_be_processed[i][1]);
}
to_be_processed.splice(i, 1); // remove the handled entry (Array.prototype.pop takes no index)
find_any = true;
recv_index = recv_index + 1;
}
}
if (!find_any) { break; }
}
}
}
function post_text(url, payload, cnt) {
if (allow_auto_read_tts_flag) {
postData(url, payload, cnt)
.then(data => {
UpdatePlayQueue(cnt, data);
return;
});
} else {
UpdatePlayQueue(cnt, null);
return;
}
}
notify_user_error = false
// Create an async function to perform the POST request
async function postData(url = '', data = {}) {
try {
// Use the Fetch API with await
const response = await fetch(url, {
method: 'POST', // Specify the request method
body: JSON.stringify(data), // Convert the JavaScript object to a JSON string
});
// Check if the response is ok (status in the range 200-299)
if (!response.ok) {
// If not OK, throw an error
console.info('There was a problem during audio generation requests:', response.status);
// if (!notify_user_error){
// notify_user_error = true;
// alert('There was a problem during audio generation requests:', response.status);
// }
return null;
}
// If OK, parse and return the JSON response
return await response.arrayBuffer();
} catch (error) {
// Log any errors that occur during the fetch operation
console.info('There was a problem during audio generation requests:', error);
// if (!notify_user_error){
// notify_user_error = true;
// alert('There was a problem during audio generation requests:', error);
// }
return null;
}
}
async function generate_menu(guiBase64String, btnName){
// assign the button and menu data
push_data_to_gradio_component(guiBase64String, "invisible_current_pop_up_plugin_arg", "string");
@@ -1690,24 +1230,140 @@ function close_current_pop_up_plugin(){
hide_all_elem();
}
// build the selection menu for advanced plugins
advanced_plugin_init_code_lib = {}
plugin_init_info_lib = {}
function register_plugin_init(key, base64String){
// console.log('x')
const stringData = atob(base64String);
let guiJsonData = JSON.parse(stringData);
if (!(key in plugin_init_info_lib))
{
plugin_init_info_lib[key] = {};
}
plugin_init_info_lib[key].info = guiJsonData.Info;
plugin_init_info_lib[key].color = guiJsonData.Color;
plugin_init_info_lib[key].elem_id = guiJsonData.ButtonElemId;
plugin_init_info_lib[key].label = guiJsonData.Label
plugin_init_info_lib[key].enable_advanced_arg = guiJsonData.AdvancedArgs;
plugin_init_info_lib[key].arg_reminder = guiJsonData.ArgsReminder;
}
function register_advanced_plugin_init_code(key, code){
advanced_plugin_init_code_lib[key] = code;
if (!(key in plugin_init_info_lib))
{
plugin_init_info_lib[key] = {};
}
plugin_init_info_lib[key].secondary_menu_code = code;
}
function run_advanced_plugin_launch_code(key){
// convert js code string to function
generate_menu(advanced_plugin_init_code_lib[key], key);
generate_menu(plugin_init_info_lib[key].secondary_menu_code, key);
}
function on_flex_button_click(key){
if (advanced_plugin_init_code_lib.hasOwnProperty(key)){
if (plugin_init_info_lib.hasOwnProperty(key) && plugin_init_info_lib[key].hasOwnProperty('secondary_menu_code')){
run_advanced_plugin_launch_code(key);
}else{
document.getElementById("old_callback_btn_for_plugin_exe").click();
}
}
async function run_dropdown_shift(dropdown){
let key = dropdown;
push_data_to_gradio_component({
value: key,
variant: plugin_init_info_lib[key].color,
info_str: plugin_init_info_lib[key].info,
__type__: 'update'
}, "elem_switchy_bt", "obj");
if (plugin_init_info_lib[key].enable_advanced_arg){
push_data_to_gradio_component({
visible: true,
label: plugin_init_info_lib[key].label,
__type__: 'update'
}, "advance_arg_input_legacy", "obj");
} else {
push_data_to_gradio_component({
visible: false,
label: plugin_init_info_lib[key].label,
__type__: 'update'
}, "advance_arg_input_legacy", "obj");
}
}
async function duplicate_in_new_window() {
// get the current page URL
var url = window.location.href;
// open it in a new tab
window.open(url, '_blank');
}
async function run_classic_plugin_via_id(plugin_elem_id){
for (key in plugin_init_info_lib){
if (plugin_init_info_lib[key].elem_id == plugin_elem_id){
// get the button name
let current_btn_name = await get_data_from_gradio_component(plugin_elem_id);
// execute it
call_plugin_via_name(current_btn_name);
return;
}
}
return;
}
async function call_plugin_via_name(current_btn_name) {
gui_args = {}
// close the menu (if it is open)
push_data_to_gradio_component({
visible: false,
__type__: 'update'
}, "plugin_arg_menu", "obj");
hide_all_elem();
// for compatibility with legacy plugins, auto-load the legacy advanced-argument input value when building the menu
let advance_arg_input_legacy = await get_data_from_gradio_component('advance_arg_input_legacy');
if (advance_arg_input_legacy.length != 0){
gui_args["advanced_arg"] = {};
gui_args["advanced_arg"].user_confirmed_value = advance_arg_input_legacy;
}
// execute the plugin
push_data_to_gradio_component(JSON.stringify(gui_args), "invisible_current_pop_up_plugin_arg_final", "string");
push_data_to_gradio_component(current_btn_name, "invisible_callback_btn_for_plugin_exe", "string");
document.getElementById("invisible_callback_btn_for_plugin_exe").click();
}
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
// multi-purpose (multiplexed) submit button
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
async function click_real_submit_btn() {
document.getElementById("elem_submit").click();
}
async function multiplex_function_begin(multiplex_sel) {
if (multiplex_sel === "常规对话") {
click_real_submit_btn();
return;
}
if (multiplex_sel === "多模型对话") {
let _align_name_in_crazy_function_py = "询问多个GPT模型";
call_plugin_via_name(_align_name_in_crazy_function_py);
return;
}
}
async function run_multiplex_shift(multiplex_sel){
let key = multiplex_sel;
if (multiplex_sel === "常规对话") {
key = "提交";
} else {
key = "提交 (" + multiplex_sel + ")";
}
push_data_to_gradio_component({
value: key,
__type__: 'update'
}, "elem_submit_visible", "obj");
}


@@ -1,3 +1,4 @@
from functools import lru_cache
from toolbox import get_conf
CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf("CODE_HIGHLIGHT", "ADD_WAIFU", "LAYOUT")
@@ -23,22 +24,32 @@ def minimize_js(common_js_path):
except:
return common_js_path
@lru_cache
def get_common_html_javascript_code():
js = "\n"
common_js_path = "themes/common.js"
common_js_path_list = [
"themes/common.js",
"themes/theme.js",
"themes/tts.js",
"themes/init.js",
"themes/welcome.js",
]
if ADD_WAIFU: # add Live2D
common_js_path_list += [
"themes/waifu_plugin/jquery.min.js",
"themes/waifu_plugin/jquery-ui.min.js",
]
for common_js_path in common_js_path_list:
if '.min.' not in common_js_path:
minimized_js_path = minimize_js(common_js_path)
for jsf in [
f"file={minimized_js_path}",
]:
else:
minimized_js_path = common_js_path
jsf = f"file={minimized_js_path}"
js += f"""<script src="{jsf}"></script>\n"""
# add Live2D
if ADD_WAIFU:
for jsf in [
"file=themes/waifu_plugin/jquery.min.js",
"file=themes/waifu_plugin/jquery-ui.min.js",
]:
js += f"""<script src="{jsf}"></script>\n"""
else:
if not ADD_WAIFU:
js += """<script>window.loadLive2D = function(){};</script>\n"""
return js


@@ -24,8 +24,8 @@
/* 小按钮 */
#basic-panel .sm {
font-family: "Microsoft YaHei UI", "Helvetica", "Microsoft YaHei", "ui-sans-serif", "sans-serif", "system-ui";
--button-small-text-weight: 600;
--button-small-text-size: 16px;
--button-small-text-weight: 400;
--button-small-text-size: 14px;
border-bottom-right-radius: 6px;
border-bottom-left-radius: 6px;
border-top-right-radius: 6px;


@@ -1,7 +1,7 @@
:root {
--chatbot-color-light: #000000;
--chatbot-color-dark: #FFFFFF;
--chatbot-background-color-light: #F3F3F3;
--chatbot-background-color-light: #FFFFFF;
--chatbot-background-color-dark: #121111;
--message-user-background-color-light: #95EC69;
--message-user-background-color-dark: #26B561;
@@ -196,7 +196,7 @@ footer {
transition: opacity 0.3s ease-in-out;
}
textarea.svelte-1pie7s6 {
background: #e7e6e6 !important;
background: #f1f1f1 !important;
width: 100% !important;
}


@@ -62,11 +62,11 @@ def adjust_theme():
button_primary_text_color="white",
button_primary_text_color_dark="white",
button_secondary_background_fill="*neutral_100",
button_secondary_background_fill_hover="*neutral_50",
button_secondary_background_fill_hover="#FEFEFE",
button_secondary_background_fill_dark="*neutral_900",
button_secondary_text_color="*neutral_800",
button_secondary_text_color_dark="white",
background_fill_primary="*neutral_50",
background_fill_primary="#FEFEFE",
background_fill_primary_dark="#1F1F1F",
block_title_text_color="*primary_500",
block_title_background_fill_dark="*primary_900",


@@ -31,18 +31,27 @@ def define_gui_advanced_plugin_class(plugins):
invisible_callback_btn_for_plugin_exe = gr.Button(r"未选定任何插件", variant="secondary", visible=False, elem_id="invisible_callback_btn_for_plugin_exe").style(size="sm")
# register the callback for the morphing plugin button
def route_switchy_bt_with_arg(request: gr.Request, input_order, *arg):
arguments = {k:v for k,v in zip(input_order, arg)}
which_plugin = arguments.pop('new_plugin_callback')
arguments = {k:v for k,v in zip(input_order, arg)} # reassemble the inputs into a kwargs dict
which_plugin = arguments.pop('new_plugin_callback') # the name of the plugin to execute
if which_plugin in [r"未选定任何插件"]: return
usr_confirmed_arg = arguments.pop('usr_confirmed_arg')
usr_confirmed_arg = arguments.pop('usr_confirmed_arg') # the plugin arguments
arg_confirm: dict = {}
usr_confirmed_arg_dict = json.loads(usr_confirmed_arg)
usr_confirmed_arg_dict = json.loads(usr_confirmed_arg) # parse the plugin arguments
for arg_name in usr_confirmed_arg_dict:
arg_confirm.update({arg_name: str(usr_confirmed_arg_dict[arg_name]['user_confirmed_value'])})
if plugins[which_plugin].get("Class", None) is not None: # 获取插件执行函数
plugin_obj = plugins[which_plugin]["Class"]
arguments['plugin_advanced_arg'] = arg_confirm
if arg_confirm.get('main_input', None) is not None:
plugin_exe = plugin_obj.execute
else:
plugin_exe = plugins[which_plugin]["Function"]
arguments['plugin_advanced_arg'] = arg_confirm # update the advanced-argument value
if arg_confirm.get('main_input', None) is not None: # update the main input value
arguments['txt'] = arg_confirm['main_input']
yield from ArgsGeneralWrapper(plugin_obj.execute)(request, *arguments.values())
# everything is ready; start execution
yield from ArgsGeneralWrapper(plugin_exe)(request, *arguments.values())
return invisible_callback_btn_for_plugin_exe, route_switchy_bt_with_arg, usr_confirmed_arg


@@ -8,8 +8,10 @@ def define_gui_floating_menu(customize_btns, functional, predefined_btns, cookie
with gr.Column(scale=10):
txt2 = gr.Textbox(show_label=False, placeholder="Input question here.",
elem_id='user_input_float', lines=8, label="输入区2").style(container=False)
txt2.submit(None, None, None, _js="""click_real_submit_btn""")
with gr.Column(scale=1, min_width=40):
submitBtn2 = gr.Button("提交", variant="primary"); submitBtn2.style(size="sm")
submitBtn2.click(None, None, None, _js="""click_real_submit_btn""")
resetBtn2 = gr.Button("重置", variant="secondary"); resetBtn2.style(size="sm")
stopBtn2 = gr.Button("停止", variant="secondary"); stopBtn2.style(size="sm")
clearBtn2 = gr.Button("清除", elem_id="elem_clear2", variant="secondary", visible=False); clearBtn2.style(size="sm")


@@ -29,6 +29,10 @@ def define_gui_toolbar(AVAIL_LLM_MODELS, LLM_MODEL, INIT_SYS_PROMPT, THEME, AVAI
checkboxes_2 = gr.CheckboxGroup(opt, value=value, label="显示/隐藏自定义菜单", elem_id='cbsc').style(container=False)
dark_mode_btn = gr.Button("切换界面明暗 ☀", variant="secondary").style(size="sm")
dark_mode_btn.click(None, None, None, _js=js_code_for_toggle_darkmode)
open_new_tab = gr.Button("打开新对话", variant="secondary").style(size="sm")
open_new_tab.click(None, None, None, _js=f"""()=>duplicate_in_new_window()""")
with gr.Tab("帮助", elem_id="interact-panel"):
gr.Markdown(help_menu_description)
return checkboxes, checkboxes_2, max_length_sl, theme_dropdown, system_prompt, file_upload_2, md_dropdown, top_p, temperature

themes/init.js Normal file

@@ -0,0 +1,133 @@
async function GptAcademicJavaScriptInit(dark, prompt, live2d, layout, tts) {
// Part 1: layout initialization
audio_fn_init();
minor_ui_adjustment();
ButtonWithDropdown_init();
// load the welcome page
const welcomeMessage = new WelcomeMessage();
welcomeMessage.begin_render();
chatbotIndicator = gradioApp().querySelector('#gpt-chatbot > div.wrap');
var chatbotObserver = new MutationObserver(() => {
chatbotContentChanged(1);
welcomeMessage.update();
});
chatbotObserver.observe(chatbotIndicator, { attributes: true, childList: true, subtree: true });
if (layout === "LEFT-RIGHT") { chatbotAutoHeight(); }
if (layout === "LEFT-RIGHT") { limit_scroll_position(); }
// Part 2: read cookies and initialize the UI
let searchString = "";
let bool_value = "";
// darkmode: dark theme
if (getCookie("js_darkmode_cookie")) {
dark = getCookie("js_darkmode_cookie")
}
dark = dark == "True";
if (document.querySelectorAll('.dark').length) {
if (!dark) {
document.querySelectorAll('.dark').forEach(el => el.classList.remove('dark'));
}
} else {
if (dark) {
document.querySelector('body').classList.add('dark');
}
}
// automatic read-aloud (TTS)
if (tts != "DISABLE"){
enable_tts = true;
if (getCookie("js_auto_read_cookie")) {
auto_read_tts = getCookie("js_auto_read_cookie")
auto_read_tts = auto_read_tts == "True";
if (auto_read_tts) {
allow_auto_read_tts_flag = true;
}
}
}
// SysPrompt: the silent system prompt
gpt_academic_gradio_saveload("load", "elem_prompt", "js_system_prompt_cookie", null, "str");
// Temperature: the LLM temperature parameter
gpt_academic_gradio_saveload("load", "elem_temperature", "js_temperature_cookie", null, "float");
// md_dropdown: LLM model selection
if (getCookie("js_md_dropdown_cookie")) {
const cached_model = getCookie("js_md_dropdown_cookie");
var model_sel = await get_gradio_component("elem_model_sel");
// determine whether the cached model is in the choices
if (model_sel.props.choices.includes(cached_model)){
// change dropdown
gpt_academic_gradio_saveload("load", "elem_model_sel", "js_md_dropdown_cookie", null, "str");
// also update the chatbot label to match
push_data_to_gradio_component({
label: '当前模型:' + getCookie("js_md_dropdown_cookie"),
__type__: 'update'
}, "gpt-chatbot", "obj")
}
}
// clearButton: auto-clear button
if (getCookie("js_clearbtn_show_cookie")) {
// have cookie
bool_value = getCookie("js_clearbtn_show_cookie")
bool_value = bool_value == "True";
searchString = "输入清除键";
if (bool_value) {
// make btns appear
let clearButton = document.getElementById("elem_clear"); clearButton.style.display = "block";
let clearButton2 = document.getElementById("elem_clear2"); clearButton2.style.display = "block";
// deal with checkboxes
let arr_with_clear_btn = update_array(
await get_data_from_gradio_component('cbs'), "输入清除键", "add"
)
push_data_to_gradio_component(arr_with_clear_btn, "cbs", "no_conversion");
} else {
// make btns disappear
let clearButton = document.getElementById("elem_clear"); clearButton.style.display = "none";
let clearButton2 = document.getElementById("elem_clear2"); clearButton2.style.display = "none";
// deal with checkboxes
let arr_without_clear_btn = update_array(
await get_data_from_gradio_component('cbs'), "输入清除键", "remove"
)
push_data_to_gradio_component(arr_without_clear_btn, "cbs", "no_conversion");
}
}
// live2d display
if (getCookie("js_live2d_show_cookie")) {
// have cookie
searchString = "添加Live2D形象";
bool_value = getCookie("js_live2d_show_cookie");
bool_value = bool_value == "True";
if (bool_value) {
loadLive2D();
let arr_with_live2d = update_array(
await get_data_from_gradio_component('cbsc'), "添加Live2D形象", "add"
)
push_data_to_gradio_component(arr_with_live2d, "cbsc", "no_conversion");
} else {
try {
$('.waifu').hide();
let arr_without_live2d = update_array(
await get_data_from_gradio_component('cbsc'), "添加Live2D形象", "remove"
)
push_data_to_gradio_component(arr_without_live2d, "cbsc", "no_conversion");
} catch (error) {
}
}
} else {
// do not have cookie
if (live2d) {
loadLive2D();
} else {
}
}
// load the theme (restore the last used one)
change_theme("", "")
}


@@ -1 +0,0 @@
// we have moved mermaid-related code to gradio-fix repository: binary-husky/gradio-fix@32150d0


@@ -1 +0,0 @@
// we have moved mermaid-related code to gradio-fix repository: binary-husky/gradio-fix@32150d0


@@ -1 +0,0 @@
// we have moved mermaid-related code to gradio-fix repository: binary-husky/gradio-fix@32150d0


@@ -1 +0,0 @@
// we have moved mermaid-related code to gradio-fix repository: binary-husky/gradio-fix@32150d0


themes/svg/arxiv.svg Normal file

@@ -0,0 +1,7 @@
<?xml version="1.0"?>
<svg width="1024" height="1024" xmlns="http://www.w3.org/2000/svg" xmlns:svg="http://www.w3.org/2000/svg" class="icon" version="1.1">
<g class="layer">
<title>Layer 1</title>
<path d="m140,188l584,0l0,164l76,0l0,-208c0,-17.7 -14.3,-32 -32,-32l-672,0c-17.7,0 -32,14.3 -32,32l0,736c0,17.7 14.3,32 32,32l544,0l0,-76l-500,0l0,-648zm274.3,68l-60.6,0c-3.4,0 -6.4,2.2 -7.6,5.4l-127.1,368c-0.3,0.8 -0.4,1.7 -0.4,2.6c0,4.4 3.6,8 8,8l55.1,0c3.4,0 6.4,-2.2 7.6,-5.4l32.7,-94.6l196.2,0l-96.2,-278.6c-1.3,-3.2 -4.3,-5.4 -7.7,-5.4zm12.4,228l-85.5,0l42.8,-123.8l42.7,123.8zm509.3,44l-136,0l0,-93c0,-4.4 -3.6,-8 -8,-8l-56,0c-4.4,0 -8,3.6 -8,8l0,93l-136,0c-13.3,0 -24,10.7 -24,24l0,176c0,13.3 10.7,24 24,24l136,0l0,152c0,4.4 3.6,8 8,8l56,0c4.4,0 8,-3.6 8,-8l0,-152l136,0c13.3,0 24,-10.7 24,-24l0,-176c0,-13.3 -10.7,-24 -24,-24zm-208,152l-88,0l0,-80l88,0l0,80zm160,0l-88,0l0,-80l88,0l0,80z" fill="#00aeff" id="svg_1"/>
</g>
</svg>


themes/svg/brain.svg Normal file

@@ -0,0 +1,8 @@
<?xml version="1.0"?>
<svg width="1024" height="1024" xmlns="http://www.w3.org/2000/svg" xmlns:svg="http://www.w3.org/2000/svg" class="icon" version="1.1">
<g class="layer">
<title>Layer 1</title>
<path d="m832,96a96,96 0 1 1 -90.72,127.52l-1.54,0.26l-3.74,0.22l-256,0l0,256l256,0c1.82,0 3.58,0.16 5.31,0.45a96,96 0 1 1 0,63.07l-1.6,0.26l-3.71,0.22l-256,0l0,256l256,0c1.82,0 3.58,0.16 5.31,0.45a96,96 0 1 1 0,63.07l-1.6,0.26l-3.71,0.22l-288,0a32,32 0 0 1 -31.78,-28.26l-0.22,-3.74l0,-288l-68.03,0a128.06,128.06 0 0 1 -117.57,95.87l-6.4,0.13a128,128 0 1 1 123.97,-160l68.03,0l0,-288a32,32 0 0 1 28.26,-31.78l3.74,-0.22l288,0c1.82,0 3.58,0.16 5.31,0.45a96,96 0 0 1 90.69,-64.45zm0,704a32,32 0 1 0 0,64a32,32 0 0 0 0,-64zm-608,-352a64,64 0 1 0 0,128a64,64 0 0 0 0,-128zm608,32a32,32 0 1 0 0,64a32,32 0 0 0 0,-64zm0,-320a32,32 0 1 0 0,64a32,32 0 0 0 0,-64z" fill="#00aeff" id="svg_1"/>
<path d="m224,384a128,128 0 1 1 0,256a128,128 0 0 1 0,-256zm0,64a64,64 0 1 0 0,128a64,64 0 0 0 0,-128z" fill="#00aeff" id="svg_2"/>
</g>
</svg>


themes/svg/conf.svg Normal file

@@ -0,0 +1,8 @@
<?xml version="1.0"?>
<svg width="1024" height="1024" xmlns="http://www.w3.org/2000/svg" xmlns:svg="http://www.w3.org/2000/svg" class="icon" version="1.1">
<g class="layer">
<title>Layer 1</title>
<path d="m373.83,194.49a44.92,44.92 0 1 1 0,-89.84a44.96,44.96 0 1 1 0,89.84m165.43,-163.51l-63.23,0s-189.11,-1.28 -204.17,99.33l0,122.19l241.47,0l0,34.95l-358.3,-0.66s-136.03,9.94 -138.51,206.74l0,57.75s-3.96,190.03 130.12,216.55l92.32,0l0,-128.09a132,132 0 0 1 132.06,-132.03l256.43,0a123.11,123.11 0 0 0 123.18,-123.12l0,-238.62c0,-2.25 -1.19,-103.13 -211.37,-115.02" fill="#00aeff" id="svg_1"/>
<path d="m647.01,853.16c24.84,0 44.96,20.01 44.96,44.85a44.96,44.96 0 1 1 -44.96,-44.85m-165.43,163.51l63.23,0s189.14,1.22 204.17,-99.36l0,-122.22l-241.47,0l0,-34.95l358.27,0.66s136.06,-9.88 138.54,-206.65l0,-57.74s3.96,-190.04 -130.12,-216.56l-92.32,0l0,128.03a132.06,132.06 0 0 1 -132.06,132.1l-256.47,0a123.11,123.11 0 0 0 -123.14,123.08l0,238.59c0,2.24 1.19,103.16 211.37,115.02" fill="#00aeff" id="svg_2"/>
</g>
</svg>


themes/svg/default.svg Normal file

@@ -0,0 +1 @@
<svg t="1721122982934" class="icon" viewBox="0 0 1024 1024" version="1.1" xmlns="http://www.w3.org/2000/svg" p-id="1823" width="200" height="200"><path d="M512 512m-512 0a512 512 0 1 0 1024 0 512 512 0 1 0-1024 0Z" fill="#04BEE8" p-id="1824"></path><path d="M324.408781 655.018925C505.290126 655.018925 651.918244 508.387706 651.918244 327.509463c0-152.138029-103.733293-280.047334-244.329811-316.853972C205.813923 52.463528 47.497011 213.017581 8.987325 415.981977 47.587706 553.880127 174.183098 655.018925 324.408781 655.018925z" fill="#FFFFFF" fill-opacity=".2" p-id="1825"></path><path d="M512 1024c282.766631 0 512-229.233369 512-512 0-31.765705-2.891385-62.853911-8.433853-93.018889C928.057169 336.0999 809.874701 285.26268 679.824375 285.26268c-269.711213 0-488.357305 218.645317-488.357305 488.357305 0 54.959576 9.084221 107.802937 25.822474 157.10377C300.626556 989.489417 402.283167 1024 512 1024z" fill="#FFFFFF" fill-opacity=".15" p-id="1826"></path><path d="M732.535958 756.566238c36.389596 0 65.889478-29.499882 65.889477-65.889478 0 36.389596 29.502983 65.889478 65.889478 65.889478-17.053747 0-65.889478 29.502983-65.889478 65.889477 0-36.386495-29.499882-65.889478-65.889477-65.889477zM159.685087 247.279334c25.686819 0 46.51022-20.8234 46.51022-46.51022 0 25.686819 20.8234 46.51022 46.510219 46.51022-12.03607 0-46.51022 20.8234-46.510219 46.510219 0-25.686819-20.8234-46.51022-46.51022-46.510219z" fill="#FFFFFF" fill-opacity=".5" p-id="1827"></path><path d="M206.195307 333.32324c8.562531 0 15.503407-6.940875 15.503406-15.503407 0 8.562531 6.940875 15.503407 15.503407 15.503407-4.012282 0-15.503407 6.940875-15.503407 15.503406 0-8.562531-6.940875-15.503407-15.503406-15.503406z" fill="#FFFFFF" fill-opacity=".3" p-id="1828"></path><path d="M282.161998 248.054504m80.617714 0l299.215746 0q80.617714 0 80.617714 80.617714l0 366.380379q0 80.617714-80.617714 80.617713l-299.215746 0q-80.617714 0-80.617714-80.617713l0-366.380379q0-80.617714 80.617714-80.617714Z" fill="#FFFFFF" p-id="1829"></path><path d="M530.216503 280.611658h113.433774v146.467658c0 10.89967-13.049992 16.498725-20.948978 8.9881l-35.767909-34.009048-35.767909 34.009048C543.266495 443.578041 530.216503 437.978986 530.216503 427.079316V280.611658z" fill="#29C8EB" p-id="1830"></path><path d="M365.105223 280.611658m14.728237 0l0 0q14.728236 0 14.728236 14.728236l0 417.041635q0 14.728236-14.728236 14.728236l0 0q-14.728236 0-14.728237-14.728236l0-417.041635q0-14.728236 14.728237-14.728236Z" fill="#29C8EB" p-id="1831"></path></svg>

themes/svg/doc.svg Normal file
@@ -0,0 +1,9 @@
<?xml version="1.0"?>
<svg width="1024" height="1024" xmlns="http://www.w3.org/2000/svg" xmlns:svg="http://www.w3.org/2000/svg" class="icon" version="1.1">
<g class="layer">
<title>Layer 1</title>
<path d="m298.67,96a32,32 0 0 1 0,64a138.67,138.67 0 0 0 -138.67,138.67l0,42.66a32,32 0 0 1 -64,0l0,-42.66a202.67,202.67 0 0 1 202.67,-202.67zm597.33,554.67a32,32 0 0 1 32,32l0,42.66a202.67,202.67 0 0 1 -202.67,202.67l-42.66,0a32,32 0 0 1 0,-64l42.66,0a138.67,138.67 0 0 0 138.67,-138.67l0,-42.66a32,32 0 0 1 32,-32zm-128,-405.34a32,32 0 0 1 0,64l-213.33,0a32,32 0 0 1 0,-64l213.33,0zm0,128a32,32 0 0 1 0,64l-128,0a32,32 0 0 1 0,-64l128,0z" fill="#00aeff" id="svg_1"/>
<path d="m780.8,96a138.67,138.67 0 0 1 138.67,138.67l0,213.33a138.67,138.67 0 0 1 -138.67,138.67l-98.13,0a32,32 0 0 1 0,-64l98.13,0a74.67,74.67 0 0 0 74.67,-74.67l0,-213.33a74.67,74.67 0 0 0 -74.67,-74.67l-247.47,0a74.67,74.67 0 0 0 -74.66,74.67l0,106.66a32,32 0 0 1 -64,0l0,-106.66a138.67,138.67 0 0 1 138.66,-138.67l247.47,0zm-487.68,500.05a32,32 0 0 1 45.23,0l64,64a32,32 0 0 1 0,45.23l-64,64a32,32 0 0 1 -45.23,-45.23l41.34,-41.38l-41.38,-41.39a32,32 0 0 1 -3.07,-41.64l3.11,-3.59z" fill="#00aeff" id="svg_2"/>
<path d="m448,437.33a138.67,138.67 0 0 1 138.67,138.67l0,213.33a138.67,138.67 0 0 1 -138.67,138.67l-213.33,0a138.67,138.67 0 0 1 -138.67,-138.67l0,-213.33a138.67,138.67 0 0 1 138.67,-138.67l213.33,0zm0,64l-213.33,0a74.67,74.67 0 0 0 -74.67,74.67l0,213.33c0,41.22 33.45,74.67 74.67,74.67l213.33,0a74.67,74.67 0 0 0 74.67,-74.67l0,-213.33a74.67,74.67 0 0 0 -74.67,-74.67z" fill="#00aeff" id="svg_3"/>
</g>
</svg>

themes/svg/img.svg Normal file
@@ -0,0 +1,16 @@
<?xml version="1.0"?>
<svg width="1024" height="1024" xmlns="http://www.w3.org/2000/svg" xmlns:svg="http://www.w3.org/2000/svg" class="icon" version="1.1">
<g class="layer">
<title>Layer 1</title>
<path d="m960,234l0,556c0,5.52 -4.48,10 -10,10l-438,0l0,-576l438,0c5.52,0 10,4.48 10,10z" fill="#F7FEFE" id="svg_1"/>
<path d="m931.93,800l-419.93,0l0,-202.03l114.19,-114.19c5.21,-5.21 13.71,-5 18.66,0.44l287.08,315.78z" fill="#3404FC" id="svg_2"/>
<path d="m512,800l256,0l-256,-275.69l0,275.69z" fill="#0097FF" id="svg_3"/>
<path d="m848,336m-48,0a48,48 0 1 0 96,0a48,48 0 1 0 -96,0z" fill="#EC1C36" id="svg_4"/>
<path d="m305.29,242.44m-32,0a32,32 0 1 0 64,0a32,32 0 1 0 -64,0z" fill="#00aeff" id="svg_5"/>
<path d="m112,320m-32,0a32,32 0 1 0 64,0a32,32 0 1 0 -64,0z" fill="#00aeff" id="svg_6"/>
<path d="m96,512m-32,0a32,32 0 1 0 64,0a32,32 0 1 0 -64,0z" fill="#00aeff" id="svg_7"/>
<path d="m305.29,781.56m-32,0a32,32 0 1 0 64,0a32,32 0 1 0 -64,0z" fill="#00aeff" id="svg_8"/>
<path d="m112,704m-32,0a32,32 0 1 0 64,0a32,32 0 1 0 -64,0z" fill="#00aeff" id="svg_9"/>
<path d="m950,816.31l-438,0l0,-32l432,0l0,-544.62l-432,0l0,-32l438,0c14.34,0 26,11.66 26,26l0,556.62c0,14.34 -11.66,26 -26,26zm-630,-192.31c8.84,0 16,-7.16 16,-16s-7.16,-16 -16,-16l-38.63,0l-96,96l-11.4,0c-7.12,-27.57 -32.21,-48 -61.97,-48c-35.29,0 -64,28.71 -64,64s28.71,64 64,64c29.77,0 54.85,-20.43 61.97,-48l24.65,0l96.01,-96l25.37,0zm-208,112c-17.67,0 -32,-14.33 -32,-32s14.33,-32 32,-32s32,14.33 32,32s-14.33,32 -32,32zm288,-304c8.84,0 16,-7.16 16,-16s-7.16,-16 -16,-16l-32,0c-8.84,0 -16,7.16 -16,16s7.16,16 16,16l32,0zm-80,80c0,8.84 7.16,16 16,16l32,0c8.84,0 16,-7.16 16,-16s-7.16,-16 -16,-16l-32,0c-8.84,0 -16,7.16 -16,16zm32,96c0,8.84 7.16,16 16,16l32,0c8.84,0 16,-7.16 16,-16s-7.16,-16 -16,-16l-32,0c-8.84,0 -16,7.16 -16,16zm-240,-224c29.77,0 54.85,-20.43 61.97,-48l11.4,0l96,96l38.63,0c8.84,0 16,-7.16 16,-16s-7.16,-16 -16,-16l-25.37,0l-96,-96l-24.65,0c-7.12,-27.57 -32.21,-48 -61.97,-48c-35.29,0 -64,28.71 -64,64s28.7,64 63.99,64zm0,-96c17.67,0 32,14.33 32,32s-14.33,32 -32,32s-32,-14.33 -32,-32s14.33,-32 32,-32zm-16,288c29.77,0 54.85,-20.43 61.97,-48l133.43,0c6.96,0 12.59,-7.16 12.59,-16s-5.64,-16 -12.59,-16l-133.43,0c-7.12,-27.57 -32.21,-48 -61.97,-48c-35.29,0 -64,28.71 -64,64s28.71,64 64,64zm0,-96c17.67,0 32,14.33 32,32s-14.33,32 -32,32s-32,-14.33 -32,-32s14.33,-32 32,-32zm384,-416c-8.84,0 -16,7.16 -16,16l0,224l-73.37,0l-29.8,-29.79a63.62,63.62 0 0 0 8.46,-31.77c0,-35.29 -28.71,-64 -64,-64s-64,28.71 -64,64s28.71,64 64,64c12.16,0 23.53,-3.41 33.21,-9.31l38.87,38.87l86.63,0l0,64l-16,0c-8.84,0 -16,7.16 -16,16s7.16,16 16,16l16,0l0,64l-48,0c-8.84,0 -16,7.16 -16,16s7.16,16 16,16l48,0l0,64l-16,0c-8.84,0 -16,7.16 -16,16s7.16,16 16,16l16,0l0,64l-86.63,0l-38.87,38.87a63.61,63.61 0 0 0 -33.21,-9.31c-35.29,0 -64,28.71 -64,64s28.71,64 64,64s64,-28.71 64,-64c0,-11.55 -3.08,-22.4 -8.46,-31.77l29.8,-29.79l73.37,0l0,224c0,8.84 7.16,16 16,16s16,-7.16 16,-16l0,-864c0,-8.84 -7.16,-16 -16,-16zm-143.57,185.81c-2.62,11.14 -11.07,20.03 -21.95,23.29c-2.91,0.87 -6,1.34 -9.19,1.34c-17.68,0 -32,-14.33 -32,-32c0,-17.68 14.32,-32 32,-32c17.67,0 32,14.32 32,32c0,2.54 -0.29,5 -0.86,7.37zm-31.14,563.75c-17.68,0 -32,-14.32 -32,-32c0,-17.67 14.32,-32 32,-32c3.19,0 6.28,0.47 9.19,1.34c10.88,3.26 19.33,12.15 21.95,23.29c0.57,2.37 0.86,4.83 0.86,7.37c0,17.68 -14.33,32 -32,32z" fill="#00aeff" id="svg_10"/>
</g>
</svg>

themes/svg/mm.svg Normal file
@@ -0,0 +1,10 @@
<?xml version="1.0"?>
<svg width="1066" height="1024" xmlns="http://www.w3.org/2000/svg" xmlns:svg="http://www.w3.org/2000/svg" class="icon" version="1.1">
<g class="layer">
<title>Layer 1</title>
<path d="m443.9,511.45a96,96 0 1 0 192,0a96,96 0 0 0 -192,0zm96,32a32,32 0 1 1 0,-64a32,32 0 0 1 0,64z" fill="#00aeff" id="svg_1"/>
<path d="m74.67,512c0,80.17 65.87,142.08 147.54,181.67c84.26,40.84 198.06,65.07 321.7,65.07c123.74,0 237.49,-24.23 321.75,-65.07c81.71,-39.63 147.59,-101.54 147.59,-181.71c0,-80.22 -65.88,-142.08 -147.59,-181.68c-84.26,-40.87 -198.05,-65.11 -321.7,-65.11c-123.69,0 -237.49,24.24 -321.71,65.11c-81.71,39.6 -147.58,101.51 -147.58,181.68l0,0.04zm469.29,172.07c-114.9,0 -217.09,-22.61 -289.15,-57.6c-74.67,-36.18 -105.48,-79.01 -105.48,-114.47c0,-35.46 30.85,-78.29 105.48,-114.52c72.06,-34.94 174.25,-57.6 289.15,-57.6c114.9,0 217.09,22.66 289.15,57.6c74.67,36.23 105.47,79.06 105.47,114.52c0,35.5 -30.85,78.34 -105.47,114.52c-72.11,34.98 -174.25,57.6 -289.15,57.6l0,-0.05z" fill="#00aeff" id="svg_2"/>
<path d="m300.2,705.19c-5.97,82.74 15.7,130.86 46.42,148.57c30.72,17.75 83.25,12.46 151.9,-34.09c66.3,-44.93 137.04,-122.07 194.47,-221.57c57.47,-99.5 88.92,-199.34 94.72,-279.21c5.98,-82.77 -15.74,-130.86 -46.46,-148.61c-30.72,-17.75 -83.2,-12.46 -151.9,34.09c-66.3,44.93 -137.04,122.12 -194.47,221.61c-57.43,99.5 -88.92,199.3 -94.72,279.21l0.04,0zm-74.49,-5.37c6.78,-93.44 42.66,-204.08 104.53,-311.17c61.82,-107.09 139.69,-193.54 217.17,-246.1c75.18,-50.94 161.71,-77.01 231.17,-36.95c69.46,40.11 90.11,128.09 83.59,218.67c-6.75,93.39 -42.67,204.07 -104.54,311.16c-61.82,107.1 -139.69,193.5 -217.17,246.06c-75.18,50.95 -161.71,77.06 -231.17,36.95c-69.46,-40.1 -90.11,-128.08 -83.58,-218.62z" fill="#00aeff" id="svg_3"/>
<path d="m300.2,318.85c5.76,79.87 37.21,179.71 94.68,279.21c57.43,99.5 128.17,176.64 194.43,221.61c68.7,46.51 121.18,51.84 151.9,34.09c30.72,-17.75 52.43,-65.88 46.46,-148.61c-5.76,-79.87 -37.25,-179.71 -94.72,-279.21c-57.43,-99.5 -128.13,-176.64 -194.43,-221.61c-68.7,-46.51 -121.18,-51.8 -151.9,-34.09c-30.72,17.75 -52.43,65.88 -46.42,148.61zm-74.49,5.37c-6.53,-90.53 14.12,-178.51 83.58,-218.62c69.46,-40.11 155.99,-13.99 231.13,36.95c77.52,52.52 155.39,138.96 217.21,246.06c61.87,107.09 97.75,217.77 104.54,311.17c6.52,90.53 -14.17,178.51 -83.63,218.62c-69.42,40.11 -155.95,13.99 -231.08,-36.95c-77.53,-52.52 -155.44,-138.96 -217.26,-246.06c-61.83,-107.09 -97.71,-217.77 -104.49,-311.17z" fill="#00aeff" id="svg_4"/>
</g>
</svg>

themes/svg/polish.svg Normal file
@@ -0,0 +1,7 @@
<?xml version="1.0"?>
<svg width="1024" height="1024" xmlns="http://www.w3.org/2000/svg" xmlns:svg="http://www.w3.org/2000/svg" class="icon" version="1.1">
<g class="layer">
<title>Layer 1</title>
<path d="m671.27,337.36l2.05,1.92a174.18,174.18 0 0 1 0.19,244.18l-383.3,390.22a168.04,168.04 0 0 1 -237.73,1.79l-2.04,-2.05a174.18,174.18 0 0 1 -0.26,-244.12l383.3,-390.15a168.04,168.04 0 0 1 237.73,-1.79l0.06,0zm-165.09,73.2l-383.31,390.22a72.31,72.31 0 0 0 0.07,101.36l0.77,0.83a66.29,66.29 0 0 0 93.87,-0.7l383.3,-390.22a72.31,72.31 0 0 0 0,-101.42l-0.83,-0.77a66.36,66.36 0 0 0 -93.87,0.7zm282.32,209.7a47.35,47.35 0 0 1 0.64,0.45l122.48,72.05c23.04,13.57 30.91,43.07 17.79,66.29a47.35,47.35 0 0 1 -64.63,17.92l-0.58,-0.45l-122.48,-72.05a48.95,48.95 0 0 1 -17.78,-66.29a47.35,47.35 0 0 1 64.63,-17.92l-0.07,0zm187.43,-191.84a48.38,48.38 0 0 1 0,96.69l-140.52,0a48.38,48.38 0 0 1 0,-96.69l140.52,0zm-49.27,-292.63l0.64,0.64a48.82,48.82 0 0 1 0,68.4l-100.66,102.58a46.97,46.97 0 0 1 -66.42,0.64l-0.64,-0.64a48.82,48.82 0 0 1 0,-68.41l100.66,-102.57a46.97,46.97 0 0 1 66.42,-0.64zm-632.55,-35.64a46.97,46.97 0 0 1 0.58,0.63l100.65,102.52a48.82,48.82 0 0 1 0,68.47a46.97,46.97 0 0 1 -66.42,0.64l-0.64,-0.64l-100.65,-102.58a48.82,48.82 0 0 1 0,-68.47a46.97,46.97 0 0 1 66.48,-0.57zm284.57,-100.15c26.23,0 47.48,21.24 47.48,47.48l0,146.92a47.42,47.42 0 1 1 -94.9,0l0,-146.92c0,-26.24 21.31,-47.48 47.42,-47.48z" fill="#00aeff" id="svg_1"/>
</g>
</svg>

themes/svg/tts.svg Normal file
@@ -0,0 +1,7 @@
<?xml version="1.0"?>
<svg width="1092" height="1024" xmlns="http://www.w3.org/2000/svg" xmlns:svg="http://www.w3.org/2000/svg" class="icon" version="1.1">
<g class="layer">
<title>Layer 1</title>
<path d="m1010.55,424.93c0,86.01 -65.36,155.88 -147.04,166.63l-43.63,0c-81.68,-10.75 -147.04,-80.62 -147.04,-166.63l-92.57,0c0,123.63 92.6,231.11 212.41,252.62l0,107.52l87.17,0l0,-107.52c119.81,-21.51 212.42,-123.63 212.42,-252.59l-81.72,0l0,-0.03zm-76.25,-231.16c0,-53.76 -43.56,-91.37 -92.57,-91.37a91.2,91.2 0 0 0 -92.61,91.37l0,91.38l190.64,0l0,-91.38l-5.46,0zm-190.64,231.16c0,53.76 43.59,91.37 92.61,91.37a91.2,91.2 0 0 0 92.6,-91.37l0,-91.38l-185.21,0l0,91.38zm-279.45,-274.23l-139.94,-140.46l-6.83,-6.83l-3.41,-3.41l-3.42,3.41l-6.82,6.86l-20.48,20.55l-3.42,3.42l3.42,6.86l13.65,13.68l75.09,75.37l-153.6,0c-122.88,6.83 -218.45,109.57 -218.45,229.44l0,10.28l51.2,0l0,-10.24c0,-92.5 75.09,-171.28 167.25,-178.11l153.6,0l-75.09,75.33l-10.24,10.28l-3.41,6.82l-3.42,3.45l3.42,3.41l27.3,27.41l3.42,3.42l3.41,-3.42l30.72,-30.82l116.05,-116.43l3.42,-3.41l-3.42,-6.86zm-406.18,383.55l0,130.16l64.85,0l0,-65.06l129.71,0l0,359.59l-64.86,0l0,65.06l194.56,0l0,-65.06l-64.85,0l0,-359.59l129.71,0l0,65.06l64.85,0l0,-130.16l-453.97,0z" fill="#00aeff" id="svg_1"/>
</g>
</svg>

themes/svg/vt.svg Normal file
@@ -0,0 +1,8 @@
<?xml version="1.0"?>
<svg width="1024" height="1024" xmlns="http://www.w3.org/2000/svg" xmlns:svg="http://www.w3.org/2000/svg" class="icon" version="1.1">
<g class="layer">
<title>Layer 1</title>
<path d="m256,64a192,192 0 0 0 -192,192l0,512a192,192 0 0 0 192,192l512,0a192,192 0 0 0 192,-192l0,-512a192,192 0 0 0 -192,-192l-512,0zm92.16,416l-158.08,-158.08l67.84,-67.9l169.41,169.34a80,80 0 0 1 0,113.15l-169.41,169.35l-67.84,-67.78l158.08,-158.08zm131.84,160l288,0l0,96l-288,0l0,-96z" fill="#00aeff" id="svg_1"/>
<path d="m190.08,638.02l158.08,-158.08l-158.08,-158.08l67.84,-67.84l169.41,169.34a80,80 0 0 1 0,113.15l-169.41,169.35l-67.84,-67.78l0,-0.06zm577.92,1.92l-288,0l0,96l288,0l0,-95.94l0,-0.06z" fill="#2951E0" id="svg_2" opacity="0.2"/>
</g>
</svg>

themes/theme.js Normal file
@@ -0,0 +1,41 @@
async function try_load_previous_theme(){
if (getCookie("js_theme_selection_cookie")) {
        let theme_selection = getCookie("js_theme_selection_cookie");
let css = localStorage.getItem('theme-' + theme_selection);
if (css) {
change_theme(theme_selection, css);
}
}
}
async function change_theme(theme_selection, css) {
if (theme_selection.length==0) {
try_load_previous_theme();
return;
}
    // Remove the stylesheet injected by Gradio, plus any theme CSS loaded earlier
    var existingStyles = document.querySelectorAll("body > gradio-app > div > style");
    for (var i = 0; i < existingStyles.length; i++) {
        var style = existingStyles[i];
        style.parentNode.removeChild(style);
    }
    existingStyles = document.querySelectorAll("style[data-loaded-css]");
    for (var i = 0; i < existingStyles.length; i++) {
        var style = existingStyles[i];
        style.parentNode.removeChild(style);
    }
    // Persist the choice (cookie) and the CSS itself (localStorage)
    setCookie("js_theme_selection_cookie", theme_selection, 3);
    localStorage.setItem('theme-' + theme_selection, css);
    // Inject the new style element, tagged so it can be found and removed later
    var styleElement = document.createElement('style');
styleElement.setAttribute('data-loaded-css', 'placeholder');
styleElement.innerHTML = css;
document.body.appendChild(styleElement);
}
// // Record this theme switch
// async function change_theme_prepare(theme_selection, secret_css) {
// setCookie("js_theme_selection_cookie", theme_selection, 3);
// }
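A minimal usage sketch for the pair above, assuming the getCookie/setCookie helpers from common.js are in scope (the theme name and CSS string are illustrative):

try_load_previous_theme();                         // on page load, restore the last selection
const demo_css = "body { background: #fafafa; }";  // stand-in for server-delivered theme CSS
change_theme("demo-theme", demo_css);              // swap styles and cache the pair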

@@ -71,29 +71,10 @@ def from_cookie_str(c):
"""
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Part 3
Embedded JavaScript code
Embedded JavaScript code; this part is gradually being moved into common.js
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
"""
js_code_for_css_changing = """(css) => {
var existingStyles = document.querySelectorAll("body > gradio-app > div > style")
for (var i = 0; i < existingStyles.length; i++) {
var style = existingStyles[i];
style.parentNode.removeChild(style);
}
var existingStyles = document.querySelectorAll("style[data-loaded-css]");
for (var i = 0; i < existingStyles.length; i++) {
var style = existingStyles[i];
style.parentNode.removeChild(style);
}
var styleElement = document.createElement('style');
styleElement.setAttribute('data-loaded-css', 'placeholder');
styleElement.innerHTML = css;
document.body.appendChild(styleElement);
}
"""
js_code_for_toggle_darkmode = """() => {
if (document.querySelectorAll('.dark').length) {
setCookie("js_darkmode_cookie", "False", 365);
@@ -114,6 +95,8 @@ js_code_for_persistent_cookie_init = """(web_cookie_cache, cookie) => {
# See themes/common.js for details
js_code_reset = """
(a,b,c)=>{
let stopButton = document.getElementById("elem_stop");
stopButton.click();
return reset_conversation(a,b);
}
"""

themes/tts.js Normal file
@@ -0,0 +1,351 @@
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
// TTS speech generation functions
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
audio_debug = false; // global switch for verbose TTS debug logging
class AudioPlayer {
constructor() {
this.audioCtx = new (window.AudioContext || window.webkitAudioContext)();
this.queue = [];
this.isPlaying = false;
        this.currentSource = null; // holds the currently playing source
}
    // Convert a Base64-encoded string to an ArrayBuffer
base64ToArrayBuffer(base64) {
const binaryString = window.atob(base64);
const len = binaryString.length;
const bytes = new Uint8Array(len);
for (let i = 0; i < len; i++) {
bytes[i] = binaryString.charCodeAt(i);
}
return bytes.buffer;
}
    // Check the playback queue and play the next clip if idle
checkQueue() {
if (!this.isPlaying && this.queue.length > 0) {
this.isPlaying = true;
const nextAudio = this.queue.shift();
this.play_wave(nextAudio);
}
}
    // Add an audio buffer to the playback queue (only while auto-read is on)
enqueueAudio(audio_buf_wave) {
if (allow_auto_read_tts_flag) {
this.queue.push(audio_buf_wave);
this.checkQueue();
}
}
    // Decode and play one audio buffer
async play_wave(encodedAudio) {
//const audioData = this.base64ToArrayBuffer(encodedAudio);
const audioData = encodedAudio;
try {
const buffer = await this.audioCtx.decodeAudioData(audioData);
const source = this.audioCtx.createBufferSource();
source.buffer = buffer;
source.connect(this.audioCtx.destination);
source.onended = () => {
if (allow_auto_read_tts_flag) {
this.isPlaying = false;
                this.currentSource = null; // clear the current source when playback ends
this.checkQueue();
}
};
            this.currentSource = source; // remember the currently playing source
source.start();
} catch (e) {
console.log("Audio error!", e);
this.isPlaying = false;
            this.currentSource = null; // clear the current source on error as well
this.checkQueue();
}
}
    // Immediately stop audio playback
stop() {
        if (this.currentSource) {
            this.queue = []; // clear the queue
            this.currentSource.stop(); // stop the current source
            this.currentSource = null; // drop the reference to it
            this.isPlaying = false; // update the playing state
            // Closing the AudioContext would prevent any further playback, so only the current source is stopped
            // this.audioCtx.close(); // optional: close the audio context if it is no longer needed
}
}
}
const audioPlayer = new AudioPlayer();
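A small sketch of driving the player directly; allow_auto_read_tts_flag is the auto-read toggle defined elsewhere in the theme code, and the audio URL is hypothetical:

// Queue one clip for playback (any WAV/MP3 the browser can decode).
async function demo_play() {
    allow_auto_read_tts_flag = true;                 // normally set by the auto-read UI toggle
    const resp = await fetch('/demo.wav');           // hypothetical audio source
    audioPlayer.enqueueAudio(await resp.arrayBuffer());
}
// audioPlayer.stop() cuts the current clip and clears the queue.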
class FIFOLock {
constructor() {
this.queue = [];
this.currentTaskExecuting = false;
}
lock() {
let resolveLock;
const lock = new Promise(resolve => {
resolveLock = resolve;
});
this.queue.push(resolveLock);
if (!this.currentTaskExecuting) {
this._dequeueNext();
}
return lock;
}
_dequeueNext() {
if (this.queue.length === 0) {
this.currentTaskExecuting = false;
return;
}
this.currentTaskExecuting = true;
const resolveLock = this.queue.shift();
resolveLock();
}
unlock() {
this.currentTaskExecuting = false;
this._dequeueNext();
}
}
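FIFOLock grants waiting callers strictly in arrival order, which keeps async TTS requests from interleaving. A usage sketch (the task body is illustrative; delay is defined just below):

const demo_lock = new FIFOLock();
async function serialized_task(id) {
    await demo_lock.lock();    // resolves only after all earlier callers have unlocked
    try {
        console.log("task", id, "running");
        await delay(100);      // stand-in for real async work
    } finally {
        demo_lock.unlock();    // hand the lock to the next queued caller
    }
}
serialized_task(1); serialized_task(2); serialized_task(3); // bodies run strictly as 1, 2, 3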
function delay(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
// Define the trigger function with delay parameter T in milliseconds
function trigger(T, fire) {
// Variable to keep track of the timer ID
let timeoutID = null;
// Variable to store the latest arguments
let lastArgs = null;
return function (...args) {
// Update lastArgs with the latest arguments
lastArgs = args;
// Clear the existing timer if the function is called again
if (timeoutID !== null) {
clearTimeout(timeoutID);
}
// Set a new timer that calls the `fire` function with the latest arguments after T milliseconds
timeoutID = setTimeout(() => {
fire(...lastArgs);
}, T);
};
}
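trigger() is a trailing-edge debounce: calls arriving within T milliseconds of each other collapse into a single invocation of fire with the latest arguments. A quick sketch:

const commit_latest = trigger(500, (text) => console.log("committed:", text));
commit_latest("he");
commit_latest("hel");
commit_latest("hello");  // after 500 ms of silence, logs: committed: hello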
prev_text = ""; // previous text, this is used to check chat changes
prev_text_already_pushed = ""; // previous text already pushed to audio, this is used to check where we should continue to play audio
prev_chatbot_index = -1;
const delay_live_text_update = trigger(3000, on_live_stream_terminate);
function on_live_stream_terminate(latest_text) {
// remove `prev_text_already_pushed` from `latest_text`
if (audio_debug) console.log("on_live_stream_terminate", latest_text);
remaining_text = latest_text.slice(prev_text_already_pushed.length);
if ((!isEmptyOrWhitespaceOnly(remaining_text)) && remaining_text.length != 0) {
prev_text_already_pushed = latest_text;
push_text_to_audio(remaining_text);
}
}
function is_continue_from_prev(text, prev_text) {
    const abl = 5; // tolerance: the new text may be up to 5 chars shorter and still count as a continuation
    if (text.length < prev_text.length - abl) {
        return false;
    }
    if (prev_text.length > 10) {
        return text.startsWith(prev_text.slice(0, Math.min(prev_text.length - abl, 100)));
    } else {
        return text.startsWith(prev_text);
    }
}
function isEmptyOrWhitespaceOnly(remaining_text) {
// Replace \n and 。 with empty strings
let textWithoutSpecifiedCharacters = remaining_text.replace(/[\n。]/g, '');
// Check if the remaining string is empty
return textWithoutSpecifiedCharacters.trim().length === 0;
}
function process_increased_text(remaining_text) {
// console.log('[is continue], remaining_text: ', remaining_text)
    // if remaining_text starts with \n or 。, move those chars into prev_text_already_pushed
while (remaining_text.startsWith('\n') || remaining_text.startsWith('。')) {
prev_text_already_pushed = prev_text_already_pushed + remaining_text[0];
remaining_text = remaining_text.slice(1);
}
    if (remaining_text.includes('\n') || remaining_text.includes('。')) { // does remaining_text contain \n or 。?
        // a complete sentence (up to the last separator) is ready to be spoken
        const index_of_last_sep = Math.max(remaining_text.lastIndexOf('\n'), remaining_text.lastIndexOf('。'));
        // break the text into two parts
        const tobe_pushed = remaining_text.slice(0, index_of_last_sep + 1);
        prev_text_already_pushed = prev_text_already_pushed + tobe_pushed;
// console.log('[is continue], push: ', tobe_pushed)
// console.log('[is continue], update prev_text_already_pushed: ', prev_text_already_pushed)
if (!isEmptyOrWhitespaceOnly(tobe_pushed)) {
// console.log('[is continue], remaining_text is empty')
push_text_to_audio(tobe_pushed);
}
}
}
function process_latest_text_output(text, chatbot_index) {
if (text.length == 0) {
prev_text = text;
prev_text_mask = text;
// console.log('empty text')
return;
}
if (text == prev_text) {
// console.log('[nothing changed]')
return;
}
var is_continue = is_continue_from_prev(text, prev_text_already_pushed);
if (chatbot_index == prev_chatbot_index && is_continue) {
// on_text_continue_grow
remaining_text = text.slice(prev_text_already_pushed.length);
process_increased_text(remaining_text);
delay_live_text_update(text); // in case of no \n or 。 in the text, this timer will finally commit
}
else if (chatbot_index == prev_chatbot_index && !is_continue) {
if (audio_debug) console.log('---------------------');
if (audio_debug) console.log('text twisting!');
if (audio_debug) console.log('[new message begin]', 'text', text, 'prev_text_already_pushed', prev_text_already_pushed);
if (audio_debug) console.log('---------------------');
prev_text_already_pushed = "";
delay_live_text_update(text); // in case of no \n or 。 in the text, this timer will finally commit
}
else {
// on_new_message_begin, we have to clear `prev_text_already_pushed`
if (audio_debug) console.log('---------------------');
if (audio_debug) console.log('new message begin!');
if (audio_debug) console.log('[new message begin]', 'text', text, 'prev_text_already_pushed', prev_text_already_pushed);
if (audio_debug) console.log('---------------------');
prev_text_already_pushed = "";
process_increased_text(text);
delay_live_text_update(text); // in case of no \n or 。 in the text, this timer will finally commit
}
prev_text = text;
prev_chatbot_index = chatbot_index;
}
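Putting the three branches together, an illustrative trace (auto-read enabled, chatbot_index fixed at 0):

// process_latest_text_output("你好。", 0)           -> new message: "你好。" is pushed to audio
// process_latest_text_output("你好。今天", 0)       -> continuation, no separator yet: only the commit timer is armed
// process_latest_text_output("你好。今天下雨。", 0)  -> continuation: "今天下雨。" is pushed
// If the stream stalls without a trailing 。 or \n, delay_live_text_update commits the remainder after 3 seconds.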
const audio_push_lock = new FIFOLock();
async function push_text_to_audio(text) {
if (!allow_auto_read_tts_flag) {
return;
}
await audio_push_lock.lock();
var lines = text.split(/[\n。]/);
for (const audio_buf_text of lines) {
if (audio_buf_text) {
            // Append 'vits' to the current page URL to form the TTS endpoint
            const url = `${window.location.href}vits`;
// Define the payload to be sent in the POST request
const payload = {
text: audio_buf_text, // Ensure 'audio_buf_text' is defined with valid data
text_language: "zh"
};
// Call the async postData function and log the response
post_text(url, payload, send_index);
send_index = send_index + 1;
if (audio_debug) console.log(send_index, audio_buf_text);
            // pace requests: wait 3 seconds between chunks
if (allow_auto_read_tts_flag) {
await delay(3000);
}
}
}
audio_push_lock.unlock();
}
send_index = 0;
recv_index = 0;
to_be_processed = [];
async function UpdatePlayQueue(cnt, audio_buf_wave) {
if (cnt != recv_index) {
to_be_processed.push([cnt, audio_buf_wave]);
if (audio_debug) console.log('cache', cnt);
}
else {
if (audio_debug) console.log('processing', cnt);
recv_index = recv_index + 1;
if (audio_buf_wave) {
audioPlayer.enqueueAudio(audio_buf_wave);
}
// deal with other cached audio
        while (true) {
            let find_any = false;
            for (let i = to_be_processed.length - 1; i >= 0; i--) {
if (to_be_processed[i][0] == recv_index) {
if (audio_debug) console.log('processing cached', recv_index);
if (to_be_processed[i][1]) {
audioPlayer.enqueueAudio(to_be_processed[i][1]);
}
                    to_be_processed.splice(i, 1); // Array.prototype.pop takes no index; splice removes element i
find_any = true;
recv_index = recv_index + 1;
}
}
if (!find_any) { break; }
}
}
}
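UpdatePlayQueue restores send order when TTS responses return out of order. An illustrative trace starting from recv_index = 0 (wav0 and wav1 stand for decoded audio ArrayBuffers):

// UpdatePlayQueue(1, wav1)  -> out of order: cached until index 0 arrives
// UpdatePlayQueue(0, wav0)  -> enqueues wav0, then drains the cache and enqueues wav1
// UpdatePlayQueue(2, null)  -> failed request: the index is consumed, nothing is enqueued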
function post_text(url, payload, cnt) {
if (allow_auto_read_tts_flag) {
postData(url, payload, cnt)
.then(data => {
UpdatePlayQueue(cnt, data);
return;
});
} else {
UpdatePlayQueue(cnt, null);
return;
}
}
notify_user_error = false; // reserved flag: whether the user has already been alerted of a TTS error
// Create an async function to perform the POST request
async function postData(url = '', data = {}) {
try {
// Use the Fetch API with await
const response = await fetch(url, {
method: 'POST', // Specify the request method
body: JSON.stringify(data), // Convert the JavaScript object to a JSON string
});
// Check if the response is ok (status in the range 200-299)
if (!response.ok) {
// If not OK, throw an error
console.info('There was a problem during audio generation requests:', response.status);
// if (!notify_user_error){
// notify_user_error = true;
// alert('There was a problem during audio generation requests:', response.status);
// }
return null;
}
// If OK, parse and return the JSON response
return await response.arrayBuffer();
} catch (error) {
// Log any errors that occur during the fetch operation
console.info('There was a problem during audio generation requests:', error);
// if (!notify_user_error){
// notify_user_error = true;
// alert('There was a problem during audio generation requests:', error);
// }
return null;
}
}

themes/welcome.js Normal file
@@ -0,0 +1,271 @@
class WelcomeMessage {
constructor() {
this.static_welcome_message = [
{
title: "环境配置教程",
content: "配置模型和插件,释放大语言模型的学术应用潜力。",
svg: "file=themes/svg/conf.svg",
url: "https://github.com/binary-husky/gpt_academic/wiki/%E9%A1%B9%E7%9B%AE%E9%85%8D%E7%BD%AE%E8%AF%B4%E6%98%8E",
},
{
title: "Arxiv论文一键翻译",
content: "无缝切换学术阅读语言,最优英文转中文的学术论文阅读体验。",
svg: "file=themes/svg/arxiv.svg",
url: "https://www.bilibili.com/video/BV1dz4y1v77A/",
},
{
title: "多模态模型",
content: "试试将截屏直接粘贴到输入框中,随后使用多模态模型提问。",
svg: "file=themes/svg/mm.svg",
url: "https://github.com/binary-husky/gpt_academic",
},
{
title: "文档与源码批处理",
content: "您可以将任意文件拖入「此处」,随后调用对应插件功能。",
svg: "file=themes/svg/doc.svg",
url: "https://github.com/binary-husky/gpt_academic",
},
{
title: "图表与脑图绘制",
content: "试试输入一段语料,然后点击「总结绘制脑图」。",
svg: "file=themes/svg/brain.svg",
url: "https://www.bilibili.com/video/BV18c41147H9/",
},
{
title: "虚空终端",
content: "点击右侧插件区的「虚空终端」插件,然后直接输入您的想法。",
svg: "file=themes/svg/vt.svg",
url: "https://github.com/binary-husky/gpt_academic",
},
{
title: "DALLE图像生成",
content: "接入DALLE生成插画或者项目Logo辅助头脑风暴并激发灵感。",
svg: "file=themes/svg/img.svg",
url: "https://github.com/binary-husky/gpt_academic",
},
{
title: "TTS语音克隆",
content: "借助SoVits以您喜爱的角色的声音回答问题。",
svg: "file=themes/svg/tts.svg",
url: "https://www.bilibili.com/video/BV1Rp421S7tF/",
},
{
title: "实时语音对话",
content: "配置实时语音对话功能,无须任何激活词,我将一直倾听。",
svg: "file=themes/svg/default.svg",
url: "https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md",
},
{
title: "Latex全文润色",
content: "上传需要润色的latex论文让大语言模型帮您改论文。",
svg: "file=themes/svg/polish.svg",
url: "https://github.com/binary-husky/gpt_academic",
}
];
this.visible = false;
this.max_welcome_card_num = 6;
this.card_array = [];
this.static_welcome_message_previous = [];
        this.refresh_time_interval = 15 * 1000; // rotate the cards every 15 seconds
}
begin_render() {
this.update();
}
    async startRefreshCards() {
        await new Promise(r => setTimeout(r, this.refresh_time_interval));
        await this.refresh_cards();
        if (this.visible){
            setTimeout(() => {
                this.startRefreshCards.call(this);
            }, 1);
        }
    }
    async refresh_cards() {
if (!this.visible){
return;
}
// re-rank this.static_welcome_message randomly
this.static_welcome_message_temp = this.shuffle(this.static_welcome_message);
        // find items that are in static_welcome_message_temp but not in static_welcome_message_previous
const not_shown_previously = this.static_welcome_message_temp.filter(item => !this.static_welcome_message_previous.includes(item));
const already_shown_previously = this.static_welcome_message_temp.filter(item => this.static_welcome_message_previous.includes(item));
// combine two lists
this.static_welcome_message_previous = not_shown_previously.concat(already_shown_previously);
(async () => {
            // indexed for loop so the awaits run one card at a time
for (let index = 0; index < this.card_array.length; index++) {
if (index >= this.max_welcome_card_num) {
break;
}
const card = this.card_array[index];
card.classList.remove('hide');
card.classList.remove('show');
                // wait for the hide transition to end
card.addEventListener('transitionend', () => {
                    // update the card contents
const message = this.static_welcome_message_previous[index];
const title = card.getElementsByClassName('welcome-card-title')[0];
const content = card.getElementsByClassName('welcome-content-c')[0];
const svg = card.getElementsByClassName('welcome-svg')[0];
const text = card.getElementsByClassName('welcome-title-text')[0];
svg.src = message.svg;
text.textContent = message.title;
text.href = message.url;
content.textContent = message.content;
card.classList.remove('hide');
                    // wait for the show transition to end
card.addEventListener('transitionend', () => {
card.classList.remove('show');
}, { once: true });
card.classList.add('show');
}, { once: true });
card.classList.add('hide');
                // stagger the next card by 200 ms
                await new Promise(r => setTimeout(r, 200));
}
})();
}
shuffle(array) {
var currentIndex = array.length, randomIndex;
// While there remain elements to shuffle...
while (currentIndex != 0) {
// Pick a remaining element...
randomIndex = Math.floor(Math.random() * currentIndex);
currentIndex--;
// And swap it with the current element.
[array[currentIndex], array[randomIndex]] = [
array[randomIndex], array[currentIndex]];
}
return array;
}
async update() {
        // console.log('update');
var page_width = document.documentElement.clientWidth;
const width_to_hide_welcome = 1200;
if (!await this.isChatbotEmpty() || page_width < width_to_hide_welcome) {
if (this.visible) {
this.removeWelcome();
this.visible = false;
this.card_array = [];
this.static_welcome_message_previous = [];
}
return;
}
if (this.visible){
return;
}
// console.log("welcome");
this.showWelcome();
this.visible = true;
        this.startRefreshCards();
}
showCard(message) {
const card = document.createElement('div');
card.classList.add('welcome-card');
        // build the title row
const title = document.createElement('div');
title.classList.add('welcome-card-title');
        // icon
const svg = document.createElement('img');
svg.classList.add('welcome-svg');
svg.src = message.svg;
svg.style.height = '30px';
title.appendChild(svg);
        // title link
const text = document.createElement('a');
text.textContent = message.title;
text.classList.add('welcome-title-text');
text.href = message.url;
text.target = "_blank";
        title.appendChild(text);
        // content body
const content = document.createElement('div');
content.classList.add('welcome-content');
const content_c = document.createElement('div');
content_c.classList.add('welcome-content-c');
content_c.textContent = message.content;
content.appendChild(content_c);
        // assemble title and content into the card
card.appendChild(title);
card.appendChild(content);
return card;
}
async showWelcome() {
        // locate the parent element that will host the welcome cards
const elem_chatbot = document.getElementById('gpt-chatbot');
        // create the container div
const welcome_card_container = document.createElement('div');
welcome_card_container.classList.add('welcome-card-container');
        // main title
const major_title = document.createElement('div');
major_title.classList.add('welcome-title');
major_title.textContent = "欢迎使用GPT-Academic";
        welcome_card_container.appendChild(major_title);
        // build the cards
this.static_welcome_message.forEach((message, index) => {
if (index >= this.max_welcome_card_num) {
return;
}
this.static_welcome_message_previous.push(message);
const card = this.showCard(message);
this.card_array.push(card);
welcome_card_container.appendChild(card);
});
elem_chatbot.appendChild(welcome_card_container);
        // play the entrance animation
requestAnimationFrame(() => {
welcome_card_container.classList.add('show');
});
}
async removeWelcome() {
// remove welcome-card-container
const elem_chatbot = document.getElementById('gpt-chatbot');
const welcome_card_container = document.getElementsByClassName('welcome-card-container')[0];
        // play the hide animation
welcome_card_container.classList.add('hide');
        // remove the element only after the transition ends
welcome_card_container.addEventListener('transitionend', () => {
elem_chatbot.removeChild(welcome_card_container);
}, { once: true });
}
async isChatbotEmpty() {
return (await get_data_from_gradio_component("gpt-chatbot")).length == 0;
}
}
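A hypothetical wiring sketch; the actual hookup lives elsewhere in the theme code, and get_data_from_gradio_component is assumed to come from common.js:

const welcome = new WelcomeMessage();
welcome.begin_render();                                      // initial decision: show the cards or not
window.addEventListener('resize', () => welcome.update());   // cards hide below 1200 px page width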

@@ -1,3 +1,4 @@
import importlib
import time
import inspect
@@ -219,9 +220,10 @@ def CatchException(f):
try:
yield from f(main_input, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, *args, **kwargs)
except FriendlyException as e:
tb_str = '```\n' + trimmed_format_exc() + '```'
if len(chatbot_with_cookie) == 0:
chatbot_with_cookie.clear()
chatbot_with_cookie.append(["插件调度异常", None])
chatbot_with_cookie.append(["插件调度异常:\n" + tb_str, None])
chatbot_with_cookie[-1] = [chatbot_with_cookie[-1][0], e.generate_error_html()]
            yield from update_ui(chatbot=chatbot_with_cookie, history=history, msg=f'异常') # refresh the UI
except Exception as e:
@@ -565,8 +567,6 @@ def generate_file_link(report_files:List[str]):
return file_links
def on_report_generated(cookies:dict, files:List[str], chatbot:ChatBotWithCookies):
if "files_to_promote" in cookies:
report_files = cookies["files_to_promote"]
@@ -920,15 +920,18 @@ def get_pictures_list(path):
return file_manifest
def have_any_recent_upload_image_files(chatbot:ChatBotWithCookies):
def have_any_recent_upload_image_files(chatbot:ChatBotWithCookies, pop:bool=False):
_5min = 5 * 60
if chatbot is None:
return False, None # chatbot is None
if pop:
most_recent_uploaded = chatbot._cookies.pop("most_recent_uploaded", None)
else:
most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
    # most_recent_uploaded is the path where the most recently uploaded images are stored
if not most_recent_uploaded:
return False, None # most_recent_uploaded is None
if time.time() - most_recent_uploaded["time"] < _5min:
most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
path = most_recent_uploaded["path"]
file_manifest = get_pictures_list(path)
if len(file_manifest) == 0:

@@ -1,5 +1,5 @@
{
"version": 3.80,
"version": 3.83,
"show_feature": true,
"new_feature": "支持更复杂的插件框架 <-> 上传文件时显示进度条 <-> 添加TTS语音输出EdgeTTS和SoVits语音克隆 <-> Doc2x PDF翻译 <-> 添加回溯对话按钮"
"new_feature": "增加欢迎页面 <-> 优化图像生成插件 <-> 添加紫东太初大模型支持 <-> 保留主题选择 <-> 支持更复杂的插件框架 <-> 上传文件时显示进度条"
}