Compare commits

...

114 Commits

Author SHA1 Message Date
binary-husky
4783fd6f37 UP 2023-04-24 21:02:16 +08:00
binary-husky
9a4b56277c Function Refector 2023-04-24 20:59:10 +08:00
binary-husky
5eea959103 Markdown translation now supports GitHub URLs 2023-04-24 20:51:34 +08:00
binary-husky
856df8fb62 Validate conversation context 2023-04-24 20:18:32 +08:00
binary-husky
8e59412c47 Fix unreasonable code in the newbing interaction 2023-04-24 20:14:23 +08:00
binary-husky
8f571ff68f Merge branch 'v3.3' 2023-04-24 19:58:07 +08:00
binary-husky
b6d2766e59 Improve features 2023-04-24 19:54:28 +08:00
binary-husky
73ce471a0e max_worker_limit 2023-04-24 19:24:19 +08:00
binary-husky
4e113139c8 Merge branch 'master' into v3.3 2023-04-24 19:09:44 +08:00
binary-husky
e4c4b28ddf Update README.md 2023-04-24 18:20:33 +08:00
binary-husky
081acc6404 Fix colors 2023-04-24 17:42:24 +08:00
binary-husky
1a999497d7 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-04-24 17:33:23 +08:00
binary-husky
6137963355 Rescue the previously disastrous code color scheme 2023-04-24 17:33:18 +08:00
binary-husky
22bffdb737 Update README.md 2023-04-24 12:25:10 +08:00
binary-husky
75adcbffeb Update README.md 2023-04-24 12:24:46 +08:00
binary-husky
4451770061 Update README.md 2023-04-24 12:24:29 +08:00
binary-husky
09c413a272 Update README.md 2023-04-24 12:17:58 +08:00
binary-husky
ddb6c90a8f Update README.md 2023-04-24 12:17:04 +08:00
binary-husky
71590426f9 Update README.md 2023-04-24 12:16:49 +08:00
binary-husky
b3e5cdb3a5 Add some comments 2023-04-24 12:08:42 +08:00
binary-husky
6595ab813e Fix a counting error 2023-04-24 11:54:15 +08:00
binary-husky
d1efbd26da Fix a prompt 2023-04-24 11:48:39 +08:00
binary-husky
f04683732e Bug to be investigated 2023-04-24 11:39:40 +08:00
binary-husky
cb0241db78 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-04-24 11:34:53 +08:00
binary-husky
a097b6cd03 Reduce the number of papers processed per batch 2023-04-24 11:34:47 +08:00
Your Name
487ffe7888 Merge remote-tracking branch 'origin/master' into v3.3 2023-04-24 02:07:07 +08:00
binary-husky
51424a7d08 Update README.md 2023-04-24 01:57:13 +08:00
binary-husky
06e8e8f9a6 Update README.md 2023-04-24 01:55:53 +08:00
binary-husky
0512b311f8 Update README.md 2023-04-24 01:55:10 +08:00
binary-husky
81d53d0726 Update README.md 2023-04-24 01:47:35 +08:00
binary-husky
a141c5ccdc Update README.md 2023-04-24 01:46:58 +08:00
binary-husky
e361d741c3 Update README.md 2023-04-24 01:44:30 +08:00
binary-husky
f5bc58dbde Update README.md 2023-04-24 01:41:47 +08:00
Your Name
e7b73f3041 update readme 2023-04-24 00:43:57 +08:00
Your Name
ed8db8c8ae README 2023-04-23 23:49:55 +08:00
Your Name
df97213d3b version 3.3 2023-04-23 23:43:07 +08:00
Your Name
97443d1f83 Remove a dependency 2023-04-23 23:40:18 +08:00
Your Name
59bed52faf Change how the dependency is referenced 2023-04-23 23:39:54 +08:00
Your Name
3814c3a915 Modify dependencies 2023-04-23 23:36:55 +08:00
Your Name
d98d0a291e Move a function 2023-04-23 23:34:13 +08:00
Your Name
ee94fa6dc4 Split into two files 2023-04-23 23:32:35 +08:00
Your Name
d2e46f6684 Update prompts 2023-04-23 23:26:23 +08:00
Your Name
5948dcacd5 Add a thread lock 2023-04-23 23:25:49 +08:00
Your Name
3041858e7f Improve prompts 2023-04-23 23:16:25 +08:00
Your Name
9c2a6bc413 Improve error messages 2023-04-23 23:13:00 +08:00
Your Name
1cf8b6c6c8 Fix details 2023-04-23 22:47:45 +08:00
Your Name
781ef4487c Fix some details 2023-04-23 22:44:18 +08:00
Your Name
4a494354b1 Show the URLs cited in newbing replies 2023-04-23 22:34:24 +08:00
Your Name
385c775aa5 Support newbing on Python versions below 3.10 2023-04-23 20:54:57 +08:00
binary-husky
518385dea2 add newbing, testing 2023-04-23 19:17:09 +08:00
binary-husky
4d1eea7bd5 Update documentation 2023-04-23 18:40:58 +08:00
binary-husky
9cb51ccc70 restore default model 2023-04-23 18:38:05 +08:00
binary-husky
94dc398163 restore default model 2023-04-23 18:37:15 +08:00
binary-husky
65317e33af Merge branch 'newbing' into v3.3 2023-04-23 18:35:21 +08:00
binary-husky
06fbdf43af Correct some comments 2023-04-23 18:34:16 +08:00
binary-husky
ab61418410 better traceback 2023-04-23 18:13:30 +08:00
binary-husky
0785ff2aed Fine-tune conversation clipping 2023-04-23 17:45:56 +08:00
binary-husky
676fe40d39 Improve the truncation strategy for chatgpt conversations 2023-04-23 17:32:44 +08:00
binary-husky
0b89673ee9 Merge pull request #571 from codycjy/notebook_args
feat(jupyter): use args to disable Markdown parse
2023-04-23 11:24:41 +08:00
binary-husky
2f4e050612 Update README.md 2023-04-23 11:22:35 +08:00
binary-husky
87d963bda5 UP 2023-04-23 11:19:16 +08:00
binary-husky
07807e4653 Plugins can now save conversations 2023-04-23 11:17:56 +08:00
binary-husky
2b96217f2b Implement the Newbing chat feature 2023-04-22 21:18:35 +08:00
saltfish
13342c2988 feat(jupyter): use args to disable Markdown parse 2023-04-22 21:11:24 +08:00
binary-husky
95f8b2824a Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-04-22 18:56:07 +08:00
binary-husky
4065d6e234 Version 3.2 2023-04-22 18:56:02 +08:00
binary-husky
d3dcd432e8 Update README.md 2023-04-22 18:47:11 +08:00
binary-husky
7d14de79bf Merge pull request #502 from mrhblfx/new_code_fun
Parse project source code (manually specify and filter source file types)
2023-04-22 18:40:47 +08:00
binary-husky
15c6b52b5f Update README 2023-04-22 18:22:33 +08:00
binary-husky
c0f1b5bc8e Update documentation 2023-04-22 18:21:43 +08:00
mrhblfx
bd62c6be68 Make the prompt more comprehensive 2023-04-22 18:20:01 +08:00
binary-husky
70bd21f09a Update instructions for running under a sub-path 2023-04-22 18:19:49 +08:00
Your Name
a0f15f1512 Update comments 2023-04-22 18:10:42 +08:00
mrhblfx
4575046ce1 Make the prompt more comprehensive 2023-04-22 18:08:27 +08:00
Your Name
33ea7391b5 Merge branch 'subpath' 2023-04-22 18:07:58 +08:00
Your Name
e90eee2d8e Add subpath support, but leave it disabled for now 2023-04-22 18:07:24 +08:00
Your Name
7d44210a48 fix apache2 sub-path deploy issue #544 2023-04-22 17:55:50 +08:00
binary-husky
206f4138b6 Merge pull request #544 from yuxiaoyuan0406/suburl
fix apache2 sub-path deploy issue
2023-04-22 17:42:02 +08:00
mrhblfx
6d2807f499 Merge branch 'binary-husky:master' into new_code_fun 2023-04-22 17:38:26 +08:00
Your Name
f1234937c6 add check path back 2023-04-22 17:30:21 +08:00
Your Name
7beea951c6 unifying code 2023-04-22 17:24:22 +08:00
Your Name
6f7e8076c7 Merge branch 'suburl' of https://github.com/yuxiaoyuan0406/chatgpt_academic into yuxiaoyuan0406-suburl 2023-04-22 16:44:15 +08:00
binary-husky
ae24fab441 Merge pull request #562 from codycjy/codycjy
Parse and generate ipynb (Issue #501)
2023-04-22 16:22:03 +08:00
Your Name
880be21bf7 Add test for juptyer notebook plugin 2023-04-22 16:19:36 +08:00
Your Name
559b3cd6bb Merge branch 'codycjy' of https://github.com/codycjy/chatgpt_academic into codycjy-codycjy 2023-04-22 16:02:24 +08:00
binary-husky
9d9df8aa57 Update 解析JupyterNotebook.py 2023-04-22 16:01:32 +08:00
binary-husky
64548d33a9 Update crazy_functional.py 2023-04-22 15:58:43 +08:00
Your Name
c3cafd8d6f 微调界面布局 2023-04-22 15:52:21 +08:00
Your Name
e9a6efef7f 修复非压缩文件上传的读取问题 2023-04-22 15:39:51 +08:00
Your Name
89a75e26c3 修复extract_folder_path被定位到根目录的bug 2023-04-22 15:36:49 +08:00
Your Name
1139d395f2 将高级参数输入通用化(默认隐藏),应用到所有的下拉菜单函数插件中 2023-04-22 15:06:54 +08:00
saltfish
e20070939c Parse and generate ipynb (Issue #501)
Implemented code to parse and generate the ipynb files. The solution addresses Issue #501.
2023-04-22 00:36:28 +08:00
mrhblfx
3236fcca21 update 2023-04-21 21:02:11 +08:00
Your Name
5353eba376 version 3.15: add internet-connected question answering 2023-04-21 20:03:38 +08:00
Your Name
7339b06acb Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-04-21 19:28:37 +08:00
Your Name
ce1fc3a999 Fix the bug where chatglm did not remember context 2023-04-21 19:28:32 +08:00
binary-husky
a9a489231a Update bridge_all.py 2023-04-21 18:56:56 +08:00
binary-husky
e889590a91 Update README.md 2023-04-21 18:49:24 +08:00
Your Name
9481405f6f Update prompts 2023-04-21 18:37:20 +08:00
Your Name
7317d79a3c Update reminders 2023-04-21 18:28:51 +08:00
mrhblfx
de0ed4a6f5 style: the accordion of 解析任意code项目 is closed by default 2023-04-20 22:01:27 +08:00
mrhblfx
0ff838443e fix a bug 2023-04-20 21:44:35 +08:00
mrhblfx
cfbfb68618 Merge branch 'master' of github.com:mrhblfx/chatgpt_academic 2023-04-20 21:12:22 +08:00
yuxiaoyuan0406
9945d5048a Better sub-path checking logic 2023-04-20 18:31:26 +08:00
yuxiaoyuan0406
f0ff1f2c64 Add CUSTOM_PATH for deploying under a sub-path 2023-04-20 18:22:58 +08:00
yuxiaoyuan0406
7dd73e1330 Add a path-checking utility 2023-04-20 18:20:25 +08:00
yuxiaoyuan0406
4cfbacdb26 fix sub-path deploy 2023-04-20 17:21:47 +08:00
mrhblfx
26af2b1bb4 update by pull 2023-04-19 18:26:48 +08:00
mrhblfx
20bec70160 Merge branch 'master' of github.com:mrhblfx/chatgpt_academic 2023-04-18 23:40:51 +08:00
mrhblfx
9b5f088793 Changed matching rules 2023-04-18 23:31:12 +08:00
mrhblfx
3a561a70db Reduced one parameter 2023-04-18 23:30:19 +08:00
mrhblfx
11e33ec657 Reduced one input box 2023-04-18 23:29:18 +08:00
mrhblfx
d1926725d3 Add parsing arbitrary code items 2023-04-16 23:33:43 +08:00
mrhblfx
2f9a4e1618 Add parsing arbitrary code items 2023-04-16 23:00:45 +08:00
24 changed files with 1549 additions and 202 deletions

.gitignore vendored

@@ -145,3 +145,4 @@ cradle*
debug*
private*
crazy_functions/test_project/pdf_and_word
crazy_functions/test_samples

README.md

@@ -1,8 +1,13 @@
> **Note**
>
> The new pip releases of the Gradio component this project depends on (Gradio 3.26~3.27) have a serious bug, so please install strictly the **pinned versions** in requirements.txt.
>
> `pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/`
>
# <img src="docs/logo.png" width="40" > ChatGPT 学术优化
**If you like this project, please give it a Star; if you have invented more useful shortcut keys or function plugins, feel free to open an issue or pull request**
**If you like this project, please give it a Star; if you have invented more useful shortcut keys or function plugins, feel free to open a pull request**
If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request. We also have a README in [English|](docs/README_EN.md)[日本語|](docs/README_JP.md)[Русский|](docs/README_RS.md)[Français](docs/README_FR.md) translated by this project itself.
@@ -20,24 +25,26 @@ If you like this project, please give it a Star. If you've come up with more use
--- | ---
One-click polishing | One-click polishing and one-click grammar checking of papers
One-click Chinese-English translation | One-click Chinese-English translation
One-click code explanation | Correctly display and explain code
One-click code explanation | Display code, explain code, generate code, and add comments to code
[Custom shortcut keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Custom shortcut keys supported
[Proxy server configuration](https://www.bilibili.com/video/BV1rc411W7Dr) | Proxy server configuration supported
Modular design | Custom high-order function plugins and [function plugins] supported; plugins support [hot-reload](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
[Proxy server configuration](https://www.bilibili.com/video/BV1rc411W7Dr) | Proxy connections to OpenAI/Google etc. supported; instantly unlocks ChatGPT's internet [real-time information aggregation](https://www.bilibili.com/video/BV1om4y127ck/) capability
Modular design | Powerful custom [function plugins](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions) supported; plugins support [hot-reload](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
[Self-analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] [One-click understanding](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) of this project's own source code
[Project analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] One-click analysis of other Python/C/C++/Java/Lua/... project trees
Paper reading | [Function plugin] One-click interpretation of a full latex paper and abstract generation
Full latex translation & polishing | [Function plugin] One-click translation or polishing of latex papers
Paper reading & translation | [Function plugin] One-click interpretation of a full latex/pdf paper and abstract generation
Latex full-text [translation](https://www.bilibili.com/video/BV1nk4y1Y7Js/) & [polishing](https://www.bilibili.com/video/BV1FT411H7c5/) | [Function plugin] One-click translation or polishing of latex papers
Batch comment generation | [Function plugin] One-click batch generation of function comments
Markdown [Chinese-English translation](https://www.bilibili.com/video/BV1yo4y157jV/) | [Function plugin] Have you seen the [README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md) in the 5 languages above?
Chat analysis report generation | [Function plugin] Automatically generate a summary report after a run
Markdown Chinese-English translation | [Function plugin] Have you seen the [README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md) in the 5 languages above?
[arxiv assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function plugin] Enter an arxiv article URL to translate the abstract and download the PDF in one click
[Full PDF paper translation](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plugin] Extract the title & abstract of a PDF paper and translate the full text (multithreaded)
[Google Scholar integration assistant](https://www.bilibili.com/video/BV19L411U7ia) | [Function plugin] Given any Google Scholar search page URL, let gpt pick interesting articles for you
Formula/image/table display | Show both the tex form and the rendered form of formulas; formula and code highlighting supported
Multithreaded function plugin support | Call chatgpt with multiple threads and process massive texts or programs in one click
[Arxiv assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function plugin] Enter an arxiv article URL to translate the abstract and download the PDF in one click
[Google Scholar integration assistant](https://www.bilibili.com/video/BV19L411U7ia) | [Function plugin] Given any Google Scholar search page URL, let gpt [write the related works](https://www.bilibili.com/video/BV1GP411U7Az/) for you
Internet information aggregation + GPT | [Function plugin] Let ChatGPT Google first and answer afterwards, so the information stream never goes stale
Formula/image/table display | Show both the [tex form and rendered form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png) of formulas; formula and code highlighting supported
Multithreaded function plugin support | Call chatgpt with multiple threads and process [massive texts](https://www.bilibili.com/video/BV1FT411H7c5/) or programs in one click
Dark gradio [theme](https://github.com/binary-husky/chatgpt_academic/issues/173) on startup | Append ```/?__dark-theme=true``` to the browser URL to switch to the dark theme
[Multi-LLM](https://www.bilibili.com/video/BV1wT411p7yf) support, [API2D](https://api2d.com/) interface support | Being served by GPT3.5, GPT4, and [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B) at the same time must feel great, right?
More LLM model integrations | A Newbing test interface (the new Bing AI) has been added
Huggingface [online demo](https://huggingface.co/spaces/qingxu98/gpt-academic) without VPN | Log in to huggingface and copy [this space](https://huggingface.co/spaces/qingxu98/gpt-academic)
… | …
@@ -173,6 +180,8 @@ docker run --rm -it --net=host --gpus=all gpt-academic bash
2. Use WSL2 (Windows Subsystem for Linux)
See [Deployment wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
3. How to run under a secondary URL (e.g. `http://localhost/subpath`)
See the [FastAPI running instructions](docs/WithFastapi.md)
## Installation - proxy configuration
1. Regular method
@@ -268,12 +277,15 @@ docker run --rm -it --net=host --gpus=all gpt-academic bash
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/233575247-fb00819e-6d1b-4bb7-bd54-1d7528f03dd9.png" width="800" >
<img src="https://user-images.githubusercontent.com/96192199/233779501-5ce826f0-6cca-4d59-9e5f-b4eacb8cc15f.png" width="800" >
</div>
## Todo and version planning:
- version 3.2+ (todo): function plugins support more parameter interfaces
- version 3.3+ (todo): NewBing support
- version 3.2: function plugins support more parameter interfaces (conversation saving, interpreting code in any language + asking any combination of LLMs at the same time)
- version 3.1: support querying multiple gpt models at once; api2d support; load balancing across multiple apikeys
- version 3.0: support for chatglm and other small llms
- version 2.6: restructured the plugin architecture, improved interactivity, added more plugins

config.py

@@ -45,7 +45,7 @@ MAX_RETRY = 2
# OpenAI model selection: gpt-4 is currently only open to users whose access application succeeded (to try gpt-4, consider api2d)
LLM_MODEL = "gpt-3.5-turbo" # options ↓↓↓
AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm"]
AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing"]
# Execution device for local LLM models such as ChatGLM: CPU/GPU
LOCAL_MODEL_DEVICE = "cpu" # or "cuda"
@@ -57,6 +57,15 @@ CONCURRENT_COUNT = 100
# [("username", "password"), ("username2", "password2"), ...]
AUTHENTICATION = []
# URL redirection to change the effective API_URL (do not modify under normal circumstances)
# Format: {"https://api.openai.com/v1/chat/completions": "the redirected URL"}
# URL redirection to change the effective API_URL (do not modify under normal circumstances!!)
# Format: {"https://api.openai.com/v1/chat/completions": "fill in the redirected api.openai.com URL here"}
API_URL_REDIRECT = {}
# To run under a sub-path (do not modify under normal circumstances!!); requires matching changes in main.py to take effect!
CUSTOM_PATH = "/"
# To use newbing, put newbing's long cookie string here
NEWBING_STYLE = "creative" # ["creative", "balanced", "precise"]
NEWBING_COOKIES = """
your bing cookies here
"""

crazy_functional.py

@@ -19,12 +19,26 @@ def get_crazy_functions():
from crazy_functions.解析项目源代码 import 解析一个Lua项目
from crazy_functions.解析项目源代码 import 解析一个CSharp项目
from crazy_functions.总结word文档 import 总结word文档
from crazy_functions.解析JupyterNotebook import 解析ipynb文件
from crazy_functions.对话历史存档 import 对话历史存档
from crazy_functions.批量Markdown翻译 import Markdown英译中
function_plugins = {
"解析整个Python项目": {
"Color": "stop", # 按钮颜色
"Function": HotReload(解析一个Python项目)
},
"保存当前的对话": {
"AsButton":False,
"Function": HotReload(对话历史存档)
},
"[测试功能] 解析Jupyter Notebook文件": {
"Color": "stop",
"AsButton":False,
"Function": HotReload(解析ipynb文件),
"AdvancedArgs": True, # 调用时唤起高级参数输入区默认False
"ArgsReminder": "若输入0则不解析notebook中的Markdown块", # 高级参数输入区的显示提示
},
"批量总结Word文档": {
"Color": "stop",
"Function": HotReload(总结word文档)
@@ -68,8 +82,14 @@ def get_crazy_functions():
"Color": "stop", # 按钮颜色
"Function": HotReload(读文章写摘要)
},
"Markdown/Readme英译中": {
# HotReload means hot-reload: after modifying plugin code, changes take effect without restarting the program
"Color": "stop",
"Function": HotReload(Markdown英译中)
},
"批量生成函数注释": {
"Color": "stop", # 按钮颜色
"AsButton": False, # 加入下拉菜单中
"Function": HotReload(批量生成函数注释)
},
"[多线程Demo] 解析此项目本身(源码自译解)": {
@@ -97,7 +117,6 @@ def get_crazy_functions():
from crazy_functions.Latex全文翻译 import Latex中译英
from crazy_functions.Latex全文翻译 import Latex英译中
from crazy_functions.批量Markdown翻译 import Markdown中译英
from crazy_functions.批量Markdown翻译 import Markdown英译中
function_plugins.update({
"批量翻译PDF文档多线程": {
@@ -162,13 +181,8 @@ def get_crazy_functions():
"AsButton": False, # 加入下拉菜单中
"Function": HotReload(Markdown中译英)
},
"[测试功能] 批量Markdown英译中输入路径或上传压缩包": {
# HotReload means hot-reload: after modifying plugin code, changes take effect without restarting the program
"Color": "stop",
"AsButton": False, # put into the dropdown menu instead
"Function": HotReload(Markdown英译中)
},
})
###################### Plugin group 3 ###########################
@@ -181,7 +195,7 @@ def get_crazy_functions():
"Function": HotReload(下载arxiv论文并翻译摘要)
}
})
from crazy_functions.联网的ChatGPT import 连接网络回答问题
function_plugins.update({
"连接网络回答问题(先输入问题,再点击按钮,需要访问谷歌)": {
@@ -191,5 +205,25 @@ def get_crazy_functions():
}
})
from crazy_functions.解析项目源代码 import 解析任意code项目
function_plugins.update({
"解析项目源代码(手动指定和筛选源代码文件类型)": {
"Color": "stop",
"AsButton": False,
"AdvancedArgs": True, # 调用时唤起高级参数输入区默认False
"ArgsReminder": "输入时用逗号隔开, *代表通配符, 加了^代表不匹配; 不输入代表全部匹配。例如: \"*.c, ^*.cpp, config.toml, ^*.toml\"", # 高级参数输入区的显示提示
"Function": HotReload(解析任意code项目)
},
})
from crazy_functions.询问多个大语言模型 import 同时问询_指定模型
function_plugins.update({
"询问多个GPT模型手动指定询问哪些模型": {
"Color": "stop",
"AsButton": False,
"AdvancedArgs": True, # 调用时唤起高级参数输入区默认False
"ArgsReminder": "支持任意数量的llm接口用&符号分隔。例如chatglm&gpt-3.5-turbo&api2d-gpt-4", # 高级参数输入区的显示提示
"Function": HotReload(同时问询_指定模型)
},
})
###################### Plugin group n ###########################
return function_plugins
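Every entry added in this diff follows the same schema: a dict of UI metadata plus a hot-reloadable callable. A minimal sketch of registering one more (hypothetical) plugin, with the plugin module name invented purely for illustration:

```python
from toolbox import HotReload                       # wrapper that re-imports the plugin on each call
from crazy_functions.my_plugin import my_plugin     # hypothetical plugin module

function_plugins = {
    "My demo plugin": {
        "Color": "stop",        # button color
        "AsButton": False,      # hide the button; expose the plugin via the dropdown menu
        "AdvancedArgs": True,   # show the advanced-argument input area when selected
        "ArgsReminder": "hint text shown in the advanced-argument area",
        "Function": HotReload(my_plugin),
    },
}
```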

crazy_functions/crazy_functions_test.py

@@ -108,6 +108,13 @@ def test_联网回答问题():
print("当前问答:", cb[-1][-1].replace("\n"," "))
for i, it in enumerate(cb): print亮蓝(it[0]); print亮黄(it[1])
def test_解析ipynb文件():
from crazy_functions.解析JupyterNotebook import 解析ipynb文件
txt = "crazy_functions/test_samples"
for cookies, cb, hist, msg in 解析ipynb文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
print(cb)
# test_解析一个Python项目()
# test_Latex英文润色()
# test_Markdown中译英()
@@ -116,9 +123,8 @@ def test_联网回答问题():
# test_总结word文档()
# test_下载arxiv论文并翻译摘要()
# test_解析一个Cpp项目()
test_联网回答问题()
# test_联网回答问题()
test_解析ipynb文件()
input("程序完成,回车退出。")
print("退出。")

crazy_functions/crazy_utils.py

@@ -1,5 +1,4 @@
import traceback
from toolbox import update_ui, get_conf
from toolbox import update_ui, get_conf, trimmed_format_exc
def input_clipping(inputs, history, max_token_limit):
import numpy as np
@@ -94,12 +93,12 @@ def request_gpt_model_in_new_thread_with_ui_alive(
continue # go back and retry
else:
# [choose to give up]
tb_str = '```\n' + traceback.format_exc() + '```'
tb_str = '```\n' + trimmed_format_exc() + '```'
mutable[0] += f"[Local Message] 警告,在执行过程中遭遇问题, Traceback\n\n{tb_str}\n\n"
return mutable[0] # 放弃
except:
# [case 3] other errors: retry a few times
tb_str = '```\n' + traceback.format_exc() + '```'
tb_str = '```\n' + trimmed_format_exc() + '```'
print(tb_str)
mutable[0] += f"[Local Message] 警告,在执行过程中遭遇问题, Traceback\n\n{tb_str}\n\n"
if retry_op > 0:
@@ -173,7 +172,7 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
if max_workers == -1: # read from the config file
try: max_workers, = get_conf('DEFAULT_WORKER_NUM')
except: max_workers = 8
if max_workers <= 0 or max_workers >= 20: max_workers = 8
if max_workers <= 0: max_workers = 3
# block chatglm multithreading, which may cause severe stalls
if not (llm_kwargs['llm_model'].startswith('gpt-') or llm_kwargs['llm_model'].startswith('api2d-')):
max_workers = 1
@@ -220,14 +219,14 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
continue # go back and retry
else:
# [choose to give up]
tb_str = '```\n' + traceback.format_exc() + '```'
tb_str = '```\n' + trimmed_format_exc() + '```'
gpt_say += f"[Local Message] 警告,线程{index}在执行过程中遭遇问题, Traceback\n\n{tb_str}\n\n"
if len(mutable[index][0]) > 0: gpt_say += "此线程失败前收到的回答:\n\n" + mutable[index][0]
mutable[index][2] = "输入过长已放弃"
return gpt_say # 放弃
except:
# [case 3] other errors
tb_str = '```\n' + traceback.format_exc() + '```'
tb_str = '```\n' + trimmed_format_exc() + '```'
print(tb_str)
gpt_say += f"[Local Message] 警告,线程{index}在执行过程中遭遇问题, Traceback\n\n{tb_str}\n\n"
if len(mutable[index][0]) > 0: gpt_say += "此线程失败前收到的回答:\n\n" + mutable[index][0]
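This hunk swaps the bare `traceback.format_exc()` calls for `trimmed_format_exc` from toolbox. A plausible sketch of such a helper (an assumption about the implementation, not a verbatim copy of toolbox): it returns the usual traceback with the working directory shortened, so user paths do not leak into chat logs.

```python
import os
import traceback

def trimmed_format_exc():
    # like traceback.format_exc(), but with the absolute working
    # directory replaced by "." before the text reaches the UI
    s = traceback.format_exc()
    return s.replace(os.getcwd(), ".")
```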

crazy_functions/对话历史存档.py

@@ -0,0 +1,42 @@
from toolbox import CatchException, update_ui
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
def write_chat_to_file(chatbot, file_name=None):
"""
将对话记录history以Markdown格式写入文件中。如果没有指定文件名则使用当前时间生成文件名。
"""
import os
import time
if file_name is None:
file_name = 'chatGPT对话历史' + time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + '.html'
os.makedirs('./gpt_log/', exist_ok=True)
with open(f'./gpt_log/{file_name}', 'w', encoding='utf8') as f:
for i, contents in enumerate(chatbot):
for content in contents:
try: # the trigger condition for this bug has not been found yet; patch it this way for now
if type(content) != str: content = str(content)
except:
continue
f.write(content)
f.write('\n\n')
f.write('<hr color="red"> \n\n')
res = '对话历史写入:' + os.path.abspath(f'./gpt_log/{file_name}')
print(res)
return res
@CatchException
def 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
"""
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
llm_kwargs gpt模型参数如温度和top_p等一般原样传递下去就行
plugin_kwargs 插件模型的参数,暂时没有用武之地
chatbot 聊天显示框的句柄,用于显示给用户
history 聊天历史,前情提要
system_prompt 给gpt的静默提醒
web_port 当前软件运行的端口号
"""
chatbot.append(("保存当前对话", f"[Local Message] {write_chat_to_file(chatbot)}"))
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI # since the gpt request takes a while, update the UI promptly first

crazy_functions/批量Markdown翻译.py

@@ -84,7 +84,33 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
def get_files_from_everything(txt):
import glob, os
success = True
if txt.startswith('http'):
# remote file on the network
txt = txt.replace("https://github.com/", "https://raw.githubusercontent.com/")
txt = txt.replace("/blob/", "/")
import requests
from toolbox import get_conf
proxies, = get_conf('proxies')
r = requests.get(txt, proxies=proxies)
with open('./gpt_log/temp.md', 'wb+') as f: f.write(r.content)
project_folder = './gpt_log/'
file_manifest = ['./gpt_log/temp.md']
elif txt.endswith('.md'):
# a file given directly
file_manifest = [txt]
project_folder = os.path.dirname(txt)
elif os.path.exists(txt):
# local path, search recursively
project_folder = txt
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.md', recursive=True)]
else:
success = False
return success, file_manifest, project_folder
@CatchException
@@ -98,6 +124,7 @@ def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
# try to import dependencies; if any are missing, suggest how to install them
try:
import tiktoken
import glob, os
except:
report_execption(chatbot, history,
a=f"解析项目: {txt}",
@@ -105,19 +132,21 @@ def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
return
history = [] # clear history to avoid input overflow
import glob, os
if os.path.exists(txt):
project_folder = txt
else:
success, file_manifest, project_folder = get_files_from_everything(txt)
if not success:
# nothing at all
if txt == "": txt = '空空如也的输入栏'
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
return
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.md', recursive=True)]
if len(file_manifest) == 0:
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.md文件: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
return
yield from 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en->zh')
@@ -135,6 +164,7 @@ def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
# try to import dependencies; if any are missing, suggest how to install them
try:
import tiktoken
import glob, os
except:
report_execption(chatbot, history,
a=f"解析项目: {txt}",
@@ -142,18 +172,13 @@ def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
return
history = [] # clear history to avoid input overflow
import glob, os
if os.path.exists(txt):
project_folder = txt
else:
success, file_manifest, project_folder = get_files_from_everything(txt)
if not success:
# nothing at all
if txt == "": txt = '空空如也的输入栏'
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
return
if txt.endswith('.md'):
file_manifest = [txt]
else:
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.md', recursive=True)]
if len(file_manifest) == 0:
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.md文件: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
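The GitHub handling in `get_files_from_everything` above relies on the fact that a GitHub `blob` page URL maps directly onto raw.githubusercontent.com. The rewrite in isolation, as a standalone sketch:

```python
def github_blob_to_raw(url: str) -> str:
    # map a GitHub web URL to the raw file it displays
    return (url.replace("https://github.com/", "https://raw.githubusercontent.com/")
               .replace("/blob/", "/"))

# https://github.com/binary-husky/chatgpt_academic/blob/master/README.md
#  -> https://raw.githubusercontent.com/binary-husky/chatgpt_academic/master/README.md
print(github_blob_to_raw("https://github.com/binary-husky/chatgpt_academic/blob/master/README.md"))
```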

crazy_functions/解析JupyterNotebook.py

@@ -0,0 +1,145 @@
from toolbox import update_ui
from toolbox import CatchException, report_execption, write_results_to_file
fast_debug = True
class PaperFileGroup():
def __init__(self):
self.file_paths = []
self.file_contents = []
self.sp_file_contents = []
self.sp_file_index = []
self.sp_file_tag = []
# count_token
from request_llm.bridge_all import model_info
enc = model_info["gpt-3.5-turbo"]['tokenizer']
def get_token_num(txt): return len(
enc.encode(txt, disallowed_special=()))
self.get_token_num = get_token_num
def run_file_split(self, max_token_limit=1900):
"""
将长文本分离开来
"""
for index, file_content in enumerate(self.file_contents):
if self.get_token_num(file_content) < max_token_limit:
self.sp_file_contents.append(file_content)
self.sp_file_index.append(index)
self.sp_file_tag.append(self.file_paths[index])
else:
from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
segments = breakdown_txt_to_satisfy_token_limit_for_pdf(
file_content, self.get_token_num, max_token_limit)
for j, segment in enumerate(segments):
self.sp_file_contents.append(segment)
self.sp_file_index.append(index)
self.sp_file_tag.append(
self.file_paths[index] + f".part-{j}.txt")
def parseNotebook(filename, enable_markdown=1):
import json
CodeBlocks = []
with open(filename, 'r', encoding='utf-8', errors='replace') as f:
notebook = json.load(f)
for cell in notebook['cells']:
if cell['cell_type'] == 'code' and cell['source']:
# remove blank lines
cell['source'] = [line for line in cell['source'] if line.strip()
!= '']
CodeBlocks.append("".join(cell['source']))
elif enable_markdown and cell['cell_type'] == 'markdown' and cell['source']:
cell['source'] = [line for line in cell['source'] if line.strip()
!= '']
CodeBlocks.append("Markdown:"+"".join(cell['source']))
Code = ""
for idx, code in enumerate(CodeBlocks):
Code += f"This is {idx+1}th code block: \n"
Code += code+"\n"
return Code
def ipynb解释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
enable_markdown = plugin_kwargs.get("advanced_arg", "1")
try:
enable_markdown = int(enable_markdown)
except ValueError:
enable_markdown = 1
pfg = PaperFileGroup()
for fp in file_manifest:
file_content = parseNotebook(fp, enable_markdown=enable_markdown)
pfg.file_paths.append(fp)
pfg.file_contents.append(file_content)
# <-------- split overly long ipynb files ---------->
pfg.run_file_split(max_token_limit=1024)
n_split = len(pfg.sp_file_contents)
inputs_array = [r"This is a Jupyter Notebook file, tell me about Each Block in Chinese. Focus Just On Code." +
r"If a block starts with `Markdown` which means it's a markdown block in ipynbipynb. " +
r"Start a new line for a block and block num use Chinese." +
f"\n\n{frag}" for frag in pfg.sp_file_contents]
inputs_show_user_array = [f"{f}的分析如下" for f in pfg.sp_file_tag]
sys_prompt_array = ["You are a professional programmer."] * n_split
gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
inputs_array=inputs_array,
inputs_show_user_array=inputs_show_user_array,
llm_kwargs=llm_kwargs,
chatbot=chatbot,
history_array=[[""] for _ in range(n_split)],
sys_prompt_array=sys_prompt_array,
# max_workers=5, # the maximum parallel load allowed by OpenAI
scroller_max_len=80
)
# <-------- collect the results and exit ---------->
block_result = " \n".join(gpt_response_collection)
chatbot.append(("解析的结果如下", block_result))
history.extend(["解析的结果如下", block_result])
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
# <-------- write the results to a file and exit ---------->
res = write_results_to_file(history)
chatbot.append(("完成了吗?", res))
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
@CatchException
def 解析ipynb文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
chatbot.append([
"函数插件功能?",
"对IPynb文件进行解析。Contributor: codycjy."])
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
history = [] # clear history
import glob
import os
if os.path.exists(txt):
project_folder = txt
else:
if txt == "":
txt = '空空如也的输入栏'
report_execption(chatbot, history,
a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
return
if txt.endswith('.ipynb'):
file_manifest = [txt]
else:
file_manifest = [f for f in glob.glob(
f'{project_folder}/**/*.ipynb', recursive=True)]
if len(file_manifest) == 0:
report_execption(chatbot, history,
a=f"解析项目: {txt}", b=f"找不到任何.ipynb文件: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
return
yield from ipynb解释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, )
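An .ipynb file is plain JSON, which is all `parseNotebook` above exploits. A self-contained sketch of the same cell walk, under the assumption that the notebook follows the standard `{"cells": [...]}` layout:

```python
import json

def list_cells(path, keep_markdown=True):
    # an .ipynb file is JSON: {"cells": [{"cell_type": ..., "source": [...]}, ...]}
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        nb = json.load(f)
    blocks = []
    for cell in nb["cells"]:
        src = "".join(line for line in cell["source"] if line.strip())
        if not src:
            continue                      # skip empty cells
        if cell["cell_type"] == "code":
            blocks.append(src)
        elif keep_markdown and cell["cell_type"] == "markdown":
            blocks.append("Markdown:" + src)   # tag markdown blocks, as the plugin does
    return blocks
```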

crazy_functions/解析项目源代码.py

@@ -1,5 +1,6 @@
from toolbox import update_ui
from toolbox import CatchException, report_execption, write_results_to_file
from .crazy_utils import input_clipping
def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
import os, copy
@@ -11,7 +12,7 @@ def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
history_array = []
sys_prompt_array = []
report_part_1 = []
assert len(file_manifest) <= 512, "源文件太多超过512个, 请缩减输入文件的数量。或者您也可以选择删除此行警告并修改代码拆分file_manifest列表从而实现分批次处理。"
############################## <Step 1: analyze files one by one, multithreaded> ##################################
for index, fp in enumerate(file_manifest):
@@ -61,13 +62,15 @@ def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
previous_iteration_files.extend([os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)])
previous_iteration_files_string = ', '.join(previous_iteration_files)
current_iteration_focus = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)])
i_say = f'根据以上分析,对程序的整体功能和构架重新做出概括。然后用一张markdown表格整理每个文件的功能(包括{previous_iteration_files_string}'
i_say = f'用一张Markdown表格简要描述以下文件的功能{previous_iteration_files_string}。根据以上分析,用一句话概括程序的整体功能'
inputs_show_user = f'根据以上分析,对程序的整体功能和构架重新做出概括,由于输入长度限制,可能需要分组处理,本组文件为 {current_iteration_focus} + 已经汇总的文件组。'
this_iteration_history = copy.deepcopy(this_iteration_gpt_response_collection)
this_iteration_history = copy.deepcopy(this_iteration_gpt_response_collection)
this_iteration_history.append(last_iteration_result)
# clip the input
inputs, this_iteration_history_feed = input_clipping(inputs=i_say, history=this_iteration_history, max_token_limit=2560)
result = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=i_say, inputs_show_user=inputs_show_user, llm_kwargs=llm_kwargs, chatbot=chatbot,
history=this_iteration_history, # the analysis from previous iterations
inputs=inputs, inputs_show_user=inputs_show_user, llm_kwargs=llm_kwargs, chatbot=chatbot,
history=this_iteration_history_feed, # the analysis from previous iterations
sys_prompt="你是一个程序架构分析师,正在分析一个项目的源代码。")
report_part_2.extend([i_say, result])
last_iteration_result = result
@@ -222,8 +225,8 @@ def 解析一个Golang项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
return
yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
@CatchException
def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
history = [] # clear history to avoid input overflow
@@ -243,9 +246,9 @@ def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何lua文件: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
return
yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
@CatchException
def 解析一个CSharp项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
history = [] # clear history to avoid input overflow
@@ -263,4 +266,45 @@ def 解析一个CSharp项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何CSharp文件: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
return
yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
@CatchException
def 解析任意code项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
txt_pattern = plugin_kwargs.get("advanced_arg")
txt_pattern = txt_pattern.replace("，", ",")
# patterns to match (e.g.: *.c, *.cpp, *.py, config.toml)
pattern_include = [_.lstrip(" ,").rstrip(" ,") for _ in txt_pattern.split(",") if _ != "" and not _.strip().startswith("^")]
if not pattern_include: pattern_include = ["*"] # empty input matches everything
# file suffixes to exclude from matching (e.g.: ^*.c, ^*.cpp, ^*.py)
pattern_except_suffix = [_.lstrip(" ^*.,").rstrip(" ,") for _ in txt_pattern.split(" ") if _ != "" and _.strip().startswith("^*.")]
pattern_except_suffix += ['zip', 'rar', '7z', 'tar', 'gz'] # avoid parsing compressed archives
# file names to exclude from matching (e.g.: ^README.md)
pattern_except_name = [_.lstrip(" ^*,").rstrip(" ,").replace(".", "\.") for _ in txt_pattern.split(" ") if _ != "" and _.strip().startswith("^") and not _.strip().startswith("^*.")]
# build the regular expression
pattern_except = '/[^/]+\.(' + "|".join(pattern_except_suffix) + ')$'
pattern_except += '|/(' + "|".join(pattern_except_name) + ')$' if pattern_except_name != [] else ''
history.clear()
import glob, os, re
if os.path.exists(txt):
project_folder = txt
else:
if txt == "": txt = '空空如也的输入栏'
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
return
# if a compressed archive was uploaded, locate the extracted folder first, to avoid parsing the archive itself
maybe_dir = [f for f in glob.glob(f'{project_folder}/*') if os.path.isdir(f)]
if len(maybe_dir)>0 and maybe_dir[0].endswith('.extract'):
extract_folder_path = maybe_dir[0]
else:
extract_folder_path = project_folder
# find uploaded non-compressed files and already-extracted files according to the given patterns
file_manifest = [f for pattern in pattern_include for f in glob.glob(f'{extract_folder_path}/**/{pattern}', recursive=True) if "" != extract_folder_path and \
os.path.isfile(f) and (not re.search(pattern_except, f) or pattern.endswith('.' + re.search(pattern_except, f).group().split('.')[-1]))]
if len(file_manifest) == 0:
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何文件: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
return
yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
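The include/exclude syntax documented in the ArgsReminder above ("*.c, ^*.cpp, config.toml, ^*.toml") splits into glob patterns to include and patterns to drop. A much-simplified sketch of the same idea on a toy file list (using fnmatch instead of the glob/regex combination in the real code, and letting an exact include win over a suffix exclusion, as the plugin's regex clause does):

```python
import fnmatch

def select_files(files, spec):
    parts = [p.strip() for p in spec.replace("，", ",").split(",") if p.strip()]
    include = [p for p in parts if not p.startswith("^")] or ["*"]   # empty spec matches everything
    exclude = [p.lstrip("^") for p in parts if p.startswith("^")]
    keep = []
    for f in files:
        if f in include:                 # an exact include (e.g. config.toml) always wins
            keep.append(f)
            continue
        if any(fnmatch.fnmatch(f, pat) for pat in exclude):
            continue                     # dropped by ^pattern
        if any(fnmatch.fnmatch(f, pat) for pat in include):
            keep.append(f)
    return keep

print(select_files(["a.c", "b.cpp", "config.toml", "x.toml"],
                   "*.c, ^*.cpp, config.toml, ^*.toml"))
# -> ['a.c', 'config.toml']
```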

crazy_functions/询问多个大语言模型.py

@@ -25,6 +25,35 @@ def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
retry_times_at_unknown_error=0
)
history.append(txt)
history.append(gpt_say)
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
@CatchException
def 同时问询_指定模型(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
"""
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
llm_kwargs gpt模型参数如温度和top_p等一般原样传递下去就行
plugin_kwargs 插件模型的参数如温度和top_p等一般原样传递下去就行
chatbot 聊天显示框的句柄,用于显示给用户
history 聊天历史,前情提要
system_prompt 给gpt的静默提醒
web_port 当前软件运行的端口号
"""
history = [] # clear history to avoid input overflow
chatbot.append((txt, "正在同时咨询ChatGPT和ChatGLM……"))
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI # since the gpt request takes a while, update the UI promptly first
# llm_kwargs['llm_model'] = 'chatglm&gpt-3.5-turbo&api2d-gpt-3.5-turbo' # any number of llm interfaces supported, separated by the & symbol
llm_kwargs['llm_model'] = plugin_kwargs.get("advanced_arg", 'chatglm&gpt-3.5-turbo') # 'chatglm&gpt-3.5-turbo' # any number of llm interfaces supported, separated by the & symbol
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=txt, inputs_show_user=txt,
llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
sys_prompt=system_prompt,
retry_times_at_unknown_error=0
)
history.append(txt)
history.append(gpt_say)
yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
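The `&` convention above makes the model string itself the routing interface: splitting it recovers the individual backends, each of which the bridge queries in its own thread. The convention in isolation:

```python
llm_model = "chatglm&gpt-3.5-turbo&api2d-gpt-4"  # value passed via the advanced-argument box
backends = llm_model.split("&")                  # each backend is then queried in parallel
print(backends)  # ['chatglm', 'gpt-3.5-turbo', 'api2d-gpt-4']
```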

crazy_functions/谷歌检索小助手.py

@@ -70,6 +70,7 @@ def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
# try to import dependencies; if any are missing, suggest how to install them
try:
import arxiv
import math
from bs4 import BeautifulSoup
except:
report_execption(chatbot, history,
@@ -80,25 +81,26 @@ def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
# clear history to avoid input overflow
history = []
meta_paper_info_list = yield from get_meta_information(txt, chatbot, history)
batchsize = 5
for batch in range(math.ceil(len(meta_paper_info_list)/batchsize)):
if len(meta_paper_info_list[:batchsize]) > 0:
i_say = "下面是一些学术文献的数据,提取出以下内容:" + \
"1、英文题目2、中文题目翻译3、作者4、arxiv公开is_paper_in_arxiv4、引用数量cite5、中文摘要翻译。" + \
f"以下是信息源:{str(meta_paper_info_list[:batchsize])}"
if len(meta_paper_info_list[:10]) > 0:
i_say = "下面是一些学术文献的数据,请从中提取出以下内容。" + \
"1、英文题目2、中文题目翻译3、作者4、arxiv公开is_paper_in_arxiv4、引用数量cite5、中文摘要翻译。" + \
f"以下是信息源:{str(meta_paper_info_list[:10])}"
inputs_show_user = f"请分析此页面中出现的所有文章:{txt},这是第{batch+1}"
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=i_say, inputs_show_user=inputs_show_user,
llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
sys_prompt="你是一个学术翻译请从数据中提取信息。你必须使用Markdown表格。你必须逐个文献进行处理。"
)
inputs_show_user = f"请分析此页面中出现的所有文章:{txt}"
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=i_say, inputs_show_user=inputs_show_user,
llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
sys_prompt="你是一个学术翻译请从数据中提取信息。你必须使用Markdown格式。你必须逐个文献进行处理。"
)
history.extend([ f"{batch+1}", gpt_say ])
meta_paper_info_list = meta_paper_info_list[batchsize:]
history.extend([ "第一批", gpt_say ])
meta_paper_info_list = meta_paper_info_list[10:]
chatbot.append(["状态?", "已经全部完成"])
chatbot.append(["状态?",
"已经全部完成您可以试试让AI写一个Related Works例如您可以继续输入Write a \"Related Works\" section about \"你搜索的研究领域\" for me."])
msg = '正常'
yield from update_ui(chatbot=chatbot, history=history, msg=msg) # refresh the UI
res = write_results_to_file(history)
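The change above replaces the single-shot "first 10 papers" logic with fixed-size batches of `batchsize = 5`. The batching pattern on its own, as a small sketch:

```python
import math

def batches(items, batchsize=5):
    # yields (batch number, slice of at most `batchsize` items)
    for b in range(math.ceil(len(items) / batchsize)):
        yield b + 1, items[b * batchsize:(b + 1) * batchsize]

for num, chunk in batches(list(range(12)), batchsize=5):
    print(f"batch {num}: {chunk}")
# batch 1: [0..4], batch 2: [5..9], batch 3: [10, 11]
```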

docs/WithFastapi.md (new file)

@@ -0,0 +1,43 @@
# Running with fastapi
We currently support fastapi in order to solve the sub-path deployment issue.
1. Change the CUSTOM_PATH setting in `config.py`
``` sh
nano config.py
```
2. Edit main.py
```diff
auto_opentab_delay()
- demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png")
+ demo.queue(concurrency_count=CONCURRENT_COUNT)
- # To run under a sub-path
- # CUSTOM_PATH, = get_conf('CUSTOM_PATH')
- # if CUSTOM_PATH != "/":
- # from toolbox import run_gradio_in_subpath
- # run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH)
- # else:
- # demo.launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png")
+ # To run under a sub-path
+ CUSTOM_PATH, = get_conf('CUSTOM_PATH')
+ if CUSTOM_PATH != "/":
+ from toolbox import run_gradio_in_subpath
+ run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH)
+ else:
+ demo.launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png")
if __name__ == "__main__":
main()
```
3. Go!
``` sh
python main.py
```
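For context, the general shape of a sub-path deployment with gradio and fastapi looks like the sketch below. This is an illustration of the approach using gradio's `mount_gradio_app`, not the repo's `run_gradio_in_subpath` implementation; port and path are placeholders.

```python
import gradio as gr
import uvicorn
from fastapi import FastAPI

app = FastAPI()

with gr.Blocks() as demo:
    gr.Markdown("hello from a sub-path")

# serve the gradio app under http://localhost:8000/subpath
app = gr.mount_gradio_app(app, demo, path="/subpath")

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
```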

main.py

@@ -45,7 +45,7 @@ def main():
gr_L1 = lambda: gr.Row().style()
gr_L2 = lambda scale: gr.Column(scale=scale)
if LAYOUT == "TOP-DOWN":
if LAYOUT == "TOP-DOWN":
gr_L1 = lambda: DummyWith()
gr_L2 = lambda scale: gr.Row()
CHATBOT_HEIGHT /= 2
@@ -88,9 +88,12 @@ def main():
with gr.Row():
with gr.Accordion("更多函数插件", open=True):
dropdown_fn_list = [k for k in crazy_fns.keys() if not crazy_fns[k].get("AsButton", True)]
with gr.Column(scale=1):
with gr.Row():
dropdown = gr.Dropdown(dropdown_fn_list, value=r"打开插件列表", label="").style(container=False)
with gr.Column(scale=1):
with gr.Row():
plugin_advanced_arg = gr.Textbox(show_label=True, label="高级参数输入区", visible=False,
placeholder="这里是特殊函数插件的高级参数输入区").style(container=False)
with gr.Row():
switchy_bt = gr.Button(r"请先从插件列表中选择", variant="secondary")
with gr.Row():
with gr.Accordion("点击展开“文件上传区”。上传本地文件可供红色函数插件调用。", open=False) as area_file_up:
@@ -100,7 +103,7 @@ def main():
top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.01,interactive=True, label="Top-p (nucleus sampling)",)
temperature = gr.Slider(minimum=-0, maximum=2.0, value=1.0, step=0.01, interactive=True, label="Temperature",)
max_length_sl = gr.Slider(minimum=256, maximum=4096, value=512, step=1, interactive=True, label="Local LLM MaxLength",)
checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "底部输入区", "输入清除键"], value=["基础功能区", "函数插件区"], label="显示/隐藏功能区")
checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "底部输入区", "输入清除键", "插件参数区"], value=["基础功能区", "函数插件区"], label="显示/隐藏功能区")
md_dropdown = gr.Dropdown(AVAIL_LLM_MODELS, value=LLM_MODEL, label="更换LLM模型/请求源").style(container=False)
gr.Markdown(description)
@@ -122,11 +125,12 @@ def main():
ret.update({area_input_secondary: gr.update(visible=("底部输入区" in a))})
ret.update({clearBtn: gr.update(visible=("输入清除键" in a))})
ret.update({clearBtn2: gr.update(visible=("输入清除键" in a))})
ret.update({plugin_advanced_arg: gr.update(visible=("插件参数区" in a))})
if "底部输入区" in a: ret.update({txt: gr.update(value="")})
return ret
checkboxes.select(fn_area_visibility, [checkboxes], [area_basic_fn, area_crazy_fn, area_input_primary, area_input_secondary, txt, txt2, clearBtn, clearBtn2] )
checkboxes.select(fn_area_visibility, [checkboxes], [area_basic_fn, area_crazy_fn, area_input_primary, area_input_secondary, txt, txt2, clearBtn, clearBtn2, plugin_advanced_arg] )
# collect the recurring control-handle combinations
input_combo = [cookies, max_length_sl, md_dropdown, txt, txt2, top_p, temperature, chatbot, history, system_prompt]
input_combo = [cookies, max_length_sl, md_dropdown, txt, txt2, top_p, temperature, chatbot, history, system_prompt, plugin_advanced_arg]
output_combo = [cookies, chatbot, history, status]
predict_args = dict(fn=ArgsGeneralWrapper(predict), inputs=input_combo, outputs=output_combo)
# submit and reset buttons
@@ -153,20 +157,22 @@ def main():
# function plugins: interaction between the dropdown menu and the adaptive button
def on_dropdown_changed(k):
variant = crazy_fns[k]["Color"] if "Color" in crazy_fns[k] else "secondary"
return {switchy_bt: gr.update(value=k, variant=variant)}
dropdown.select(on_dropdown_changed, [dropdown], [switchy_bt] )
ret = {switchy_bt: gr.update(value=k, variant=variant)}
if crazy_fns[k].get("AdvancedArgs", False): # whether to bring up the advanced plugin argument area
ret.update({plugin_advanced_arg: gr.update(visible=True, label=f"插件[{k}]的高级参数说明:" + crazy_fns[k].get("ArgsReminder", [f"没有提供高级参数功能说明"]))})
else:
ret.update({plugin_advanced_arg: gr.update(visible=False, label=f"插件[{k}]不需要高级参数。")})
return ret
dropdown.select(on_dropdown_changed, [dropdown], [switchy_bt, plugin_advanced_arg] )
def on_md_dropdown_changed(k):
return {chatbot: gr.update(label="当前模型:"+k)}
md_dropdown.select(on_md_dropdown_changed, [md_dropdown], [chatbot] )
# register the callback of the adaptive button
def route(k, *args, **kwargs):
if k in [r"打开插件列表", r"请先从插件列表中选择"]: return
if k in [r"打开插件列表", r"请先从插件列表中选择"]: return
yield from ArgsGeneralWrapper(crazy_fns[k]["Function"])(*args, **kwargs)
click_handle = switchy_bt.click(route,[switchy_bt, *input_combo, gr.State(PORT)], output_combo)
click_handle.then(on_report_generated, [file_upload, chatbot], [file_upload, chatbot])
# def expand_file_area(file_upload, area_file_up):
# if len(file_upload)>0: return {area_file_up: gr.update(open=True)}
# click_handle.then(expand_file_area, [file_upload, area_file_up], [area_file_up])
cancel_handles.append(click_handle)
# 终止按钮的回调函数注册
stopBtn.click(fn=None, inputs=None, outputs=None, cancels=cancel_handles)
@@ -178,7 +184,7 @@ def main():
print(f"如果浏览器没有自动打开请复制并转到以下URL")
print(f"\t(亮色主题): http://localhost:{PORT}")
print(f"\t(暗色主题): http://localhost:{PORT}/?__dark-theme=true")
def open():
def open():
time.sleep(2) # open the browser
webbrowser.open_new_tab(f"http://localhost:{PORT}/?__dark-theme=true")
threading.Thread(target=open, name="open-browser", daemon=True).start()
@@ -188,5 +194,13 @@ def main():
auto_opentab_delay()
demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png")
# To run under a sub-path
# CUSTOM_PATH, = get_conf('CUSTOM_PATH')
# if CUSTOM_PATH != "/":
# from toolbox import run_gradio_in_subpath
# run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH)
# else:
# demo.launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png")
if __name__ == "__main__":
main()

request_llm/README.md

@@ -1,4 +1,4 @@
# How to use other large language models (v3.0 branch, in testing)
# How to use other large language models
## ChatGLM
@@ -15,7 +15,7 @@ LLM_MODEL = "chatglm"
---
## Text-Generation-UI (TGUI)
## Text-Generation-UI (TGUI, under debugging, not yet usable)
### 1. Deploy TGUI
``` sh

request_llm/bridge_all.py

@@ -1,17 +1,17 @@
"""
该文件中主要包含2个函数
该文件中主要包含2个函数是所有LLM的通用接口它们会继续向下调用更底层的LLM模型处理多模型并行等细节
不具备多线程能力的函数:
1. predict: 正常对话时使用,具备完备的交互功能,不可多线程
不具备多线程能力的函数:正常对话时使用,具备完备的交互功能,不可多线程
1. predict(...)
具备多线程调用能力的函数
2. predict_no_ui_long_connection在实验过程中发现调用predict_no_ui处理长文档时和openai的连接容易断掉这个函数用stream的方式解决这个问题同样支持多线程
具备多线程调用能力的函数:在函数插件中被调用,灵活而简洁
2. predict_no_ui_long_connection(...)
"""
import tiktoken
from functools import lru_cache
from concurrent.futures import ThreadPoolExecutor
from toolbox import get_conf
from toolbox import get_conf, trimmed_format_exc
from .bridge_chatgpt import predict_no_ui_long_connection as chatgpt_noui
from .bridge_chatgpt import predict as chatgpt_ui
@@ -19,6 +19,9 @@ from .bridge_chatgpt import predict as chatgpt_ui
from .bridge_chatglm import predict_no_ui_long_connection as chatglm_noui
from .bridge_chatglm import predict as chatglm_ui
from .bridge_newbing import predict_no_ui_long_connection as newbing_noui
from .bridge_newbing import predict as newbing_ui
# from .bridge_tgui import predict_no_ui_long_connection as tgui_noui
# from .bridge_tgui import predict as tgui_ui
@@ -48,6 +51,7 @@ class LazyloadTiktoken(object):
API_URL_REDIRECT, = get_conf("API_URL_REDIRECT")
openai_endpoint = "https://api.openai.com/v1/chat/completions"
api2d_endpoint = "https://openai.api2d.net/v1/chat/completions"
newbing_endpoint = "wss://sydney.bing.com/sydney/ChatHub"
# compatibility with old-version config
try:
API_URL, = get_conf("API_URL")
@@ -59,6 +63,7 @@ except:
# new-version config
if openai_endpoint in API_URL_REDIRECT: openai_endpoint = API_URL_REDIRECT[openai_endpoint]
if api2d_endpoint in API_URL_REDIRECT: api2d_endpoint = API_URL_REDIRECT[api2d_endpoint]
if newbing_endpoint in API_URL_REDIRECT: newbing_endpoint = API_URL_REDIRECT[newbing_endpoint]
# get the tokenizer
@@ -116,7 +121,15 @@ model_info = {
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
# newbing
"newbing": {
"fn_with_ui": newbing_ui,
"fn_without_ui": newbing_noui,
"endpoint": newbing_endpoint,
"max_token": 4096,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
}
@@ -128,10 +141,7 @@ def LLM_CATCH_EXCEPTION(f):
try:
return f(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience)
except Exception as e:
from toolbox import get_conf
import traceback
proxies, = get_conf('proxies')
tb_str = '\n```\n' + traceback.format_exc() + '\n```\n'
tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
observe_window[0] = tb_str
return tb_str
return decorated
@@ -182,7 +192,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, obser
def mutex_manager(window_mutex, observe_window):
while True:
time.sleep(0.5)
time.sleep(0.25)
if not window_mutex[-1]: break
# watchdog
for i in range(n_model):
@@ -210,7 +220,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, obser
return_string_collect.append( f"【{str(models[i])} 说】: <font color=\"{colors[i]}\"> {future.result()} </font>" )
window_mutex[-1] = False # stop mutex thread
res = '<br/>\n\n---\n\n'.join(return_string_collect)
res = '<br/><br/>\n\n---\n\n'.join(return_string_collect)
return res
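The newbing entry added to model_info above shows the pattern every backend follows: bridge_all only needs a streaming UI function, a non-UI function for plugins, an endpoint, and tokenizer metadata. A hedged sketch of what registering a further, purely hypothetical backend would look like:

```python
# hypothetical adapter module, mirroring bridge_newbing's two-function interface
from .bridge_mymodel import predict as mymodel_ui
from .bridge_mymodel import predict_no_ui_long_connection as mymodel_noui

model_info["mymodel"] = {
    "fn_with_ui": mymodel_ui,         # streaming function used by the chat UI
    "fn_without_ui": mymodel_noui,    # blocking function used by multithreaded plugins
    "endpoint": "https://mymodel.example.com/api",  # placeholder endpoint
    "max_token": 4096,
    "tokenizer": tokenizer_gpt35,     # reuse the gpt-3.5 tokenizer for counting
    "token_cnt": get_token_num_gpt35,
}
```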

request_llm/bridge_chatglm.py

@@ -32,6 +32,7 @@ class GetGLMHandle(Process):
return self.chatglm_model is not None
def run(self):
# runs in the child process
# first run: load parameters
retry = 0
while True:
@@ -53,17 +54,24 @@ class GetGLMHandle(Process):
self.child.send('[Local Message] Call ChatGLM fail 不能正常加载ChatGLM的参数。')
raise RuntimeError("不能正常加载ChatGLM的参数")
# enter the task-waiting state
while True:
# enter the task-waiting state
kwargs = self.child.recv()
# message received, start the request
try:
for response, history in self.chatglm_model.stream_chat(self.chatglm_tokenizer, **kwargs):
self.child.send(response)
# # receive a possible termination command midway (if any)
# if self.child.poll():
# command = self.child.recv()
# if command == '[Terminate]': break
except:
self.child.send('[Local Message] Call ChatGLM fail.')
# request finished, start the next loop
self.child.send('[Finish]')
def stream_chat(self, **kwargs):
# runs in the main process
self.parent.send(kwargs)
while True:
res = self.parent.recv()
@@ -92,8 +100,8 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
# chatglm has no sys_prompt interface, so add the prompt to history
history_feedin = []
history_feedin.append(["What can I do?", sys_prompt])
for i in range(len(history)//2):
history_feedin.append(["What can I do?", sys_prompt] )
history_feedin.append([history[2*i], history[2*i+1]] )
watch_dog_patience = 5 # watchdog patience; 5 seconds is enough
@@ -130,11 +138,17 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话)
inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
# process the history
history_feedin = []
history_feedin.append(["What can I do?", system_prompt] )
for i in range(len(history)//2):
history_feedin.append(["What can I do?", system_prompt] )
history_feedin.append([history[2*i], history[2*i+1]] )
# start receiving chatglm's reply
for response in glm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
chatbot[-1] = (inputs, response)
yield from update_ui(chatbot=chatbot, history=history)
yield from update_ui(chatbot=chatbot, history=history)
# summarize the output
history.extend([inputs, response])
yield from update_ui(chatbot=chatbot, history=history)

request_llm/bridge_chatgpt.py

@@ -21,7 +21,7 @@ import importlib
# Put your own secrets, such as the API key and proxy URL, in config_private.py
# On load, first check whether a private config_private file exists (not tracked by git); if it does, it overrides the original config
from toolbox import get_conf, update_ui, is_any_api_key, select_api_key
from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history, trimmed_format_exc
proxies, API_KEY, TIMEOUT_SECONDS, MAX_RETRY = \
get_conf('proxies', 'API_KEY', 'TIMEOUT_SECONDS', 'MAX_RETRY')
@@ -118,7 +118,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
"""
if is_any_api_key(inputs):
chatbot._cookies['api_key'] = inputs
chatbot.append(("输入已识别为openai的api_key", "api_key已导入"))
chatbot.append(("输入已识别为openai的api_key", what_keys(inputs)))
yield from update_ui(chatbot=chatbot, history=history, msg="api_key已导入") # refresh the UI
return
elif not is_any_api_key(chatbot._cookies['api_key']):
@@ -141,11 +141,11 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
try:
headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt, stream)
except RuntimeError as e:
chatbot[-1] = (inputs, f"您提供的api-key不满足要求不包含任何可用于{llm_kwargs['llm_model']}的api-key。")
chatbot[-1] = (inputs, f"您提供的api-key不满足要求不包含任何可用于{llm_kwargs['llm_model']}的api-key。您可能选择了错误的模型或请求源。")
yield from update_ui(chatbot=chatbot, history=history, msg="api-key不满足要求") # 刷新界面
return
history.append(inputs); history.append(" ")
history.append(inputs); history.append("")
retry = 0
while True:
@@ -198,21 +198,24 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
chunk_decoded = chunk.decode()
error_msg = chunk_decoded
if "reduce the length" in error_msg:
chatbot[-1] = (chatbot[-1][0], "[Local Message] Reduce the length. 本次输入过长,或历史数据过长. 历史缓存数据现已释放,您可以请再次尝试.")
history = [] # clear history
if len(history) >= 2: history[-1] = ""; history[-2] = "" # clear the currently overflowing input: history[-2] is this input, history[-1] is this output
history = clip_history(inputs=inputs, history=history, tokenizer=model_info[llm_kwargs['llm_model']]['tokenizer'],
max_token_limit=(model_info[llm_kwargs['llm_model']]['max_token'])) # release at least half of history
chatbot[-1] = (chatbot[-1][0], "[Local Message] Reduce the length. 本次输入过长, 或历史数据过长. 历史缓存数据已部分释放, 您可以请再次尝试. (若再次失败则更可能是因为输入过长.)")
# history = [] # clear history
elif "does not exist" in error_msg:
chatbot[-1] = (chatbot[-1][0], f"[Local Message] Model {llm_kwargs['llm_model']} does not exist. 模型不存在或者您没有获得体验资格.")
chatbot[-1] = (chatbot[-1][0], f"[Local Message] Model {llm_kwargs['llm_model']} does not exist. 模型不存在, 或者您没有获得体验资格.")
elif "Incorrect API key" in error_msg:
chatbot[-1] = (chatbot[-1][0], "[Local Message] Incorrect API key. OpenAI以提供了不正确的API_KEY为由拒绝服务.")
chatbot[-1] = (chatbot[-1][0], "[Local Message] Incorrect API key. OpenAI以提供了不正确的API_KEY为由, 拒绝服务.")
elif "exceeded your current quota" in error_msg:
chatbot[-1] = (chatbot[-1][0], "[Local Message] You exceeded your current quota. OpenAI以账户额度不足为由拒绝服务.")
chatbot[-1] = (chatbot[-1][0], "[Local Message] You exceeded your current quota. OpenAI以账户额度不足为由, 拒绝服务.")
elif "bad forward key" in error_msg:
chatbot[-1] = (chatbot[-1][0], "[Local Message] Bad forward key. API2D账户额度不足.")
elif "Not enough point" in error_msg:
chatbot[-1] = (chatbot[-1][0], "[Local Message] Not enough point. API2D账户点数不足.")
else:
from toolbox import regular_txt_to_markdown
tb_str = '```\n' + traceback.format_exc() + '```'
tb_str = '```\n' + trimmed_format_exc() + '```'
chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk_decoded[4:])}")
yield from update_ui(chatbot=chatbot, history=history, msg="Json异常" + error_msg) # refresh the UI
return

request_llm/bridge_newbing.py

@@ -0,0 +1,251 @@
"""
========================================================================
第一部分来自EdgeGPT.py
https://github.com/acheong08/EdgeGPT
========================================================================
"""
from .edge_gpt import NewbingChatbot
load_message = "等待NewBing响应。"
"""
========================================================================
第二部分子进程Worker调用主体
========================================================================
"""
import time
import json
import re
import asyncio
import importlib
import threading
from toolbox import update_ui, get_conf, trimmed_format_exc
from multiprocessing import Process, Pipe
def preprocess_newbing_out(s):
pattern = r'\^(\d+)\^' # match ^number^
sub = lambda m: '\['+m.group(1)+'\]' # turn the matched number into [number]
result = re.sub(pattern, sub, s) # do the substitution
if '[1]' in result:
result += '\n\n```\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n'
return result
def preprocess_newbing_out_simple(result):
if '[1]' in result:
result += '\n\n```\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n'
return result
class NewBingHandle(Process):
def __init__(self):
super().__init__(daemon=True)
self.parent, self.child = Pipe()
self.newbing_model = None
self.info = ""
self.success = True
self.local_history = []
self.check_dependency()
self.start()
self.threadLock = threading.Lock()
def check_dependency(self):
try:
self.success = False
import certifi, httpx, rich
self.info = "依赖检测通过等待NewBing响应。注意目前不能多人同时调用NewBing接口有线程锁否则将导致每个人的NewBing问询历史互相渗透。调用NewBing时会自动使用已配置的代理。"
self.success = True
except:
self.info = "缺少的依赖如果要使用Newbing除了基础的pip依赖以外您还需要运行`pip install -r request_llm/requirements_newbing.txt`安装Newbing的依赖。"
self.success = False
def ready(self):
return self.newbing_model is not None
async def async_run(self):
# read the config
NEWBING_STYLE, = get_conf('NEWBING_STYLE')
from request_llm.bridge_all import model_info
endpoint = model_info['newbing']['endpoint']
while True:
# wait
kwargs = self.child.recv()
question=kwargs['query']
history=kwargs['history']
system_prompt=kwargs['system_prompt']
# reset the conversation?
if len(self.local_history) > 0 and len(history)==0:
await self.newbing_model.reset()
self.local_history = []
# start asking the question
prompt = ""
if system_prompt not in self.local_history:
self.local_history.append(system_prompt)
prompt += system_prompt + '\n'
# append the history
for ab in history:
a, b = ab
if a not in self.local_history:
self.local_history.append(a)
prompt += a + '\n'
# if b not in self.local_history:
# self.local_history.append(b)
# prompt += b + '\n'
# the question
prompt += question
self.local_history.append(question)
print('question:', prompt)
# submit
async for final, response in self.newbing_model.ask_stream(
prompt=question,
conversation_style=NEWBING_STYLE, # ["creative", "balanced", "precise"]
wss_link=endpoint, # "wss://sydney.bing.com/sydney/ChatHub"
):
if not final:
print(response)
self.child.send(str(response))
else:
print('-------- receive final ---------')
self.child.send('[Finish]')
# self.local_history.append(response)
def run(self):
"""
这个函数运行在子进程
"""
# 第一次运行,加载参数
self.success = False
self.local_history = []
if (self.newbing_model is None) or (not self.success):
# proxy settings
proxies, = get_conf('proxies')
if proxies is None:
self.proxies_https = None
else:
self.proxies_https = proxies['https']
# cookie
NEWBING_COOKIES, = get_conf('NEWBING_COOKIES')
try:
cookies = json.loads(NEWBING_COOKIES)
except:
self.success = False
tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
self.child.send(f'[Local Message] 不能加载Newbing组件。NEWBING_COOKIES未填写或有格式错误。')
self.child.send('[Fail]')
self.child.send('[Finish]')
raise RuntimeError(f"不能加载Newbing组件。NEWBING_COOKIES未填写或有格式错误。")
try:
self.newbing_model = NewbingChatbot(proxy=self.proxies_https, cookies=cookies)
except:
self.success = False
tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
self.child.send(f'[Local Message] 不能加载Newbing组件。{tb_str}')
self.child.send('[Fail]')
self.child.send('[Finish]')
raise RuntimeError(f"不能加载Newbing组件。")
self.success = True
try:
# 进入任务等待状态
asyncio.run(self.async_run())
except Exception:
tb_str = '```\n' + trimmed_format_exc() + '```'
self.child.send(f'[Local Message] Newbing失败 {tb_str}.')
self.child.send('[Fail]')
self.child.send('[Finish]')
def stream_chat(self, **kwargs):
    """
    This function runs in the main process.
    """
    self.threadLock.acquire()
    try:
        self.parent.send(kwargs)  # forward the request to the child process
        while True:
            res = self.parent.recv()  # wait for the next chunk of NewBing's reply
            if res == '[Finish]':
                break  # done
            elif res == '[Fail]':
                self.success = False
                break
            else:
                yield res  # a chunk of NewBing's reply
    finally:
        self.threadLock.release()  # always release, even if the consumer abandons the generator
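The class above combines three ingredients: a daemon subprocess, a duplex Pipe, and '[Finish]'/'[Fail]' sentinel strings that mark the end of a stream. A minimal self-contained sketch of the same pattern, with all names hypothetical and no NewBing dependency:

from multiprocessing import Process, Pipe

class EchoWorker(Process):
    """Hypothetical stand-in for NewBingHandle, using the same wire protocol."""
    def __init__(self):
        super().__init__(daemon=True)
        self.parent, self.child = Pipe()
    def run(self):  # executes in the child process
        while True:
            query = self.child.recv()
            for word in query.split():
                self.child.send(word)      # stream partial results
            self.child.send('[Finish]')    # sentinel: stream complete
    def stream_chat(self, query):          # executes in the main process
        self.parent.send(query)
        while True:
            res = self.parent.recv()
            if res in ('[Finish]', '[Fail]'):
                break
            yield res

if __name__ == '__main__':
    worker = EchoWorker()
    worker.start()
    print(list(worker.stream_chat("hello from the pipe")))  # ['hello', 'from', 'the', 'pipe']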
"""
========================================================================
第三部分:主进程统一调用函数接口
========================================================================
"""
global newbing_handle
newbing_handle = None
def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False):
    """
    Multithreaded entry point.
    See request_llm/bridge_all.py for the interface documentation.
    """
    global newbing_handle
    if (newbing_handle is None) or (not newbing_handle.success):
        newbing_handle = NewBingHandle()
        observe_window[0] = load_message + "\n\n" + newbing_handle.info
        if not newbing_handle.success:
            error = newbing_handle.info
            newbing_handle = None
            raise RuntimeError(error)
    # there is no sys_prompt interface, so the prompt is folded into history
    history_feedin = []
    for i in range(len(history)//2):
        history_feedin.append([history[2*i], history[2*i+1]] )
    watch_dog_patience = 5  # watchdog patience: 5 seconds is enough
    response = ""
    observe_window[0] = "[Local Message]: Waiting for NewBing response ..."
    for response in newbing_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
        observe_window[0] = preprocess_newbing_out_simple(response)
        if len(observe_window) >= 2:
            if (time.time()-observe_window[1]) > watch_dog_patience:
                raise RuntimeError("Program terminated.")
    return preprocess_newbing_out_simple(response)
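A hedged usage sketch of this entry point. The flat-history convention is [question1, answer1, question2, answer2, ...], which the loop above re-pairs; the llm_kwargs values here are illustrative, and the call itself is commented out because it needs valid NewBing cookies:

llm_kwargs = {'max_length': 4096, 'top_p': 1.0, 'temperature': 1.0}  # illustrative values
history = ["What is WSS?", "WebSocket Secure.", "And SSL?", "Secure Sockets Layer."]
# observe_window[0] carries streamed text out; observe_window[1] is a heartbeat
# timestamp the caller must keep refreshing, or the watchdog raises after 5 seconds.
observe_window = ["", time.time()]
# answer = predict_no_ui_long_connection("Summarize the dialogue.", llm_kwargs,
#                                        history, sys_prompt="", observe_window=observe_window)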
def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
    """
    Single-threaded entry point.
    See request_llm/bridge_all.py for the interface documentation.
    """
    chatbot.append((inputs, "[Local Message]: Waiting for NewBing response ..."))
    global newbing_handle
    if (newbing_handle is None) or (not newbing_handle.success):
        newbing_handle = NewBingHandle()
        chatbot[-1] = (inputs, load_message + "\n\n" + newbing_handle.info)
        yield from update_ui(chatbot=chatbot, history=[])
        if not newbing_handle.success:
            newbing_handle = None
            return
    if additional_fn is not None:
        import core_functional
        importlib.reload(core_functional)  # hot-reload the prompt definitions
        core_functional = core_functional.get_core_functions()
        if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs)  # apply the preprocessing function, if one is defined
        inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
    history_feedin = []
    for i in range(len(history)//2):
        history_feedin.append([history[2*i], history[2*i+1]] )
    chatbot[-1] = (inputs, "[Local Message]: Waiting for NewBing response ...")
    response = "[Local Message]: Waiting for NewBing response ..."
    yield from update_ui(chatbot=chatbot, history=history, msg="NewBing responds slowly; the reply is not complete yet. Please wait for it to finish before submitting a new question.")
    for response in newbing_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
        chatbot[-1] = (inputs, preprocess_newbing_out(response))
        yield from update_ui(chatbot=chatbot, history=history, msg="NewBing responds slowly; the reply is not complete yet. Please wait for it to finish before submitting a new question.")
    if response == "[Local Message]: Waiting for NewBing response ...": response = "[Local Message]: NewBing response error; please refresh the page and retry ..."
    history.extend([inputs, response])
    yield from update_ui(chatbot=chatbot, history=history, msg="Response complete; you may submit a new question.")

409
request_llm/edge_gpt.py Normal file
View File

@@ -0,0 +1,409 @@
"""
========================================================================
第一部分来自EdgeGPT.py
https://github.com/acheong08/EdgeGPT
========================================================================
"""
import argparse
import asyncio
import json
import os
import random
import re
import ssl
import sys
import uuid
from enum import Enum
from typing import Generator
from typing import Literal
from typing import Optional
from typing import Union
import websockets.client as websockets
DELIMITER = "\x1e"
# Generate random IP between range 13.104.0.0/14
FORWARDED_IP = (
f"13.{random.randint(104, 107)}.{random.randint(0, 255)}.{random.randint(0, 255)}"
)
HEADERS = {
"accept": "application/json",
"accept-language": "en-US,en;q=0.9",
"content-type": "application/json",
"sec-ch-ua": '"Not_A Brand";v="99", "Microsoft Edge";v="110", "Chromium";v="110"',
"sec-ch-ua-arch": '"x86"',
"sec-ch-ua-bitness": '"64"',
"sec-ch-ua-full-version": '"109.0.1518.78"',
"sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"',
"sec-ch-ua-mobile": "?0",
"sec-ch-ua-model": "",
"sec-ch-ua-platform": '"Windows"',
"sec-ch-ua-platform-version": '"15.0.0"',
"sec-fetch-dest": "empty",
"sec-fetch-mode": "cors",
"sec-fetch-site": "same-origin",
"x-ms-client-request-id": str(uuid.uuid4()),
"x-ms-useragent": "azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32",
"Referer": "https://www.bing.com/search?q=Bing+AI&showconv=1&FORM=hpcodx",
"Referrer-Policy": "origin-when-cross-origin",
"x-forwarded-for": FORWARDED_IP,
}
HEADERS_INIT_CONVER = {
"authority": "edgeservices.bing.com",
"accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7",
"accept-language": "en-US,en;q=0.9",
"cache-control": "max-age=0",
"sec-ch-ua": '"Chromium";v="110", "Not A(Brand";v="24", "Microsoft Edge";v="110"',
"sec-ch-ua-arch": '"x86"',
"sec-ch-ua-bitness": '"64"',
"sec-ch-ua-full-version": '"110.0.1587.69"',
"sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"',
"sec-ch-ua-mobile": "?0",
"sec-ch-ua-model": '""',
"sec-ch-ua-platform": '"Windows"',
"sec-ch-ua-platform-version": '"15.0.0"',
"sec-fetch-dest": "document",
"sec-fetch-mode": "navigate",
"sec-fetch-site": "none",
"sec-fetch-user": "?1",
"upgrade-insecure-requests": "1",
"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36 Edg/110.0.1587.69",
"x-edge-shopping-flag": "1",
"x-forwarded-for": FORWARDED_IP,
}
def get_ssl_context():
import certifi
ssl_context = ssl.create_default_context()
ssl_context.load_verify_locations(certifi.where())
return ssl_context
class NotAllowedToAccess(Exception):
pass
class ConversationStyle(Enum):
creative = "h3imaginative,clgalileo,gencontentv3"
balanced = "galileo"
precise = "h3precise,clgalileo"
CONVERSATION_STYLE_TYPE = Optional[
Union[ConversationStyle, Literal["creative", "balanced", "precise"]]
]
def _append_identifier(msg: dict) -> str:
"""
Appends special character to end of message to identify end of message
"""
# Convert dict to json string
return json.dumps(msg) + DELIMITER
def _get_ran_hex(length: int = 32) -> str:
"""
Returns random hex string
"""
return "".join(random.choice("0123456789abcdef") for _ in range(length))
class _ChatHubRequest:
"""
Request object for ChatHub
"""
def __init__(
self,
conversation_signature: str,
client_id: str,
conversation_id: str,
invocation_id: int = 0,
) -> None:
self.struct: dict = {}
self.client_id: str = client_id
self.conversation_id: str = conversation_id
self.conversation_signature: str = conversation_signature
self.invocation_id: int = invocation_id
def update(
self,
prompt,
conversation_style,
options,
) -> None:
"""
Updates request object
"""
if options is None:
options = [
"deepleo",
"enable_debug_commands",
"disable_emoji_spoken_text",
"enablemm",
]
if conversation_style:
if not isinstance(conversation_style, ConversationStyle):
conversation_style = getattr(ConversationStyle, conversation_style)
options = [
"nlu_direct_response_filter",
"deepleo",
"disable_emoji_spoken_text",
"responsible_ai_policy_235",
"enablemm",
conversation_style.value,
"dtappid",
"cricinfo",
"cricinfov2",
"dv3sugg",
]
self.struct = {
"arguments": [
{
"source": "cib",
"optionsSets": options,
"sliceIds": [
"222dtappid",
"225cricinfo",
"224locals0",
],
"traceId": _get_ran_hex(32),
"isStartOfSession": self.invocation_id == 0,
"message": {
"author": "user",
"inputMethod": "Keyboard",
"text": prompt,
"messageType": "Chat",
},
"conversationSignature": self.conversation_signature,
"participant": {
"id": self.client_id,
},
"conversationId": self.conversation_id,
},
],
"invocationId": str(self.invocation_id),
"target": "chat",
"type": 4,
}
self.invocation_id += 1
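An illustration, with placeholder IDs, of how update() builds the payload and advances invocation_id; only the first invocation of a conversation sets isStartOfSession:

req = _ChatHubRequest(conversation_signature="sig", client_id="client", conversation_id="conv")
req.update(prompt="hello", conversation_style="balanced", options=None)
assert req.struct["arguments"][0]["isStartOfSession"] is True   # built while invocation_id == 0
req.update(prompt="and again", conversation_style="balanced", options=None)
assert req.struct["invocationId"] == "1"                        # counter advanced by the first call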
class _Conversation:
"""
Conversation API
"""
def __init__(
self,
cookies,
proxy,
) -> None:
self.struct: dict = {
"conversationId": None,
"clientId": None,
"conversationSignature": None,
"result": {"value": "Success", "message": None},
}
import httpx
self.proxy = proxy
proxy = (
proxy
or os.environ.get("all_proxy")
or os.environ.get("ALL_PROXY")
or os.environ.get("https_proxy")
or os.environ.get("HTTPS_PROXY")
or None
)
if proxy is not None and proxy.startswith("socks5h://"):
proxy = "socks5://" + proxy[len("socks5h://") :]
self.session = httpx.Client(
proxies=proxy,
timeout=30,
headers=HEADERS_INIT_CONVER,
)
for cookie in cookies:
self.session.cookies.set(cookie["name"], cookie["value"])
# Send GET request
response = self.session.get(
url=os.environ.get("BING_PROXY_URL")
or "https://edgeservices.bing.com/edgesvc/turing/conversation/create",
)
if response.status_code != 200:
response = self.session.get(
"https://edge.churchless.tech/edgesvc/turing/conversation/create",
)
if response.status_code != 200:
print(f"Status code: {response.status_code}")
print(response.text)
print(response.url)
raise Exception("Authentication failed")
try:
self.struct = response.json()
except (json.decoder.JSONDecodeError, NotAllowedToAccess) as exc:
raise Exception(
"Authentication failed. You have not been accepted into the beta.",
) from exc
if self.struct["result"]["value"] == "UnauthorizedRequest":
raise NotAllowedToAccess(self.struct["result"]["message"])
class _ChatHub:
"""
Chat API
"""
def __init__(self, conversation) -> None:
self.wss = None
self.request: _ChatHubRequest
self.loop: bool
self.task: asyncio.Task
print(conversation.struct)
self.request = _ChatHubRequest(
conversation_signature=conversation.struct["conversationSignature"],
client_id=conversation.struct["clientId"],
conversation_id=conversation.struct["conversationId"],
)
async def ask_stream(
self,
prompt: str,
wss_link: str,
conversation_style: CONVERSATION_STYLE_TYPE = None,
raw: bool = False,
options: dict = None,
) -> Generator[tuple, None, None]:  # yields (final, response) pairs
"""
Ask a question to the bot
"""
# Close any previous websocket before opening a new one
if self.wss and not self.wss.closed:
await self.wss.close()
self.wss = await websockets.connect(
wss_link,
extra_headers=HEADERS,
max_size=None,
ssl=get_ssl_context()
)
await self._initial_handshake()
# Construct a ChatHub request
self.request.update(
prompt=prompt,
conversation_style=conversation_style,
options=options,
)
# Send request
await self.wss.send(_append_identifier(self.request.struct))
final = False
while not final:
objects = str(await self.wss.recv()).split(DELIMITER)
for obj in objects:
if obj is None or not obj:
continue
response = json.loads(obj)
if response.get("type") != 2 and raw:
yield False, response
elif response.get("type") == 1 and response["arguments"][0].get(
"messages",
):
resp_txt = response["arguments"][0]["messages"][0]["adaptiveCards"][
0
]["body"][0].get("text")
yield False, resp_txt
elif response.get("type") == 2:
final = True
yield True, response
async def _initial_handshake(self) -> None:
await self.wss.send(_append_identifier({"protocol": "json", "version": 1}))
await self.wss.recv()
async def close(self) -> None:
"""
Close the connection
"""
if self.wss and not self.wss.closed:
await self.wss.close()
class NewbingChatbot:
"""
Combines everything to make it seamless
"""
def __init__(
self,
cookies,
proxy
) -> None:
if cookies is None:
cookies = {}
self.cookies = cookies
self.proxy = proxy
self.chat_hub: _ChatHub = _ChatHub(
_Conversation(self.cookies, self.proxy),
)
async def ask(
self,
prompt: str,
wss_link: str,
conversation_style: CONVERSATION_STYLE_TYPE = None,
options: dict = None,
) -> dict:
"""
Ask a question to the bot
"""
async for final, response in self.chat_hub.ask_stream(
prompt=prompt,
conversation_style=conversation_style,
wss_link=wss_link,
options=options,
):
if final:
return response
await self.chat_hub.wss.close()
return None
async def ask_stream(
self,
prompt: str,
wss_link: str,
conversation_style: CONVERSATION_STYLE_TYPE = None,
raw: bool = False,
options: dict = None,
) -> Generator[tuple, None, None]:  # yields (final, response) pairs
"""
Ask a question to the bot
"""
async for response in self.chat_hub.ask_stream(
prompt=prompt,
conversation_style=conversation_style,
wss_link=wss_link,
raw=raw,
options=options,
):
yield response
async def close(self) -> None:
"""
Close the connection
"""
await self.chat_hub.close()
async def reset(self) -> None:
"""
Reset the conversation
"""
await self.close()
self.chat_hub = _ChatHub(_Conversation(self.cookies, self.proxy))
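A hedged end-to-end sketch of driving this class directly. The cookie file name is hypothetical (export your Bing cookies from the browser as JSON); the wss link is the one noted in bridge_newbing.py above:

async def demo():
    cookies = json.loads(open("cookies.json", "r", encoding="utf8").read())  # hypothetical path
    bot = NewbingChatbot(cookies=cookies, proxy=None)
    async for final, response in bot.ask_stream(
        prompt="Hello",
        conversation_style="balanced",
        wss_link="wss://sydney.bing.com/sydney/ChatHub",
    ):
        if not final:
            print(response)  # incremental text chunks
    await bot.close()

# asyncio.run(demo())  # requires valid cookies and network access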

8
request_llm/requirements_newbing.txt Normal file
View File

@@ -0,0 +1,8 @@
BingImageCreator
certifi
httpx
prompt_toolkit
requests
rich
websockets
httpx[socks]

247
theme.py
View File

@@ -137,6 +137,16 @@ advanced_css = """
/* Inline code: set the background color, rounded corners and spacing. */
.markdown-body code {
display: inline;
white-space: break-spaces;
border-radius: 6px;
margin: 0 2px 0 2px;
padding: .2em .4em .1em .4em;
background-color: rgba(13, 17, 23, 0.95);
color: #c9d1d9;
}
.dark .markdown-body code {
display: inline;
white-space: break-spaces;
border-radius: 6px;
@@ -144,8 +154,19 @@ advanced_css = """
padding: .2em .4em .1em .4em;
background-color: rgba(175,184,193,0.2);
}
/* Code block styling: background color, padding, margins and rounded corners. */
.markdown-body pre code {
display: block;
overflow: auto;
white-space: pre;
background-color: rgba(13, 17, 23, 0.95);
border-radius: 10px;
padding: 1em;
margin: 1em 2em 1em 0.5em;
}
.dark .markdown-body pre code {
display: block;
overflow: auto;
white-space: pre;
@@ -160,72 +181,162 @@ advanced_css = """
if CODE_HIGHLIGHT:
advanced_css += """
.hll { background-color: #ffffcc }
.c { color: #3D7B7B; font-style: italic } /* Comment */
.err { border: 1px solid #FF0000 } /* Error */
.k { color: hsl(197, 94%, 51%); font-weight: bold } /* Keyword */
.o { color: #666666 } /* Operator */
.ch { color: #3D7B7B; font-style: italic } /* Comment.Hashbang */
.cm { color: #3D7B7B; font-style: italic } /* Comment.Multiline */
.cp { color: #9C6500 } /* Comment.Preproc */
.cpf { color: #3D7B7B; font-style: italic } /* Comment.PreprocFile */
.c1 { color: #3D7B7B; font-style: italic } /* Comment.Single */
.cs { color: #3D7B7B; font-style: italic } /* Comment.Special */
.gd { color: #A00000 } /* Generic.Deleted */
.ge { font-style: italic } /* Generic.Emph */
.gr { color: #E40000 } /* Generic.Error */
.gh { color: #000080; font-weight: bold } /* Generic.Heading */
.gi { color: #008400 } /* Generic.Inserted */
.go { color: #717171 } /* Generic.Output */
.gp { color: #000080; font-weight: bold } /* Generic.Prompt */
.gs { font-weight: bold } /* Generic.Strong */
.gu { color: #800080; font-weight: bold } /* Generic.Subheading */
.gt { color: #a9dd00 } /* Generic.Traceback */
.kc { color: #008000; font-weight: bold } /* Keyword.Constant */
.kd { color: #008000; font-weight: bold } /* Keyword.Declaration */
.kn { color: #008000; font-weight: bold } /* Keyword.Namespace */
.kp { color: #008000 } /* Keyword.Pseudo */
.kr { color: #008000; font-weight: bold } /* Keyword.Reserved */
.kt { color: #B00040 } /* Keyword.Type */
.m { color: #666666 } /* Literal.Number */
.s { color: #BA2121 } /* Literal.String */
.na { color: #687822 } /* Name.Attribute */
.nb { color: #e5f8c3 } /* Name.Builtin */
.nc { color: #ffad65; font-weight: bold } /* Name.Class */
.no { color: #880000 } /* Name.Constant */
.nd { color: #AA22FF } /* Name.Decorator */
.ni { color: #717171; font-weight: bold } /* Name.Entity */
.ne { color: #CB3F38; font-weight: bold } /* Name.Exception */
.nf { color: #f9f978 } /* Name.Function */
.nl { color: #767600 } /* Name.Label */
.nn { color: #0000FF; font-weight: bold } /* Name.Namespace */
.nt { color: #008000; font-weight: bold } /* Name.Tag */
.nv { color: #19177C } /* Name.Variable */
.ow { color: #AA22FF; font-weight: bold } /* Operator.Word */
.w { color: #bbbbbb } /* Text.Whitespace */
.mb { color: #666666 } /* Literal.Number.Bin */
.mf { color: #666666 } /* Literal.Number.Float */
.mh { color: #666666 } /* Literal.Number.Hex */
.mi { color: #666666 } /* Literal.Number.Integer */
.mo { color: #666666 } /* Literal.Number.Oct */
.sa { color: #BA2121 } /* Literal.String.Affix */
.sb { color: #BA2121 } /* Literal.String.Backtick */
.sc { color: #BA2121 } /* Literal.String.Char */
.dl { color: #BA2121 } /* Literal.String.Delimiter */
.sd { color: #BA2121; font-style: italic } /* Literal.String.Doc */
.s2 { color: #2bf840 } /* Literal.String.Double */
.se { color: #AA5D1F; font-weight: bold } /* Literal.String.Escape */
.sh { color: #BA2121 } /* Literal.String.Heredoc */
.si { color: #A45A77; font-weight: bold } /* Literal.String.Interpol */
.sx { color: #008000 } /* Literal.String.Other */
.sr { color: #A45A77 } /* Literal.String.Regex */
.s1 { color: #BA2121 } /* Literal.String.Single */
.ss { color: #19177C } /* Literal.String.Symbol */
.bp { color: #008000 } /* Name.Builtin.Pseudo */
.fm { color: #0000FF } /* Name.Function.Magic */
.vc { color: #19177C } /* Name.Variable.Class */
.vg { color: #19177C } /* Name.Variable.Global */
.vi { color: #19177C } /* Name.Variable.Instance */
.vm { color: #19177C } /* Name.Variable.Magic */
.il { color: #666666 } /* Literal.Number.Integer.Long */
.codehilite .hll { background-color: #6e7681 }
.codehilite .c { color: #8b949e; font-style: italic } /* Comment */
.codehilite .err { color: #f85149 } /* Error */
.codehilite .esc { color: #c9d1d9 } /* Escape */
.codehilite .g { color: #c9d1d9 } /* Generic */
.codehilite .k { color: #ff7b72 } /* Keyword */
.codehilite .l { color: #a5d6ff } /* Literal */
.codehilite .n { color: #c9d1d9 } /* Name */
.codehilite .o { color: #ff7b72; font-weight: bold } /* Operator */
.codehilite .x { color: #c9d1d9 } /* Other */
.codehilite .p { color: #c9d1d9 } /* Punctuation */
.codehilite .ch { color: #8b949e; font-style: italic } /* Comment.Hashbang */
.codehilite .cm { color: #8b949e; font-style: italic } /* Comment.Multiline */
.codehilite .cp { color: #8b949e; font-weight: bold; font-style: italic } /* Comment.Preproc */
.codehilite .cpf { color: #8b949e; font-style: italic } /* Comment.PreprocFile */
.codehilite .c1 { color: #8b949e; font-style: italic } /* Comment.Single */
.codehilite .cs { color: #8b949e; font-weight: bold; font-style: italic } /* Comment.Special */
.codehilite .gd { color: #ffa198; background-color: #490202 } /* Generic.Deleted */
.codehilite .ge { color: #c9d1d9; font-style: italic } /* Generic.Emph */
.codehilite .gr { color: #ffa198 } /* Generic.Error */
.codehilite .gh { color: #79c0ff; font-weight: bold } /* Generic.Heading */
.codehilite .gi { color: #56d364; background-color: #0f5323 } /* Generic.Inserted */
.codehilite .go { color: #8b949e } /* Generic.Output */
.codehilite .gp { color: #8b949e } /* Generic.Prompt */
.codehilite .gs { color: #c9d1d9; font-weight: bold } /* Generic.Strong */
.codehilite .gu { color: #79c0ff } /* Generic.Subheading */
.codehilite .gt { color: #ff7b72 } /* Generic.Traceback */
.codehilite .g-Underline { color: #c9d1d9; text-decoration: underline } /* Generic.Underline */
.codehilite .kc { color: #79c0ff } /* Keyword.Constant */
.codehilite .kd { color: #ff7b72 } /* Keyword.Declaration */
.codehilite .kn { color: #ff7b72 } /* Keyword.Namespace */
.codehilite .kp { color: #79c0ff } /* Keyword.Pseudo */
.codehilite .kr { color: #ff7b72 } /* Keyword.Reserved */
.codehilite .kt { color: #ff7b72 } /* Keyword.Type */
.codehilite .ld { color: #79c0ff } /* Literal.Date */
.codehilite .m { color: #a5d6ff } /* Literal.Number */
.codehilite .s { color: #a5d6ff } /* Literal.String */
.codehilite .na { color: #c9d1d9 } /* Name.Attribute */
.codehilite .nb { color: #c9d1d9 } /* Name.Builtin */
.codehilite .nc { color: #f0883e; font-weight: bold } /* Name.Class */
.codehilite .no { color: #79c0ff; font-weight: bold } /* Name.Constant */
.codehilite .nd { color: #d2a8ff; font-weight: bold } /* Name.Decorator */
.codehilite .ni { color: #ffa657 } /* Name.Entity */
.codehilite .ne { color: #f0883e; font-weight: bold } /* Name.Exception */
.codehilite .nf { color: #d2a8ff; font-weight: bold } /* Name.Function */
.codehilite .nl { color: #79c0ff; font-weight: bold } /* Name.Label */
.codehilite .nn { color: #ff7b72 } /* Name.Namespace */
.codehilite .nx { color: #c9d1d9 } /* Name.Other */
.codehilite .py { color: #79c0ff } /* Name.Property */
.codehilite .nt { color: #7ee787 } /* Name.Tag */
.codehilite .nv { color: #79c0ff } /* Name.Variable */
.codehilite .ow { color: #ff7b72; font-weight: bold } /* Operator.Word */
.codehilite .pm { color: #c9d1d9 } /* Punctuation.Marker */
.codehilite .w { color: #6e7681 } /* Text.Whitespace */
.codehilite .mb { color: #a5d6ff } /* Literal.Number.Bin */
.codehilite .mf { color: #a5d6ff } /* Literal.Number.Float */
.codehilite .mh { color: #a5d6ff } /* Literal.Number.Hex */
.codehilite .mi { color: #a5d6ff } /* Literal.Number.Integer */
.codehilite .mo { color: #a5d6ff } /* Literal.Number.Oct */
.codehilite .sa { color: #79c0ff } /* Literal.String.Affix */
.codehilite .sb { color: #a5d6ff } /* Literal.String.Backtick */
.codehilite .sc { color: #a5d6ff } /* Literal.String.Char */
.codehilite .dl { color: #79c0ff } /* Literal.String.Delimiter */
.codehilite .sd { color: #a5d6ff } /* Literal.String.Doc */
.codehilite .s2 { color: #a5d6ff } /* Literal.String.Double */
.codehilite .se { color: #79c0ff } /* Literal.String.Escape */
.codehilite .sh { color: #79c0ff } /* Literal.String.Heredoc */
.codehilite .si { color: #a5d6ff } /* Literal.String.Interpol */
.codehilite .sx { color: #a5d6ff } /* Literal.String.Other */
.codehilite .sr { color: #79c0ff } /* Literal.String.Regex */
.codehilite .s1 { color: #a5d6ff } /* Literal.String.Single */
.codehilite .ss { color: #a5d6ff } /* Literal.String.Symbol */
.codehilite .bp { color: #c9d1d9 } /* Name.Builtin.Pseudo */
.codehilite .fm { color: #d2a8ff; font-weight: bold } /* Name.Function.Magic */
.codehilite .vc { color: #79c0ff } /* Name.Variable.Class */
.codehilite .vg { color: #79c0ff } /* Name.Variable.Global */
.codehilite .vi { color: #79c0ff } /* Name.Variable.Instance */
.codehilite .vm { color: #79c0ff } /* Name.Variable.Magic */
.codehilite .il { color: #a5d6ff } /* Literal.Number.Integer.Long */
.dark .codehilite .hll { background-color: #2C3B41 }
.dark .codehilite .c { color: #79d618; font-style: italic } /* Comment */
.dark .codehilite .err { color: #FF5370 } /* Error */
.dark .codehilite .esc { color: #89DDFF } /* Escape */
.dark .codehilite .g { color: #EEFFFF } /* Generic */
.dark .codehilite .k { color: #BB80B3 } /* Keyword */
.dark .codehilite .l { color: #C3E88D } /* Literal */
.dark .codehilite .n { color: #EEFFFF } /* Name */
.dark .codehilite .o { color: #89DDFF } /* Operator */
.dark .codehilite .p { color: #89DDFF } /* Punctuation */
.dark .codehilite .ch { color: #79d618; font-style: italic } /* Comment.Hashbang */
.dark .codehilite .cm { color: #79d618; font-style: italic } /* Comment.Multiline */
.dark .codehilite .cp { color: #79d618; font-style: italic } /* Comment.Preproc */
.dark .codehilite .cpf { color: #79d618; font-style: italic } /* Comment.PreprocFile */
.dark .codehilite .c1 { color: #79d618; font-style: italic } /* Comment.Single */
.dark .codehilite .cs { color: #79d618; font-style: italic } /* Comment.Special */
.dark .codehilite .gd { color: #FF5370 } /* Generic.Deleted */
.dark .codehilite .ge { color: #89DDFF } /* Generic.Emph */
.dark .codehilite .gr { color: #FF5370 } /* Generic.Error */
.dark .codehilite .gh { color: #C3E88D } /* Generic.Heading */
.dark .codehilite .gi { color: #C3E88D } /* Generic.Inserted */
.dark .codehilite .go { color: #79d618 } /* Generic.Output */
.dark .codehilite .gp { color: #FFCB6B } /* Generic.Prompt */
.dark .codehilite .gs { color: #FF5370 } /* Generic.Strong */
.dark .codehilite .gu { color: #89DDFF } /* Generic.Subheading */
.dark .codehilite .gt { color: #FF5370 } /* Generic.Traceback */
.dark .codehilite .kc { color: #89DDFF } /* Keyword.Constant */
.dark .codehilite .kd { color: #BB80B3 } /* Keyword.Declaration */
.dark .codehilite .kn { color: #89DDFF; font-style: italic } /* Keyword.Namespace */
.dark .codehilite .kp { color: #89DDFF } /* Keyword.Pseudo */
.dark .codehilite .kr { color: #BB80B3 } /* Keyword.Reserved */
.dark .codehilite .kt { color: #BB80B3 } /* Keyword.Type */
.dark .codehilite .ld { color: #C3E88D } /* Literal.Date */
.dark .codehilite .m { color: #F78C6C } /* Literal.Number */
.dark .codehilite .s { color: #C3E88D } /* Literal.String */
.dark .codehilite .na { color: #BB80B3 } /* Name.Attribute */
.dark .codehilite .nb { color: #82AAFF } /* Name.Builtin */
.dark .codehilite .nc { color: #FFCB6B } /* Name.Class */
.dark .codehilite .no { color: #EEFFFF } /* Name.Constant */
.dark .codehilite .nd { color: #82AAFF } /* Name.Decorator */
.dark .codehilite .ni { color: #89DDFF } /* Name.Entity */
.dark .codehilite .ne { color: #FFCB6B } /* Name.Exception */
.dark .codehilite .nf { color: #82AAFF } /* Name.Function */
.dark .codehilite .nl { color: #82AAFF } /* Name.Label */
.dark .codehilite .nn { color: #FFCB6B } /* Name.Namespace */
.dark .codehilite .nx { color: #EEFFFF } /* Name.Other */
.dark .codehilite .py { color: #FFCB6B } /* Name.Property */
.dark .codehilite .nt { color: #FF5370 } /* Name.Tag */
.dark .codehilite .nv { color: #89DDFF } /* Name.Variable */
.dark .codehilite .ow { color: #89DDFF; font-style: italic } /* Operator.Word */
.dark .codehilite .pm { color: #89DDFF } /* Punctuation.Marker */
.dark .codehilite .w { color: #EEFFFF } /* Text.Whitespace */
.dark .codehilite .mb { color: #F78C6C } /* Literal.Number.Bin */
.dark .codehilite .mf { color: #F78C6C } /* Literal.Number.Float */
.dark .codehilite .mh { color: #F78C6C } /* Literal.Number.Hex */
.dark .codehilite .mi { color: #F78C6C } /* Literal.Number.Integer */
.dark .codehilite .mo { color: #F78C6C } /* Literal.Number.Oct */
.dark .codehilite .sa { color: #BB80B3 } /* Literal.String.Affix */
.dark .codehilite .sb { color: #C3E88D } /* Literal.String.Backtick */
.dark .codehilite .sc { color: #C3E88D } /* Literal.String.Char */
.dark .codehilite .dl { color: #EEFFFF } /* Literal.String.Delimiter */
.dark .codehilite .sd { color: #79d618; font-style: italic } /* Literal.String.Doc */
.dark .codehilite .s2 { color: #C3E88D } /* Literal.String.Double */
.dark .codehilite .se { color: #EEFFFF } /* Literal.String.Escape */
.dark .codehilite .sh { color: #C3E88D } /* Literal.String.Heredoc */
.dark .codehilite .si { color: #89DDFF } /* Literal.String.Interpol */
.dark .codehilite .sx { color: #C3E88D } /* Literal.String.Other */
.dark .codehilite .sr { color: #89DDFF } /* Literal.String.Regex */
.dark .codehilite .s1 { color: #C3E88D } /* Literal.String.Single */
.dark .codehilite .ss { color: #89DDFF } /* Literal.String.Symbol */
.dark .codehilite .bp { color: #89DDFF } /* Name.Builtin.Pseudo */
.dark .codehilite .fm { color: #82AAFF } /* Name.Function.Magic */
.dark .codehilite .vc { color: #89DDFF } /* Name.Variable.Class */
.dark .codehilite .vg { color: #89DDFF } /* Name.Variable.Global */
.dark .codehilite .vi { color: #89DDFF } /* Name.Variable.Instance */
.dark .codehilite .vm { color: #82AAFF } /* Name.Variable.Magic */
.dark .codehilite .il { color: #F78C6C } /* Literal.Number.Integer.Long */
"""

toolbox.py
View File

@@ -5,7 +5,20 @@ import inspect
import re
from latex2mathml.converter import convert as tex2mathml
from functools import wraps, lru_cache
############################### Plugin input/output plumbing #######################################
"""
========================================================================
Part 1
Plugin input/output plumbing
- ChatBotWithCookies: a Chatbot class that carries cookies, the basis for more powerful features
- ArgsGeneralWrapper: a decorator that reorganizes input arguments, changing their order and structure
- update_ui: refresh the UI with yield from update_ui(chatbot, history)
- CatchException: surface any exception raised inside a plugin in the chat window
- HotReload: hot-reload function plugins
- trimmed_format_exc: print a traceback while hiding absolute paths for safety
========================================================================
"""
class ChatBotWithCookies(list):
def __init__(self, cookie):
self._cookies = cookie
@@ -20,33 +33,35 @@ class ChatBotWithCookies(list):
def get_cookies(self):
return self._cookies
def ArgsGeneralWrapper(f):
    """
    Decorator that reorganizes the input arguments, changing their order and structure.
    """
    def decorated(cookies, max_length, llm_model, txt, txt2, top_p, temperature, chatbot, history, system_prompt, *args):
    def decorated(cookies, max_length, llm_model, txt, txt2, top_p, temperature, chatbot, history, system_prompt, plugin_advanced_arg, *args):
        txt_passon = txt
        if txt == "" and txt2 != "": txt_passon = txt2
        # hand over a cookie-carrying chatbot
        cookies.update({
            'top_p':top_p,
            'temperature':temperature,
        })
        llm_kwargs = {
            'api_key': cookies['api_key'],
            'llm_model': llm_model,
            'top_p':top_p,
            'max_length': max_length,
            'temperature':temperature,
        }
        plugin_kwargs = {
            # nothing else here yet
            "advanced_arg": plugin_advanced_arg,
        }
        chatbot_with_cookie = ChatBotWithCookies(cookies)
        chatbot_with_cookie.write_list(chatbot)
        yield from f(txt_passon, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, system_prompt, *args)
    return decorated
def update_ui(chatbot, history, msg='Normal', **kwargs):  # refresh the UI
    """
    Refresh the user interface
@@ -54,10 +69,18 @@ def update_ui(chatbot, history, msg='Normal', **kwargs):  # refresh the UI
    assert isinstance(chatbot, ChatBotWithCookies), "Do not discard the chatbot while passing it along. If necessary, clear it with clear() and rebuild it with a for+append loop."
    yield chatbot.get_cookies(), chatbot, history, msg
def trimmed_format_exc():
    import os, traceback
    tb = traceback.format_exc()  # renamed from `str` to avoid shadowing the builtin
    current_path = os.getcwd()
    replace_path = "."
    return tb.replace(current_path, replace_path)
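What the scrubbing achieves, sketched; any occurrence of the absolute working directory in the traceback is rewritten to ".", so deployment paths never reach the UI:

try:
    raise ValueError("boom")
except ValueError:
    print(trimmed_format_exc())
    # Every occurrence of os.getcwd() in the traceback is replaced by ".",
    # e.g. File "./toolbox.py" instead of File "/home/user/app/toolbox.py".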
def CatchException(f):
    """
    Decorator: catch any exception raised inside the function f, wrap it in a generator, and display it in the chat window.
    """
    @wraps(f)
    def decorated(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
        try:
@@ -66,7 +89,7 @@ def CatchException(f):
            from check_proxy import check_proxy
            from toolbox import get_conf
            proxies, = get_conf('proxies')
            tb_str = '```\n' + traceback.format_exc() + '```'
            tb_str = '```\n' + trimmed_format_exc() + '```'
            if chatbot is None or len(chatbot) == 0:
                chatbot = [["Plugin scheduling exception", "Cause of the exception"]]
            chatbot[-1] = (chatbot[-1][0],
@@ -93,7 +116,23 @@ def HotReload(f):
return decorated
####################################### Other small utilities #####################################
"""
========================================================================
Part 2
Other small utilities:
- write_results_to_file: write the results to a markdown file
- regular_txt_to_markdown: convert plain text into Markdown-formatted text
- report_execption: append a simple unexpected-error message to the chatbot
- text_divide_paragraph: split text on paragraph separators and generate HTML with paragraph tags
- markdown_convertion: convert markdown into pretty HTML by combining several methods
- format_io: take over gradio's default markdown handling
- on_file_uploaded: handle file uploads (auto-extract archives)
- on_report_generated: automatically project generated reports into the file-upload area
- clip_history: automatically truncate the history context when it grows too long
- get_conf: read the configuration
- select_api_key: extract a usable api-key for the current model class
========================================================================
"""
def get_reduce_token_percent(text):
"""
@@ -113,7 +152,6 @@ def get_reduce_token_percent(text):
return 0.5, 'unknown'
def write_results_to_file(history, file_name=None):
"""
Write the conversation history to a file in Markdown format. If no file name is given, generate one from the current time.
@@ -219,7 +257,7 @@ def markdown_convertion(txt):
return content
else:
return tex2mathml_catch_exception(content)
def markdown_bug_hunt(content):
"""
Work around an mdx_math bug: a redundant <script> is emitted when a begin command is wrapped in a single $.
@@ -227,7 +265,7 @@ def markdown_convertion(txt):
content = content.replace('<script type="math/tex">\n<script type="math/tex; mode=display">', '<script type="math/tex; mode=display">')
content = content.replace('</script>\n</script>', '</script>')
return content
if ('$' in txt) and ('```' not in txt):  # contains $-delimited math markers and no ``` code fences
# convert everything to html format
@@ -248,7 +286,7 @@ def markdown_convertion(txt):
def close_up_code_segment_during_stream(gpt_reply):
"""
When GPT is midway through emitting a code block (the opening ``` is out but the closing ``` is not), append the closing ```.
Args:
    gpt_reply (str): the reply string returned by the GPT model.
@@ -369,6 +407,9 @@ def find_recent_files(directory):
def on_file_uploaded(files, chatbot, txt, txt2, checkboxes):
"""
Callback invoked when files are uploaded.
"""
if len(files) == 0:
return chatbot, txt
import shutil
@@ -388,8 +429,7 @@ def on_file_uploaded(files, chatbot, txt, txt2, checkboxes):
shutil.copy(file.name, f'private_upload/{time_tag}/{file_origin_name}')
err_msg += extract_archive(f'private_upload/{time_tag}/{file_origin_name}',
dest_dir=f'private_upload/{time_tag}/{file_origin_name}.extract')
moved_files = [fp for fp in glob.glob(
'private_upload/**/*', recursive=True)]
moved_files = [fp for fp in glob.glob('private_upload/**/*', recursive=True)]
if "底部输入区" in checkboxes:
txt = ""
txt2 = f'private_upload/{time_tag}'
@@ -432,6 +472,19 @@ def is_any_api_key(key):
else:
return is_openai_api_key(key) or is_api2d_key(key)
def what_keys(keys):
avail_key_list = {'OpenAI Key':0, "API2D Key":0}
key_list = keys.split(',')
for k in key_list:
if is_openai_api_key(k):
avail_key_list['OpenAI Key'] += 1
for k in key_list:
if is_api2d_key(k):
avail_key_list['API2D Key'] += 1
return f"检测到: OpenAI Key {avail_key_list['OpenAI Key']}API2D Key {avail_key_list['API2D Key']}"
def select_api_key(keys, llm_model):
import random
@@ -447,7 +500,7 @@ def select_api_key(keys, llm_model):
if is_api2d_key(k): avail_key_list.append(k)
if len(avail_key_list) == 0:
raise RuntimeError(f"您提供的api-key不满足要求不包含任何可用于{llm_model}的api-key。")
raise RuntimeError(f"您提供的api-key不满足要求不包含任何可用于{llm_model}的api-key。您可能选择了错误的模型或请求源。")
api_key = random.choice(avail_key_list) # 随机负载均衡
return api_key
@@ -495,10 +548,10 @@ def clear_line_break(txt):
class DummyWith():
    """
    This code defines an empty context manager named DummyWith.
    Its purpose is... um... to, i.e. to stand in for other context managers without changing the code structure.
    Its purpose is... um... to do nothing at all, i.e. to stand in for other context managers without changing the code structure.
    A context manager is a kind of Python object meant to be used with the with statement,
    ensuring that resources are correctly initialized and cleaned up while a code block executes.
    A context manager must implement two methods: __enter__() and __exit__().
    __enter__() is called just before the wrapped code block starts executing,
    and __exit__() is called when the context ends.
    """
@@ -507,3 +560,86 @@ class DummyWith():
def __exit__(self, exc_type, exc_value, traceback):
return
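Typical use, sketched: choose between a real context manager and the no-op one without duplicating the with-block (this assumes DummyWith.__enter__ exists and simply returns, as the description above implies):

verbose = False
with (open('debug.log', 'w') if verbose else DummyWith()):
    pass  # the body runs identically whether or not a real manager wraps it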
def run_gradio_in_subpath(demo, auth, port, custom_path):
    """
    Mount the gradio app under the specified sub-path.
    """
    def is_path_legal(path: str)->bool:
        '''
        check whether the path is legal as a sub-url
        path: path to check
        return value: whether to wrap as a sub-url
        '''
        if path == "/": return True
        if len(path) == 0:
            print("illegal custom path: {}\npath must not be empty\ndeploying on root url".format(path))
            return False
        if path[0] == '/':
            if path[1] != '/':
                print("deploying on sub-path {}".format(path))
                return True
            return False
        print("illegal custom path: {}\npath should begin with '/'\ndeploying on root url".format(path))
        return False
    if not is_path_legal(custom_path): raise RuntimeError('Illegal custom path')
import uvicorn
import gradio as gr
from fastapi import FastAPI
app = FastAPI()
if custom_path != "/":
@app.get("/")
def read_main():
return {"message": f"Gradio is running at: {custom_path}"}
app = gr.mount_gradio_app(app, demo, path=custom_path)
uvicorn.run(app, host="0.0.0.0", port=port) # , auth=auth
def clip_history(inputs, history, tokenizer, max_token_limit):
    """
    Reduce the length of the history by clipping:
    this function searches for the longest entries and clips them little by little,
    until the token count of the history falls below the threshold.
    """
    import numpy as np
    from request_llm.bridge_all import model_info
    def get_token_num(txt):
        return len(tokenizer.encode(txt, disallowed_special=()))
    input_token_num = get_token_num(inputs)
    if input_token_num < max_token_limit * 3 / 4:
        # when the input takes up less than 3/4 of the limit, clip the history:
        # 1. reserve room for the input
        max_token_limit = max_token_limit - input_token_num
        # 2. reserve room for the output
        max_token_limit = max_token_limit - 128
        # 3. if too little room remains, just clear the history
        if max_token_limit < 128:
            history = []
            return history
    else:
        # when the input takes up more than 3/4 of the limit, clear the history outright
        history = []
        return history
    everything = ['']
    everything.extend(history)
    n_token = get_token_num('\n'.join(everything))
    everything_token = [get_token_num(e) for e in everything]
    # granularity of each clipping step
    delta = max(everything_token) // 16
    while n_token > max_token_limit:
        where = np.argmax(everything_token)
        encoded = tokenizer.encode(everything[where], disallowed_special=())
        clipped_encoded = encoded[:len(encoded)-delta]
        everything[where] = tokenizer.decode(clipped_encoded)[:-1]  # -1 drops a possibly corrupted trailing character
        everything_token[where] = get_token_num(everything[where])
        n_token = get_token_num('\n'.join(everything))
    history = everything[1:]
    return history
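A self-contained sketch of clip_history with a whitespace "tokenizer" stub standing in for the tiktoken-style tokenizer the real callers pass (note the function imports request_llm.bridge_all, so this must run inside the repo):

class StubTokenizer:
    # Hypothetical stand-in: one token per whitespace-separated word.
    def encode(self, txt, disallowed_special=()):
        return txt.split()
    def decode(self, tokens):
        return ' '.join(tokens) + ' '  # trailing char is dropped by the [:-1] above

history = ["short question", "a very long answer " * 50, "follow-up", "short answer"]
clipped = clip_history("the new input", history, StubTokenizer(), max_token_limit=300)
# The longest entry is trimmed in steps of max_len//16 tokens until the joined
# history fits the budget (300 minus the input tokens minus a 128-token reserve).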

version
View File

@@ -1,5 +1,5 @@
{
"version": 3.1,
"version": 3.3,
"show_feature": true,
"new_feature": "Added support for Tsinghua ChatGLM and GPT-4 <-> Improved architecture to chat with multiple LLMs simultaneously <-> Added API2D support (gpt-4 usable inside China) <-> Multi-API-KEY load balancing (list several keys separated by commas) <-> Added a button to clear the input area"
"new_feature": "NewBing support !! <-> Conversation saving <-> Interpret code in any language + query any combination of LLMs at once <-> New plugin: answer questions with Google connectivity <-> Fixed a ChatGLM context BUG <-> Added support for Tsinghua ChatGLM and GPT-4 <-> Improved architecture to chat with multiple LLMs simultaneously <-> Added API2D support (gpt-4 usable inside China)"
}