Compare commits

49 Commits (hongyi-zha ... binary-hus)

| SHA1 |
|---|
| b8d1eabd46 |
| de9bc50ac4 |
| 3b83279855 |
| 37164a826e |
| f2e73aa580 |
| 8565a35cf7 |
| 72d78eb150 |
| 7aeda537ac |
| 6cea17d4b7 |
| 20bc51d747 |
| b8ebefa427 |
| dcc9326f0b |
| 94fc396eb9 |
| e594e1b928 |
| 8fe545d97b |
| 6f978fa72e |
| 19be471aa8 |
| 38956934fd |
| 32439e14b5 |
| 317389bf4b |
| 2c740fc641 |
| 96832a8228 |
| 361557da3c |
| 5f18d4a1af |
| 0d10bc570f |
| 3ce7d9347d |
| 8a78d7b89f |
| 0e43b08837 |
| 74bced2d35 |
| 961a24846f |
| b7e4744f28 |
| 71adc40901 |
| a2099f1622 |
| c0a697f6c8 |
| bdde1d2fd7 |
| 63373ab3b6 |
| fb6566adde |
| 9f2ef9ec49 |
| 35c1aa21e4 |
| 627d739720 |
| 37f15185b6 |
| 9643e1c25f |
| 28eae2f80e |
| 7ab379688e |
| 3d4c6f54f1 |
| 1714116a89 |
| 2bc65a99ca |
| d698b96209 |
| 6b1c6f0bf7 |
@@ -18,7 +18,6 @@ WORKDIR /gpt

 # 安装大部分依赖,利用Docker缓存加速以后的构建 (以下三行,可以删除)
 COPY requirements.txt ./
-COPY ./docs/gradio-3.32.6-py3-none-any.whl ./docs/gradio-3.32.6-py3-none-any.whl
 RUN pip3 install -r requirements.txt
94 README.md
@@ -1,8 +1,8 @@
-> **Caution**
->
-> 2023.11.12: 某些依赖包尚不兼容python 3.12,推荐python 3.11。
->
-> 2023.12.26: 安装依赖时,请选择`requirements.txt`中**指定的版本**。 安装命令:`pip install -r requirements.txt`。本项目完全开源免费,您可通过订阅[在线服务](https://github.com/binary-husky/gpt_academic/wiki/online)的方式鼓励本项目的发展。
+> [!IMPORTANT]
+> 2024.1.18: 更新3.70版本,支持Mermaid绘图库(让大模型绘制脑图)
+> 2024.1.17: 恭迎GLM4,全力支持Qwen、GLM、DeepseekCoder等国内中文大语言基座模型!
+> 2024.1.17: 某些依赖包尚不兼容python 3.12,推荐python 3.11。
+> 2024.1.17: 安装依赖时,请选择`requirements.txt`中**指定的版本**。 安装命令:`pip install -r requirements.txt`。本项目完全开源免费,您可通过订阅[在线服务](https://github.com/binary-husky/gpt_academic/wiki/online)的方式鼓励本项目的发展。

 <br>
@@ -42,13 +42,11 @@ If you like this project, please give it a Star.

 Read this in [English](docs/README.English.md) | [日本語](docs/README.Japanese.md) | [한국어](docs/README.Korean.md) | [Русский](docs/README.Russian.md) | [Français](docs/README.French.md). All translations have been provided by the project itself. To translate this project to arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).
 <br>

+> [!NOTE]
-> 1.请注意只有 **高亮** 标识的插件(按钮)才支持读取文件,部分插件位于插件区的**下拉菜单**中。另外我们以**最高优先级**欢迎和处理任何新插件的PR。
->
-> 2.本项目中每个文件的功能都在[自译解报告](https://github.com/binary-husky/gpt_academic/wiki/GPT‐Academic项目自译解报告)`self_analysis.md`详细说明。随着版本的迭代,您也可以随时自行点击相关函数插件,调用GPT重新生成项目的自我解析报告。常见问题请查阅wiki。
+> 1.本项目中每个文件的功能都在[自译解报告](https://github.com/binary-husky/gpt_academic/wiki/GPT‐Academic项目自译解报告)`self_analysis.md`详细说明。随着版本的迭代,您也可以随时自行点击相关函数插件,调用GPT重新生成项目的自我解析报告。常见问题请查阅wiki。
 > [](#installation) [](https://github.com/binary-husky/gpt_academic/releases) [](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明) []([https://github.com/binary-husky/gpt_academic/wiki/项目配置说明](https://github.com/binary-husky/gpt_academic/wiki))
 >
-> 3.本项目兼容并鼓励尝试国产大语言模型ChatGLM等。支持多个api-key共存,可在配置文件中填写如`API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`。需要临时更换`API_KEY`时,在输入区输入临时的`API_KEY`然后回车键提交即可生效。
+> 2.本项目兼容并鼓励尝试国内中文大语言基座模型如通义千问,智谱GLM等。支持多个api-key共存,可在配置文件中填写如`API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`。需要临时更换`API_KEY`时,在输入区输入临时的`API_KEY`然后回车键提交即可生效。

 <br><br>
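The note above describes multi-key coexistence: several keys in one comma-separated `API_KEY` string, with one key used per request. As a rough sketch of how such a string could be parsed and load-balanced (the helper name, the prefix matching, and random selection are assumptions for illustration, not the project's actual logic):

```python
import random

# Placeholder keys taken from the README's own example string.
API_KEY = "openai-key1,openai-key2,azure-key3,api2d-key4"

def select_api_key(api_key_str, provider_prefix="openai-"):
    # Split the comma-separated string and keep keys matching the wanted provider.
    candidates = [k.strip() for k in api_key_str.split(",")
                  if k.strip().startswith(provider_prefix)]
    if not candidates:
        raise RuntimeError(f"no key found for provider {provider_prefix!r}")
    # Naive load balancing: pick one matching key at random per request.
    return random.choice(candidates)

print(select_api_key(API_KEY))  # one of the two openai- keys
```

A real implementation would also validate key formats per provider; this sketch only shows the split-and-pick idea.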
@@ -56,7 +54,12 @@ Read this in [English](docs/README.Japanes

 功能(⭐= 近期新增功能) | 描述
 --- | ---
-⭐[接入新模型](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B) | 百度[千帆](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu)与文心一言, 通义千问[Qwen](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary),上海AI-Lab[书生](https://github.com/InternLM/InternLM),讯飞[星火](https://xinghuo.xfyun.cn/),[LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf),[智谱API](https://open.bigmodel.cn/),DALLE3, [DeepseekCoder](https://coder.deepseek.com/)
+⭐[接入新模型](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B) | 百度[千帆](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu)与文心一言, 通义千问[Qwen](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary),上海AI-Lab[书生](https://github.com/InternLM/InternLM),讯飞[星火](https://xinghuo.xfyun.cn/),[LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf),[智谱GLM4](https://open.bigmodel.cn/),DALLE3, [DeepseekCoder](https://coder.deepseek.com/)
+⭐支持mermaid图像渲染 | 支持让GPT生成[流程图](https://www.bilibili.com/video/BV18c41147H9/)、状态转移图、甘特图、饼状图、GitGraph等等(3.7版本)
+⭐Arxiv论文精细翻译 ([Docker](https://github.com/binary-husky/gpt_academic/pkgs/container/gpt_academic_with_latex)) | [插件] 一键[以超高质量翻译arxiv论文](https://www.bilibili.com/video/BV1dz4y1v77A/),目前最好的论文翻译工具
+⭐[实时语音对话输入](https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md) | [插件] 异步[监听音频](https://www.bilibili.com/video/BV1AV4y187Uy/),自动断句,自动寻找回答时机
+⭐AutoGen多智能体插件 | [插件] 借助微软AutoGen,探索多Agent的智能涌现可能!
+⭐虚空终端插件 | [插件] 能够使用自然语言直接调度本项目其他插件
 润色、翻译、代码解释 | 一键润色、翻译、查找论文语法错误、解释代码
 [自定义快捷键](https://www.bilibili.com/video/BV14s4y1E7jN) | 支持自定义快捷键
 模块化设计 | 支持自定义强大的[插件](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions),插件支持[热更新](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
@@ -65,22 +68,16 @@ Read this in [English](docs/README.Japanes
 Latex全文[翻译](https://www.bilibili.com/video/BV1nk4y1Y7Js/)、[润色](https://www.bilibili.com/video/BV1FT411H7c5/) | [插件] 一键翻译或润色latex论文
 批量注释生成 | [插件] 一键批量生成函数注释
 Markdown[中英互译](https://www.bilibili.com/video/BV1yo4y157jV/) | [插件] 看到上面5种语言的[README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md)了吗?就是出自他的手笔
-chat分析报告生成 | [插件] 运行后自动生成总结汇报
 [PDF论文全文翻译功能](https://www.bilibili.com/video/BV1KT411x7Wn) | [插件] PDF论文提取题目&摘要+翻译全文(多线程)
 [Arxiv小助手](https://www.bilibili.com/video/BV1LM4y1279X) | [插件] 输入arxiv文章url即可一键翻译摘要+下载PDF
 Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼写纠错+输出对照PDF
 [谷歌学术统合小助手](https://www.bilibili.com/video/BV19L411U7ia) | [插件] 给定任意谷歌学术搜索页面URL,让gpt帮你[写relatedworks](https://www.bilibili.com/video/BV1GP411U7Az/)
 互联网信息聚合+GPT | [插件] 一键[让GPT从互联网获取信息](https://www.bilibili.com/video/BV1om4y127ck)回答问题,让信息永不过时
-⭐Arxiv论文精细翻译 ([Docker](https://github.com/binary-husky/gpt_academic/pkgs/container/gpt_academic_with_latex)) | [插件] 一键[以超高质量翻译arxiv论文](https://www.bilibili.com/video/BV1dz4y1v77A/),目前最好的论文翻译工具
-⭐[实时语音对话输入](https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md) | [插件] 异步[监听音频](https://www.bilibili.com/video/BV1AV4y187Uy/),自动断句,自动寻找回答时机
 公式/图片/表格显示 | 可以同时显示公式的[tex形式和渲染形式](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png),支持公式、代码高亮
-⭐AutoGen多智能体插件 | [插件] 借助微软AutoGen,探索多Agent的智能涌现可能!
 启动暗色[主题](https://github.com/binary-husky/gpt_academic/issues/173) | 在浏览器url后面添加```/?__theme=dark```可以切换dark主题
 [多LLM模型](https://www.bilibili.com/video/BV1wT411p7yf)支持 | 同时被GPT3.5、GPT4、[清华ChatGLM2](https://github.com/THUDM/ChatGLM2-6B)、[复旦MOSS](https://github.com/OpenLMLab/MOSS)伺候的感觉一定会很不错吧?
-⭐ChatGLM2微调模型 | 支持加载ChatGLM2微调模型,提供ChatGLM2微调辅助插件
 更多LLM模型接入,支持[huggingface部署](https://huggingface.co/spaces/qingxu98/gpt-academic) | 加入Newbing接口(新必应),引入清华[Jittorllms](https://github.com/Jittor/JittorLLMs)支持[LLaMA](https://github.com/facebookresearch/llama)和[盘古α](https://openi.org.cn/pangu/)
 ⭐[void-terminal](https://github.com/binary-husky/void-terminal) pip包 | 脱离GUI,在Python中直接调用本项目的所有函数插件(开发中)
-⭐虚空终端插件 | [插件] 能够使用自然语言直接调度本项目其他插件
 更多新功能展示 (图像生成等) …… | 见本文档结尾处 ……
 </div>

@@ -119,6 +116,25 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼
 <br><br>

 # Installation

+```mermaid
+flowchart TD
+    A{"安装方法"} --> W1("I. 🔑直接运行 (Windows, Linux or MacOS)")
+    W1 --> W11["1. Python pip包管理依赖"]
+    W1 --> W12["2. Anaconda包管理依赖(推荐⭐)"]
+
+    A --> W2["II. 🐳使用Docker (Windows, Linux or MacOS)"]
+
+    W2 --> k1["1. 部署项目全部能力的大镜像(推荐⭐)"]
+    W2 --> k2["2. 仅在线模型(GPT, GLM4等)镜像"]
+    W2 --> k3["3. 在线模型 + Latex的大镜像"]
+
+    A --> W4["IV. 🚀其他部署方法"]
+    W4 --> C1["1. Windows/MacOS 一键安装运行脚本(推荐⭐)"]
+    W4 --> C2["2. Huggingface, Sealos远程部署"]
+    W4 --> C4["3. ... 其他 ..."]
+```
+
 ### 安装方法I:直接运行 (Windows, Linux or MacOS)

 1. 下载项目
@@ -132,7 +148,7 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼

 在`config.py`中,配置API KEY等变量。[特殊网络环境设置方法](https://github.com/binary-husky/gpt_academic/issues/1)、[Wiki-项目配置说明](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)。

-「 程序会优先检查是否存在名为`config_private.py`的私密配置文件,并用其中的配置覆盖`config.py`的同名配置。如您能理解以上读取逻辑,我们强烈建议您在`config.py`同路径下创建一个名为`config_private.py`的新配置文件,并使用`config_private.py`配置项目,以确保更新或其他用户无法轻易查看您的私有配置 」。
+「 程序会优先检查是否存在名为`config_private.py`的私密配置文件,并用其中的配置覆盖`config.py`的同名配置。如您能理解以上读取逻辑,我们强烈建议您在`config.py`同路径下创建一个名为`config_private.py`的新配置文件,并使用`config_private.py`配置项目,从而确保自动更新时不会丢失配置 」。

 「 支持通过`环境变量`配置项目,环境变量的书写格式参考`docker-compose.yml`文件或者我们的[Wiki页面](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)。配置读取优先级: `环境变量` > `config_private.py` > `config.py` 」。
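The hunk above documents a three-level configuration priority: environment variable over `config_private.py` over `config.py`. A minimal sketch of that lookup order (`resolve_config` is a hypothetical helper for illustration, not the project's actual loader):

```python
import os

def resolve_config(key, private_cfg, default_cfg, environ=None):
    # Lookup order from the README: environment variable > config_private.py > config.py
    environ = os.environ if environ is None else environ
    if key in environ:           # highest priority: environment variable
        return environ[key]
    if key in private_cfg:       # then config_private.py overrides same-named entries
        return private_cfg[key]
    return default_cfg[key]      # finally the shipped config.py default

# usage with placeholder values
default_cfg = {"API_KEY": "sk-default", "THEME": "Default"}
private_cfg = {"API_KEY": "sk-private"}
print(resolve_config("API_KEY", private_cfg, default_cfg, environ={}))               # sk-private
print(resolve_config("THEME", private_cfg, default_cfg, environ={"THEME": "Dark"}))  # Dark
```

Keeping secrets in `config_private.py` means a `git pull` that rewrites `config.py` never touches them, which is exactly what the updated sentence in the hunk promises.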
@@ -152,10 +168,10 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼
 <details><summary>如果需要支持清华ChatGLM2/复旦MOSS/RWKV作为后端,请点击展开此处</summary>
 <p>

-【可选步骤】如果需要支持清华ChatGLM2/复旦MOSS作为后端,需要额外安装更多依赖(前提条件:熟悉Python + 用过Pytorch + 电脑配置够强):
+【可选步骤】如果需要支持清华ChatGLM3/复旦MOSS作为后端,需要额外安装更多依赖(前提条件:熟悉Python + 用过Pytorch + 电脑配置够强):

 ```sh
-# 【可选步骤I】支持清华ChatGLM2。清华ChatGLM备注:如果遇到"Call ChatGLM fail 不能正常加载ChatGLM的参数" 错误,参考如下: 1:以上默认安装的为torch+cpu版,使用cuda需要卸载torch重新安装torch+cuda; 2:如因本机配置不够无法加载模型,可以修改request_llm/bridge_chatglm.py中的模型精度, 将 AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) 都修改为 AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
+# 【可选步骤I】支持清华ChatGLM3。清华ChatGLM备注:如果遇到"Call ChatGLM fail 不能正常加载ChatGLM的参数" 错误,参考如下: 1:以上默认安装的为torch+cpu版,使用cuda需要卸载torch重新安装torch+cuda; 2:如因本机配置不够无法加载模型,可以修改request_llm/bridge_chatglm.py中的模型精度, 将 AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) 都修改为 AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
 python -m pip install -r request_llms/requirements_chatglm.txt

 # 【可选步骤II】支持复旦MOSS
@@ -197,7 +213,7 @@ pip install peft
 docker-compose up
 ```

-1. 仅ChatGPT+文心一言+spark等在线模型(推荐大多数人选择)
+1. 仅ChatGPT + GLM4 + 文心一言+spark等在线模型(推荐大多数人选择)
 [](https://github.com/binary-husky/gpt_academic/actions/workflows/build-without-local-llms.yml)
 [](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-latex.yml)
 [](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml)
@@ -209,7 +225,7 @@ pip install peft

 P.S. 如果需要依赖Latex的插件功能,请见Wiki。另外,您也可以直接使用方案4或者方案0获取Latex功能。

-2. ChatGPT + ChatGLM2 + MOSS + LLAMA2 + 通义千问(需要熟悉[Nvidia Docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#installing-on-ubuntu-and-debian)运行时)
+2. ChatGPT + GLM3 + MOSS + LLAMA2 + 通义千问(需要熟悉[Nvidia Docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#installing-on-ubuntu-and-debian)运行时)
 [](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-chatglm.yml)

 ``` sh
@@ -308,9 +324,9 @@ Tip:不指定文件直接点击 `载入对话历史存档` 可以查看历史h
 <img src="https://github.com/binary-husky/gpt_academic/assets/96192199/bc7ab234-ad90-48a0-8d62-f703d9e74665" width="500" >
 </div>

-8. OpenAI音频解析与总结
+8. 基于mermaid的流图、脑图绘制
 <div align="center">
-<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/709ccf95-3aee-498a-934a-e1c22d3d5d5b" width="500" >
+<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/c518b82f-bd53-46e2-baf5-ad1b081c1da4" width="500" >
 </div>

 9. Latex全文校对纠错
@@ -327,8 +343,8 @@ Tip:不指定文件直接点击 `载入对话历史存档` 可以查看历史h


 ### II:版本:
-- version 3.70(todo): 优化AutoGen插件主题并设计一系列衍生插件
+- version 3.80(TODO): 优化AutoGen插件主题并设计一系列衍生插件
+- version 3.70: 引入Mermaid绘图,实现GPT画脑图等功能
 - version 3.60: 引入AutoGen作为新一代插件的基石
 - version 3.57: 支持GLM3,星火v3,文心一言v4,修复本地模型的并发BUG
 - version 3.56: 支持动态追加基础功能按钮,新汇报PDF汇总页面
@@ -361,6 +377,32 @@ GPT Academic开发者QQ群:`610599535`
 - 某些浏览器翻译插件干扰此软件前端的运行
 - 官方Gradio目前有很多兼容性问题,请**务必使用`requirement.txt`安装Gradio**

+```mermaid
+timeline LR
+    title GPT-Academic项目发展历程
+    section 2.x
+        1.0~2.2: 基础功能: 引入模块化函数插件: 可折叠式布局: 函数插件支持热重载
+        2.3~2.5: 增强多线程交互性: 新增PDF全文翻译功能: 新增输入区切换位置的功能: 自更新
+        2.6: 重构了插件结构: 提高了交互性: 加入更多插件
+    section 3.x
+        3.0~3.1: 对chatglm支持: 对其他小型llm支持: 支持同时问询多个gpt模型: 支持多个apikey负载均衡
+        3.2~3.3: 函数插件支持更多参数接口: 保存对话功能: 解读任意语言代码: 同时询问任意的LLM组合: 互联网信息综合功能
+        3.4: 加入arxiv论文翻译: 加入latex论文批改功能
+        3.44: 正式支持Azure: 优化界面易用性
+        3.46: 自定义ChatGLM2微调模型: 实时语音对话
+        3.49: 支持阿里达摩院通义千问: 上海AI-Lab书生: 讯飞星火: 支持百度千帆平台 & 文心一言
+        3.50: 虚空终端: 支持插件分类: 改进UI: 设计新主题
+        3.53: 动态选择不同界面主题: 提高稳定性: 解决多用户冲突问题
+        3.55: 动态代码解释器: 重构前端界面: 引入悬浮窗口与菜单栏
+        3.56: 动态追加基础功能按钮: 新汇报PDF汇总页面
+        3.57: GLM3, 星火v3: 支持文心一言v4: 修复本地模型的并发BUG
+        3.60: 引入AutoGen
+        3.70: 引入Mermaid绘图: 实现GPT画脑图等功能
+        3.80(TODO): 优化AutoGen插件主题: 设计衍生插件
+```


 ### III:主题
 可以通过修改`THEME`选项(config.py)变更主题
 1. `Chuanhu-Small-and-Beautiful` [网址](https://github.com/GaiZhenbiao/ChuanhuChatGPT/)
12 config.py
@@ -90,9 +90,9 @@ LLM_MODEL = "gpt-3.5-turbo" # 可选 ↓↓↓
 AVAIL_LLM_MODELS = ["gpt-3.5-turbo-1106","gpt-4-1106-preview","gpt-4-vision-preview",
                     "gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5",
                     "gpt-4", "gpt-4-32k", "azure-gpt-4", "api2d-gpt-4",
-                    "gemini-pro", "chatglm3", "moss", "claude-2"]
+                    "gemini-pro", "chatglm3", "claude-2", "zhipuai"]
 # P.S. 其他可用的模型还包括 [
-# "qwen-turbo", "qwen-plus", "qwen-max"
+# "moss", "qwen-turbo", "qwen-plus", "qwen-max"
 # "zhipuai", "qianfan", "deepseekcoder", "llama2", "qwen-local", "gpt-3.5-turbo-0613",
 # "gpt-3.5-turbo-16k-0613", "gpt-3.5-random", "api2d-gpt-3.5-turbo", 'api2d-gpt-3.5-turbo-16k',
 # "spark", "sparkv2", "sparkv3", "chatglm_onnx", "claude-1-100k", "claude-2", "internlm", "jittorllms_pangualpha", "jittorllms_llama"
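The hunk above demotes `moss` to the commented-out list and promotes `zhipuai` into the default `AVAIL_LLM_MODELS`. A small sketch of the kind of startup guard a deployment might run to confirm the configured `LLM_MODEL` is actually in the offered list (`check_model` is a hypothetical helper, not part of config.py):

```python
# Updated default list from the hunk above.
AVAIL_LLM_MODELS = ["gpt-3.5-turbo-1106", "gpt-4-1106-preview", "gpt-4-vision-preview",
                    "gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5",
                    "gpt-4", "gpt-4-32k", "azure-gpt-4", "api2d-gpt-4",
                    "gemini-pro", "chatglm3", "claude-2", "zhipuai"]

def check_model(llm_model, avail):
    # Fail fast if the UI would default to a model no bridge is configured for.
    if llm_model not in avail:
        raise ValueError(f"LLM_MODEL {llm_model!r} is not in AVAIL_LLM_MODELS")
    return llm_model

print(check_model("zhipuai", AVAIL_LLM_MODELS))  # zhipuai
```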
@@ -195,7 +195,13 @@ XFYUN_API_KEY = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"

 # 接入智谱大模型
 ZHIPUAI_API_KEY = ""
-ZHIPUAI_MODEL = "chatglm_turbo"
+ZHIPUAI_MODEL = "glm-4" # 可选 "glm-3-turbo" "glm-4"
+
+
+# # 火山引擎YUNQUE大模型
+# YUNQUE_SECRET_KEY = ""
+# YUNQUE_ACCESS_KEY = ""
+# YUNQUE_MODEL = ""

 # Claude API KEY
@@ -3,30 +3,58 @@
 # 'stop' 颜色对应 theme.py 中的 color_er
 import importlib
 from toolbox import clear_line_break
+from textwrap import dedent


 def get_core_functions():
     return {

         "英语学术润色": {
-            # 前缀,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等
-            "Prefix": r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, " +
-                      r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. " +
+            # [1*] 前缀,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等
+            "Prefix": r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, "
+                      r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. "
                       r"Firstly, you should provide the polished paragraph. "
                       r"Secondly, you should list all your modification and explain the reasons to do so in markdown table." + "\n\n",
-            # 后缀,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来
+            # [2*] 后缀,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来
             "Suffix": r"",
-            # 按钮颜色 (默认 secondary)
+            # [3] 按钮颜色 (可选参数,默认 secondary)
             "Color": r"secondary",
-            # 按钮是否可见 (默认 True,即可见)
+            # [4] 按钮是否可见 (可选参数,默认 True,即可见)
             "Visible": True,
-            # 是否在触发时清除历史 (默认 False,即不处理之前的对话历史)
-            "AutoClearHistory": False
+            # [5] 是否在触发时清除历史 (可选参数,默认 False,即不处理之前的对话历史)
+            "AutoClearHistory": False,
+            # [6] 文本预处理 (可选参数,默认 None,举例:写个函数移除所有的换行符)
+            "PreProcess": None,
         },
-        "中文学术润色": {
-            "Prefix": r"作为一名中文学术论文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性," +
-                      r"同时分解长句,减少重复,并提供改进建议。请只提供文本的更正版本,避免包括解释。请编辑以下文本" + "\n\n",
-            "Suffix": r"",
+
+        "总结绘制脑图": {
+            # 前缀,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等
+            "Prefix": r"",
+            # 后缀,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来
+            "Suffix":
+                dedent("\n"+r'''
+                    ==============================
+                    使用mermaid flowchart对以上文本进行总结,概括上述段落的内容以及内在逻辑关系,例如:
+
+                    以下是对以上文本的总结,以mermaid flowchart的形式展示:
+                    ```mermaid
+                    flowchart LR
+                        A["节点名1"] --> B("节点名2")
+                        B --> C{"节点名3"}
+                        C --> D["节点名4"]
+                        C --> |"箭头名1"| E["节点名5"]
+                        C --> |"箭头名2"| F["节点名6"]
+                    ```
+
+                    警告:
+                    (1)使用中文
+                    (2)节点名字使用引号包裹,如["Laptop"]
+                    (3)`|` 和 `"`之间不要存在空格
+                    (4)根据情况选择flowchart LR(从左到右)或者flowchart TD(从上到下)
+                '''),
+        },


         "查找语法错误": {
             "Prefix": r"Help me ensure that the grammar and the spelling is correct. "
                       r"Do not try to polish the text, if no mistake is found, tell me that this paragraph is good. "
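The new `总结绘制脑图` entry puts its whole instruction into `Suffix`, so the prompt text lands after the user's input rather than before it. A simplified sketch of how such an entry composes the final prompt (names and the shortened suffix text are illustrative; the real assembly happens in `handle_core_functionality`):

```python
from textwrap import dedent

# Illustrative entry: empty Prefix, instruction carried entirely by Suffix.
entry = {
    "Prefix": r"",
    "Suffix": dedent("\n" + r'''
        ==============================
        使用mermaid flowchart对以上文本进行总结。
        '''),
}

user_text = "Paragraph to be summarized."
# Composition order used by the project: Prefix + user input + Suffix.
prompt = entry["Prefix"] + user_text + entry["Suffix"]
print(prompt)
```

`dedent` strips the common indentation of the triple-quoted block, so the suffix can be written indented inside the dict without leaking leading spaces into the prompt.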
@@ -46,11 +74,15 @@ def get_core_functions():
             "Suffix": r"",
             "PreProcess": clear_line_break,  # 预处理:清除换行符
         },


         "中译英": {
             "Prefix": r"Please translate following sentence to English:" + "\n\n",
             "Suffix": r"",
         },
-        "学术中英互译": {
+
+        "学术英中互译": {
             "Prefix": r"I want you to act as a scientific English-Chinese translator, " +
                       r"I will provide you with some paragraphs in one language " +
                       r"and your task is to accurately and academically translate the paragraphs only into the other language. " +
@@ -59,29 +91,36 @@ def get_core_functions():
                       r"such as natural language processing, and rhetorical knowledge " +
                       r"and experience about effective writing techniques to reply. " +
                       r"I'll give you my paragraphs as follows, tell me what language it is written in, and then translate:" + "\n\n",
-            "Suffix": "",
-            "Color": "secondary",
+            "Suffix": r"",
         },


         "英译中": {
             "Prefix": r"翻译成地道的中文:" + "\n\n",
             "Suffix": r"",
             "Visible": False,
         },


         "找图片": {
-            "Prefix": r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL," +
+            "Prefix": r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL,"
                       r"然后请使用Markdown格式封装,并且不要有反斜线,不要用代码块。现在,请按以下描述给我发送图片:" + "\n\n",
             "Suffix": r"",
             "Visible": False,
         },


         "解释代码": {
             "Prefix": r"请解释以下代码:" + "\n```\n",
             "Suffix": "\n```\n",
         },


         "参考文献转Bib": {
-            "Prefix": r"Here are some bibliography items, please transform them into bibtex style." +
-                      r"Note that, reference styles maybe more than one kind, you should transform each item correctly." +
-                      r"Items need to be transformed:",
+            "Prefix": r"Here are some bibliography items, please transform them into bibtex style."
+                      r"Note that, reference styles maybe more than one kind, you should transform each item correctly."
+                      r"Items need to be transformed:" + "\n\n",
             "Visible": False,
             "Suffix": r"",
         }
     }
@@ -98,8 +137,14 @@ def handle_core_functionality(additional_fn, inputs, history, chatbot):
         return inputs, history
     else:
         # 预制功能
-        if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs)  # 获取预处理函数(如果有的话)
+        if "PreProcess" in core_functional[additional_fn]:
+            if core_functional[additional_fn]["PreProcess"] is not None:
+                inputs = core_functional[additional_fn]["PreProcess"](inputs)  # 获取预处理函数(如果有的话)
         inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
         if core_functional[additional_fn].get("AutoClearHistory", False):
             history = []
         return inputs, history
+
+
+if __name__ == "__main__":
+    t = get_core_functions()["总结绘制脑图"]
+    print(t["Prefix"] + t["Suffix"])
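The patch above changes the preprocessing step to tolerate entries that declare `"PreProcess": None` (which the new `[6]` optional field makes legal). A hedged re-sketch of the patched flow for a single entry dict (simplified; the real function also takes `chatbot` and looks entries up in a registry):

```python
def apply_core_function(fn, inputs, history):
    # Run the optional preprocessor only when one is actually configured;
    # a None value (the new documented default) is skipped, as in the patch.
    pre = fn.get("PreProcess")
    if pre is not None:
        inputs = pre(inputs)
    # Wrap the user input between the entry's Prefix and Suffix.
    inputs = fn["Prefix"] + inputs + fn["Suffix"]
    # Optionally drop prior conversation turns.
    if fn.get("AutoClearHistory", False):
        history = []
    return inputs, history

# usage with an illustrative entry
fn = {"Prefix": "Translate: ", "Suffix": "", "PreProcess": str.strip, "AutoClearHistory": True}
out, hist = apply_core_function(fn, "  hello  ", ["old turn"])
print(out)   # Translate: hello
print(hist)  # []
```

Before the patch, an entry with `"PreProcess": None` would have crashed with `TypeError: 'NoneType' object is not callable`; the extra `is not None` check is the whole fix.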
@@ -37,110 +37,109 @@ def get_crazy_functions():
     from crazy_functions.批量Markdown翻译 import Markdown中译英
     from crazy_functions.虚空终端 import 虚空终端

     function_plugins = {
         "虚空终端": {
             "Group": "对话|编程|学术|智能体",
             "Color": "stop",
             "AsButton": True,
-            "Function": HotReload(虚空终端)
+            "Function": HotReload(虚空终端),
         },
         "解析整个Python项目": {
             "Group": "编程",
             "Color": "stop",
             "AsButton": True,
             "Info": "解析一个Python项目的所有源文件(.py) | 输入参数为路径",
-            "Function": HotReload(解析一个Python项目)
+            "Function": HotReload(解析一个Python项目),
         },
         "载入对话历史存档(先上传存档或输入路径)": {
             "Group": "对话",
             "Color": "stop",
             "AsButton": False,
             "Info": "载入对话历史存档 | 输入参数为路径",
-            "Function": HotReload(载入对话历史存档)
+            "Function": HotReload(载入对话历史存档),
         },
         "删除所有本地对话历史记录(谨慎操作)": {
             "Group": "对话",
             "AsButton": False,
             "Info": "删除所有本地对话历史记录,谨慎操作 | 不需要输入参数",
-            "Function": HotReload(删除所有本地对话历史记录)
+            "Function": HotReload(删除所有本地对话历史记录),
         },
         "清除所有缓存文件(谨慎操作)": {
             "Group": "对话",
             "Color": "stop",
             "AsButton": False,  # 加入下拉菜单中
             "Info": "清除所有缓存文件,谨慎操作 | 不需要输入参数",
-            "Function": HotReload(清除缓存)
+            "Function": HotReload(清除缓存),
         },
         "批量总结Word文档": {
             "Group": "学术",
             "Color": "stop",
             "AsButton": True,
             "Info": "批量总结word文档 | 输入参数为路径",
-            "Function": HotReload(总结word文档)
+            "Function": HotReload(总结word文档),
         },
         "解析整个Matlab项目": {
             "Group": "编程",
             "Color": "stop",
             "AsButton": False,
             "Info": "解析一个Matlab项目的所有源文件(.m) | 输入参数为路径",
-            "Function": HotReload(解析一个Matlab项目)
+            "Function": HotReload(解析一个Matlab项目),
         },
         "解析整个C++项目头文件": {
             "Group": "编程",
             "Color": "stop",
             "AsButton": False,  # 加入下拉菜单中
             "Info": "解析一个C++项目的所有头文件(.h/.hpp) | 输入参数为路径",
-            "Function": HotReload(解析一个C项目的头文件)
+            "Function": HotReload(解析一个C项目的头文件),
         },
         "解析整个C++项目(.cpp/.hpp/.c/.h)": {
             "Group": "编程",
             "Color": "stop",
             "AsButton": False,  # 加入下拉菜单中
             "Info": "解析一个C++项目的所有源文件(.cpp/.hpp/.c/.h)| 输入参数为路径",
-            "Function": HotReload(解析一个C项目)
+            "Function": HotReload(解析一个C项目),
         },
         "解析整个Go项目": {
             "Group": "编程",
             "Color": "stop",
             "AsButton": False,  # 加入下拉菜单中
             "Info": "解析一个Go项目的所有源文件 | 输入参数为路径",
-            "Function": HotReload(解析一个Golang项目)
+            "Function": HotReload(解析一个Golang项目),
         },
         "解析整个Rust项目": {
             "Group": "编程",
             "Color": "stop",
             "AsButton": False,  # 加入下拉菜单中
             "Info": "解析一个Rust项目的所有源文件 | 输入参数为路径",
-            "Function": HotReload(解析一个Rust项目)
+            "Function": HotReload(解析一个Rust项目),
         },
         "解析整个Java项目": {
             "Group": "编程",
             "Color": "stop",
             "AsButton": False,  # 加入下拉菜单中
             "Info": "解析一个Java项目的所有源文件 | 输入参数为路径",
-            "Function": HotReload(解析一个Java项目)
+            "Function": HotReload(解析一个Java项目),
         },
         "解析整个前端项目(js,ts,css等)": {
             "Group": "编程",
             "Color": "stop",
             "AsButton": False,  # 加入下拉菜单中
             "Info": "解析一个前端项目的所有源文件(js,ts,css等) | 输入参数为路径",
|
||||||
"Function": HotReload(解析一个前端项目)
|
"Function": HotReload(解析一个前端项目),
|
||||||
},
|
},
|
||||||
"解析整个Lua项目": {
|
"解析整个Lua项目": {
|
||||||
"Group": "编程",
|
"Group": "编程",
|
||||||
"Color": "stop",
|
"Color": "stop",
|
||||||
"AsButton": False, # 加入下拉菜单中
|
"AsButton": False, # 加入下拉菜单中
|
||||||
"Info": "解析一个Lua项目的所有源文件 | 输入参数为路径",
|
"Info": "解析一个Lua项目的所有源文件 | 输入参数为路径",
|
||||||
"Function": HotReload(解析一个Lua项目)
|
"Function": HotReload(解析一个Lua项目),
|
||||||
},
|
},
|
||||||
"解析整个CSharp项目": {
|
"解析整个CSharp项目": {
|
||||||
"Group": "编程",
|
"Group": "编程",
|
||||||
"Color": "stop",
|
"Color": "stop",
|
||||||
"AsButton": False, # 加入下拉菜单中
|
"AsButton": False, # 加入下拉菜单中
|
||||||
"Info": "解析一个CSharp项目的所有源文件 | 输入参数为路径",
|
"Info": "解析一个CSharp项目的所有源文件 | 输入参数为路径",
|
||||||
"Function": HotReload(解析一个CSharp项目)
|
"Function": HotReload(解析一个CSharp项目),
|
||||||
},
|
},
|
||||||
"解析Jupyter Notebook文件": {
|
"解析Jupyter Notebook文件": {
|
||||||
"Group": "编程",
|
"Group": "编程",
|
||||||
@@ -156,103 +155,102 @@ def get_crazy_functions():
             "Color": "stop",
             "AsButton": False,
             "Info": "读取Tex论文并写摘要 | 输入参数为路径",
-            "Function": HotReload(读文章写摘要)
+            "Function": HotReload(读文章写摘要),
         },
         "翻译README或MD": {
             "Group": "编程",
             "Color": "stop",
             "AsButton": True,
             "Info": "将Markdown翻译为中文 | 输入参数为路径或URL",
-            "Function": HotReload(Markdown英译中)
+            "Function": HotReload(Markdown英译中),
         },
         "翻译Markdown或README(支持Github链接)": {
             "Group": "编程",
             "Color": "stop",
             "AsButton": False,
             "Info": "将Markdown或README翻译为中文 | 输入参数为路径或URL",
-            "Function": HotReload(Markdown英译中)
+            "Function": HotReload(Markdown英译中),
         },
         "批量生成函数注释": {
             "Group": "编程",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "批量生成函数的注释 | 输入参数为路径",
-            "Function": HotReload(批量生成函数注释)
+            "Function": HotReload(批量生成函数注释),
         },
         "保存当前的对话": {
             "Group": "对话",
             "AsButton": True,
             "Info": "保存当前的对话 | 不需要输入参数",
-            "Function": HotReload(对话历史存档)
+            "Function": HotReload(对话历史存档),
         },
         "[多线程Demo]解析此项目本身(源码自译解)": {
             "Group": "对话|编程",
             "AsButton": False, # 加入下拉菜单中
             "Info": "多线程解析并翻译此项目的源码 | 不需要输入参数",
-            "Function": HotReload(解析项目本身)
+            "Function": HotReload(解析项目本身),
         },
         "历史上的今天": {
             "Group": "对话",
             "AsButton": True,
             "Info": "查看历史上的今天事件 (这是一个面向开发者的插件Demo) | 不需要输入参数",
-            "Function": HotReload(高阶功能模板函数)
+            "Function": HotReload(高阶功能模板函数),
         },
         "精准翻译PDF论文": {
             "Group": "学术",
             "Color": "stop",
             "AsButton": True,
             "Info": "精准翻译PDF论文为中文 | 输入参数为路径",
-            "Function": HotReload(批量翻译PDF文档)
+            "Function": HotReload(批量翻译PDF文档),
         },
         "询问多个GPT模型": {
             "Group": "对话",
             "Color": "stop",
             "AsButton": True,
-            "Function": HotReload(同时问询)
+            "Function": HotReload(同时问询),
         },
         "批量总结PDF文档": {
             "Group": "学术",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "批量总结PDF文档的内容 | 输入参数为路径",
-            "Function": HotReload(批量总结PDF文档)
+            "Function": HotReload(批量总结PDF文档),
         },
         "谷歌学术检索助手(输入谷歌学术搜索页url)": {
             "Group": "学术",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "使用谷歌学术检索助手搜索指定URL的结果 | 输入参数为谷歌学术搜索页的URL",
-            "Function": HotReload(谷歌检索小助手)
+            "Function": HotReload(谷歌检索小助手),
         },
         "理解PDF文档内容 (模仿ChatPDF)": {
             "Group": "学术",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "理解PDF文档的内容并进行回答 | 输入参数为路径",
-            "Function": HotReload(理解PDF文档内容标准文件输入)
+            "Function": HotReload(理解PDF文档内容标准文件输入),
         },
         "英文Latex项目全文润色(输入路径或上传压缩包)": {
             "Group": "学术",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "对英文Latex项目全文进行润色处理 | 输入参数为路径或上传压缩包",
-            "Function": HotReload(Latex英文润色)
+            "Function": HotReload(Latex英文润色),
         },
         "英文Latex项目全文纠错(输入路径或上传压缩包)": {
             "Group": "学术",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "对英文Latex项目全文进行纠错处理 | 输入参数为路径或上传压缩包",
-            "Function": HotReload(Latex英文纠错)
+            "Function": HotReload(Latex英文纠错),
         },
         "中文Latex项目全文润色(输入路径或上传压缩包)": {
             "Group": "学术",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "对中文Latex项目全文进行润色处理 | 输入参数为路径或上传压缩包",
-            "Function": HotReload(Latex中文润色)
+            "Function": HotReload(Latex中文润色),
         },
 
         # 已经被新插件取代
         # "Latex项目全文中译英(输入路径或上传压缩包)": {
         #     "Group": "学术",
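All of the registry entries in this diff share one schema: a display name mapped to a dict of `Group`, `Color`, `AsButton`, `Info`, and a `HotReload`-wrapped `Function`. A minimal runnable sketch of that shape, using a passthrough stand-in for `HotReload` and a placeholder plugin function (both hypothetical; the field names are the real ones from the diff):

```python
def HotReload(f):
    # stand-in: the project's real HotReload wraps f for hot module reloading
    return f

def 总结word文档(*args, **kwargs):
    # hypothetical placeholder for the actual plugin generator function
    yield "done"

plugin_entry = {
    "批量总结Word文档": {
        "Group": "学术",     # menu group the plugin belongs to
        "Color": "stop",     # button color variant
        "AsButton": True,    # True -> shown as a button; False -> dropdown only
        "Info": "批量总结word文档 | 输入参数为路径",
        "Function": HotReload(总结word文档),
    },
}

meta = plugin_entry["批量总结Word文档"]
print(meta["Group"], meta["AsButton"])
```

The commit itself only reformats these dicts (trailing commas, double quotes); the schema is unchanged.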
@@ -261,7 +259,6 @@ def get_crazy_functions():
         #     "Info": "对Latex项目全文进行中译英处理 | 输入参数为路径或上传压缩包",
         #     "Function": HotReload(Latex中译英)
         # },
-
         # 已经被新插件取代
         # "Latex项目全文英译中(输入路径或上传压缩包)": {
         #     "Group": "学术",
@@ -270,339 +267,414 @@ def get_crazy_functions():
         #     "Info": "对Latex项目全文进行英译中处理 | 输入参数为路径或上传压缩包",
         #     "Function": HotReload(Latex英译中)
         # },
 
         "批量Markdown中译英(输入路径或上传压缩包)": {
             "Group": "编程",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "批量将Markdown文件中文翻译为英文 | 输入参数为路径或上传压缩包",
-            "Function": HotReload(Markdown中译英)
+            "Function": HotReload(Markdown中译英),
         },
     }
 
     # -=--=- 尚未充分测试的实验性插件 & 需要额外依赖的插件 -=--=-
     try:
         from crazy_functions.下载arxiv论文翻译摘要 import 下载arxiv论文并翻译摘要
-        function_plugins.update({
-            "一键下载arxiv论文并翻译摘要(先在input输入编号,如1812.10695)": {
-                "Group": "学术",
-                "Color": "stop",
-                "AsButton": False, # 加入下拉菜单中
-                # "Info": "下载arxiv论文并翻译摘要 | 输入参数为arxiv编号如1812.10695",
-                "Function": HotReload(下载arxiv论文并翻译摘要)
-            }
-        })
+        function_plugins.update(
+            {
+                "一键下载arxiv论文并翻译摘要(先在input输入编号,如1812.10695)": {
+                    "Group": "学术",
+                    "Color": "stop",
+                    "AsButton": False, # 加入下拉菜单中
+                    # "Info": "下载arxiv论文并翻译摘要 | 输入参数为arxiv编号如1812.10695",
+                    "Function": HotReload(下载arxiv论文并翻译摘要),
+                }
+            }
+        )
     except:
         print(trimmed_format_exc())
-        print('Load function plugin failed')
+        print("Load function plugin failed")
 
     try:
         from crazy_functions.联网的ChatGPT import 连接网络回答问题
-        function_plugins.update({
-            "连接网络回答问题(输入问题后点击该插件,需要访问谷歌)": {
-                "Group": "对话",
-                "Color": "stop",
-                "AsButton": False, # 加入下拉菜单中
-                # "Info": "连接网络回答问题(需要访问谷歌)| 输入参数是一个问题",
-                "Function": HotReload(连接网络回答问题)
-            }
-        })
+        function_plugins.update(
+            {
+                "连接网络回答问题(输入问题后点击该插件,需要访问谷歌)": {
+                    "Group": "对话",
+                    "Color": "stop",
+                    "AsButton": False, # 加入下拉菜单中
+                    # "Info": "连接网络回答问题(需要访问谷歌)| 输入参数是一个问题",
+                    "Function": HotReload(连接网络回答问题),
+                }
+            }
+        )
         from crazy_functions.联网的ChatGPT_bing版 import 连接bing搜索回答问题
-        function_plugins.update({
-            "连接网络回答问题(中文Bing版,输入问题后点击该插件)": {
-                "Group": "对话",
-                "Color": "stop",
-                "AsButton": False, # 加入下拉菜单中
-                "Info": "连接网络回答问题(需要访问中文Bing)| 输入参数是一个问题",
-                "Function": HotReload(连接bing搜索回答问题)
-            }
-        })
+        function_plugins.update(
+            {
+                "连接网络回答问题(中文Bing版,输入问题后点击该插件)": {
+                    "Group": "对话",
+                    "Color": "stop",
+                    "AsButton": False, # 加入下拉菜单中
+                    "Info": "连接网络回答问题(需要访问中文Bing)| 输入参数是一个问题",
+                    "Function": HotReload(连接bing搜索回答问题),
+                }
+            }
+        )
     except:
         print(trimmed_format_exc())
-        print('Load function plugin failed')
+        print("Load function plugin failed")
 
     try:
        from crazy_functions.解析项目源代码 import 解析任意code项目
-        function_plugins.update({
-            "解析项目源代码(手动指定和筛选源代码文件类型)": {
-                "Group": "编程",
-                "Color": "stop",
-                "AsButton": False,
-                "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
-                "ArgsReminder": "输入时用逗号隔开, *代表通配符, 加了^代表不匹配; 不输入代表全部匹配。例如: \"*.c, ^*.cpp, config.toml, ^*.toml\"", # 高级参数输入区的显示提示
-                "Function": HotReload(解析任意code项目)
-            },
-        })
+        function_plugins.update(
+            {
+                "解析项目源代码(手动指定和筛选源代码文件类型)": {
+                    "Group": "编程",
+                    "Color": "stop",
+                    "AsButton": False,
+                    "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
+                    "ArgsReminder": '输入时用逗号隔开, *代表通配符, 加了^代表不匹配; 不输入代表全部匹配。例如: "*.c, ^*.cpp, config.toml, ^*.toml"', # 高级参数输入区的显示提示
+                    "Function": HotReload(解析任意code项目),
+                },
+            }
+        )
     except:
         print(trimmed_format_exc())
-        print('Load function plugin failed')
+        print("Load function plugin failed")
 
     try:
         from crazy_functions.询问多个大语言模型 import 同时问询_指定模型
-        function_plugins.update({
-            "询问多个GPT模型(手动指定询问哪些模型)": {
-                "Group": "对话",
-                "Color": "stop",
-                "AsButton": False,
-                "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
-                "ArgsReminder": "支持任意数量的llm接口,用&符号分隔。例如chatglm&gpt-3.5-turbo&gpt-4", # 高级参数输入区的显示提示
-                "Function": HotReload(同时问询_指定模型)
-            },
-        })
+        function_plugins.update(
+            {
+                "询问多个GPT模型(手动指定询问哪些模型)": {
+                    "Group": "对话",
+                    "Color": "stop",
+                    "AsButton": False,
+                    "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
+                    "ArgsReminder": "支持任意数量的llm接口,用&符号分隔。例如chatglm&gpt-3.5-turbo&gpt-4", # 高级参数输入区的显示提示
+                    "Function": HotReload(同时问询_指定模型),
+                },
+            }
+        )
     except:
         print(trimmed_format_exc())
-        print('Load function plugin failed')
+        print("Load function plugin failed")
 
     try:
         from crazy_functions.图片生成 import 图片生成_DALLE2, 图片生成_DALLE3, 图片修改_DALLE2
-        function_plugins.update({
-            "图片生成_DALLE2 (先切换模型到gpt-*)": {
-                "Group": "对话",
-                "Color": "stop",
-                "AsButton": False,
-                "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
-                "ArgsReminder": "在这里输入分辨率, 如1024x1024(默认),支持 256x256, 512x512, 1024x1024", # 高级参数输入区的显示提示
-                "Info": "使用DALLE2生成图片 | 输入参数字符串,提供图像的内容",
-                "Function": HotReload(图片生成_DALLE2)
-            },
-        })
+        function_plugins.update(
+            {
+                "图片生成_DALLE2 (先切换模型到gpt-*)": {
+                    "Group": "对话",
+                    "Color": "stop",
+                    "AsButton": False,
+                    "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
+                    "ArgsReminder": "在这里输入分辨率, 如1024x1024(默认),支持 256x256, 512x512, 1024x1024", # 高级参数输入区的显示提示
+                    "Info": "使用DALLE2生成图片 | 输入参数字符串,提供图像的内容",
+                    "Function": HotReload(图片生成_DALLE2),
+                },
+            }
+        )
-        function_plugins.update({
-            "图片生成_DALLE3 (先切换模型到gpt-*)": {
-                "Group": "对话",
-                "Color": "stop",
-                "AsButton": False,
-                "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
-                "ArgsReminder": "在这里输入自定义参数「分辨率-质量(可选)-风格(可选)」, 参数示例「1024x1024-hd-vivid」 || 分辨率支持 「1024x1024」(默认) /「1792x1024」/「1024x1792」 || 质量支持 「-standard」(默认) /「-hd」 || 风格支持 「-vivid」(默认) /「-natural」", # 高级参数输入区的显示提示
-                "Info": "使用DALLE3生成图片 | 输入参数字符串,提供图像的内容",
-                "Function": HotReload(图片生成_DALLE3)
-            },
-        })
+        function_plugins.update(
+            {
+                "图片生成_DALLE3 (先切换模型到gpt-*)": {
+                    "Group": "对话",
+                    "Color": "stop",
+                    "AsButton": False,
+                    "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
+                    "ArgsReminder": "在这里输入自定义参数「分辨率-质量(可选)-风格(可选)」, 参数示例「1024x1024-hd-vivid」 || 分辨率支持 「1024x1024」(默认) /「1792x1024」/「1024x1792」 || 质量支持 「-standard」(默认) /「-hd」 || 风格支持 「-vivid」(默认) /「-natural」", # 高级参数输入区的显示提示
+                    "Info": "使用DALLE3生成图片 | 输入参数字符串,提供图像的内容",
+                    "Function": HotReload(图片生成_DALLE3),
+                },
+            }
+        )
-        function_plugins.update({
-            "图片修改_DALLE2 (先切换模型到gpt-*)": {
-                "Group": "对话",
-                "Color": "stop",
-                "AsButton": False,
-                "AdvancedArgs": False, # 调用时,唤起高级参数输入区(默认False)
-                # "Info": "使用DALLE2修改图片 | 输入参数字符串,提供图像的内容",
-                "Function": HotReload(图片修改_DALLE2)
-            },
-        })
+        function_plugins.update(
+            {
+                "图片修改_DALLE2 (先切换模型到gpt-*)": {
+                    "Group": "对话",
+                    "Color": "stop",
+                    "AsButton": False,
+                    "AdvancedArgs": False, # 调用时,唤起高级参数输入区(默认False)
+                    # "Info": "使用DALLE2修改图片 | 输入参数字符串,提供图像的内容",
+                    "Function": HotReload(图片修改_DALLE2),
+                },
+            }
+        )
     except:
         print(trimmed_format_exc())
-        print('Load function plugin failed')
+        print("Load function plugin failed")
 
     try:
         from crazy_functions.总结音视频 import 总结音视频
-        function_plugins.update({
-            "批量总结音视频(输入路径或上传压缩包)": {
-                "Group": "对话",
-                "Color": "stop",
-                "AsButton": False,
-                "AdvancedArgs": True,
-                "ArgsReminder": "调用openai api 使用whisper-1模型, 目前支持的格式:mp4, m4a, wav, mpga, mpeg, mp3。此处可以输入解析提示,例如:解析为简体中文(默认)。",
-                "Info": "批量总结音频或视频 | 输入参数为路径",
-                "Function": HotReload(总结音视频)
-            }
-        })
+        function_plugins.update(
+            {
+                "批量总结音视频(输入路径或上传压缩包)": {
+                    "Group": "对话",
+                    "Color": "stop",
+                    "AsButton": False,
+                    "AdvancedArgs": True,
+                    "ArgsReminder": "调用openai api 使用whisper-1模型, 目前支持的格式:mp4, m4a, wav, mpga, mpeg, mp3。此处可以输入解析提示,例如:解析为简体中文(默认)。",
+                    "Info": "批量总结音频或视频 | 输入参数为路径",
+                    "Function": HotReload(总结音视频),
+                }
+            }
+        )
     except:
         print(trimmed_format_exc())
-        print('Load function plugin failed')
+        print("Load function plugin failed")
 
     try:
         from crazy_functions.数学动画生成manim import 动画生成
-        function_plugins.update({
-            "数学动画生成(Manim)": {
-                "Group": "对话",
-                "Color": "stop",
-                "AsButton": False,
-                "Info": "按照自然语言描述生成一个动画 | 输入参数是一段话",
-                "Function": HotReload(动画生成)
-            }
-        })
+        function_plugins.update(
+            {
+                "数学动画生成(Manim)": {
+                    "Group": "对话",
+                    "Color": "stop",
+                    "AsButton": False,
+                    "Info": "按照自然语言描述生成一个动画 | 输入参数是一段话",
+                    "Function": HotReload(动画生成),
+                }
+            }
+        )
     except:
         print(trimmed_format_exc())
-        print('Load function plugin failed')
+        print("Load function plugin failed")
 
     try:
         from crazy_functions.批量Markdown翻译 import Markdown翻译指定语言
-        function_plugins.update({
-            "Markdown翻译(指定翻译成何种语言)": {
-                "Group": "编程",
-                "Color": "stop",
-                "AsButton": False,
-                "AdvancedArgs": True,
-                "ArgsReminder": "请输入要翻译成哪种语言,默认为Chinese。",
-                "Function": HotReload(Markdown翻译指定语言)
-            }
-        })
+        function_plugins.update(
+            {
+                "Markdown翻译(指定翻译成何种语言)": {
+                    "Group": "编程",
+                    "Color": "stop",
+                    "AsButton": False,
+                    "AdvancedArgs": True,
+                    "ArgsReminder": "请输入要翻译成哪种语言,默认为Chinese。",
+                    "Function": HotReload(Markdown翻译指定语言),
+                }
+            }
+        )
     except:
         print(trimmed_format_exc())
-        print('Load function plugin failed')
+        print("Load function plugin failed")
 
     try:
         from crazy_functions.知识库问答 import 知识库文件注入
-        function_plugins.update({
-            "构建知识库(先上传文件素材,再运行此插件)": {
-                "Group": "对话",
-                "Color": "stop",
-                "AsButton": False,
-                "AdvancedArgs": True,
-                "ArgsReminder": "此处待注入的知识库名称id, 默认为default。文件进入知识库后可长期保存。可以通过再次调用本插件的方式,向知识库追加更多文档。",
-                "Function": HotReload(知识库文件注入)
-            }
-        })
+        function_plugins.update(
+            {
+                "构建知识库(先上传文件素材,再运行此插件)": {
+                    "Group": "对话",
+                    "Color": "stop",
+                    "AsButton": False,
+                    "AdvancedArgs": True,
+                    "ArgsReminder": "此处待注入的知识库名称id, 默认为default。文件进入知识库后可长期保存。可以通过再次调用本插件的方式,向知识库追加更多文档。",
+                    "Function": HotReload(知识库文件注入),
+                }
+            }
+        )
     except:
         print(trimmed_format_exc())
-        print('Load function plugin failed')
+        print("Load function plugin failed")
 
     try:
         from crazy_functions.知识库问答 import 读取知识库作答
-        function_plugins.update({
-            "知识库文件注入(构建知识库后,再运行此插件)": {
-                "Group": "对话",
-                "Color": "stop",
-                "AsButton": False,
-                "AdvancedArgs": True,
-                "ArgsReminder": "待提取的知识库名称id, 默认为default, 您需要构建知识库后再运行此插件。",
-                "Function": HotReload(读取知识库作答)
-            }
-        })
+        function_plugins.update(
+            {
+                "知识库文件注入(构建知识库后,再运行此插件)": {
+                    "Group": "对话",
+                    "Color": "stop",
+                    "AsButton": False,
+                    "AdvancedArgs": True,
+                    "ArgsReminder": "待提取的知识库名称id, 默认为default, 您需要构建知识库后再运行此插件。",
+                    "Function": HotReload(读取知识库作答),
+                }
+            }
+        )
     except:
         print(trimmed_format_exc())
-        print('Load function plugin failed')
+        print("Load function plugin failed")
 
     try:
         from crazy_functions.交互功能函数模板 import 交互功能模板函数
-        function_plugins.update({
-            "交互功能模板Demo函数(查找wallhaven.cc的壁纸)": {
-                "Group": "对话",
-                "Color": "stop",
-                "AsButton": False,
-                "Function": HotReload(交互功能模板函数)
-            }
-        })
+        function_plugins.update(
+            {
+                "交互功能模板Demo函数(查找wallhaven.cc的壁纸)": {
+                    "Group": "对话",
+                    "Color": "stop",
+                    "AsButton": False,
+                    "Function": HotReload(交互功能模板函数),
+                }
+            }
+        )
     except:
         print(trimmed_format_exc())
-        print('Load function plugin failed')
+        print("Load function plugin failed")
 
     try:
         from crazy_functions.Latex输出PDF结果 import Latex英文纠错加PDF对比
-        function_plugins.update({
-            "Latex英文纠错+高亮修正位置 [需Latex]": {
-                "Group": "学术",
-                "Color": "stop",
-                "AsButton": False,
-                "AdvancedArgs": True,
-                "ArgsReminder": "如果有必要, 请在此处追加更细致的矫错指令(使用英文)。",
-                "Function": HotReload(Latex英文纠错加PDF对比)
-            }
-        })
+        function_plugins.update(
+            {
+                "Latex英文纠错+高亮修正位置 [需Latex]": {
+                    "Group": "学术",
+                    "Color": "stop",
+                    "AsButton": False,
+                    "AdvancedArgs": True,
+                    "ArgsReminder": "如果有必要, 请在此处追加更细致的矫错指令(使用英文)。",
+                    "Function": HotReload(Latex英文纠错加PDF对比),
+                }
+            }
+        )
         from crazy_functions.Latex输出PDF结果 import Latex翻译中文并重新编译PDF
-        function_plugins.update({
-            "Arxiv论文精细翻译(输入arxivID)[需Latex]": {
-                "Group": "学术",
-                "Color": "stop",
-                "AsButton": False,
-                "AdvancedArgs": True,
-                "ArgsReminder":
-                    "如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 " +
-                    "例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: " +
-                    'If the term "agent" is used in this section, it should be translated to "智能体". ',
-                "Info": "Arixv论文精细翻译 | 输入参数arxiv论文的ID,比如1812.10695",
-                "Function": HotReload(Latex翻译中文并重新编译PDF)
-            }
-        })
+        function_plugins.update(
+            {
+                "Arxiv论文精细翻译(输入arxivID)[需Latex]": {
+                    "Group": "学术",
+                    "Color": "stop",
+                    "AsButton": False,
+                    "AdvancedArgs": True,
+                    "ArgsReminder": "如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
+                    + "例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
+                    + 'If the term "agent" is used in this section, it should be translated to "智能体". ',
+                    "Info": "Arixv论文精细翻译 | 输入参数arxiv论文的ID,比如1812.10695",
+                    "Function": HotReload(Latex翻译中文并重新编译PDF),
+                }
+            }
+        )
-        function_plugins.update({
-            "本地Latex论文精细翻译(上传Latex项目)[需Latex]": {
-                "Group": "学术",
-                "Color": "stop",
-                "AsButton": False,
-                "AdvancedArgs": True,
-                "ArgsReminder":
-                    "如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 " +
-                    "例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: " +
-                    'If the term "agent" is used in this section, it should be translated to "智能体". ',
-                "Info": "本地Latex论文精细翻译 | 输入参数是路径",
-                "Function": HotReload(Latex翻译中文并重新编译PDF)
-            }
-        })
+        function_plugins.update(
+            {
+                "本地Latex论文精细翻译(上传Latex项目)[需Latex]": {
+                    "Group": "学术",
+                    "Color": "stop",
+                    "AsButton": False,
+                    "AdvancedArgs": True,
+                    "ArgsReminder": "如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
+                    + "例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
+                    + 'If the term "agent" is used in this section, it should be translated to "智能体". ',
+                    "Info": "本地Latex论文精细翻译 | 输入参数是路径",
+                    "Function": HotReload(Latex翻译中文并重新编译PDF),
+                }
+            }
+        )
     except:
         print(trimmed_format_exc())
-        print('Load function plugin failed')
+        print("Load function plugin failed")
 
     try:
         from toolbox import get_conf
-        ENABLE_AUDIO = get_conf('ENABLE_AUDIO')
+
+        ENABLE_AUDIO = get_conf("ENABLE_AUDIO")
         if ENABLE_AUDIO:
             from crazy_functions.语音助手 import 语音助手
-            function_plugins.update({
-                "实时语音对话": {
-                    "Group": "对话",
-                    "Color": "stop",
-                    "AsButton": True,
-                    "Info": "这是一个时刻聆听着的语音对话助手 | 没有输入参数",
-                    "Function": HotReload(语音助手)
-                }
-            })
+            function_plugins.update(
+                {
+                    "实时语音对话": {
+                        "Group": "对话",
+                        "Color": "stop",
+                        "AsButton": True,
+                        "Info": "这是一个时刻聆听着的语音对话助手 | 没有输入参数",
+                        "Function": HotReload(语音助手),
+                    }
+                }
+            )
     except:
         print(trimmed_format_exc())
-        print('Load function plugin failed')
+        print("Load function plugin failed")
 
     try:
         from crazy_functions.批量翻译PDF文档_NOUGAT import 批量翻译PDF文档
-        function_plugins.update({
-            "精准翻译PDF文档(NOUGAT)": {
-                "Group": "学术",
-                "Color": "stop",
-                "AsButton": False,
-                "Function": HotReload(批量翻译PDF文档)
-            }
-        })
+        function_plugins.update(
+            {
+                "精准翻译PDF文档(NOUGAT)": {
+                    "Group": "学术",
+                    "Color": "stop",
+                    "AsButton": False,
+                    "Function": HotReload(批量翻译PDF文档),
+                }
+            }
+        )
     except:
         print(trimmed_format_exc())
-        print('Load function plugin failed')
+        print("Load function plugin failed")
 
     try:
         from crazy_functions.函数动态生成 import 函数动态生成
-        function_plugins.update({
-            "动态代码解释器(CodeInterpreter)": {
-                "Group": "智能体",
-                "Color": "stop",
-                "AsButton": False,
-                "Function": HotReload(函数动态生成)
-            }
-        })
+        function_plugins.update(
+            {
+                "动态代码解释器(CodeInterpreter)": {
+                    "Group": "智能体",
+                    "Color": "stop",
+                    "AsButton": False,
+                    "Function": HotReload(函数动态生成),
+                }
+            }
+        )
     except:
         print(trimmed_format_exc())
-        print('Load function plugin failed')
+        print("Load function plugin failed")
 
     try:
         from crazy_functions.多智能体 import 多智能体终端
-        function_plugins.update({
-            "AutoGen多智能体终端(仅供测试)": {
-                "Group": "智能体",
-                "Color": "stop",
-                "AsButton": False,
-                "Function": HotReload(多智能体终端)
-            }
-        })
+        function_plugins.update(
+            {
+                "AutoGen多智能体终端(仅供测试)": {
+                    "Group": "智能体",
+                    "Color": "stop",
+                    "AsButton": False,
+                    "Function": HotReload(多智能体终端),
+                }
+            }
+        )
     except:
         print(trimmed_format_exc())
-        print('Load function plugin failed')
+        print("Load function plugin failed")
 
     try:
         from crazy_functions.互动小游戏 import 随机小游戏
-        function_plugins.update({
-            "随机互动小游戏(仅供测试)": {
-                "Group": "智能体",
-                "Color": "stop",
-                "AsButton": False,
-                "Function": HotReload(随机小游戏)
-            }
-        })
+        function_plugins.update(
+            {
+                "随机互动小游戏(仅供测试)": {
+                    "Group": "智能体",
+                    "Color": "stop",
+                    "AsButton": False,
+                    "Function": HotReload(随机小游戏),
+                }
+            }
+        )
     except:
         print(trimmed_format_exc())
-        print('Load function plugin failed')
 
+        print("Load function plugin failed")
+
+    # try:
+    #     from crazy_functions.高级功能函数模板 import 测试图表渲染
+    #     function_plugins.update({
+    #         "绘制逻辑关系(测试图表渲染)": {
+    #             "Group": "智能体",
+    #             "Color": "stop",
+    #             "AsButton": True,
+    #             "Function": HotReload(测试图表渲染)
+    #         }
+    #     })
+    # except:
+    #     print(trimmed_format_exc())
+    #     print('Load function plugin failed')
 
     # try:
     #     from crazy_functions.chatglm微调工具 import 微调数据集生成
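Every experimental plugin in the hunk above is registered through the same guard: import inside `try`, register with `function_plugins.update(...)`, and swallow the failure so a missing optional dependency disables only that one plugin instead of crashing startup. A runnable sketch of the pattern (the `trimmed_format_exc` stand-in below is a simplification of the project's helper, and the imported module is expected to be absent outside the real repo):

```python
from traceback import format_exc

function_plugins = {}

def trimmed_format_exc():
    # stand-in: the project's helper trims noise from the traceback
    return format_exc()

try:
    # this optional module only exists inside the gpt_academic repo,
    # so in a bare environment the except branch runs
    from crazy_functions.互动小游戏 import 随机小游戏
    function_plugins.update(
        {
            "随机互动小游戏(仅供测试)": {
                "Group": "智能体",
                "Color": "stop",
                "AsButton": False,
                "Function": 随机小游戏,
            }
        }
    )
except:
    print(trimmed_format_exc())
    print("Load function plugin failed")
```

The commit does not change this behavior; it only rewraps each `update({...})` call into black's `update(\n    {...}\n)` layout and adds trailing commas.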
@@ -618,8 +690,6 @@ def get_crazy_functions():
     # except:
     #     print('Load function plugin failed')
 
-
-
     """
     设置默认值:
     - 默认 Group = 对话
@@ -629,12 +699,12 @@ def get_crazy_functions():
     """
    for name, function_meta in function_plugins.items():
         if "Group" not in function_meta:
-            function_plugins[name]["Group"] = '对话'
+            function_plugins[name]["Group"] = "对话"
         if "AsButton" not in function_meta:
             function_plugins[name]["AsButton"] = True
         if "AdvancedArgs" not in function_meta:
             function_plugins[name]["AdvancedArgs"] = False
         if "Color" not in function_meta:
-            function_plugins[name]["Color"] = 'secondary'
+            function_plugins[name]["Color"] = "secondary"
 
     return function_plugins
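The defaults-filling loop in the hunk above can be run in isolation; this sketch (with trivial placeholder entries, since the real plugin functions are not available here) shows how entries that omit optional fields are normalized before the UI consumes the table:

```python
# Two entries: one omits every optional field, one sets them explicitly.
function_plugins = {
    "示例插件": {"Function": print},  # hypothetical minimal entry
    "精准翻译PDF论文": {"Group": "学术", "Color": "stop", "AsButton": True, "Function": print},
}

# Same normalization pass as in the diff: fill missing fields with defaults.
for name, function_meta in function_plugins.items():
    if "Group" not in function_meta:
        function_plugins[name]["Group"] = "对话"
    if "AsButton" not in function_meta:
        function_plugins[name]["AsButton"] = True
    if "AdvancedArgs" not in function_meta:
        function_plugins[name]["AdvancedArgs"] = False
    if "Color" not in function_meta:
        function_plugins[name]["Color"] = "secondary"

print(function_plugins["示例插件"]["Color"])  # secondary
```

Explicit values survive untouched; only missing keys are filled, which is why the commit could add fields incrementally without breaking older entries.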
@@ -5,7 +5,7 @@ import glob, os, requests, time
|
|||||||
pj = os.path.join
|
pj = os.path.join
|
||||||
ARXIV_CACHE_DIR = os.path.expanduser(f"~/arxiv_cache/")
|
ARXIV_CACHE_DIR = os.path.expanduser(f"~/arxiv_cache/")
|
||||||
|
|
||||||
# =================================== 工具函数 ===============================================
|
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- 工具函数 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
|
||||||
# 专业词汇声明 = 'If the term "agent" is used in this section, it should be translated to "智能体". '
|
# 专业词汇声明 = 'If the term "agent" is used in this section, it should be translated to "智能体". '
|
||||||
def switch_prompt(pfg, mode, more_requirement):
|
def switch_prompt(pfg, mode, more_requirement):
|
||||||
"""
|
"""
|
||||||
@@ -142,7 +142,7 @@ def arxiv_download(chatbot, history, txt, allow_cache=True):
|
|||||||
from toolbox import extract_archive
|
from toolbox import extract_archive
|
||||||
extract_archive(file_path=dst, dest_dir=extract_dst)
|
extract_archive(file_path=dst, dest_dir=extract_dst)
|
||||||
return extract_dst, arxiv_id
|
return extract_dst, arxiv_id
|
||||||
# ========================================= 插件主程序1 =====================================================
|
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 插件主程序1 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
|
||||||
|
|
||||||
|
|
||||||
@CatchException
|
@CatchException
|
||||||
@@ -218,7 +218,7 @@ def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, histo
|
|||||||
# <-------------- we are done ------------->
|
# <-------------- we are done ------------->
|
||||||
return success
|
return success
|
||||||
|
|
||||||
# ========================================= 插件主程序2 =====================================================
|
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 插件主程序2 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
|
||||||
|
|
||||||
@CatchException
|
@CatchException
|
||||||
def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||||
@@ -1,15 +1,18 @@
 import os, shutil
 import re
 import numpy as np
 
 PRESERVE = 0
 TRANSFORM = 1
 
 pj = os.path.join
 
-class LinkedListNode():
+
+class LinkedListNode:
     """
     Linked List Node
     """
+
     def __init__(self, string, preserve=True) -> None:
         self.string = string
         self.preserve = preserve
@@ -18,41 +21,47 @@ class LinkedListNode():
         # self.begin_line = 0
         # self.begin_char = 0
 
+
 def convert_to_linklist(text, mask):
     root = LinkedListNode("", preserve=True)
     current_node = root
     for c, m, i in zip(text, mask, range(len(text))):
-        if (m==PRESERVE and current_node.preserve) \
-            or (m==TRANSFORM and not current_node.preserve):
+        if (m == PRESERVE and current_node.preserve) or (
+            m == TRANSFORM and not current_node.preserve
+        ):
             # add
             current_node.string += c
         else:
-            current_node.next = LinkedListNode(c, preserve=(m==PRESERVE))
+            current_node.next = LinkedListNode(c, preserve=(m == PRESERVE))
             current_node = current_node.next
     return root
 
+
 def post_process(root):
     # 修复括号
     node = root
     while True:
         string = node.string
         if node.preserve:
             node = node.next
-            if node is None: break
+            if node is None:
+                break
             continue
 
         def break_check(string):
             str_stack = [""]  # (lv, index)
             for i, c in enumerate(string):
-                if c == '{':
-                    str_stack.append('{')
-                elif c == '}':
+                if c == "{":
+                    str_stack.append("{")
+                elif c == "}":
                     if len(str_stack) == 1:
-                        print('stack fix')
+                        print("stack fix")
                        return i
                     str_stack.pop(-1)
                 else:
                     str_stack[-1] += c
             return -1
 
         bp = break_check(string)
 
         if bp == -1:
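`convert_to_linklist`, reformatted above, walks the text and its PRESERVE/TRANSFORM mask in lockstep, growing the current node while the mask value matches its kind and opening a new node on every switch. A self-contained re-implementation (simplified for illustration: no post-processing, minimal node class) shows the resulting segmentation:

```python
# Simplified sketch of the mask -> linked-list segmentation in the diff above.
PRESERVE, TRANSFORM = 0, 1

class Node:
    def __init__(self, string, preserve=True):
        self.string = string
        self.preserve = preserve
        self.next = None

def convert_to_linklist(text, mask):
    root = Node("", preserve=True)
    cur = root
    for c, m in zip(text, mask):
        # extend the current node while the mask value matches its kind
        if (m == PRESERVE and cur.preserve) or (m == TRANSFORM and not cur.preserve):
            cur.string += c
        else:  # mask value flipped: start a new segment
            cur.next = Node(c, preserve=(m == PRESERVE))
            cur = cur.next
    return root

text = "abcXYZdef"
mask = [PRESERVE] * 3 + [TRANSFORM] * 3 + [PRESERVE] * 3
node = convert_to_linklist(text, mask)
segments = []
while node:
    segments.append((node.string, node.preserve))
    node = node.next

assert segments == [("abc", True), ("XYZ", False), ("def", True)]
```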
@@ -69,51 +78,66 @@ def post_process(root):
             node.next = q
 
         node = node.next
-        if node is None: break
+        if node is None:
+            break
 
     # 屏蔽空行和太短的句子
     node = root
     while True:
-        if len(node.string.strip('\n').strip(''))==0: node.preserve = True
-        if len(node.string.strip('\n').strip(''))<42: node.preserve = True
+        if len(node.string.strip("\n").strip("")) == 0:
+            node.preserve = True
+        if len(node.string.strip("\n").strip("")) < 42:
+            node.preserve = True
         node = node.next
-        if node is None: break
+        if node is None:
+            break
     node = root
     while True:
         if node.next and node.preserve and node.next.preserve:
             node.string += node.next.string
             node.next = node.next.next
         node = node.next
-        if node is None: break
+        if node is None:
+            break
 
     # 将前后断行符脱离
     node = root
     prev_node = None
     while True:
         if not node.preserve:
-            lstriped_ = node.string.lstrip().lstrip('\n')
-            if (prev_node is not None) and (prev_node.preserve) and (len(lstriped_)!=len(node.string)):
-                prev_node.string += node.string[:-len(lstriped_)]
+            lstriped_ = node.string.lstrip().lstrip("\n")
+            if (
+                (prev_node is not None)
+                and (prev_node.preserve)
+                and (len(lstriped_) != len(node.string))
+            ):
+                prev_node.string += node.string[: -len(lstriped_)]
             node.string = lstriped_
-            rstriped_ = node.string.rstrip().rstrip('\n')
-            if (node.next is not None) and (node.next.preserve) and (len(rstriped_)!=len(node.string)):
-                node.next.string = node.string[len(rstriped_):] + node.next.string
+            rstriped_ = node.string.rstrip().rstrip("\n")
+            if (
+                (node.next is not None)
+                and (node.next.preserve)
+                and (len(rstriped_) != len(node.string))
+            ):
+                node.next.string = node.string[len(rstriped_) :] + node.next.string
             node.string = rstriped_
-        # =====
+        # =-=-=
         prev_node = node
         node = node.next
-        if node is None: break
+        if node is None:
+            break
 
     # 标注节点的行数范围
     node = root
     n_line = 0
     expansion = 2
     while True:
-        n_l = node.string.count('\n')
-        node.range = [n_line-expansion, n_line+n_l+expansion]  # 失败时,扭转的范围
-        n_line = n_line+n_l
+        n_l = node.string.count("\n")
+        node.range = [n_line - expansion, n_line + n_l + expansion]  # 失败时,扭转的范围
+        n_line = n_line + n_l
         node = node.next
-        if node is None: break
+        if node is None:
+            break
     return root
 
 
@@ -128,97 +152,125 @@ def set_forbidden_text(text, mask, pattern, flags=0):
     """
     Add a preserve text area in this paper
     e.g. with pattern = r"\\begin\{algorithm\}(.*?)\\end\{algorithm\}"
     you can mask out (mask = PRESERVE so that text become untouchable for GPT)
     everything between "\begin{equation}" and "\end{equation}"
     """
-    if isinstance(pattern, list): pattern = '|'.join(pattern)
+    if isinstance(pattern, list):
+        pattern = "|".join(pattern)
     pattern_compile = re.compile(pattern, flags)
     for res in pattern_compile.finditer(text):
-        mask[res.span()[0]:res.span()[1]] = PRESERVE
+        mask[res.span()[0] : res.span()[1]] = PRESERVE
     return text, mask
 
+
 def reverse_forbidden_text(text, mask, pattern, flags=0, forbid_wrapper=True):
     """
     Move area out of preserve area (make text editable for GPT)
     count the number of the braces so as to catch compelete text area.
     e.g.
     \begin{abstract} blablablablablabla. \end{abstract}
     """
-    if isinstance(pattern, list): pattern = '|'.join(pattern)
+    if isinstance(pattern, list):
+        pattern = "|".join(pattern)
     pattern_compile = re.compile(pattern, flags)
     for res in pattern_compile.finditer(text):
         if not forbid_wrapper:
-            mask[res.span()[0]:res.span()[1]] = TRANSFORM
+            mask[res.span()[0] : res.span()[1]] = TRANSFORM
         else:
-            mask[res.regs[0][0]: res.regs[1][0]] = PRESERVE  # '\\begin{abstract}'
-            mask[res.regs[1][0]: res.regs[1][1]] = TRANSFORM  # abstract
-            mask[res.regs[1][1]: res.regs[0][1]] = PRESERVE  # abstract
+            mask[res.regs[0][0] : res.regs[1][0]] = PRESERVE  # '\\begin{abstract}'
+            mask[res.regs[1][0] : res.regs[1][1]] = TRANSFORM  # abstract
+            mask[res.regs[1][1] : res.regs[0][1]] = PRESERVE  # abstract
     return text, mask
 
+
 def set_forbidden_text_careful_brace(text, mask, pattern, flags=0):
     """
     Add a preserve text area in this paper (text become untouchable for GPT).
     count the number of the braces so as to catch compelete text area.
     e.g.
     \caption{blablablablabla\texbf{blablabla}blablabla.}
     """
     pattern_compile = re.compile(pattern, flags)
     for res in pattern_compile.finditer(text):
         brace_level = -1
         p = begin = end = res.regs[0][0]
-        for _ in range(1024*16):
-            if text[p] == '}' and brace_level == 0: break
-            elif text[p] == '}': brace_level -= 1
-            elif text[p] == '{': brace_level += 1
+        for _ in range(1024 * 16):
+            if text[p] == "}" and brace_level == 0:
+                break
+            elif text[p] == "}":
+                brace_level -= 1
+            elif text[p] == "{":
+                brace_level += 1
             p += 1
-        end = p+1
+        end = p + 1
         mask[begin:end] = PRESERVE
     return text, mask
 
-def reverse_forbidden_text_careful_brace(text, mask, pattern, flags=0, forbid_wrapper=True):
+
+def reverse_forbidden_text_careful_brace(
+    text, mask, pattern, flags=0, forbid_wrapper=True
+):
     """
     Move area out of preserve area (make text editable for GPT)
     count the number of the braces so as to catch compelete text area.
     e.g.
     \caption{blablablablabla\texbf{blablabla}blablabla.}
     """
     pattern_compile = re.compile(pattern, flags)
     for res in pattern_compile.finditer(text):
         brace_level = 0
         p = begin = end = res.regs[1][0]
-        for _ in range(1024*16):
-            if text[p] == '}' and brace_level == 0: break
-            elif text[p] == '}': brace_level -= 1
-            elif text[p] == '{': brace_level += 1
+        for _ in range(1024 * 16):
+            if text[p] == "}" and brace_level == 0:
+                break
+            elif text[p] == "}":
+                brace_level -= 1
+            elif text[p] == "{":
+                brace_level += 1
             p += 1
         end = p
         mask[begin:end] = TRANSFORM
         if forbid_wrapper:
-            mask[res.regs[0][0]:begin] = PRESERVE
-            mask[end:res.regs[0][1]] = PRESERVE
+            mask[res.regs[0][0] : begin] = PRESERVE
+            mask[end : res.regs[0][1]] = PRESERVE
     return text, mask
 
+
 def set_forbidden_text_begin_end(text, mask, pattern, flags=0, limit_n_lines=42):
     """
     Find all \begin{} ... \end{} text block that with less than limit_n_lines lines.
     Add it to preserve area
     """
     pattern_compile = re.compile(pattern, flags)
 
     def search_with_line_limit(text, mask):
         for res in pattern_compile.finditer(text):
             cmd = res.group(1)  # begin{what}
             this = res.group(2)  # content between begin and end
-            this_mask = mask[res.regs[2][0]:res.regs[2][1]]
-            white_list = ['document', 'abstract', 'lemma', 'definition', 'sproof',
-                          'em', 'emph', 'textit', 'textbf', 'itemize', 'enumerate']
-            if (cmd in white_list) or this.count('\n') >= limit_n_lines:  # use a magical number 42
+            this_mask = mask[res.regs[2][0] : res.regs[2][1]]
+            white_list = [
+                "document",
+                "abstract",
+                "lemma",
+                "definition",
+                "sproof",
+                "em",
+                "emph",
+                "textit",
+                "textbf",
+                "itemize",
+                "enumerate",
+            ]
+            if (cmd in white_list) or this.count(
+                "\n"
+            ) >= limit_n_lines:  # use a magical number 42
                 this, this_mask = search_with_line_limit(this, this_mask)
-                mask[res.regs[2][0]:res.regs[2][1]] = this_mask
+                mask[res.regs[2][0] : res.regs[2][1]] = this_mask
             else:
-                mask[res.regs[0][0]:res.regs[0][1]] = PRESERVE
+                mask[res.regs[0][0] : res.regs[0][1]] = PRESERVE
         return text, mask
-    return search_with_line_limit(text, mask)
+
+    return search_with_line_limit(text, mask)
 
 
 """
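The mask helpers reformatted above all follow one idea: regex matches stamp spans of a numpy mask with PRESERVE or TRANSFORM so later stages know which text GPT may touch. A minimal self-contained sketch of `set_forbidden_text` under that assumption (the sample text and pattern are illustrative):

```python
# Sketch of the regex-based mask editing used by set_forbidden_text above.
import re
import numpy as np

PRESERVE, TRANSFORM = 0, 1

def set_forbidden_text(text, mask, pattern, flags=0):
    if isinstance(pattern, list):
        pattern = "|".join(pattern)
    for res in re.finditer(pattern, text, flags):
        # stamp the matched span as untouchable
        mask[res.span()[0] : res.span()[1]] = PRESERVE
    return text, mask

text = r"intro \begin{equation}x=1\end{equation} outro"
mask = np.full(len(text), TRANSFORM, dtype=np.uint8)  # everything editable at first
text, mask = set_forbidden_text(
    text, mask, r"\\begin\{equation\}.*?\\end\{equation\}", re.DOTALL
)
editable = "".join(c for c, m in zip(text, mask) if m == TRANSFORM)

assert "x=1" not in editable      # the equation block is now masked out
assert "intro" in editable and "outro" in editable
```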
@@ -227,6 +279,7 @@ Latex Merge File
 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 """
 
+
 def find_main_tex_file(file_manifest, mode):
     """
     在多Tex文档中,寻找主文件,必须包含documentclass,返回找到的第一个。
@@ -234,27 +287,36 @@ def find_main_tex_file(file_manifest, mode):
     """
     canidates = []
     for texf in file_manifest:
-        if os.path.basename(texf).startswith('merge'):
+        if os.path.basename(texf).startswith("merge"):
             continue
-        with open(texf, 'r', encoding='utf8', errors='ignore') as f:
+        with open(texf, "r", encoding="utf8", errors="ignore") as f:
             file_content = f.read()
-        if r'\documentclass' in file_content:
+        if r"\documentclass" in file_content:
             canidates.append(texf)
         else:
             continue
 
     if len(canidates) == 0:
-        raise RuntimeError('无法找到一个主Tex文件(包含documentclass关键字)')
+        raise RuntimeError("无法找到一个主Tex文件(包含documentclass关键字)")
     elif len(canidates) == 1:
         return canidates[0]
     else:  # if len(canidates) >= 2 通过一些Latex模板中常见(但通常不会出现在正文)的单词,对不同latex源文件扣分,取评分最高者返回
         canidates_score = []
         # 给出一些判定模板文档的词作为扣分项
-        unexpected_words = ['\\LaTeX', 'manuscript', 'Guidelines', 'font', 'citations', 'rejected', 'blind review', 'reviewers']
-        expected_words = ['\\input', '\\ref', '\\cite']
+        unexpected_words = [
+            "\\LaTeX",
+            "manuscript",
+            "Guidelines",
+            "font",
+            "citations",
+            "rejected",
+            "blind review",
+            "reviewers",
+        ]
+        expected_words = ["\\input", "\\ref", "\\cite"]
         for texf in canidates:
             canidates_score.append(0)
-            with open(texf, 'r', encoding='utf8', errors='ignore') as f:
+            with open(texf, "r", encoding="utf8", errors="ignore") as f:
                 file_content = f.read()
             file_content = rm_comments(file_content)
             for uw in unexpected_words:
@@ -263,9 +325,10 @@ def find_main_tex_file(file_manifest, mode):
             for uw in expected_words:
                 if uw in file_content:
                     canidates_score[-1] += 1
         select = np.argmax(canidates_score)  # 取评分最高者返回
         return canidates[select]
 
+
 def rm_comments(main_file):
     new_file_remove_comment_lines = []
     for l in main_file.splitlines():
@@ -274,30 +337,39 @@ def rm_comments(main_file):
             pass
         else:
             new_file_remove_comment_lines.append(l)
-    main_file = '\n'.join(new_file_remove_comment_lines)
+    main_file = "\n".join(new_file_remove_comment_lines)
     # main_file = re.sub(r"\\include{(.*?)}", r"\\input{\1}", main_file)  # 将 \include 命令转换为 \input 命令
-    main_file = re.sub(r'(?<!\\)%.*', '', main_file)  # 使用正则表达式查找半行注释, 并替换为空字符串
+    main_file = re.sub(r"(?<!\\)%.*", "", main_file)  # 使用正则表达式查找半行注释, 并替换为空字符串
     return main_file
 
+
 def find_tex_file_ignore_case(fp):
     dir_name = os.path.dirname(fp)
     base_name = os.path.basename(fp)
     # 如果输入的文件路径是正确的
-    if os.path.isfile(pj(dir_name, base_name)): return pj(dir_name, base_name)
+    if os.path.isfile(pj(dir_name, base_name)):
+        return pj(dir_name, base_name)
     # 如果不正确,试着加上.tex后缀试试
-    if not base_name.endswith('.tex'): base_name+='.tex'
-    if os.path.isfile(pj(dir_name, base_name)): return pj(dir_name, base_name)
+    if not base_name.endswith(".tex"):
+        base_name += ".tex"
+    if os.path.isfile(pj(dir_name, base_name)):
+        return pj(dir_name, base_name)
     # 如果还找不到,解除大小写限制,再试一次
     import glob
-    for f in glob.glob(dir_name+'/*.tex'):
+
+    for f in glob.glob(dir_name + "/*.tex"):
         base_name_s = os.path.basename(fp)
         base_name_f = os.path.basename(f)
-        if base_name_s.lower() == base_name_f.lower(): return f
+        if base_name_s.lower() == base_name_f.lower():
+            return f
         # 试着加上.tex后缀试试
-        if not base_name_s.endswith('.tex'): base_name_s+='.tex'
-        if base_name_s.lower() == base_name_f.lower(): return f
+        if not base_name_s.endswith(".tex"):
+            base_name_s += ".tex"
+        if base_name_s.lower() == base_name_f.lower():
+            return f
     return None
 
+
 def merge_tex_files_(project_foler, main_file, mode):
     """
     Merge Tex project recrusively
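`rm_comments`, shown above, strips half-line TeX comments with a negative lookbehind so that escaped percent signs (`\%`) survive. A quick standalone check of that regex:

```python
# Minimal check of the comment-stripping regex from rm_comments above.
import re

def rm_half_line_comments(src):
    # remove "% ..." to end of line unless the % is escaped as \%
    return re.sub(r"(?<!\\)%.*", "", src)

line = r"50\% of samples % this trailing comment goes away"
assert rm_half_line_comments(line) == r"50\% of samples "
```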
@@ -309,18 +381,18 @@ def merge_tex_files_(project_foler, main_file, mode):
|
|||||||
fp_ = find_tex_file_ignore_case(fp)
|
fp_ = find_tex_file_ignore_case(fp)
|
||||||
if fp_:
|
if fp_:
|
||||||
try:
|
try:
|
||||||
with open(fp_, 'r', encoding='utf-8', errors='replace') as fx: c = fx.read()
|
with open(fp_, "r", encoding="utf-8", errors="replace") as fx:
|
||||||
|
c = fx.read()
|
||||||
except:
|
except:
|
||||||
c = f"\n\nWarning from GPT-Academic: LaTex source file is missing!\n\n"
|
c = f"\n\nWarning from GPT-Academic: LaTex source file is missing!\n\n"
|
||||||
else:
|
else:
|
||||||
raise RuntimeError(f'找不到{fp},Tex源文件缺失!')
|
raise RuntimeError(f"找不到{fp},Tex源文件缺失!")
|
||||||
c = merge_tex_files_(project_foler, c, mode)
|
c = merge_tex_files_(project_foler, c, mode)
|
||||||
main_file = main_file[:s.span()[0]] + c + main_file[s.span()[1]:]
|
main_file = main_file[: s.span()[0]] + c + main_file[s.span()[1] :]
|
||||||
return main_file
|
return main_file
|
||||||
|
|
||||||
|
|
||||||
def find_title_and_abs(main_file):
|
def find_title_and_abs(main_file):
|
||||||
|
|
||||||
def extract_abstract_1(text):
|
def extract_abstract_1(text):
|
||||||
pattern = r"\\abstract\{(.*?)\}"
|
pattern = r"\\abstract\{(.*?)\}"
|
||||||
match = re.search(pattern, text, re.DOTALL)
|
match = re.search(pattern, text, re.DOTALL)
|
||||||
@@ -362,21 +434,30 @@ def merge_tex_files(project_foler, main_file, mode):
|
|||||||
main_file = merge_tex_files_(project_foler, main_file, mode)
|
main_file = merge_tex_files_(project_foler, main_file, mode)
|
||||||
main_file = rm_comments(main_file)
|
main_file = rm_comments(main_file)
|
||||||
|
|
||||||
if mode == 'translate_zh':
|
if mode == "translate_zh":
|
||||||
# find paper documentclass
|
# find paper documentclass
|
||||||
pattern = re.compile(r'\\documentclass.*\n')
|
pattern = re.compile(r"\\documentclass.*\n")
|
||||||
match = pattern.search(main_file)
|
match = pattern.search(main_file)
|
||||||
assert match is not None, "Cannot find documentclass statement!"
|
assert match is not None, "Cannot find documentclass statement!"
|
||||||
position = match.end()
|
position = match.end()
|
||||||
add_ctex = '\\usepackage{ctex}\n'
|
add_ctex = "\\usepackage{ctex}\n"
|
||||||
add_url = '\\usepackage{url}\n' if '{url}' not in main_file else ''
|
add_url = "\\usepackage{url}\n" if "{url}" not in main_file else ""
|
||||||
main_file = main_file[:position] + add_ctex + add_url + main_file[position:]
|
main_file = main_file[:position] + add_ctex + add_url + main_file[position:]
|
||||||
# fontset=windows
|
# fontset=windows
|
||||||
import platform
|
import platform
|
||||||
main_file = re.sub(r"\\documentclass\[(.*?)\]{(.*?)}", r"\\documentclass[\1,fontset=windows,UTF8]{\2}",main_file)
|
|
||||||
main_file = re.sub(r"\\documentclass{(.*?)}", r"\\documentclass[fontset=windows,UTF8]{\1}",main_file)
|
main_file = re.sub(
|
||||||
|
r"\\documentclass\[(.*?)\]{(.*?)}",
|
||||||
|
r"\\documentclass[\1,fontset=windows,UTF8]{\2}",
|
||||||
|
main_file,
|
||||||
|
)
|
||||||
|
main_file = re.sub(
|
||||||
|
r"\\documentclass{(.*?)}",
|
||||||
|
r"\\documentclass[fontset=windows,UTF8]{\1}",
|
||||||
|
main_file,
|
||||||
|
)
|
||||||
# find paper abstract
|
# find paper abstract
|
||||||
pattern_opt1 = re.compile(r'\\begin\{abstract\}.*\n')
|
pattern_opt1 = re.compile(r"\\begin\{abstract\}.*\n")
|
||||||
pattern_opt2 = re.compile(r"\\abstract\{(.*?)\}", flags=re.DOTALL)
|
pattern_opt2 = re.compile(r"\\abstract\{(.*?)\}", flags=re.DOTALL)
|
||||||
match_opt1 = pattern_opt1.search(main_file)
|
match_opt1 = pattern_opt1.search(main_file)
|
||||||
match_opt2 = pattern_opt2.search(main_file)
|
match_opt2 = pattern_opt2.search(main_file)
|
||||||
@@ -385,7 +466,9 @@ def merge_tex_files(project_foler, main_file, mode):
|
|||||||
main_file = insert_abstract(main_file)
|
main_file = insert_abstract(main_file)
|
||||||
match_opt1 = pattern_opt1.search(main_file)
|
match_opt1 = pattern_opt1.search(main_file)
|
||||||
match_opt2 = pattern_opt2.search(main_file)
|
match_opt2 = pattern_opt2.search(main_file)
|
||||||
assert (match_opt1 is not None) or (match_opt2 is not None), "Cannot find paper abstract section!"
|
assert (match_opt1 is not None) or (
|
||||||
|
match_opt2 is not None
|
||||||
|
), "Cannot find paper abstract section!"
|
||||||
return main_file
|
return main_file
|
||||||
|
|
||||||
|
|
||||||
@@ -395,6 +478,7 @@ The GPT-Academic program cannot find abstract section in this paper.
|
|||||||
\end{abstract}
|
\end{abstract}
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
|
||||||
def insert_abstract(tex_content):
|
def insert_abstract(tex_content):
|
||||||
if "\\maketitle" in tex_content:
|
if "\\maketitle" in tex_content:
|
||||||
# find the position of "\maketitle"
|
# find the position of "\maketitle"
|
||||||
@@ -402,7 +486,13 @@ def insert_abstract(tex_content):
|
|||||||
# find the nearest ending line
|
# find the nearest ending line
|
||||||
end_line_index = tex_content.find("\n", find_index)
|
end_line_index = tex_content.find("\n", find_index)
|
||||||
# insert "abs_str" on the next line
|
# insert "abs_str" on the next line
|
||||||
modified_tex = tex_content[:end_line_index+1] + '\n\n' + insert_missing_abs_str + '\n\n' + tex_content[end_line_index+1:]
|
modified_tex = (
|
||||||
|
tex_content[: end_line_index + 1]
|
||||||
|
+ "\n\n"
|
||||||
|
+ insert_missing_abs_str
|
||||||
|
+ "\n\n"
|
||||||
|
+ tex_content[end_line_index + 1 :]
|
||||||
|
)
|
||||||
return modified_tex
|
return modified_tex
|
||||||
elif r"\begin{document}" in tex_content:
|
elif r"\begin{document}" in tex_content:
|
||||||
# find the position of "\maketitle"
|
# find the position of "\maketitle"
|
||||||
@@ -410,29 +500,39 @@ def insert_abstract(tex_content):
|
|||||||
# find the nearest ending line
|
# find the nearest ending line
|
||||||
end_line_index = tex_content.find("\n", find_index)
|
end_line_index = tex_content.find("\n", find_index)
|
||||||
# insert "abs_str" on the next line
|
# insert "abs_str" on the next line
|
||||||
modified_tex = tex_content[:end_line_index+1] + '\n\n' + insert_missing_abs_str + '\n\n' + tex_content[end_line_index+1:]
|
modified_tex = (
|
||||||
|
tex_content[: end_line_index + 1]
|
||||||
|
+ "\n\n"
|
||||||
|
+ insert_missing_abs_str
|
||||||
|
+ "\n\n"
|
||||||
|
+ tex_content[end_line_index + 1 :]
|
||||||
|
)
|
||||||
return modified_tex
|
return modified_tex
|
||||||
else:
|
else:
|
||||||
return tex_content
|
return tex_content
|
||||||
|
|
||||||
|
|
||||||
"""
|
"""
|
||||||
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
|
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
|
||||||
Post process
|
Post process
|
||||||
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
|
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
|
||||||
def mod_inbraket(match):
|
def mod_inbraket(match):
|
||||||
"""
|
"""
|
||||||
为啥chatgpt会把cite里面的逗号换成中文逗号呀
|
为啥chatgpt会把cite里面的逗号换成中文逗号呀
|
||||||
"""
|
"""
|
||||||
# get the matched string
|
# get the matched string
|
||||||
cmd = match.group(1)
|
cmd = match.group(1)
|
||||||
str_to_modify = match.group(2)
|
str_to_modify = match.group(2)
|
||||||
# modify the matched string
|
# modify the matched string
|
||||||
str_to_modify = str_to_modify.replace(':', ':') # 前面是中文冒号,后面是英文冒号
|
str_to_modify = str_to_modify.replace(":", ":") # 前面是中文冒号,后面是英文冒号
|
||||||
str_to_modify = str_to_modify.replace(',', ',') # 前面是中文逗号,后面是英文逗号
|
str_to_modify = str_to_modify.replace(",", ",") # 前面是中文逗号,后面是英文逗号
|
||||||
# str_to_modify = 'BOOM'
|
# str_to_modify = 'BOOM'
|
||||||
return "\\" + cmd + "{" + str_to_modify + "}"
|
return "\\" + cmd + "{" + str_to_modify + "}"
|
||||||
|
|
||||||
|
|
||||||
def fix_content(final_tex, node_string):
|
def fix_content(final_tex, node_string):
|
||||||
"""
|
"""
|
||||||
Fix common GPT errors to increase success rate
|
Fix common GPT errors to increase success rate
|
||||||
@@ -443,10 +543,10 @@ def fix_content(final_tex, node_string):
|
|||||||
final_tex = re.sub(r"\\([a-z]{2,10})\{([^\}]*?)\}", mod_inbraket, string=final_tex)
|
final_tex = re.sub(r"\\([a-z]{2,10})\{([^\}]*?)\}", mod_inbraket, string=final_tex)
|
||||||
|
|
||||||
if "Traceback" in final_tex and "[Local Message]" in final_tex:
|
if "Traceback" in final_tex and "[Local Message]" in final_tex:
|
||||||
final_tex = node_string # 出问题了,还原原文
|
final_tex = node_string # 出问题了,还原原文
|
||||||
if node_string.count('\\begin') != final_tex.count('\\begin'):
|
if node_string.count("\\begin") != final_tex.count("\\begin"):
|
||||||
final_tex = node_string # 出问题了,还原原文
|
final_tex = node_string # 出问题了,还原原文
|
||||||
if node_string.count('\_') > 0 and node_string.count('\_') > final_tex.count('\_'):
|
if node_string.count("\_") > 0 and node_string.count("\_") > final_tex.count("\_"):
|
||||||
# walk and replace any _ without \
|
# walk and replace any _ without \
|
||||||
final_tex = re.sub(r"(?<!\\)_", "\\_", final_tex)
|
final_tex = re.sub(r"(?<!\\)_", "\\_", final_tex)
|
||||||
|
|
||||||
@@ -454,24 +554,32 @@ def fix_content(final_tex, node_string):
         # this function count the number of { and }
         brace_level = 0
         for c in string:
-            if c == "{": brace_level += 1
-            elif c == "}": brace_level -= 1
+            if c == "{":
+                brace_level += 1
+            elif c == "}":
+                brace_level -= 1
         return brace_level
 
     def join_most(tex_t, tex_o):
         # this function join translated string and original string when something goes wrong
         p_t = 0
         p_o = 0
 
         def find_next(string, chars, begin):
             p = begin
             while p < len(string):
-                if string[p] in chars: return p, string[p]
+                if string[p] in chars:
+                    return p, string[p]
                 p += 1
             return None, None
 
         while True:
-            res1, char = find_next(tex_o, ['{','}'], p_o)
-            if res1 is None: break
+            res1, char = find_next(tex_o, ["{", "}"], p_o)
+            if res1 is None:
+                break
             res2, char = find_next(tex_t, [char], p_t)
-            if res2 is None: break
+            if res2 is None:
+                break
             p_o = res1 + 1
             p_t = res2 + 1
         return tex_t[:p_t] + tex_o[p_o:]
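The brace counter reformatted above returns the net brace balance, which the caller uses to detect a translation that dropped a `{` or `}`. A directly runnable reduction:

```python
def count_braces(string):
    # Net brace balance: positive means unclosed "{", negative means extra "}".
    brace_level = 0
    for c in string:
        if c == "{":
            brace_level += 1
        elif c == "}":
            brace_level -= 1
    return brace_level

print(count_braces(r"\textbf{ok}"))  # 0
print(count_braces(r"\textbf{ok"))   # 1
```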
@@ -480,10 +588,14 @@ def fix_content(final_tex, node_string):
         # 出问题了,还原部分原文,保证括号正确
         final_tex = join_most(final_tex, node_string)
     return final_tex
 
 
 def compile_latex_with_timeout(command, cwd, timeout=60):
     import subprocess
-    process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwd)
+
+    process = subprocess.Popen(
+        command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwd
+    )
     try:
         stdout, stderr = process.communicate(timeout=timeout)
     except subprocess.TimeoutExpired:
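`compile_latex_with_timeout` wraps a shell command in `Popen` + `communicate(timeout=...)` and kills the compiler if it hangs. A standalone sketch of the same pattern (the command strings below are ours, not from the repository; assumes a POSIX shell):

```python
import subprocess

def run_with_timeout(command, cwd=None, timeout=60):
    # Same shape as compile_latex_with_timeout: False on timeout, True otherwise.
    process = subprocess.Popen(
        command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwd
    )
    try:
        stdout, stderr = process.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        process.kill()
        process.communicate()  # reap the killed process
        return False
    return True

print(run_with_timeout("echo ok", timeout=10))  # True
```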
@@ -493,43 +605,52 @@ def compile_latex_with_timeout(command, cwd, timeout=60):
         return False
     return True
 
 
 def run_in_subprocess_wrapper_func(func, args, kwargs, return_dict, exception_dict):
     import sys
 
     try:
         result = func(*args, **kwargs)
-        return_dict['result'] = result
+        return_dict["result"] = result
     except Exception as e:
         exc_info = sys.exc_info()
-        exception_dict['exception'] = exc_info
+        exception_dict["exception"] = exc_info
 
 
 def run_in_subprocess(func):
     import multiprocessing
 
     def wrapper(*args, **kwargs):
         return_dict = multiprocessing.Manager().dict()
         exception_dict = multiprocessing.Manager().dict()
-        process = multiprocessing.Process(target=run_in_subprocess_wrapper_func,
-                                          args=(func, args, kwargs, return_dict, exception_dict))
+        process = multiprocessing.Process(
+            target=run_in_subprocess_wrapper_func,
+            args=(func, args, kwargs, return_dict, exception_dict),
+        )
         process.start()
         process.join()
         process.close()
-        if 'exception' in exception_dict:
+        if "exception" in exception_dict:
             # ooops, the subprocess ran into an exception
-            exc_info = exception_dict['exception']
+            exc_info = exception_dict["exception"]
             raise exc_info[1].with_traceback(exc_info[2])
-        if 'result' in return_dict.keys():
+        if "result" in return_dict.keys():
             # If the subprocess ran successfully, return the result
-            return return_dict['result']
+            return return_dict["result"]
 
     return wrapper
 
 
 def _merge_pdfs(pdf1_path, pdf2_path, output_path):
     import PyPDF2  # PyPDF2这个库有严重的内存泄露问题,把它放到子进程中运行,从而方便内存的释放
 
     Percent = 0.95
     # raise RuntimeError('PyPDF2 has a serious memory leak problem, please use other tools to merge PDF files.')
     # Open the first PDF file
-    with open(pdf1_path, 'rb') as pdf1_file:
+    with open(pdf1_path, "rb") as pdf1_file:
         pdf1_reader = PyPDF2.PdfFileReader(pdf1_file)
         # Open the second PDF file
-        with open(pdf2_path, 'rb') as pdf2_file:
+        with open(pdf2_path, "rb") as pdf2_file:
             pdf2_reader = PyPDF2.PdfFileReader(pdf2_file)
             # Create a new PDF file to store the merged pages
             output_writer = PyPDF2.PdfFileWriter()
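The `run_in_subprocess` wrapper reformatted above isolates a leaky callee (here, PyPDF2) in a child process so its memory is reclaimed on exit: results travel back through a `Manager` dict and exceptions through a second dict. A directly runnable reduction of that pattern (the `leaky_square` example is ours; only the success path is exercised, and fork-based platforms are assumed):

```python
import multiprocessing
import sys


def _subprocess_wrapper(func, args, kwargs, return_dict, exception_dict):
    # Runs in the child: stash either the result or the exception info.
    try:
        return_dict["result"] = func(*args, **kwargs)
    except Exception:
        exception_dict["exception"] = sys.exc_info()


def run_in_subprocess(func):
    def wrapper(*args, **kwargs):
        return_dict = multiprocessing.Manager().dict()
        exception_dict = multiprocessing.Manager().dict()
        process = multiprocessing.Process(
            target=_subprocess_wrapper,
            args=(func, args, kwargs, return_dict, exception_dict),
        )
        process.start()
        process.join()
        process.close()  # child memory is fully released here
        if "exception" in exception_dict:
            exc_info = exception_dict["exception"]
            raise exc_info[1].with_traceback(exc_info[2])
        if "result" in return_dict.keys():
            return return_dict["result"]

    return wrapper


def leaky_square(x):
    # Stand-in for a function that leaks memory in-process.
    return x * x


if __name__ == "__main__":
    print(run_in_subprocess(leaky_square)(7))  # 49
```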
@@ -549,14 +670,25 @@ def _merge_pdfs(pdf1_path, pdf2_path, output_path):
                 page2 = PyPDF2.PageObject.createBlankPage(pdf1_reader)
             # Create a new empty page with double width
             new_page = PyPDF2.PageObject.createBlankPage(
-                width = int(int(page1.mediaBox.getWidth()) + int(page2.mediaBox.getWidth()) * Percent),
-                height = max(page1.mediaBox.getHeight(), page2.mediaBox.getHeight())
+                width=int(
+                    int(page1.mediaBox.getWidth())
+                    + int(page2.mediaBox.getWidth()) * Percent
+                ),
+                height=max(page1.mediaBox.getHeight(), page2.mediaBox.getHeight()),
             )
             new_page.mergeTranslatedPage(page1, 0, 0)
-            new_page.mergeTranslatedPage(page2, int(int(page1.mediaBox.getWidth())-int(page2.mediaBox.getWidth())* (1-Percent)), 0)
+            new_page.mergeTranslatedPage(
+                page2,
+                int(
+                    int(page1.mediaBox.getWidth())
+                    - int(page2.mediaBox.getWidth()) * (1 - Percent)
+                ),
+                0,
+            )
             output_writer.addPage(new_page)
     # Save the merged PDF file
-    with open(output_path, 'wb') as output_file:
+    with open(output_path, "wb") as output_file:
         output_writer.write(output_file)
 
-merge_pdfs = run_in_subprocess(_merge_pdfs) # PyPDF2这个库有严重的内存泄露问题,把它放到子进程中运行,从而方便内存的释放
+
+merge_pdfs = run_in_subprocess(_merge_pdfs)  # PyPDF2这个库有严重的内存泄露问题,把它放到子进程中运行,从而方便内存的释放
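The page-geometry arithmetic restyled above places two pages side by side: the merged page is page1's width plus 95% of page2's width, and page2 is shifted left by the 5% overlap. The same arithmetic standalone (the 612 pt US-Letter width is an example value of ours):

```python
# Width of the merged page and x-offset of the second page, as in _merge_pdfs.
Percent = 0.95
w1, w2 = 612, 612  # page widths in points (example values)
merged_width = int(w1 + w2 * Percent)
x_offset = int(w1 - w2 * (1 - Percent))
print(merged_width, x_offset)  # 1193 581
```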
@@ -1,6 +1,7 @@
-from toolbox import CatchException, update_ui, gen_time_str
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-from .crazy_utils import input_clipping
+import os
+from toolbox import CatchException, update_ui, gen_time_str, promote_file_to_downloadzone
+from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
+from crazy_functions.crazy_utils import input_clipping
 
 def inspect_dependency(chatbot, history):
     # 尝试导入依赖,如果缺少依赖,则给出安装建议
@@ -27,9 +28,10 @@ def eval_manim(code):
     class_name = get_class_name(code)
 
     try:
+        time_str = gen_time_str()
         subprocess.check_output([sys.executable, '-c', f"from gpt_log.MyAnimation import {class_name}; {class_name}().render()"])
-        shutil.move('media/videos/1080p60/{class_name}.mp4', f'gpt_log/{class_name}-{gen_time_str()}.mp4')
-        return f'gpt_log/{gen_time_str()}.mp4'
+        shutil.move(f'media/videos/1080p60/{class_name}.mp4', f'gpt_log/{class_name}-{time_str}.mp4')
+        return f'gpt_log/{time_str}.mp4'
     except subprocess.CalledProcessError as e:
         output = e.output.decode()
         print(f"Command returned non-zero exit status {e.returncode}: {output}.")
@@ -94,6 +96,8 @@ def 动画生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
     res = eval_manim(code)
 
     chatbot.append(("生成的视频文件路径", res))
+    if os.path.exists(res):
+        promote_file_to_downloadzone(res, chatbot=chatbot)
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
 
 # 在这里放一些网上搜集的demo,辅助gpt生成代码
@@ -26,4 +26,46 @@ def 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
     )
     chatbot[-1] = (i_say, gpt_say)
     history.append(i_say);history.append(gpt_say)
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
+
+
+PROMPT = """
+请你给出围绕“{subject}”的逻辑关系图,使用mermaid语法,mermaid语法举例:
+```mermaid
+graph TD
+    P(编程) --> L1(Python)
+    P(编程) --> L2(C)
+    P(编程) --> L3(C++)
+    P(编程) --> L4(Javascipt)
+    P(编程) --> L5(PHP)
+```
+"""
+
+@CatchException
+def 测试图表渲染(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+    """
+    txt             输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
+    llm_kwargs      gpt模型参数,如温度和top_p等,一般原样传递下去就行
+    plugin_kwargs   插件模型的参数,用于灵活调整复杂功能的各种参数
+    chatbot         聊天显示框的句柄,用于显示给用户
+    history         聊天历史,前情提要
+    system_prompt   给gpt的静默提醒
+    web_port        当前软件运行的端口号
+    """
+    history = []    # 清空历史,以免输入溢出
+    chatbot.append(("这是什么功能?", "一个测试mermaid绘制图表的功能,您可以在输入框中输入一些关键词,然后使用mermaid+llm绘制图表。"))
+    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
+
+    if txt == "": txt = "空白的输入栏"    # 调皮一下
+
+    i_say_show_user = f'请绘制有关“{txt}”的逻辑关系图。'
+    i_say = PROMPT.format(subject=txt)
+    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
+        inputs=i_say,
+        inputs_show_user=i_say_show_user,
+        llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
+        sys_prompt=""
+    )
+    history.append(i_say); history.append(gpt_say)
+    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
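The new 测试图表渲染 plugin relies on the frontend rendering any mermaid fenced block that comes back in the model's reply. A small illustrative helper for pulling such a block out of a reply string — this helper is hypothetical, not part of the repository (the fence marker is built programmatically here only to avoid embedding literal triple backticks):

```python
import re

def extract_mermaid(reply):
    # Hypothetical helper: return the body of the first ```mermaid block, or None.
    fence = "`" * 3
    pattern = fence + r"mermaid\s*(.*?)" + fence
    m = re.search(pattern, reply, flags=re.S)
    return m.group(1).strip() if m else None

reply = "以下是图表:\n" + "`" * 3 + "mermaid\ngraph TD\n    A --> B\n" + "`" * 3
print(extract_mermaid(reply))
```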
Binary file not shown.
@@ -352,9 +352,9 @@ def step_1_core_key_translate():
             chinese_core_keys_norepeat_mapping.update({k:cached_translation[k]})
     chinese_core_keys_norepeat_mapping = dict(sorted(chinese_core_keys_norepeat_mapping.items(), key=lambda x: -len(x[0])))
 
-    # ===============================================
+    # =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
     # copy
-    # ===============================================
+    # =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
     def copy_source_code():
 
         from toolbox import get_conf
@@ -367,9 +367,9 @@ def step_1_core_key_translate():
         shutil.copytree('./', backup_dir, ignore=lambda x, y: blacklist)
     copy_source_code()
 
-    # ===============================================
+    # =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
     # primary key replace
-    # ===============================================
+    # =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
     directory_path = f'./multi-language/{LANG}/'
     for root, dirs, files in os.walk(directory_path):
         for file in files:
@@ -389,9 +389,9 @@ def step_1_core_key_translate():
 
 def step_2_core_key_translate():
 
-    # =================================================================================================
+    # =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
     # step2
-    # =================================================================================================
+    # =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 
     def load_string(strings, string_input):
         string_ = string_input.strip().strip(',').strip().strip('.').strip()
@@ -492,9 +492,9 @@ def step_2_core_key_translate():
     cached_translation.update(read_map_from_json(language=LANG_STD))
     cached_translation = dict(sorted(cached_translation.items(), key=lambda x: -len(x[0])))
 
-    # ===============================================
+    # =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
     # literal key replace
-    # ===============================================
+    # =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
     directory_path = f'./multi-language/{LANG}/'
     for root, dirs, files in os.walk(directory_path):
         for file in files:
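Both translation steps above sort their mapping longest-key-first so that a long Chinese phrase is replaced before any of its substrings. The same idiom standalone (the sample entries are ours):

```python
# Sort a replacement map so longer source keys are applied first.
mapping = {"图表": "chart", "测试图表渲染": "test_chart_rendering"}
ordered = dict(sorted(mapping.items(), key=lambda x: -len(x[0])))
print(list(ordered))  # ['测试图表渲染', '图表']
```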
@@ -498,22 +498,6 @@ if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-
         })
     except:
         print(trimmed_format_exc())
-if "chatgpt_website" in AVAIL_LLM_MODELS:   # 接入一些逆向工程https://github.com/acheong08/ChatGPT-to-API/
-    try:
-        from .bridge_chatgpt_website import predict_no_ui_long_connection as chatgpt_website_noui
-        from .bridge_chatgpt_website import predict as chatgpt_website_ui
-        model_info.update({
-            "chatgpt_website": {
-                "fn_with_ui": chatgpt_website_ui,
-                "fn_without_ui": chatgpt_website_noui,
-                "endpoint": openai_endpoint,
-                "max_token": 4096,
-                "tokenizer": tokenizer_gpt35,
-                "token_cnt": get_token_num_gpt35,
-            }
-        })
-    except:
-        print(trimmed_format_exc())
 if "spark" in AVAIL_LLM_MODELS:   # 讯飞星火认知大模型
     try:
         from .bridge_spark import predict_no_ui_long_connection as spark_noui
@@ -610,6 +594,23 @@ if "deepseekcoder" in AVAIL_LLM_MODELS: # deepseekcoder
         })
     except:
         print(trimmed_format_exc())
+# if "skylark" in AVAIL_LLM_MODELS:
+#     try:
+#         from .bridge_skylark2 import predict_no_ui_long_connection as skylark_noui
+#         from .bridge_skylark2 import predict as skylark_ui
+#         model_info.update({
+#             "skylark": {
+#                 "fn_with_ui": skylark_ui,
+#                 "fn_without_ui": skylark_noui,
+#                 "endpoint": None,
+#                 "max_token": 4096,
+#                 "tokenizer": tokenizer_gpt35,
+#                 "token_cnt": get_token_num_gpt35,
+#             }
+#         })
+#     except:
+#         print(trimmed_format_exc())
+
 
 # <-- 用于定义和切换多个azure模型 -->
 AZURE_CFG_ARRAY = get_conf("AZURE_CFG_ARRAY")
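The hunks above add and remove entries in `bridge_all`'s single `model_info` registry, which maps a model name to its UI/non-UI entry points, endpoint, and token accounting. A reduced sketch of that registry shape (field names follow the diff; the handler functions here are placeholders, not the repository's):

```python
# Minimal model registry in the shape used by bridge_all.
model_info = {}

def spark_ui(*args, **kwargs):
    ...  # placeholder for the streaming UI handler

def spark_noui(*args, **kwargs):
    ...  # placeholder for the long-connection handler

model_info.update({
    "spark": {
        "fn_with_ui": spark_ui,
        "fn_without_ui": spark_noui,
        "endpoint": None,
        "max_token": 4096,
    }
})

print(sorted(model_info))  # ['spark']
```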
@@ -244,6 +244,9 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
                 if has_choices and not choice_valid:
                     # 一些垃圾第三方接口的出现这样的错误
                     continue
+                if ('data: [DONE]' not in chunk_decoded) and len(chunk_decoded) > 0 and (chunkjson is None):
+                    # 传递进来一些奇怪的东西
+                    raise ValueError(f'无法读取以下数据,请检查配置。\n\n{chunk_decoded}')
                 # 前者是API2D的结束条件,后者是OPENAI的结束条件
                 if ('data: [DONE]' in chunk_decoded) or (len(chunkjson['choices'][0]["delta"]) == 0):
                     # 判定为数据流的结束,gpt_replying_buffer也写完了
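The guard added above aborts the stream when a chunk is non-empty, carries no parsed JSON, and is not the `data: [DONE]` sentinel. A standalone sketch of that check (the function name and sample payloads are ours):

```python
def reject_junk_chunk(chunk_decoded, chunkjson):
    # Mirrors the added guard: non-empty payload with no parsed JSON and no
    # [DONE] sentinel means the endpoint sent something unusable.
    if ("data: [DONE]" not in chunk_decoded) and len(chunk_decoded) > 0 and (chunkjson is None):
        raise ValueError(f"无法读取以下数据,请检查配置。\n\n{chunk_decoded}")

reject_junk_chunk("data: [DONE]", None)        # sentinel: passes silently
reject_junk_chunk('data: {"x": 1}', {"x": 1})  # parsed chunk: passes silently
```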
@@ -19,7 +19,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
     # 检查API_KEY
     if get_conf("GEMINI_API_KEY") == "":
         raise ValueError(f"请配置 GEMINI_API_KEY。")
 
     genai = GoogleChatInit()
     watch_dog_patience = 5 # 看门狗的耐心, 设置5秒即可
     gpt_replying_buffer = ''
@@ -50,6 +50,11 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
         yield from update_ui_lastest_msg(f"请配置 GEMINI_API_KEY。", chatbot=chatbot, history=history, delay=0)
         return
 
+    # 适配润色区域
+    if additional_fn is not None:
+        from core_functional import handle_core_functionality
+        inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
+
     if "vision" in llm_kwargs["llm_model"]:
         have_recent_file, image_paths = have_any_recent_upload_image_files(chatbot)
         def make_media_input(inputs, image_paths):
@@ -1,16 +1,17 @@
|
|||||||
"""
|
"""
|
||||||
========================================================================
|
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
|
||||||
第一部分:来自EdgeGPT.py
|
第一部分:来自EdgeGPT.py
|
||||||
https://github.com/acheong08/EdgeGPT
|
https://github.com/acheong08/EdgeGPT
|
||||||
========================================================================
|
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
|
||||||
"""
|
"""
|
||||||
from .edge_gpt_free import Chatbot as NewbingChatbot
|
from .edge_gpt_free import Chatbot as NewbingChatbot
|
||||||
|
|
||||||
load_message = "等待NewBing响应。"
|
load_message = "等待NewBing响应。"
|
||||||
|
|
||||||
"""
|
"""
|
||||||
========================================================================
|
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
|
||||||
第二部分:子进程Worker(调用主体)
|
第二部分:子进程Worker(调用主体)
|
||||||
========================================================================
|
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
|
||||||
"""
|
"""
|
||||||
import time
|
import time
|
||||||
import json
|
import json
|
||||||
@@ -22,19 +23,30 @@ import threading
|
|||||||
from toolbox import update_ui, get_conf, trimmed_format_exc
|
from toolbox import update_ui, get_conf, trimmed_format_exc
|
||||||
from multiprocessing import Process, Pipe
|
from multiprocessing import Process, Pipe
|
||||||
|
|
||||||
|
|
||||||
def preprocess_newbing_out(s):
|
def preprocess_newbing_out(s):
|
||||||
pattern = r'\^(\d+)\^' # 匹配^数字^
|
pattern = r"\^(\d+)\^" # 匹配^数字^
|
||||||
sub = lambda m: '('+m.group(1)+')' # 将匹配到的数字作为替换值
|
sub = lambda m: "(" + m.group(1) + ")" # 将匹配到的数字作为替换值
|
||||||
result = re.sub(pattern, sub, s) # 替换操作
|
result = re.sub(pattern, sub, s) # 替换操作
|
||||||
if '[1]' in result:
|
if "[1]" in result:
|
||||||
result += '\n\n```reference\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n'
|
result += (
|
||||||
|
"\n\n```reference\n"
|
||||||
|
+ "\n".join([r for r in result.split("\n") if r.startswith("[")])
|
||||||
|
+ "\n```\n"
|
||||||
|
)
|
||||||
return result
|
return result
|
||||||
|
|
||||||
|
|
||||||
def preprocess_newbing_out_simple(result):
|
def preprocess_newbing_out_simple(result):
|
||||||
if '[1]' in result:
|
if "[1]" in result:
|
||||||
result += '\n\n```reference\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n'
|
result += (
|
||||||
|
"\n\n```reference\n"
|
||||||
|
+ "\n".join([r for r in result.split("\n") if r.startswith("[")])
|
||||||
|
+ "\n```\n"
|
||||||
|
)
|
||||||
return result
|
return result
|
||||||
|
|
||||||
|
|
||||||
class NewBingHandle(Process):
|
class NewBingHandle(Process):
|
||||||
def __init__(self):
|
def __init__(self):
|
||||||
super().__init__(daemon=True)
|
super().__init__(daemon=True)
|
||||||
@@ -46,11 +58,12 @@ class NewBingHandle(Process):
|
|||||||
self.check_dependency()
|
self.check_dependency()
|
||||||
self.start()
|
self.start()
|
||||||
self.threadLock = threading.Lock()
|
self.threadLock = threading.Lock()
|
||||||
|
|
||||||
def check_dependency(self):
|
def check_dependency(self):
|
||||||
try:
|
try:
|
||||||
self.success = False
|
self.success = False
|
||||||
import certifi, httpx, rich
|
import certifi, httpx, rich
|
||||||
|
|
||||||
self.info = "依赖检测通过,等待NewBing响应。注意目前不能多人同时调用NewBing接口(有线程锁),否则将导致每个人的NewBing问询历史互相渗透。调用NewBing时,会自动使用已配置的代理。"
|
self.info = "依赖检测通过,等待NewBing响应。注意目前不能多人同时调用NewBing接口(有线程锁),否则将导致每个人的NewBing问询历史互相渗透。调用NewBing时,会自动使用已配置的代理。"
|
||||||
self.success = True
|
self.success = True
|
||||||
except:
|
except:
|
||||||
@@ -62,18 +75,19 @@ class NewBingHandle(Process):
|
|||||||
|
|
||||||
async def async_run(self):
|
async def async_run(self):
|
||||||
# 读取配置
|
# 读取配置
|
||||||
NEWBING_STYLE = get_conf('NEWBING_STYLE')
|
NEWBING_STYLE = get_conf("NEWBING_STYLE")
|
||||||
from request_llms.bridge_all import model_info
|
from request_llms.bridge_all import model_info
|
||||||
endpoint = model_info['newbing']['endpoint']
|
|
||||||
|
endpoint = model_info["newbing"]["endpoint"]
|
||||||
while True:
|
while True:
|
||||||
# 等待
|
# 等待
|
||||||
kwargs = self.child.recv()
|
kwargs = self.child.recv()
|
||||||
question=kwargs['query']
|
question = kwargs["query"]
|
||||||
history=kwargs['history']
|
history = kwargs["history"]
|
||||||
system_prompt=kwargs['system_prompt']
|
system_prompt = kwargs["system_prompt"]
|
||||||
|
|
||||||
# 是否重置
|
# 是否重置
|
||||||
if len(self.local_history) > 0 and len(history)==0:
|
if len(self.local_history) > 0 and len(history) == 0:
|
||||||
await self.newbing_model.reset()
|
await self.newbing_model.reset()
|
||||||
self.local_history = []
|
self.local_history = []
|
||||||
|
|
||||||
@@ -81,34 +95,33 @@ class NewBingHandle(Process):
|
|||||||
prompt = ""
|
prompt = ""
|
||||||
if system_prompt not in self.local_history:
|
if system_prompt not in self.local_history:
|
||||||
self.local_history.append(system_prompt)
|
self.local_history.append(system_prompt)
|
||||||
prompt += system_prompt + '\n'
|
prompt += system_prompt + "\n"
|
||||||
|
|
||||||
# 追加历史
|
# 追加历史
|
||||||
for ab in history:
|
for ab in history:
|
||||||
a, b = ab
|
a, b = ab
|
||||||
if a not in self.local_history:
|
if a not in self.local_history:
|
||||||
self.local_history.append(a)
|
self.local_history.append(a)
|
||||||
prompt += a + '\n'
|
prompt += a + "\n"
|
||||||
|
|
||||||
# 问题
|
# 问题
|
||||||
prompt += question
|
prompt += question
|
||||||
self.local_history.append(question)
|
self.local_history.append(question)
|
||||||
print('question:', prompt)
|
print("question:", prompt)
|
||||||
# 提交
|
# 提交
|
||||||
async for final, response in self.newbing_model.ask_stream(
|
async for final, response in self.newbing_model.ask_stream(
|
||||||
prompt=question,
|
prompt=question,
|
||||||
conversation_style=NEWBING_STYLE, # ["creative", "balanced", "precise"]
|
conversation_style=NEWBING_STYLE, # ["creative", "balanced", "precise"]
|
||||||
wss_link=endpoint, # "wss://sydney.bing.com/sydney/ChatHub"
|
wss_link=endpoint, # "wss://sydney.bing.com/sydney/ChatHub"
|
||||||
):
|
):
|
||||||
if not final:
|
if not final:
|
||||||
print(response)
|
print(response)
|
||||||
self.child.send(str(response))
|
self.child.send(str(response))
|
||||||
else:
|
else:
|
||||||
print('-------- receive final ---------')
|
print("-------- receive final ---------")
|
||||||
self.child.send('[Finish]')
|
self.child.send("[Finish]")
|
||||||
# self.local_history.append(response)
|
# self.local_history.append(response)
|
||||||
|
|
||||||
|
|
||||||
def run(self):
|
def run(self):
|
||||||
"""
|
"""
|
||||||
这个函数运行在子进程
|
这个函数运行在子进程
|
||||||
@@ -118,32 +131,37 @@ class NewBingHandle(Process):
|
|||||||
self.local_history = []
|
self.local_history = []
|
||||||
if (self.newbing_model is None) or (not self.success):
|
if (self.newbing_model is None) or (not self.success):
|
||||||
# 代理设置
|
# 代理设置
|
||||||
proxies, NEWBING_COOKIES = get_conf('proxies', 'NEWBING_COOKIES')
|
proxies, NEWBING_COOKIES = get_conf("proxies", "NEWBING_COOKIES")
|
||||||
if proxies is None:
|
if proxies is None:
|
||||||
self.proxies_https = None
|
self.proxies_https = None
|
||||||
else:
|
else:
|
||||||
self.proxies_https = proxies['https']
|
self.proxies_https = proxies["https"]
|
||||||
|
|
||||||
if (NEWBING_COOKIES is not None) and len(NEWBING_COOKIES) > 100:
|
if (NEWBING_COOKIES is not None) and len(NEWBING_COOKIES) > 100:
|
||||||
try:
|
try:
|
||||||
cookies = json.loads(NEWBING_COOKIES)
|
cookies = json.loads(NEWBING_COOKIES)
|
||||||
except:
|
except:
|
||||||
self.success = False
|
self.success = False
|
||||||
tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
|
tb_str = "\n```\n" + trimmed_format_exc() + "\n```\n"
|
||||||
self.child.send(f'[Local Message] NEWBING_COOKIES未填写或有格式错误。')
|
self.child.send(f"[Local Message] NEWBING_COOKIES未填写或有格式错误。")
|
||||||
self.child.send('[Fail]'); self.child.send('[Finish]')
|
self.child.send("[Fail]")
|
||||||
|
self.child.send("[Finish]")
|
||||||
raise RuntimeError(f"NEWBING_COOKIES未填写或有格式错误。")
|
raise RuntimeError(f"NEWBING_COOKIES未填写或有格式错误。")
|
||||||
else:
|
else:
|
||||||
cookies = None
|
cookies = None
|
||||||
|
|
||||||
try:
|
try:
|
||||||
self.newbing_model = NewbingChatbot(proxy=self.proxies_https, cookies=cookies)
|
self.newbing_model = NewbingChatbot(
|
||||||
|
proxy=self.proxies_https, cookies=cookies
|
||||||
|
)
|
||||||
except:
|
except:
|
||||||
self.success = False
|
self.success = False
|
||||||
tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
|
tb_str = "\n```\n" + trimmed_format_exc() + "\n```\n"
|
||||||
self.child.send(f'[Local Message] 不能加载Newbing组件,请注意Newbing组件已不再维护。{tb_str}')
|
self.child.send(
|
||||||
self.child.send('[Fail]')
|
f"[Local Message] 不能加载Newbing组件,请注意Newbing组件已不再维护。{tb_str}"
|
||||||
self.child.send('[Finish]')
|
)
|
||||||
|
self.child.send("[Fail]")
|
||||||
|
self.child.send("[Finish]")
|
||||||
raise RuntimeError(f"不能加载Newbing组件,请注意Newbing组件已不再维护。")
|
raise RuntimeError(f"不能加载Newbing组件,请注意Newbing组件已不再维护。")
|
||||||
|
|
||||||
self.success = True
|
self.success = True
|
||||||
@@ -151,66 +169,100 @@ class NewBingHandle(Process):
|
|||||||
# 进入任务等待状态
|
# 进入任务等待状态
|
||||||
asyncio.run(self.async_run())
|
asyncio.run(self.async_run())
|
||||||
except Exception:
|
except Exception:
|
||||||
tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
|
tb_str = "\n```\n" + trimmed_format_exc() + "\n```\n"
|
||||||
self.child.send(f'[Local Message] Newbing 请求失败,报错信息如下. 如果是与网络相关的问题,建议更换代理协议(推荐http)或代理节点 {tb_str}.')
|
self.child.send(
|
||||||
self.child.send('[Fail]')
|
f"[Local Message] Newbing 请求失败,报错信息如下. 如果是与网络相关的问题,建议更换代理协议(推荐http)或代理节点 {tb_str}."
|
||||||
self.child.send('[Finish]')
|
)
|
||||||
|
self.child.send("[Fail]")
|
||||||
|
self.child.send("[Finish]")
|
||||||
|
|
||||||
def stream_chat(self, **kwargs):
|
def stream_chat(self, **kwargs):
|
||||||
"""
|
"""
|
||||||
这个函数运行在主进程
|
这个函数运行在主进程
|
||||||
"""
|
"""
|
||||||
self.threadLock.acquire() # 获取线程锁
|
self.threadLock.acquire() # 获取线程锁
|
||||||
self.parent.send(kwargs) # 请求子进程
|
self.parent.send(kwargs) # 请求子进程
|
||||||
while True:
|
while True:
|
||||||
res = self.parent.recv() # 等待newbing回复的片段
|
res = self.parent.recv() # 等待newbing回复的片段
|
||||||
if res == '[Finish]': break # 结束
|
if res == "[Finish]":
|
||||||
elif res == '[Fail]': self.success = False; break # 失败
|
break # 结束
|
||||||
else: yield res # newbing回复的片段
|
elif res == "[Fail]":
|
||||||
self.threadLock.release() # 释放线程锁
|
self.success = False
|
||||||
|
break # 失败
|
||||||
|
else:
|
||||||
|
yield res # newbing回复的片段
|
||||||
|
self.threadLock.release() # 释放线程锁
|
||||||
|
|
||||||
|
|
||||||
"""
|
"""
|
||||||
========================================================================
|
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
|
||||||
第三部分:主进程统一调用函数接口
|
第三部分:主进程统一调用函数接口
|
||||||
========================================================================
|
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
|
||||||
"""
|
"""
|
||||||
global newbingfree_handle
|
global newbingfree_handle
|
||||||
newbingfree_handle = None
|
newbingfree_handle = None
|
||||||
|
|
||||||
-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+def predict_no_ui_long_connection(
+    inputs,
+    llm_kwargs,
+    history=[],
+    sys_prompt="",
+    observe_window=[],
+    console_slience=False,
+):
     """
     多线程方法
     函数的说明请见 request_llms/bridge_all.py
     """
     global newbingfree_handle
     if (newbingfree_handle is None) or (not newbingfree_handle.success):
         newbingfree_handle = NewBingHandle()
-        if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + newbingfree_handle.info
+        if len(observe_window) >= 1:
+            observe_window[0] = load_message + "\n\n" + newbingfree_handle.info
         if not newbingfree_handle.success:
             error = newbingfree_handle.info
             newbingfree_handle = None
             raise RuntimeError(error)

     # 没有 sys_prompt 接口,因此把prompt加入 history
     history_feedin = []
-    for i in range(len(history)//2):
-        history_feedin.append([history[2*i], history[2*i+1]] )
+    for i in range(len(history) // 2):
+        history_feedin.append([history[2 * i], history[2 * i + 1]])

     watch_dog_patience = 5  # 看门狗 (watchdog) 的耐心, 设置5秒即可
     response = ""
-    if len(observe_window) >= 1: observe_window[0] = "[Local Message] 等待NewBing响应中 ..."
-    for response in newbingfree_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
-        if len(observe_window) >= 1: observe_window[0] = preprocess_newbing_out_simple(response)
-        if len(observe_window) >= 2:
-            if (time.time()-observe_window[1]) > watch_dog_patience:
+    if len(observe_window) >= 1:
+        observe_window[0] = "[Local Message] 等待NewBing响应中 ..."
+    for response in newbingfree_handle.stream_chat(
+        query=inputs,
+        history=history_feedin,
+        system_prompt=sys_prompt,
+        max_length=llm_kwargs["max_length"],
+        top_p=llm_kwargs["top_p"],
+        temperature=llm_kwargs["temperature"],
+    ):
+        if len(observe_window) >= 1:
+            observe_window[0] = preprocess_newbing_out_simple(response)
+        if len(observe_window) >= 2:
+            if (time.time() - observe_window[1]) > watch_dog_patience:
                 raise RuntimeError("程序终止。")
     return preprocess_newbing_out_simple(response)


-def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
+def predict(
+    inputs,
+    llm_kwargs,
+    plugin_kwargs,
+    chatbot,
+    history=[],
+    system_prompt="",
+    stream=True,
+    additional_fn=None,
+):
     """
     单线程方法
     函数的说明请见 request_llms/bridge_all.py
     """
     chatbot.append((inputs, "[Local Message] 等待NewBing响应中 ..."))

@@ -219,27 +271,41 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
         newbingfree_handle = NewBingHandle()
         chatbot[-1] = (inputs, load_message + "\n\n" + newbingfree_handle.info)
         yield from update_ui(chatbot=chatbot, history=[])
         if not newbingfree_handle.success:
             newbingfree_handle = None
             return

     if additional_fn is not None:
         from core_functional import handle_core_functionality
-        inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
+
+        inputs, history = handle_core_functionality(
+            additional_fn, inputs, history, chatbot
+        )

     history_feedin = []
-    for i in range(len(history)//2):
-        history_feedin.append([history[2*i], history[2*i+1]] )
+    for i in range(len(history) // 2):
+        history_feedin.append([history[2 * i], history[2 * i + 1]])

     chatbot[-1] = (inputs, "[Local Message] 等待NewBing响应中 ...")
     response = "[Local Message] 等待NewBing响应中 ..."
-    yield from update_ui(chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。")
-    for response in newbingfree_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
+    yield from update_ui(
+        chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。"
+    )
+    for response in newbingfree_handle.stream_chat(
+        query=inputs,
+        history=history_feedin,
+        system_prompt=system_prompt,
+        max_length=llm_kwargs["max_length"],
+        top_p=llm_kwargs["top_p"],
+        temperature=llm_kwargs["temperature"],
+    ):
         chatbot[-1] = (inputs, preprocess_newbing_out(response))
-        yield from update_ui(chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。")
-    if response == "[Local Message] 等待NewBing响应中 ...": response = "[Local Message] NewBing响应异常,请刷新界面重试 ..."
+        yield from update_ui(
+            chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。"
+        )
+    if response == "[Local Message] 等待NewBing响应中 ...":
+        response = "[Local Message] NewBing响应异常,请刷新界面重试 ..."
     history.extend([inputs, response])
-    logging.info(f'[raw_input] {inputs}')
-    logging.info(f'[response] {response}')
+    logging.info(f"[raw_input] {inputs}")
+    logging.info(f"[response] {response}")
     yield from update_ui(chatbot=chatbot, history=history, msg="完成全部响应,请提交新问题。")
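Both bridge functions in the hunks above rebuild the flat `history` list into `[user, assistant]` pairs and poll a shared watchdog timestamp before each streamed chunk. A minimal standalone sketch of those two idioms (the helper names `pair_history` and `check_watchdog` are illustrative, not from the repo):

```python
import time

def pair_history(history):
    # Flat [q1, a1, q2, a2, ...] list -> [[q1, a1], [q2, a2], ...] pairs,
    # mirroring the `history_feedin` loop in the bridges above.
    return [[history[2 * i], history[2 * i + 1]] for i in range(len(history) // 2)]

def check_watchdog(observe_window, patience=5):
    # observe_window[1] holds the last keep-alive timestamp written by the UI;
    # if it goes stale for `patience` seconds, the worker aborts.
    if len(observe_window) >= 2 and (time.time() - observe_window[1]) > patience:
        raise RuntimeError("程序终止。")

print(pair_history(["q1", "a1", "q2", "a2"]))
```

A trailing unpaired entry (a question still awaiting its answer) is silently dropped by the integer division, which is exactly what the bridges rely on.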
request_llms/bridge_skylark2.py (new file, +67)
@@ -0,0 +1,67 @@
+import time
+from toolbox import update_ui, get_conf, update_ui_lastest_msg
+from toolbox import check_packages, report_exception
+
+model_name = '云雀大模型'
+
+def validate_key():
+    YUNQUE_SECRET_KEY = get_conf("YUNQUE_SECRET_KEY")
+    if YUNQUE_SECRET_KEY == '': return False
+    return True
+
+def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+    """
+    ⭐ 多线程方法
+    函数的说明请见 request_llms/bridge_all.py
+    """
+    watch_dog_patience = 5
+    response = ""
+
+    if validate_key() is False:
+        raise RuntimeError('请配置YUNQUE_SECRET_KEY')
+
+    from .com_skylark2api import YUNQUERequestInstance
+    sri = YUNQUERequestInstance()
+    for response in sri.generate(inputs, llm_kwargs, history, sys_prompt):
+        if len(observe_window) >= 1:
+            observe_window[0] = response
+        if len(observe_window) >= 2:
+            if (time.time()-observe_window[1]) > watch_dog_patience: raise RuntimeError("程序终止。")
+    return response
+
+def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
+    """
+    ⭐ 单线程方法
+    函数的说明请见 request_llms/bridge_all.py
+    """
+    chatbot.append((inputs, ""))
+    yield from update_ui(chatbot=chatbot, history=history)
+
+    # 尝试导入依赖,如果缺少依赖,则给出安装建议
+    try:
+        check_packages(["zhipuai"])
+    except:
+        yield from update_ui_lastest_msg(f"导入软件依赖失败。使用该模型需要额外依赖,安装方法```pip install --upgrade zhipuai```。",
+                                         chatbot=chatbot, history=history, delay=0)
+        return
+
+    if validate_key() is False:
+        yield from update_ui_lastest_msg(lastmsg="[Local Message] 请配置HUOSHAN_API_KEY", chatbot=chatbot, history=history, delay=0)
+        return
+
+    if additional_fn is not None:
+        from core_functional import handle_core_functionality
+        inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
+
+    # 开始接收回复
+    from .com_skylark2api import YUNQUERequestInstance
+    sri = YUNQUERequestInstance()
+    for response in sri.generate(inputs, llm_kwargs, history, system_prompt):
+        chatbot[-1] = (inputs, response)
+        yield from update_ui(chatbot=chatbot, history=history)
+
+    # 总结输出
+    if response == f"[Local Message] 等待{model_name}响应中 ...":
+        response = f"[Local Message] {model_name}响应异常 ..."
+    history.extend([inputs, response])
+    yield from update_ui(chatbot=chatbot, history=history)
@@ -7,14 +7,15 @@ import logging
 import time
 from toolbox import get_conf
 import asyncio
+
 load_message = "正在加载Claude组件,请稍候..."

 try:
     """
-    ========================================================================
+    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
     第一部分:Slack API Client
     https://github.com/yokonsan/claude-in-slack-api
-    ========================================================================
+    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
     """

     from slack_sdk.errors import SlackApiError
@@ -23,20 +24,23 @@ try:
     class SlackClient(AsyncWebClient):
         """SlackClient类用于与Slack API进行交互,实现消息发送、接收等功能。

         属性:
         - CHANNEL_ID:str类型,表示频道ID。

         方法:
         - open_channel():异步方法。通过调用conversations_open方法打开一个频道,并将返回的频道ID保存在属性CHANNEL_ID中。
         - chat(text: str):异步方法。向已打开的频道发送一条文本消息。
         - get_slack_messages():异步方法。获取已打开频道的最新消息并返回消息列表,目前不支持历史消息查询。
         - get_reply():异步方法。循环监听已打开频道的消息,如果收到"Typing…_"结尾的消息说明Claude还在继续输出,否则结束循环。

         """

         CHANNEL_ID = None

         async def open_channel(self):
-            response = await self.conversations_open(users=get_conf('SLACK_CLAUDE_BOT_ID'))
+            response = await self.conversations_open(
+                users=get_conf("SLACK_CLAUDE_BOT_ID")
+            )
             self.CHANNEL_ID = response["channel"]["id"]

         async def chat(self, text):
@@ -49,33 +53,39 @@ try:
         async def get_slack_messages(self):
             try:
                 # TODO:暂时不支持历史消息,因为在同一个频道里存在多人使用时历史消息渗透问题
-                resp = await self.conversations_history(channel=self.CHANNEL_ID, oldest=self.LAST_TS, limit=1)
-                msg = [msg for msg in resp["messages"]
-                       if msg.get("user") == get_conf('SLACK_CLAUDE_BOT_ID')]
+                resp = await self.conversations_history(
+                    channel=self.CHANNEL_ID, oldest=self.LAST_TS, limit=1
+                )
+                msg = [
+                    msg
+                    for msg in resp["messages"]
+                    if msg.get("user") == get_conf("SLACK_CLAUDE_BOT_ID")
+                ]
                 return msg
             except (SlackApiError, KeyError) as e:
                 raise RuntimeError(f"获取Slack消息失败。")

         async def get_reply(self):
             while True:
                 slack_msgs = await self.get_slack_messages()
                 if len(slack_msgs) == 0:
                     await asyncio.sleep(0.5)
                     continue

                 msg = slack_msgs[-1]
                 if msg["text"].endswith("Typing…_"):
                     yield False, msg["text"]
                 else:
                     yield True, msg["text"]
                     break

 except:
     pass

 """
-========================================================================
+=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 第二部分:子进程Worker(调用主体)
-========================================================================
+=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 """
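The `get_reply()` loop above polls the channel and treats a message ending in "Typing…_" as still-streaming output. That completeness test can be sketched synchronously (the function name `is_final` is illustrative):

```python
def is_final(text):
    # claude-in-slack appends "Typing…_" while the bot is still generating;
    # a message without that suffix is treated as the finished reply.
    return not text.endswith("Typing…_")

print(is_final("partial answer _Typing…_"))  # → False
print(is_final("finished answer"))           # → True
```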
@@ -88,7 +98,7 @@ class ClaudeHandle(Process):
         self.success = True
         self.local_history = []
         self.check_dependency()
         if self.success:
             self.start()
         self.threadLock = threading.Lock()
@@ -96,6 +106,7 @@ class ClaudeHandle(Process):
         try:
             self.success = False
             import slack_sdk
+
             self.info = "依赖检测通过,等待Claude响应。注意目前不能多人同时调用Claude接口(有线程锁),否则将导致每个人的Claude问询历史互相渗透。调用Claude时,会自动使用已配置的代理。"
             self.success = True
         except:
@@ -103,40 +114,44 @@ class ClaudeHandle(Process):
             self.success = False

     def ready(self):
         return self.claude_model is not None

     async def async_run(self):
         await self.claude_model.open_channel()
         while True:
             # 等待
             kwargs = self.child.recv()
-            question = kwargs['query']
-            history = kwargs['history']
+            question = kwargs["query"]
+            history = kwargs["history"]

             # 开始问问题
             prompt = ""

             # 问题
             prompt += question
-            print('question:', prompt)
+            print("question:", prompt)

             # 提交
             await self.claude_model.chat(prompt)

             # 获取回复
             async for final, response in self.claude_model.get_reply():
                 if not final:
                     print(response)
                     self.child.send(str(response))
                 else:
                     # 防止丢失最后一条消息
                     slack_msgs = await self.claude_model.get_slack_messages()
-                    last_msg = slack_msgs[-1]["text"] if slack_msgs and len(slack_msgs) > 0 else ""
+                    last_msg = (
+                        slack_msgs[-1]["text"]
+                        if slack_msgs and len(slack_msgs) > 0
+                        else ""
+                    )
                     if last_msg:
                         self.child.send(last_msg)
-                    print('-------- receive final ---------')
-                    self.child.send('[Finish]')
+                    print("-------- receive final ---------")
+                    self.child.send("[Finish]")

     def run(self):
         """
         这个函数运行在子进程
@@ -146,22 +161,24 @@ class ClaudeHandle(Process):
         self.local_history = []
         if (self.claude_model is None) or (not self.success):
             # 代理设置
-            proxies = get_conf('proxies')
+            proxies = get_conf("proxies")
             if proxies is None:
                 self.proxies_https = None
             else:
-                self.proxies_https = proxies['https']
+                self.proxies_https = proxies["https"]

             try:
-                SLACK_CLAUDE_USER_TOKEN = get_conf('SLACK_CLAUDE_USER_TOKEN')
-                self.claude_model = SlackClient(token=SLACK_CLAUDE_USER_TOKEN, proxy=self.proxies_https)
-                print('Claude组件初始化成功。')
+                SLACK_CLAUDE_USER_TOKEN = get_conf("SLACK_CLAUDE_USER_TOKEN")
+                self.claude_model = SlackClient(
+                    token=SLACK_CLAUDE_USER_TOKEN, proxy=self.proxies_https
+                )
+                print("Claude组件初始化成功。")
             except:
                 self.success = False
-                tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
-                self.child.send(f'[Local Message] 不能加载Claude组件。{tb_str}')
-                self.child.send('[Fail]')
-                self.child.send('[Finish]')
+                tb_str = "\n```\n" + trimmed_format_exc() + "\n```\n"
+                self.child.send(f"[Local Message] 不能加载Claude组件。{tb_str}")
+                self.child.send("[Fail]")
+                self.child.send("[Finish]")
                 raise RuntimeError(f"不能加载Claude组件。")

         self.success = True
@@ -169,42 +186,49 @@ class ClaudeHandle(Process):
             # 进入任务等待状态
             asyncio.run(self.async_run())
         except Exception:
-            tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
-            self.child.send(f'[Local Message] Claude失败 {tb_str}.')
-            self.child.send('[Fail]')
-            self.child.send('[Finish]')
+            tb_str = "\n```\n" + trimmed_format_exc() + "\n```\n"
+            self.child.send(f"[Local Message] Claude失败 {tb_str}.")
+            self.child.send("[Fail]")
+            self.child.send("[Finish]")

     def stream_chat(self, **kwargs):
         """
         这个函数运行在主进程
         """
         self.threadLock.acquire()
         self.parent.send(kwargs)  # 发送请求到子进程
         while True:
             res = self.parent.recv()  # 等待Claude回复的片段
-            if res == '[Finish]':
+            if res == "[Finish]":
                 break  # 结束
-            elif res == '[Fail]':
+            elif res == "[Fail]":
                 self.success = False
                 break
             else:
                 yield res  # Claude回复的片段
         self.threadLock.release()


 """
-========================================================================
+=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 第三部分:主进程统一调用函数接口
-========================================================================
+=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 """
 global claude_handle
 claude_handle = None


-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False):
+def predict_no_ui_long_connection(
+    inputs,
+    llm_kwargs,
+    history=[],
+    sys_prompt="",
+    observe_window=None,
+    console_slience=False,
+):
     """
     多线程方法
     函数的说明请见 request_llms/bridge_all.py
     """
     global claude_handle
     if (claude_handle is None) or (not claude_handle.success):
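`stream_chat` above drains a multiprocessing pipe until it sees a `[Finish]` or `[Fail]` sentinel string. The same protocol can be sketched in-process with a plain queue (the helper name `drain_stream` is illustrative, not from the repo):

```python
from queue import Queue

def drain_stream(q):
    # Collect partial responses until a sentinel arrives, mirroring the
    # [Finish]/[Fail] convention used by the worker processes above.
    chunks = []
    while True:
        res = q.get()
        if res == "[Finish]":
            break
        if res == "[Fail]":
            chunks.append("(failed)")
            break
        chunks.append(res)
    return chunks

q = Queue()
# Each payload is a progressively longer snapshot of the full reply,
# which is why the UI can simply overwrite chatbot[-1] with the latest one.
for piece in ("Hello", "Hello, wor", "Hello, world", "[Finish]"):
    q.put(piece)
print(drain_stream(q))
```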
@@ -217,24 +241,40 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",

     # 没有 sys_prompt 接口,因此把prompt加入 history
     history_feedin = []
-    for i in range(len(history)//2):
-        history_feedin.append([history[2*i], history[2*i+1]])
+    for i in range(len(history) // 2):
+        history_feedin.append([history[2 * i], history[2 * i + 1]])

     watch_dog_patience = 5  # 看门狗 (watchdog) 的耐心, 设置5秒即可
     response = ""
     observe_window[0] = "[Local Message] 等待Claude响应中 ..."
-    for response in claude_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
+    for response in claude_handle.stream_chat(
+        query=inputs,
+        history=history_feedin,
+        system_prompt=sys_prompt,
+        max_length=llm_kwargs["max_length"],
+        top_p=llm_kwargs["top_p"],
+        temperature=llm_kwargs["temperature"],
+    ):
         observe_window[0] = preprocess_newbing_out_simple(response)
         if len(observe_window) >= 2:
-            if (time.time()-observe_window[1]) > watch_dog_patience:
+            if (time.time() - observe_window[1]) > watch_dog_patience:
                 raise RuntimeError("程序终止。")
     return preprocess_newbing_out_simple(response)


-def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream=True, additional_fn=None):
+def predict(
+    inputs,
+    llm_kwargs,
+    plugin_kwargs,
+    chatbot,
+    history=[],
+    system_prompt="",
+    stream=True,
+    additional_fn=None,
+):
     """
     单线程方法
     函数的说明请见 request_llms/bridge_all.py
     """
     chatbot.append((inputs, "[Local Message] 等待Claude响应中 ..."))
@@ -249,21 +289,30 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp

     if additional_fn is not None:
         from core_functional import handle_core_functionality
-        inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
+
+        inputs, history = handle_core_functionality(
+            additional_fn, inputs, history, chatbot
+        )

     history_feedin = []
-    for i in range(len(history)//2):
-        history_feedin.append([history[2*i], history[2*i+1]])
+    for i in range(len(history) // 2):
+        history_feedin.append([history[2 * i], history[2 * i + 1]])

     chatbot[-1] = (inputs, "[Local Message] 等待Claude响应中 ...")
     response = "[Local Message] 等待Claude响应中 ..."
-    yield from update_ui(chatbot=chatbot, history=history, msg="Claude响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。")
-    for response in claude_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt):
+    yield from update_ui(
+        chatbot=chatbot, history=history, msg="Claude响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。"
+    )
+    for response in claude_handle.stream_chat(
+        query=inputs, history=history_feedin, system_prompt=system_prompt
+    ):
         chatbot[-1] = (inputs, preprocess_newbing_out(response))
-        yield from update_ui(chatbot=chatbot, history=history, msg="Claude响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。")
+        yield from update_ui(
+            chatbot=chatbot, history=history, msg="Claude响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。"
+        )
     if response == "[Local Message] 等待Claude响应中 ...":
         response = "[Local Message] Claude响应异常,请刷新界面重试 ..."
     history.extend([inputs, response])
-    logging.info(f'[raw_input] {inputs}')
-    logging.info(f'[response] {response}')
+    logging.info(f"[raw_input] {inputs}")
+    logging.info(f"[response] {response}")
     yield from update_ui(chatbot=chatbot, history=history, msg="完成全部响应,请提交新问题。")
@@ -42,7 +42,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
     try:
         check_packages(["zhipuai"])
     except:
-        yield from update_ui_lastest_msg(f"导入软件依赖失败。使用该模型需要额外依赖,安装方法```pip install --upgrade zhipuai```。",
+        yield from update_ui_lastest_msg(f"导入软件依赖失败。使用该模型需要额外依赖,安装方法```pip install zhipuai==1.0.7```。",
                                          chatbot=chatbot, history=history, delay=0)
         return
@@ -12,7 +12,7 @@ from toolbox import get_conf, encode_image, get_pictures_list
 proxies, TIMEOUT_SECONDS = get_conf("proxies", "TIMEOUT_SECONDS")

 """
-========================================================================
+=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 第五部分 一些文件处理方法
 files_filter_handler 根据type过滤文件
 input_encode_handler 提取input中的文件,并解析
@@ -21,6 +21,7 @@ link_mtime_to_md 文件增加本地时间参数,避免下载到缓存文件
 html_view_blank 超链接
 html_local_file 本地文件取相对路径
 to_markdown_tabs 文件list 转换为 md tab
+=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 """
request_llms/com_skylark2api.py (new file, +95)
@@ -0,0 +1,95 @@
+from toolbox import get_conf
+import threading
+import logging
+import os
+
+timeout_bot_msg = '[Local Message] Request timeout. Network error.'
+# os.environ['VOLC_ACCESSKEY'] = ''
+# os.environ['VOLC_SECRETKEY'] = ''
+
+class YUNQUERequestInstance():
+    def __init__(self):
+        self.time_to_yield_event = threading.Event()
+        self.time_to_exit_event = threading.Event()
+        self.result_buf = ""
+
+    def generate(self, inputs, llm_kwargs, history, system_prompt):
+        # import _thread as thread
+        from volcengine.maas import MaasService, MaasException
+
+        maas = MaasService('maas-api.ml-platform-cn-beijing.volces.com', 'cn-beijing')
+
+        YUNQUE_SECRET_KEY, YUNQUE_ACCESS_KEY, YUNQUE_MODEL = get_conf("YUNQUE_SECRET_KEY", "YUNQUE_ACCESS_KEY", "YUNQUE_MODEL")
+        maas.set_ak(YUNQUE_ACCESS_KEY)  # 填写 VOLC_ACCESSKEY
+        maas.set_sk(YUNQUE_SECRET_KEY)  # 填写 'VOLC_SECRETKEY'
+
+        self.result_buf = ""
+
+        req = {
+            "model": {
+                "name": YUNQUE_MODEL,
+                "version": "1.0",  # use default version if not specified.
+            },
+            "parameters": {
+                "max_new_tokens": 4000,  # 输出文本的最大tokens限制
+                "min_new_tokens": 1,  # 输出文本的最小tokens限制
+                "temperature": llm_kwargs['temperature'],  # 用于控制生成文本的随机性和创造性,Temperature值越大随机性越大,取值范围0~1
+                "top_p": llm_kwargs['top_p'],  # 用于控制输出tokens的多样性,TopP值越大输出的tokens类型越丰富,取值范围0~1
+                "top_k": 0,  # 选择预测值最大的k个token进行采样,取值范围0-1000,0表示不生效
+                "max_prompt_tokens": 4000,  # 最大输入 token 数,如果给出的 prompt 的 token 长度超过此限制,取最后 max_prompt_tokens 个 token 输入模型。
+            },
+            "messages": self.generate_message_payload(inputs, llm_kwargs, history, system_prompt)
+        }
+
+        response = maas.stream_chat(req)
+
+        for resp in response:
+            self.result_buf += resp.choice.message.content
+            yield self.result_buf
+        '''
+        for event in response.events():
+            if event.event == "add":
+                self.result_buf += event.data
+                yield self.result_buf
+            elif event.event == "error" or event.event == "interrupted":
+                raise RuntimeError("Unknown error:" + event.data)
+            elif event.event == "finish":
+                yield self.result_buf
+                break
+            else:
+                raise RuntimeError("Unknown error:" + str(event))

+        logging.info(f'[raw_input] {inputs}')
+        logging.info(f'[response] {self.result_buf}')
+        '''
+        return self.result_buf
+
+    def generate_message_payload(self, inputs, llm_kwargs, history, system_prompt):
+        from volcengine.maas import ChatRole
+        conversation_cnt = len(history) // 2
+        messages = [{"role": ChatRole.USER, "content": system_prompt},
+                    {"role": ChatRole.ASSISTANT, "content": "Certainly!"}]
+        if conversation_cnt:
+            for index in range(0, 2 * conversation_cnt, 2):
+                what_i_have_asked = {}
+                what_i_have_asked["role"] = ChatRole.USER
+                what_i_have_asked["content"] = history[index]
+                what_gpt_answer = {}
+                what_gpt_answer["role"] = ChatRole.ASSISTANT
+                what_gpt_answer["content"] = history[index + 1]
+                if what_i_have_asked["content"] != "":
+                    if what_gpt_answer["content"] == "":
+                        continue
+                    if what_gpt_answer["content"] == timeout_bot_msg:
+                        continue
+                    messages.append(what_i_have_asked)
+                    messages.append(what_gpt_answer)
+                else:
+                    messages[-1]['content'] = what_gpt_answer['content']
+        what_i_ask_now = {}
+        what_i_ask_now["role"] = ChatRole.USER
+        what_i_ask_now["content"] = inputs
+        messages.append(what_i_ask_now)
+        return messages
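`generate_message_payload` above smuggles the system prompt in as a leading user/assistant exchange, then interleaves historical turns while skipping empty or timed-out answers. A dependency-free, slightly simplified sketch with plain role strings instead of volcengine's `ChatRole` (the helper name `build_messages` is illustrative, and the empty-question overwrite branch is omitted):

```python
timeout_msg = '[Local Message] Request timeout. Network error.'

def build_messages(inputs, history, system_prompt):
    # The system prompt becomes a synthetic first exchange, as above.
    messages = [{"role": "user", "content": system_prompt},
                {"role": "assistant", "content": "Certainly!"}]
    for i in range(0, len(history) - 1, 2):
        q, a = history[i], history[i + 1]
        # Drop turns whose answer is empty or a timeout marker.
        if q != "" and a not in ("", timeout_msg):
            messages.append({"role": "user", "content": q})
            messages.append({"role": "assistant", "content": a})
    messages.append({"role": "user", "content": inputs})
    return messages

print(build_messages("next?", ["hi", "hello"], "Be brief."))
```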
@@ -21,11 +21,13 @@ class ZhipuRequestInstance():
         response = zhipuai.model_api.sse_invoke(
             model=ZHIPUAI_MODEL,
             prompt=generate_message_payload(inputs, llm_kwargs, history, system_prompt),
-            top_p=llm_kwargs['top_p'],
-            temperature=llm_kwargs['temperature'],
+            top_p=llm_kwargs['top_p']*0.7,  # Zhipu's API misbehaves, so apply a manual linear transform
+            temperature=llm_kwargs['temperature']*0.95,  # Zhipu's API misbehaves, so apply a manual linear transform
         )
         for event in response.events():
             if event.event == "add":
+                # if self.result_buf == "" and event.data.startswith(" "):
+                #     event.data = event.data.lstrip(" ")  # why does Zhipu always prepend a space?
                 self.result_buf += event.data
                 yield self.result_buf
             elif event.event == "error" or event.event == "interrupted":
@@ -35,7 +37,8 @@ class ZhipuRequestInstance():
                 break
             else:
                 raise RuntimeError("Unknown error:" + str(event))
+        if self.result_buf == "":
+            yield "Zhipu returned no data; please check that ZHIPUAI_API_KEY and ZHIPUAI_MODEL are filled in correctly."
         logging.info(f'[raw_input] {inputs}')
         logging.info(f'[response] {self.result_buf}')
         return self.result_buf
@@ -1,8 +1,8 @@
 """
-========================================================================
+=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 Part 1: from EdgeGPT.py
 https://github.com/acheong08/EdgeGPT
-========================================================================
+=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 """
 """
 Main.py
@@ -196,9 +196,9 @@ class _ChatHubRequest:
         self,
         prompt: str,
         conversation_style: CONVERSATION_STYLE_TYPE,
-        options = None,
-        webpage_context = None,
-        search_result = False,
+        options=None,
+        webpage_context=None,
+        search_result=False,
     ) -> None:
         """
         Updates request object
@@ -294,9 +294,9 @@ class _Conversation:
 
     def __init__(
         self,
-        proxy = None,
-        async_mode = False,
-        cookies = None,
+        proxy=None,
+        async_mode=False,
+        cookies=None,
     ) -> None:
         if async_mode:
             return
@@ -350,8 +350,8 @@ class _Conversation:
 
     @staticmethod
     async def create(
-        proxy = None,
-        cookies = None,
+        proxy=None,
+        cookies=None,
     ):
         self = _Conversation(async_mode=True)
         self.struct = {
@@ -418,8 +418,8 @@ class _ChatHub:
     def __init__(
         self,
         conversation: _Conversation,
-        proxy = None,
-        cookies = None,
+        proxy=None,
+        cookies=None,
     ) -> None:
         self.session = None
         self.wss = None
@@ -441,7 +441,7 @@ class _ChatHub:
         conversation_style: CONVERSATION_STYLE_TYPE = None,
         raw: bool = False,
         options: dict = None,
-        webpage_context = None,
+        webpage_context=None,
         search_result: bool = False,
     ) -> Generator[str, None, None]:
         """
@@ -452,10 +452,12 @@ class _ChatHub:
         ws_cookies = []
         for cookie in self.cookies:
             ws_cookies.append(f"{cookie['name']}={cookie['value']}")
-        req_header.update({
-            'Cookie': ';'.join(ws_cookies),
-        })
+        req_header.update(
+            {
+                "Cookie": ";".join(ws_cookies),
+            }
+        )
 
         timeout = aiohttp.ClientTimeout(total=30)
         self.session = aiohttp.ClientSession(timeout=timeout)
 
@@ -521,9 +523,9 @@ class _ChatHub:
             msg = await self.wss.receive()
             try:
                 objects = msg.data.split(DELIMITER)
-            except :
+            except:
                 continue
 
             for obj in objects:
                 if obj is None or not obj:
                     continue
@@ -624,8 +626,8 @@ class Chatbot:
 
     def __init__(
         self,
-        proxy = None,
-        cookies = None,
+        proxy=None,
+        cookies=None,
     ) -> None:
         self.proxy = proxy
         self.chat_hub: _ChatHub = _ChatHub(
@@ -636,8 +638,8 @@ class Chatbot:
 
     @staticmethod
     async def create(
-        proxy = None,
-        cookies = None,
+        proxy=None,
+        cookies=None,
     ):
         self = Chatbot.__new__(Chatbot)
         self.proxy = proxy
@@ -654,7 +656,7 @@ class Chatbot:
         wss_link: str = "wss://sydney.bing.com/sydney/ChatHub",
         conversation_style: CONVERSATION_STYLE_TYPE = None,
         options: dict = None,
-        webpage_context = None,
+        webpage_context=None,
         search_result: bool = False,
     ) -> dict:
         """
@@ -680,7 +682,7 @@ class Chatbot:
         conversation_style: CONVERSATION_STYLE_TYPE = None,
         raw: bool = False,
         options: dict = None,
-        webpage_context = None,
+        webpage_context=None,
         search_result: bool = False,
     ) -> Generator[str, None, None]:
         """
@@ -1,5 +1,6 @@
-./docs/gradio-3.32.6-py3-none-any.whl
+https://fastly.jsdelivr.net/gh/binary-husky/gradio-fix@gpt-academic/release/gradio-3.32.7-py3-none-any.whl
 pypdf2==2.12.1
+zhipuai<2
 tiktoken>=0.3.3
 requests[socks]
 pydantic==1.10.11
@@ -7,6 +8,7 @@ protobuf==3.18
 transformers>=4.27.1
 scipdf_parser>=0.52
 python-markdown-math
+pymdown-extensions
 websocket-client
 beautifulsoup4
 prompt_toolkit
shared_utils/advanced_markdown_format.py (287 lines, new file)
@@ -0,0 +1,287 @@
+import markdown
+import re
+import os
+import math
+from textwrap import dedent
+from functools import lru_cache
+from pymdownx.superfences import fence_div_format, fence_code_format
+from latex2mathml.converter import convert as tex2mathml
+from shared_utils.config_loader import get_conf as get_conf
+
+pj = os.path.join
+default_user_name = 'default_user'
+
+markdown_extension_configs = {
+    'mdx_math': {
+        'enable_dollar_delimiter': True,
+        'use_gitlab_delimiters': False,
+    },
+}
+
+code_highlight_configs = {
+    "pymdownx.superfences": {
+        'css_class': 'codehilite',
+        "custom_fences": [
+            {
+                'name': 'mermaid',
+                'class': 'mermaid',
+                'format': fence_code_format
+            }
+        ]
+    },
+    "pymdownx.highlight": {
+        'css_class': 'codehilite',
+        'guess_lang': True,
+        # 'auto_title': True,
+        # 'linenums': True
+    }
+}
+
+
+def text_divide_paragraph(text):
+    """
+    Split the text by paragraph separators and generate HTML code with paragraph tags.
+    """
+    pre = '<div class="markdown-body">'
+    suf = '</div>'
+    if text.startswith(pre) and text.endswith(suf):
+        return text
+
+    if '```' in text:
+        # careful input
+        return text
+    elif '</div>' in text:
+        # careful input
+        return text
+    else:
+        # whatever input
+        lines = text.split("\n")
+        for i, line in enumerate(lines):
+            lines[i] = lines[i].replace(" ", "&nbsp;")
+        text = "</br>".join(lines)
+        return pre + text + suf
+
+
+def tex2mathml_catch_exception(content, *args, **kwargs):
+    try:
+        content = tex2mathml(content, *args, **kwargs)
+    except:
+        content = content
+    return content
+
+
+def replace_math_no_render(match):
+    content = match.group(1)
+    if 'mode=display' in match.group(0):
+        content = content.replace('\n', '</br>')
+        return f"<font color=\"#00FF00\">$$</font><font color=\"#FF00FF\">{content}</font><font color=\"#00FF00\">$$</font>"
+    else:
+        return f"<font color=\"#00FF00\">$</font><font color=\"#FF00FF\">{content}</font><font color=\"#00FF00\">$</font>"
+
+
+def replace_math_render(match):
+    content = match.group(1)
+    if 'mode=display' in match.group(0):
+        if '\\begin{aligned}' in content:
+            content = content.replace('\\begin{aligned}', '\\begin{array}')
+            content = content.replace('\\end{aligned}', '\\end{array}')
+            content = content.replace('&', ' ')
+        content = tex2mathml_catch_exception(content, display="block")
+        return content
+    else:
+        return tex2mathml_catch_exception(content)
+
+
+def markdown_bug_hunt(content):
+    """
+    Fix an mdx_math bug (a redundant <script> when a begin command is wrapped in a single $).
+    """
+    content = content.replace('<script type="math/tex">\n<script type="math/tex; mode=display">',
+                              '<script type="math/tex; mode=display">')
+    content = content.replace('</script>\n</script>', '</script>')
+    return content
+
+
+def is_equation(txt):
+    """
+    Determine whether the text is a formula | test 1: write the Lorentz law as a tex formula; test 2: give the Cauchy inequality in latex; test 3: write Maxwell's equations
+    """
+    if '```' in txt and '```reference' not in txt: return False
+    if '$' not in txt and '\\[' not in txt: return False
+    mathpatterns = {
+        r'(?<!\\|\$)(\$)([^\$]+)(\$)': {'allow_multi_lines': False},  # $...$
+        r'(?<!\\)(\$\$)([^\$]+)(\$\$)': {'allow_multi_lines': True},  # $$...$$
+        r'(?<!\\)(\\\[)(.+?)(\\\])': {'allow_multi_lines': False},  # \[...\]
+        # r'(?<!\\)(\\\()(.+?)(\\\))': {'allow_multi_lines': False},  # \(...\)
+        # r'(?<!\\)(\\begin{([a-z]+?\*?)})(.+?)(\\end{\2})': {'allow_multi_lines': True},  # \begin...\end
+        # r'(?<!\\)(\$`)([^`]+)(`\$)': {'allow_multi_lines': False},  # $`...`$
+    }
+    matches = []
+    for pattern, property in mathpatterns.items():
+        flags = re.ASCII | re.DOTALL if property['allow_multi_lines'] else re.ASCII
+        matches.extend(re.findall(pattern, txt, flags))
+    if len(matches) == 0: return False
+    contain_any_eq = False
+    illegal_pattern = re.compile(r'[^\x00-\x7F]|echo')
+    for match in matches:
+        if len(match) != 3: return False
+        eq_canidate = match[1]
+        if illegal_pattern.search(eq_canidate):
+            return False
+        else:
+            contain_any_eq = True
+    return contain_any_eq
+
+
+def fix_markdown_indent(txt):
+    # fix markdown indent
+    if (' - ' not in txt) or ('. ' not in txt):
+        # do not need to fix, fast escape
+        return txt
+    # walk through the lines and fix non-standard indentation
+    lines = txt.split("\n")
+    pattern = re.compile(r'^\s+-')
+    activated = False
+    for i, line in enumerate(lines):
+        if line.startswith('- ') or line.startswith('1. '):
+            activated = True
+        if activated and pattern.match(line):
+            stripped_string = line.lstrip()
+            num_spaces = len(line) - len(stripped_string)
+            if (num_spaces % 4) == 3:
+                num_spaces_should_be = math.ceil(num_spaces / 4) * 4
+                lines[i] = ' ' * num_spaces_should_be + stripped_string
+    return '\n'.join(lines)
+
+
+FENCED_BLOCK_RE = re.compile(
+    dedent(r'''
+        (?P<fence>^[ \t]*(?:~{3,}|`{3,}))[ ]*                       # opening fence
+        ((\{(?P<attrs>[^\}\n]*)\})|                                 # (optional {attrs} or
+        (\.?(?P<lang>[\w#.+-]*)[ ]*)?                               # optional (.)lang
+        (hl_lines=(?P<quot>"|')(?P<hl_lines>.*?)(?P=quot)[ ]*)?)    # optional hl_lines)
+        \n                                                          # newline (end of opening fence)
+        (?P<code>.*?)(?<=\n)                                        # the code block
+        (?P=fence)[ ]*$                                             # closing fence
+    '''),
+    re.MULTILINE | re.DOTALL | re.VERBOSE
+)
+
+
+def get_line_range(re_match_obj, txt):
+    start_pos, end_pos = re_match_obj.regs[0]
+    num_newlines_before = txt[:start_pos+1].count('\n')
+    line_start = num_newlines_before
+    line_end = num_newlines_before + txt[start_pos:end_pos].count('\n')+1
+    return line_start, line_end
+
+
+def fix_code_segment_indent(txt):
+    lines = []
+    change_any = False
+    txt_tmp = txt
+    while True:
+        re_match_obj = FENCED_BLOCK_RE.search(txt_tmp)
+        if not re_match_obj: break
+        if len(lines) == 0: lines = txt.split("\n")
+
+        # blank out the matched span in txt_tmp so the next search moves on
+        start_pos, end_pos = re_match_obj.regs[0]
+        txt_tmp = txt_tmp[:start_pos] + ' '*(end_pos-start_pos) + txt_tmp[end_pos:]
+        line_start, line_end = get_line_range(re_match_obj, txt)
+
+        # get the shared indentation
+        shared_indent_cnt = 1e5
+        for i in range(line_start, line_end):
+            stripped_string = lines[i].lstrip()
+            num_spaces = len(lines[i]) - len(stripped_string)
+            if num_spaces < shared_indent_cnt:
+                shared_indent_cnt = num_spaces
+
+        # fix the indentation
+        if (shared_indent_cnt < 1e5) and (shared_indent_cnt % 4) == 3:
+            num_spaces_should_be = math.ceil(shared_indent_cnt / 4) * 4
+            for i in range(line_start, line_end):
+                add_n = num_spaces_should_be - shared_indent_cnt
+                lines[i] = ' ' * add_n + lines[i]
+            if not change_any:  # first one encountered
+                change_any = True
+
+    if change_any:
+        return '\n'.join(lines)
+    else:
+        return txt
+
+
+@lru_cache(maxsize=128)  # use an lru cache to speed up conversion
+def markdown_convertion(txt):
+    """
+    Convert Markdown-formatted text to HTML. If it contains math formulas, convert the formulas to HTML first.
+    """
+    pre = '<div class="markdown-body">'
+    suf = '</div>'
+    if txt.startswith(pre) and txt.endswith(suf):
+        # print('warning: received an already-converted string; converting it twice may cause problems')
+        return txt  # already converted, no need to convert again
+
+    find_equation_pattern = r'<script type="math/tex(?:.*?)>(.*?)</script>'
+
+    txt = fix_markdown_indent(txt)
+    # txt = fix_code_segment_indent(txt)
+    if is_equation(txt):  # has $-delimited formula markers and no code-fence markers
+        # convert everything to html format
+        split = markdown.markdown(text='---')
+        convert_stage_1 = markdown.markdown(text=txt, extensions=['sane_lists', 'tables', 'mdx_math', 'pymdownx.superfences', 'pymdownx.highlight'],
+                                            extension_configs={**markdown_extension_configs, **code_highlight_configs})
+        convert_stage_1 = markdown_bug_hunt(convert_stage_1)
+        # 1. convert to easy-to-copy tex (do not render math)
+        convert_stage_2_1, n = re.subn(find_equation_pattern, replace_math_no_render, convert_stage_1, flags=re.DOTALL)
+        # 2. convert to rendered equation
+        convert_stage_2_2, n = re.subn(find_equation_pattern, replace_math_render, convert_stage_1, flags=re.DOTALL)
+        # cat them together
+        return pre + convert_stage_2_1 + f'{split}' + convert_stage_2_2 + suf
+    else:
+        return pre + markdown.markdown(txt, extensions=['sane_lists', 'tables', 'pymdownx.superfences', 'pymdownx.highlight'], extension_configs=code_highlight_configs) + suf
+
+
+def close_up_code_segment_during_stream(gpt_reply):
+    """
+    While GPT is streaming code output (the opening ``` has appeared but the closing ``` has not), append the closing ```.
+
+    Args:
+        gpt_reply (str): the reply string returned by the GPT model.
+
+    Returns:
+        str: a new string with the closing ``` of the code segment appended.
+    """
+    if '```' not in gpt_reply:
+        return gpt_reply
+    if gpt_reply.endswith('```'):
+        return gpt_reply
+
+    # having ruled out the two cases above
+    segments = gpt_reply.split('```')
+    n_mark = len(segments) - 1
+    if n_mark % 2 == 1:
+        return gpt_reply + '\n```'  # in the middle of a code segment!
+    else:
+        return gpt_reply
+
+
+def format_io(self, y):
+    """
+    Parse input and output into HTML. Paragraphize the input part of the last item in y, and convert the Markdown and math formulas of the output part to HTML.
+    """
+    if y is None or y == []:
+        return []
+    i_ask, gpt_reply = y[-1]
+    # the input part is too free-form; preprocess it
+    if i_ask is not None: i_ask = text_divide_paragraph(i_ask)
+    # when code output is cut off halfway, try to append the closing fence
+    if gpt_reply is not None: gpt_reply = close_up_code_segment_during_stream(gpt_reply)
+    # process
+    y[-1] = (
+        None if i_ask is None else markdown.markdown(i_ask, extensions=['pymdownx.superfences', 'tables', 'pymdownx.highlight'], extension_configs=code_highlight_configs),
+        None if gpt_reply is None else markdown_convertion(gpt_reply)
+    )
+    return y
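The fence-balancing rule in `close_up_code_segment_during_stream` above is easy to check in isolation: an odd number of triple-backtick markers means a block is still open. A minimal standalone sketch (the helper name is hypothetical; the marker is built from single backticks so this example stays self-contained):

```python
FENCE = "`" * 3  # the triple-backtick marker

def close_open_fence(reply: str) -> str:
    # Nothing to do if there is no fence, or the reply already ends on one.
    if FENCE not in reply or reply.endswith(FENCE):
        return reply
    # An odd number of markers means a code block is still open mid-stream.
    if reply.count(FENCE) % 2 == 1:
        return reply + "\n" + FENCE
    return reply
```

This mirrors the split-based count in the diff: `len(reply.split(marker)) - 1` equals `reply.count(marker)`.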
shared_utils/config_loader.py (131 lines, new file)
@@ -0,0 +1,131 @@
+import importlib
+import time
+import os
+from functools import lru_cache
+from colorful import print亮红, print亮绿, print亮蓝
+
+pj = os.path.join
+default_user_name = 'default_user'
+
+
+def read_env_variable(arg, default_value):
+    """
+    The environment variable can be `GPT_ACADEMIC_CONFIG` (preferred), or simply `CONFIG`.
+    For example, in Windows cmd, you can write either:
+        set USE_PROXY=True
+        set API_KEY=sk-j7caBpkRoxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+        set proxies={"http":"http://127.0.0.1:10085", "https":"http://127.0.0.1:10085",}
+        set AVAIL_LLM_MODELS=["gpt-3.5-turbo", "chatglm"]
+        set AUTHENTICATION=[("username", "password"), ("username2", "password2")]
+    or:
+        set GPT_ACADEMIC_USE_PROXY=True
+        set GPT_ACADEMIC_API_KEY=sk-j7caBpkRoxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+        set GPT_ACADEMIC_proxies={"http":"http://127.0.0.1:10085", "https":"http://127.0.0.1:10085",}
+        set GPT_ACADEMIC_AVAIL_LLM_MODELS=["gpt-3.5-turbo", "chatglm"]
+        set GPT_ACADEMIC_AUTHENTICATION=[("username", "password"), ("username2", "password2")]
+    """
+    arg_with_prefix = "GPT_ACADEMIC_" + arg
+    if arg_with_prefix in os.environ:
+        env_arg = os.environ[arg_with_prefix]
+    elif arg in os.environ:
+        env_arg = os.environ[arg]
+    else:
+        raise KeyError
+    print(f"[ENV_VAR] Attempting to load {arg}, default value: {default_value} --> corrected value: {env_arg}")
+    try:
+        if isinstance(default_value, bool):
+            env_arg = env_arg.strip()
+            if env_arg == 'True': r = True
+            elif env_arg == 'False': r = False
+            else: print('Enter True or False, but have:', env_arg); r = default_value
+        elif isinstance(default_value, int):
+            r = int(env_arg)
+        elif isinstance(default_value, float):
+            r = float(env_arg)
+        elif isinstance(default_value, str):
+            r = env_arg.strip()
+        elif isinstance(default_value, dict):
+            r = eval(env_arg)
+        elif isinstance(default_value, list):
+            r = eval(env_arg)
+        elif default_value is None:
+            assert arg == "proxies"
+            r = eval(env_arg)
+        else:
+            print亮红(f"[ENV_VAR] The config option {arg} does not support being set via an environment variable! ")
+            raise KeyError
+    except:
+        print亮红(f"[ENV_VAR] Failed to load environment variable {arg}! ")
+        raise KeyError(f"[ENV_VAR] Failed to load environment variable {arg}! ")
+
+    print亮绿(f"[ENV_VAR] Successfully read environment variable {arg}")
+    return r
+
+
+@lru_cache(maxsize=128)
+def read_single_conf_with_lru_cache(arg):
+    from shared_utils.key_pattern_manager import is_any_api_key
+    try:
+        # Priority 1: use the environment variable as the config
+        default_ref = getattr(importlib.import_module('config'), arg)  # read the default value as a reference for type conversion
+        r = read_env_variable(arg, default_ref)
+    except:
+        try:
+            # Priority 2: use the config in config_private
+            r = getattr(importlib.import_module('config_private'), arg)
+        except:
+            # Priority 3: use the config in config
+            r = getattr(importlib.import_module('config'), arg)
+
+    # when reading API_KEY, check whether the user forgot to modify config
+    if arg == 'API_URL_REDIRECT':
+        oai_rd = r.get("https://api.openai.com/v1/chat/completions", None)  # if the API_URL_REDIRECT format is wrong, read `https://github.com/binary-husky/gpt_academic/wiki/项目配置说明`
+        if oai_rd and not oai_rd.endswith('/completions'):
+            print亮红("\n\n[API_URL_REDIRECT] API_URL_REDIRECT is filled in incorrectly. Please read `https://github.com/binary-husky/gpt_academic/wiki/项目配置说明`. If you are sure it is correct, ignore this message.")
+            time.sleep(5)
+    if arg == 'API_KEY':
+        print亮蓝(f"[API_KEY] This project now supports both OpenAI and Azure api-keys, and multiple api-keys at once, e.g. API_KEY=\"openai-key1,openai-key2,azure-key3\"")
+        print亮蓝(f"[API_KEY] You can either modify the api-key(s) in config.py, or enter a temporary api-key in the question input area; it takes effect after pressing Enter to submit.")
+        if is_any_api_key(r):
+            print亮绿(f"[API_KEY] Your API_KEY is: {r[:15]}*** API_KEY imported successfully")
+        else:
+            print亮红("[API_KEY] Your API_KEY does not match any known key format. Please modify the API key in the config file before running.")
+    if arg == 'proxies':
+        if not read_single_conf_with_lru_cache('USE_PROXY'): r = None  # check USE_PROXY to keep proxies from taking effect on its own
+        if r is None:
+            print亮红('[PROXY] Proxy status: not configured. Without a proxy, OpenAI-family models are very likely unreachable. Suggestion: check whether the USE_PROXY option has been modified.')
+        else:
+            print亮绿('[PROXY] Proxy status: configured. Details:', r)
+            assert isinstance(r, dict), 'proxies format error; mind the format of the proxies option and do not omit brackets.'
+    return r
+
+
+@lru_cache(maxsize=128)
+def get_conf(*args):
+    """
+    All configuration of this project is centralized in config.py. There are three ways to modify configuration; pick one of them:
+    - edit config.py directly
+    - create and edit config_private.py
+    - modify environment variables (editing docker-compose.yml is equivalent to modifying environment variables inside the container)
+
+    Note: if you deploy with docker-compose, modify docker-compose (equivalent to modifying the environment variables inside the container)
+    """
+    res = []
+    for arg in args:
+        r = read_single_conf_with_lru_cache(arg)
+        res.append(r)
+    if len(res) == 1: return res[0]
+    return res
+
+
+def set_conf(key, value):
+    from toolbox import read_single_conf_with_lru_cache
+    read_single_conf_with_lru_cache.cache_clear()
+    get_conf.cache_clear()
+    os.environ[key] = str(value)
+    altered = get_conf(key)
+    return altered
+
+
+def set_multi_conf(dic):
+    for k, v in dic.items(): set_conf(k, v)
+    return
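The coercion in `read_env_variable` above dispatches on the type of the default value taken from config.py. A reduced sketch of that dispatch, using `ast.literal_eval` as a safer stand-in for the `eval` calls in the diff (the function name is hypothetical):

```python
import ast

# Reduced sketch of the default-value-driven coercion in read_env_variable.
def coerce_env_value(env_arg: str, default_value):
    # bool must be checked before int: bool is a subclass of int in Python,
    # which is why the original keeps the same ordering.
    if isinstance(default_value, bool):
        s = env_arg.strip()
        if s == "True":
            return True
        if s == "False":
            return False
        return default_value  # unrecognized input: fall back to the default
    if isinstance(default_value, int):
        return int(env_arg)
    if isinstance(default_value, float):
        return float(env_arg)
    if isinstance(default_value, str):
        return env_arg.strip()
    if isinstance(default_value, (dict, list)):
        return ast.literal_eval(env_arg)  # safer stand-in for eval()
    raise KeyError(f"unsupported config type for {env_arg!r}")
```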
shared_utils/connect_void_terminal.py (91 lines, new file)
@@ -0,0 +1,91 @@
+import os
+
+"""
+=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+Hooking up void-terminal:
+    - set_conf: dynamically modify a config option at runtime
+    - set_multi_conf: dynamically modify multiple config options at runtime
+    - get_plugin_handle: get a plugin's handle
+    - get_plugin_default_kwargs: get a plugin's default arguments
+    - get_chat_handle: get the simple-chat handle
+    - get_chat_default_kwargs: get the simple-chat default arguments
+=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+"""
+
+
+def get_plugin_handle(plugin_name):
+    """
+    e.g. plugin_name = 'crazy_functions.批量Markdown翻译->Markdown翻译指定语言'
+    """
+    import importlib
+
+    assert (
+        "->" in plugin_name
+    ), "Example of plugin_name: crazy_functions.批量Markdown翻译->Markdown翻译指定语言"
+    module, fn_name = plugin_name.split("->")
+    f_hot_reload = getattr(importlib.import_module(module, fn_name), fn_name)
+    return f_hot_reload
+
+
+def get_chat_handle():
+    """
+    Get chat function
+    """
+    from request_llms.bridge_all import predict_no_ui_long_connection
+
+    return predict_no_ui_long_connection
+
+
+def get_plugin_default_kwargs():
+    """
+    Get Plugin Default Arguments
+    """
+    from toolbox import ChatBotWithCookies, load_chat_cookies
+
+    cookies = load_chat_cookies()
+    llm_kwargs = {
+        "api_key": cookies["api_key"],
+        "llm_model": cookies["llm_model"],
+        "top_p": 1.0,
+        "max_length": None,
+        "temperature": 1.0,
+    }
+    chatbot = ChatBotWithCookies(llm_kwargs)
+
+    # txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port
+    DEFAULT_FN_GROUPS_kwargs = {
+        "main_input": "./README.md",
+        "llm_kwargs": llm_kwargs,
+        "plugin_kwargs": {},
+        "chatbot_with_cookie": chatbot,
+        "history": [],
+        "system_prompt": "You are a good AI.",
+        "web_port": None,
+    }
+    return DEFAULT_FN_GROUPS_kwargs
+
+
+def get_chat_default_kwargs():
+    """
+    Get Chat Default Arguments
+    """
+    from toolbox import load_chat_cookies
+
+    cookies = load_chat_cookies()
+    llm_kwargs = {
+        "api_key": cookies["api_key"],
+        "llm_model": cookies["llm_model"],
+        "top_p": 1.0,
+        "max_length": None,
+        "temperature": 1.0,
+    }
+    default_chat_kwargs = {
+        "inputs": "Hello there, are you ready?",
+        "llm_kwargs": llm_kwargs,
+        "history": [],
+        "sys_prompt": "You are AI assistant",
+        "observe_window": None,
+        "console_slience": False,
+    }
+
+    return default_chat_kwargs
shared_utils/key_pattern_manager.py (81 lines, new file)
@@ -0,0 +1,81 @@
+import re
+import os
+from functools import wraps, lru_cache
+from shared_utils.advanced_markdown_format import format_io
+from shared_utils.config_loader import get_conf as get_conf
+
+
+pj = os.path.join
+default_user_name = 'default_user'
+
+
+def is_openai_api_key(key):
+    CUSTOM_API_KEY_PATTERN = get_conf('CUSTOM_API_KEY_PATTERN')
+    if len(CUSTOM_API_KEY_PATTERN) != 0:
+        API_MATCH_ORIGINAL = re.match(CUSTOM_API_KEY_PATTERN, key)
+    else:
+        API_MATCH_ORIGINAL = re.match(r"sk-[a-zA-Z0-9]{48}$", key)
+    return bool(API_MATCH_ORIGINAL)
+
+
+def is_azure_api_key(key):
+    API_MATCH_AZURE = re.match(r"[a-zA-Z0-9]{32}$", key)
+    return bool(API_MATCH_AZURE)
+
+
+def is_api2d_key(key):
+    API_MATCH_API2D = re.match(r"fk[a-zA-Z0-9]{6}-[a-zA-Z0-9]{32}$", key)
+    return bool(API_MATCH_API2D)
+
+
+def is_any_api_key(key):
+    if ',' in key:
+        keys = key.split(',')
+        for k in keys:
+            if is_any_api_key(k): return True
+        return False
+    else:
+        return is_openai_api_key(key) or is_api2d_key(key) or is_azure_api_key(key)
+
+
+def what_keys(keys):
+    avail_key_list = {'OpenAI Key': 0, "Azure Key": 0, "API2D Key": 0}
+    key_list = keys.split(',')
+
+    for k in key_list:
+        if is_openai_api_key(k):
+            avail_key_list['OpenAI Key'] += 1
+
+    for k in key_list:
+        if is_api2d_key(k):
+            avail_key_list['API2D Key'] += 1
+
+    for k in key_list:
+        if is_azure_api_key(k):
+            avail_key_list['Azure Key'] += 1
+
+    return f"Detected: {avail_key_list['OpenAI Key']} OpenAI Key(s), {avail_key_list['Azure Key']} Azure Key(s), {avail_key_list['API2D Key']} API2D Key(s)"
+
+
+def select_api_key(keys, llm_model):
+    import random
+    avail_key_list = []
+    key_list = keys.split(',')
+
+    if llm_model.startswith('gpt-'):
+        for k in key_list:
+            if is_openai_api_key(k): avail_key_list.append(k)
+
+    if llm_model.startswith('api2d-'):
+        for k in key_list:
+            if is_api2d_key(k): avail_key_list.append(k)
+
+    if llm_model.startswith('azure-'):
+        for k in key_list:
+            if is_azure_api_key(k): avail_key_list.append(k)
+
+    if len(avail_key_list) == 0:
+        raise RuntimeError(f"The api-key(s) you provided do not meet the requirements and contain no api-key usable for {llm_model}. You may have selected the wrong model or request source (the model menu in the lower right corner can switch between openai, azure, claude, api2d and other request sources).")
+
+    api_key = random.choice(avail_key_list)  # random load balancing
+    return api_key
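The classifiers above are anchored regex matches over comma-separated key lists. A compact sketch of the `is_any_api_key` logic with the same default patterns (the `CUSTOM_API_KEY_PATTERN` override is omitted here, and the function name is hypothetical):

```python
import re

# Default patterns from the classifiers above (CUSTOM_API_KEY_PATTERN omitted).
_PATTERNS = [
    r"sk-[a-zA-Z0-9]{48}$",                 # OpenAI
    r"[a-zA-Z0-9]{32}$",                    # Azure
    r"fk[a-zA-Z0-9]{6}-[a-zA-Z0-9]{32}$",   # API2D
]

def is_any_key(key: str) -> bool:
    # A comma-separated list counts if any single key in it matches.
    if ',' in key:
        return any(is_any_key(k) for k in key.split(','))
    return any(re.match(p, key) for p in _PATTERNS)
```

Note that `re.match` anchors only at the start, so each pattern carries an explicit `$` to reject trailing junk, just as in the diff.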
@@ -1,35 +1,37 @@
 md = """
-As your writing and programming assistant, I can provide the following services:
-
-1. Writing:
-   - Help you write articles, reports, essays, stories, and more.
-   - Offer writing advice and techniques.
-   - Assist with copywriting and content creation.
-
-2. Programming:
-   - Help you solve programming problems, offering ideas and suggestions.
-   - Assist you in writing code, including but not limited to Python, Java, C++, and more.
-   - Explain complex technical concepts to make them easier to understand.
-
-3. Project support:
-   - Help you plan project schedules and task allocation.
-   - Offer project management and collaboration advice.
-   - Provide support during project implementation to keep things on track.
-
-4. Learning guidance:
-   - Help you consolidate programming fundamentals and improve your skills.
-   - Provide learning resources and advice for computer science, data science, artificial intelligence, and related fields.
-   - Answer questions you run into while studying, so you can better master the material.
-
-5. Industry news and trend analysis:
-   - Bring you the latest industry news and technology trends.
-   - Analyze industry dynamics to help you understand market development and the competitive landscape.
-   - Offer references and suggestions for shaping your technology strategy.
-
-Please tell me what you need at any time, and I will do my best to help. If you have any questions or topics that need answering, feel free to ask.
+You can use the following Python script to rename files matching the pattern '* - 副本.tex' to '* - wushiguang.tex' in a directory:
+
+```python
+import os
+
+# Directory containing the files
+directory = 'Tex/'
+
+for filename in os.listdir(directory):
+    if filename.endswith(' - 副本.tex'):
+        new_filename = filename.replace(' - 副本.tex', ' - wushiguang.tex')
+        os.rename(os.path.join(directory, filename), os.path.join(directory, new_filename))
+```
+
+Replace 'Tex/' with the actual directory path where your files are located before running the script.
 """
+
+
+md = """
+Following code including wrapper
+
+```mermaid
+graph TD
+    A[Enter Chart Definition] --> B(Preview)
+    B --> C{decide}
+    C --> D[Keep]
+    C --> E[Edit Definition]
+    E --> B
+    D --> F[Save Image and Code]
+    F --> B
+```
+
+"""
 def validate_path():
     import os, sys
 
@@ -43,6 +45,9 @@ validate_path()  # validate path so you can run from base directory
 from toolbox import markdown_convertion

 html = markdown_convertion(md)
-print(html)
+# print(html)
 with open("test.html", "w", encoding="utf-8") as f:
     f.write(html)
+
+
+# TODO: list 10 classic novels
themes/common.js
@@ -109,7 +109,7 @@ function begin_loading_status() {
     C1.style.borderRadius = "50%";
     C1.style.margin = "-40px 0 0 -40px";
     C1.style.animation = "spinAndPulse 2s linear infinite";

     C2.style.position = "fixed";
     C2.style.top = "50%";
     C2.style.left = "50%";
@@ -229,6 +229,33 @@ function addCopyButton(botElement) {
     botElement.appendChild(messageBtnColumn);
 }

+
+let timeoutID = null;
+let lastInvocationTime = 0;
+let lastArgs = null;
+function do_something_but_not_too_frequently(min_interval, func) {
+    return function(...args) {
+        lastArgs = args;
+        const now = Date.now();
+        if (!lastInvocationTime || (now - lastInvocationTime) >= min_interval) {
+            lastInvocationTime = now;
+            // run right now
+            setTimeout(() => {
+                func.apply(this, lastArgs);
+            }, 0);
+        } else if (!timeoutID) {
+            // run after a short wait
+            timeoutID = setTimeout(() => {
+                timeoutID = null;
+                lastInvocationTime = Date.now();
+                func.apply(this, lastArgs);
+            }, min_interval - (now - lastInvocationTime));
+        } else {
+            // drop this call entirely
+        }
+    }
+}
+
 function chatbotContentChanged(attempt = 1, force = false) {
     // https://github.com/GaiZhenbiao/ChuanhuChatGPT/tree/main/web_assets/javascript
     for (var i = 0; i < attempt; i++) {
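The `do_something_but_not_too_frequently` helper added above is a throttle with both a leading call (run immediately if the interval has elapsed) and a single trailing call that always uses the latest arguments. The same idea can be sketched in Python; the `threading.Timer`-based scheduler and the name `throttle` here are illustrative assumptions, not part of the project:

```python
import time
import threading

def throttle(min_interval, func):
    """Leading- plus trailing-edge throttle (sketch).

    Mirrors do_something_but_not_too_frequently: call func at once if
    min_interval seconds have passed since the last run; otherwise keep
    at most one pending trailing call that fires with the newest args.
    """
    state = {"last": 0.0, "timer": None, "args": ()}
    lock = threading.Lock()

    def fire():
        # trailing-edge call: consume the most recent arguments
        with lock:
            state["timer"] = None
            state["last"] = time.monotonic()
            args = state["args"]
        func(*args)

    def wrapper(*args):
        run_now = False
        with lock:
            state["args"] = args
            now = time.monotonic()
            if now - state["last"] >= min_interval:
                state["last"] = now
                run_now = True          # leading edge
            elif state["timer"] is None:
                state["timer"] = threading.Timer(
                    min_interval - (now - state["last"]), fire)
                state["timer"].start()  # schedule one trailing call
            # else: a trailing call is already pending; args were updated
        if run_now:
            func(*args)

    return wrapper
```

As in the JavaScript version, intermediate calls are not queued; only the newest arguments survive into the trailing invocation.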
@@ -236,6 +263,13 @@ function chatbotContentChanged(attempt = 1, force = false) {
             gradioApp().querySelectorAll('#gpt-chatbot .message-wrap .message.bot').forEach(addCopyButton);
         }, i === 0 ? 0 : 200);
     }
+    const run_mermaid_render = do_something_but_not_too_frequently(1000, function () {
+        const blocks = document.querySelectorAll(`pre.mermaid, diagram-div`);
+        if (blocks.length == 0) { return; }
+        uml("mermaid");
+    });
+    run_mermaid_render();
 }

@@ -270,8 +304,8 @@ function chatbotAutoHeight() {
     }
     monitoring_input_box()
     update_height();
-    window.addEventListener('resize', function() { update_height(); });
-    window.addEventListener('scroll', function() { update_height_slow(); });
+    window.addEventListener('resize', function () { update_height(); });
+    window.addEventListener('scroll', function () { update_height_slow(); });
     setInterval(function () { update_height_slow() }, 50); // run every 50 ms
 }
@@ -290,8 +324,8 @@ function swap_input_area() {
     // Swap the elements
     parent.insertBefore(element2, element1);
     parent.insertBefore(element1, nextSibling);
-    if (swapped) {swapped = false;}
-    else {swapped = true;}
+    if (swapped) { swapped = false; }
+    else { swapped = true; }
 }

 function get_elements(consider_state_panel = false) {
@@ -314,18 +348,18 @@ function get_elements(consider_state_panel = false) {
     var height_target = parseInt(height_target);
     var chatbot_height = chatbot.style.height;
     // swap the input area position so that it always stays reachable
-    if (!swapped){
-        if (panel1.top!=0 && (panel1.bottom + panel1.top)/2 < 0){ swap_input_area(); }
+    if (!swapped) {
+        if (panel1.top != 0 && (panel1.bottom + panel1.top) / 2 < 0) { swap_input_area(); }
     }
-    else if (swapped){
-        if (panel2.top!=0 && panel2.top > 0){ swap_input_area(); }
+    else if (swapped) {
+        if (panel2.top != 0 && panel2.top > 0) { swap_input_area(); }
     }
     // adjust the height
     const err_tor = 5;
-    if (Math.abs(panel1.left - chatbot.getBoundingClientRect().left) < err_tor){
+    if (Math.abs(panel1.left - chatbot.getBoundingClientRect().left) < err_tor) {
         // narrow-screen mode?
         height_target = window.innerHeight * 0.6;
-    }else{
+    } else {
         // adjust the height
         const chatbot_height_exceed = 15;
         const chatbot_height_exceed_m = 10;
@@ -356,7 +390,7 @@ var elem_upload_component_float = null;
 var elem_upload_component = null;
 var exist_file_msg = '⚠️Please delete the previously uploaded files in the upload area (top left) before trying to upload again.'

-function locate_upload_elems(){
+function locate_upload_elems() {
     elem_upload = document.getElementById('elem_upload')
     elem_upload_float = document.getElementById('elem_upload_float')
     elem_input_main = document.getElementById('user_input_main')
@@ -386,7 +420,6 @@ async function upload_files(files) {
         Object.defineProperty(elem_upload_component_float, "files", { value: files, enumerable: true });
         elem_upload_component_float.dispatchEvent(event);
     } else {
-        console.log(exist_file_msg);
         toast_push(exist_file_msg, 3000);
     }
 }
@@ -500,7 +533,7 @@ function register_upload_event() {
             toast_push('Uploading, please wait.', 2000);
             begin_loading_status();
         });
-    }else{
+    } else {
         toast_push("oppps", 3000);
     }
 }
@@ -583,16 +616,16 @@ function minor_ui_adjustment() {
     function auto_hide_toolbar() {
         var qq = document.getElementById('tooltip');
         var tab_nav = qq.getElementsByClassName('tab-nav');
-        if (tab_nav.length == 0){ return; }
+        if (tab_nav.length == 0) { return; }
         var btn_list = tab_nav[0].getElementsByTagName('button')
-        if (btn_list.length == 0){ return; }
+        if (btn_list.length == 0) { return; }
         // get the page width
         var page_width = document.documentElement.clientWidth;
         // number of buttons that are always kept visible
         const always_preserve = 2;
         // right edge of the last button
-        var cur_right = btn_list[always_preserve-1].getBoundingClientRect().right;
-        if (bar_btn_width.length == 0){
+        var cur_right = btn_list[always_preserve - 1].getBoundingClientRect().right;
+        if (bar_btn_width.length == 0) {
             // first run: record the width of every button
             for (var i = 0; i < btn_list.length; i++) {
                 bar_btn_width.push(btn_list[i].getBoundingClientRect().width);
@@ -602,14 +635,13 @@ function minor_ui_adjustment() {
         for (var i = always_preserve; i < btn_list.length; i++) {
             var element = btn_list[i];
             var element_right = element.getBoundingClientRect().right;
-            if (element_right!=0){ cur_right = element_right; }
+            if (element_right != 0) { cur_right = element_right; }
             if (element.style.display === 'none') {
                 if ((cur_right + bar_btn_width[i]) < (page_width * 0.37)) {
                     // show this button again
                     element.style.display = 'block';
-                    // console.log('show');
                     return;
-                }else{
+                } else {
                     return;
                 }
             } else {
@@ -620,7 +652,6 @@ function minor_ui_adjustment() {
                     btn_list[j].style.display = 'none';
                 }
             }
-            // console.log('show');
             return;
         }
     }
@@ -632,8 +663,41 @@ function minor_ui_adjustment() {
     }, 200); // run every 200 ms
 }


 // -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
-// Part 6: JS initialization functions
+// Part 6: prevent overscroll
+// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
+let prevented_offset = 0;
+function limit_scroll_position() {
+    let scrollableDiv = document.querySelector('#gpt-chatbot > div.wrap');
+    scrollableDiv.addEventListener('wheel', function (e) {
+        let preventScroll = false;
+        if (e.deltaX != 0) { prevented_offset = 0; return; }
+        if (this.scrollHeight == this.clientHeight) { prevented_offset = 0; return; }
+        if (e.deltaY < 0) { prevented_offset = 0; return; }
+        if (e.deltaY > 0 && this.scrollHeight - this.clientHeight - this.scrollTop <= 1) { preventScroll = true; }

+        if (preventScroll) {
+            prevented_offset += e.deltaY;
+            if (Math.abs(prevented_offset) > 499) {
+                if (prevented_offset > 500) { prevented_offset = 500; }
+                if (prevented_offset < -500) { prevented_offset = -500; }
+                preventScroll = false;
+            }
+        } else {
+            prevented_offset = 0;
+        }
+        if (preventScroll) {
+            e.preventDefault();
+            return;
+        }
+    }, { passive: false }); // Passive event listener option should be false
+}
+
+
+
+// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
+// Part 7: JS initialization functions
 // -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

 function GptAcademicJavaScriptInit(LAYOUT = "LEFT-RIGHT") {
@@ -645,4 +709,7 @@ function GptAcademicJavaScriptInit(LAYOUT = "LEFT-RIGHT") {
     });
     chatbotObserver.observe(chatbotIndicator, { attributes: true, childList: true, subtree: true });
     if (LAYOUT === "LEFT-RIGHT") { chatbotAutoHeight(); }
+    if (LAYOUT === "LEFT-RIGHT") { limit_scroll_position(); }
+    // setInterval(function () { uml("mermaid") }, 5000); // run every 5 s
+
 }
@@ -67,8 +67,14 @@ def adjust_theme():
         button_cancel_text_color_dark="white",
     )

-    with open(os.path.join(theme_dir, "common.js"), "r", encoding="utf8") as f:
-        js = f"<script>{f.read()}</script>"
+    js = ""
+    for jsf in [
+        os.path.join(theme_dir, "common.js"),
+        os.path.join(theme_dir, "mermaid.min.js"),
+        os.path.join(theme_dir, "mermaid_loader.js"),
+    ]:
+        with open(jsf, "r", encoding="utf8") as f:
+            js += f"<script>{f.read()}</script>"

     # add a cute virtual mascot
     if ADD_WAIFU:
@@ -67,8 +67,14 @@ def adjust_theme():
         button_cancel_text_color_dark="white",
     )

-    with open(os.path.join(theme_dir, "common.js"), "r", encoding="utf8") as f:
-        js = f"<script>{f.read()}</script>"
+    js = ""
+    for jsf in [
+        os.path.join(theme_dir, "common.js"),
+        os.path.join(theme_dir, "mermaid.min.js"),
+        os.path.join(theme_dir, "mermaid_loader.js"),
+    ]:
+        with open(jsf, "r", encoding="utf8") as f:
+            js += f"<script>{f.read()}</script>"

     # add a cute virtual mascot
     if ADD_WAIFU:
@@ -31,8 +31,15 @@ def adjust_theme():
         THEME = THEME.lstrip("huggingface-")
         set_theme = set_theme.from_hub(THEME.lower())

-    with open(os.path.join(theme_dir, "common.js"), "r", encoding="utf8") as f:
-        js = f"<script>{f.read()}</script>"
+    js = ""
+    for jsf in [
+        os.path.join(theme_dir, "common.js"),
+        os.path.join(theme_dir, "mermaid.min.js"),
+        os.path.join(theme_dir, "mermaid_loader.js"),
+    ]:
+        with open(jsf, "r", encoding="utf8") as f:
+            js += f"<script>{f.read()}</script>"
+
     # add a cute virtual mascot
     if ADD_WAIFU:
@@ -76,8 +76,14 @@ def adjust_theme():
         chatbot_code_background_color_dark="*neutral_950",
     )

-    with open(os.path.join(theme_dir, "common.js"), "r", encoding="utf8") as f:
-        js = f"<script>{f.read()}</script>"
+    js = ""
+    for jsf in [
+        os.path.join(theme_dir, "common.js"),
+        os.path.join(theme_dir, "mermaid.min.js"),
+        os.path.join(theme_dir, "mermaid_loader.js"),
+    ]:
+        with open(jsf, "r", encoding="utf8") as f:
+            js += f"<script>{f.read()}</script>"

     # add a cute virtual mascot
     if ADD_WAIFU:
themes/mermaid.min.js (vendored, new file, 1589 lines)
File diff suppressed because one or more lines are too long

themes/mermaid_editor.js (new file, 55 lines)
@@ -0,0 +1,55 @@
+import { deflate, inflate } from 'https://fastly.jsdelivr.net/gh/nodeca/pako@master/dist/pako.esm.mjs';
+import { toUint8Array, fromUint8Array, toBase64, fromBase64 } from 'https://cdn.jsdelivr.net/npm/js-base64@3.7.2/base64.mjs';
+
+const base64Serde = {
+    serialize: (state) => {
+        return toBase64(state, true);
+    },
+    deserialize: (state) => {
+        return fromBase64(state);
+    }
+};
+
+const pakoSerde = {
+    serialize: (state) => {
+        const data = new TextEncoder().encode(state);
+        const compressed = deflate(data, { level: 9 });
+        return fromUint8Array(compressed, true);
+    },
+    deserialize: (state) => {
+        const data = toUint8Array(state);
+        return inflate(data, { to: 'string' });
+    }
+};
+
+const serdes = {
+    base64: base64Serde,
+    pako: pakoSerde
+};
+
+export const serializeState = (state, serde = 'pako') => {
+    if (!(serde in serdes)) {
+        throw new Error(`Unknown serde type: ${serde}`);
+    }
+    const json = JSON.stringify(state);
+    const serialized = serdes[serde].serialize(json);
+    return `${serde}:${serialized}`;
+};
+
+const deserializeState = (state) => {
+    let type, serialized;
+    if (state.includes(':')) {
+        let tempType;
+        [tempType, serialized] = state.split(':');
+        if (tempType in serdes) {
+            type = tempType;
+        } else {
+            throw new Error(`Unknown serde type: ${tempType}`);
+        }
+    } else {
+        type = 'base64';
+        serialized = state;
+    }
+    const json = serdes[type].deserialize(serialized);
+    return JSON.parse(json);
+};
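The `pakoSerde` above is what builds the `pako:`-prefixed URL fragment used for mermaid.live edit links: JSON, deflate, then URL-safe base64. The same pipeline can be reproduced with the standard library; this is a sketch under the assumption that pako's default `deflate` emits a zlib-wrapped stream (which `zlib.compress` matches). One difference: js-base64's `fromUint8Array(..., true)` drops base64 padding, while Python keeps the `=` padding.

```python
import base64
import json
import zlib

def serialize_state(state: dict) -> str:
    """Mirror pakoSerde.serialize: JSON -> zlib deflate -> base64url,
    with the serde name prefixed, as in `${serde}:${serialized}`."""
    data = json.dumps(state).encode("utf-8")
    compressed = zlib.compress(data, level=9)
    return "pako:" + base64.urlsafe_b64encode(compressed).decode("ascii")

def deserialize_state(serialized: str) -> dict:
    """Reverse the pipeline; a state without a prefix is treated as
    plain base64, matching deserializeState's fallback branch."""
    if ":" in serialized:
        kind, payload = serialized.split(":", 1)
    else:
        kind, payload = "base64", serialized
    raw = base64.urlsafe_b64decode(payload)
    if kind == "pako":
        raw = zlib.decompress(raw)
    elif kind != "base64":
        raise ValueError(f"Unknown serde type: {kind}")
    return json.loads(raw.decode("utf-8"))
```

A server-side helper like this could pre-compute `https://mermaid.live/edit#` links for diagrams without running any JavaScript.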
themes/mermaid_loader.js (new file, 189 lines)
@@ -0,0 +1,189 @@
+const uml = async className => {
+
+    // Custom element to encapsulate Mermaid content.
+    class MermaidDiv extends HTMLElement {
+
+        /**
+         * Creates a special Mermaid div shadow DOM.
+         * Works around issues of shared IDs.
+         * @return {void}
+         */
+        constructor() {
+            super()
+
+            // Create the Shadow DOM and attach style
+            const shadow = this.attachShadow({ mode: "open" })
+            const style = document.createElement("style")
+            style.textContent = `
+            :host {
+                display: block;
+                line-height: initial;
+                font-size: 16px;
+            }
+            div.diagram {
+                margin: 0;
+                overflow: visible;
+            }`
+            shadow.appendChild(style)
+        }
+    }
+
+    if (typeof customElements.get("diagram-div") === "undefined") {
+        customElements.define("diagram-div", MermaidDiv)
+    }
+
+    const getFromCode = parent => {
+        // Handles <pre><code> text extraction.
+        let text = ""
+        for (let j = 0; j < parent.childNodes.length; j++) {
+            const subEl = parent.childNodes[j]
+            if (subEl.tagName.toLowerCase() === "code") {
+                for (let k = 0; k < subEl.childNodes.length; k++) {
+                    const child = subEl.childNodes[k]
+                    const whitespace = /^\s*$/
+                    if (child.nodeName === "#text" && !(whitespace.test(child.nodeValue))) {
+                        text = child.nodeValue
+                        break
+                    }
+                }
+            }
+        }
+        return text
+    }
+
+    function createOrUpdateHyperlink(parentElement, linkText, linkHref) {
+        // Search for an existing anchor element within the parentElement
+        let existingAnchor = parentElement.querySelector("a");
+
+        // Check if an anchor element already exists
+        if (existingAnchor) {
+            // Update the hyperlink reference if it's different from the current one
+            if (existingAnchor.href !== linkHref) {
+                existingAnchor.href = linkHref;
+            }
+            // Update the target attribute to ensure it opens in a new tab
+            existingAnchor.target = '_blank';
+
+            // If the text must be dynamic, uncomment and use the following line:
+            // existingAnchor.textContent = linkText;
+        } else {
+            // If no anchor exists, create one and append it to the parentElement
+            let anchorElement = document.createElement("a");
+            anchorElement.href = linkHref; // Set hyperlink reference
+            anchorElement.textContent = linkText; // Set text displayed
+            anchorElement.target = '_blank'; // Ensure it opens in a new tab
+            parentElement.appendChild(anchorElement); // Append the new anchor element to the parent
+        }
+    }
+
+    function removeLastLine(str) {
+        // split the string into an array of lines
+        var lines = str.split('\n');
+        lines.pop();
+        // join the remaining lines back into a single string
+        var result = lines.join('\n');
+        return result;
+    }
+
+    // Provide a default config in case one is not specified
+    const defaultConfig = {
+        startOnLoad: false,
+        theme: "default",
+        flowchart: {
+            htmlLabels: false
+        },
+        er: {
+            useMaxWidth: false
+        },
+        sequence: {
+            useMaxWidth: false,
+            noteFontWeight: "14px",
+            actorFontSize: "14px",
+            messageFontSize: "16px"
+        }
+    }
+    if (document.body.classList.contains("dark")) {
+        defaultConfig.theme = "dark"
+    }
+
+    const Module = await import('./file=themes/mermaid_editor.js');
+
+    function do_render(block, code, codeContent, cnt) {
+        var rendered_content = mermaid.render(`_diagram_${cnt}`, code);
+        ////////// keep track of which code blocks have been rendered //////////
+        let codeFinishRenderElement = block.querySelector("code_finish_render"); // reuse the element if the block already has one
+        if (codeFinishRenderElement) {
+            codeFinishRenderElement.style.display = "none";
+        } else {
+            // otherwise create a new code_finish_render element to hold the rendered source
+            let codeFinishRenderElementNew = document.createElement("code_finish_render");
+            codeFinishRenderElementNew.style.display = "none";
+            codeFinishRenderElementNew.textContent = "";
+            block.appendChild(codeFinishRenderElementNew); // attach it to the block
+            codeFinishRenderElement = codeFinishRenderElementNew;
+        }
+
+        ////////// create a container for the rendered diagram //////////
+        let mermaidRender = block.querySelector(".mermaid_render"); // reuse an existing <div class='mermaid_render'>
+        if (!mermaidRender) {
+            mermaidRender = document.createElement("div"); // none yet: create it
+            mermaidRender.classList.add("mermaid_render");
+            block.appendChild(mermaidRender); // attach it to the block
+        }
+        mermaidRender.innerHTML = rendered_content
+        codeFinishRenderElement.textContent = code // mark this source as rendered
+
+        ////////// add a "click here to edit the diagram" link //////////
+        let pako_encode = Module.serializeState({
+            "code": codeContent,
+            "mermaid": "{\n  \"theme\": \"default\"\n}",
+            "autoSync": true,
+            "updateDiagram": false
+        });
+        createOrUpdateHyperlink(block, "Click here to edit the diagram", "https://mermaid.live/edit#" + pako_encode)
+    }
+
+    // Load up the config
+    mermaid.mermaidAPI.globalReset() // global reset
+    const config = (typeof mermaidConfig === "undefined") ? defaultConfig : mermaidConfig
+    mermaid.initialize(config)
+    // Find all of our Mermaid sources and render them.
+    const blocks = document.querySelectorAll(`pre.mermaid`);
+
+    for (let i = 0; i < blocks.length; i++) {
+        var block = blocks[i]
+        ////////// skip rendering when the code has not changed //////////
+        var code = getFromCode(block);
+        let codeContent = block.querySelector("code").textContent; // text content of the code element
+        let codePendingRenderElement = block.querySelector("code_pending_render"); // reuse the marker element if it exists
+        if (codePendingRenderElement) {
+            codePendingRenderElement.style.display = "none";
+            if (codePendingRenderElement.textContent !== codeContent) {
+                codePendingRenderElement.textContent = codeContent; // source changed: update the pending marker
+            }
+            else {
+                continue; // unchanged: nothing to do
+            }
+        } else { // no marker yet: create one holding the current source
+            let codePendingRenderElementNew = document.createElement("code_pending_render");
+            codePendingRenderElementNew.style.display = "none";
+            codePendingRenderElementNew.textContent = codeContent;
+            block.appendChild(codePendingRenderElementNew); // attach it to the block
+            codePendingRenderElement = codePendingRenderElementNew;
+        }
+
+        ////////// the actual rendering starts here //////////
+        try {
+            do_render(block, code, codeContent, i);
+            // console.log("rendered", codeContent);
+        } catch (err) {
+            try {
+                var lines = code.split('\n'); if (lines.length < 2) { continue; }
+                do_render(block, removeLastLine(code), codeContent, i);
+                // console.log("rendered", codeContent);
+            } catch (err) {
+                console.log("cannot render the following code", code, removeLastLine(code), err);
+            }
+        }
+    }
+}
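The loader above avoids re-rendering diagrams by stashing each block's last-seen source in a hidden `code_pending_render` marker and `continue`-ing when nothing changed. That skip-if-unchanged cache can be sketched independently of the DOM; `make_renderer` and its dict-based store below are illustrative names, not part of the project:

```python
def make_renderer(render):
    """Skip-if-unchanged render cache (sketch).

    Mirrors the code_pending_render marker: call the (possibly
    expensive) render function only when a block's source text has
    actually changed since the last pass over the page.
    """
    last_seen = {}  # block id -> last rendered source

    def maybe_render(block_id, source):
        if last_seen.get(block_id) == source:
            return False  # unchanged: skip, like the loader's `continue`
        last_seen[block_id] = source
        render(block_id, source)
        return True

    return maybe_render
```

This matters here because `chatbotContentChanged` fires on every streamed token; without the cache, every completed diagram on the page would be re-rendered on each update.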
@@ -111,6 +111,7 @@ js_code_for_toggle_darkmode = """() => {
     } else {
         document.querySelector('body').classList.add('dark');
     }
+    document.querySelectorAll('code_pending_render').forEach(code => {code.remove();})
}"""

toolbox.py (901 lines changed)
File diff suppressed because it is too large

version (4 lines changed)
@@ -1,5 +1,5 @@
 {
-    "version": 3.65,
+    "version": 3.70,
     "show_feature": true,
-    "new_feature": "Support Gemini-pro <-> Drag files straight into the upload area <-> Paste images into the input area <-> Fix several subtle memory bugs <-> Fix multi-user conflicts <-> Integrate Deepseek Coder <-> AutoGen multi-agent plugin (beta)"
+    "new_feature": "Support the Mermaid diagram library (let the LLM draw mind maps) <-> Support Gemini-pro <-> Drag files straight into the upload area <-> Paste images into the input area <-> Fix several subtle memory bugs <-> Fix multi-user conflicts <-> Integrate Deepseek Coder <-> AutoGen multi-agent plugin (beta)"
 }