Compare commits
29 Commits
version3.7 ... hongyi-zha
| Author | SHA1 | Date |
|---|---|---|
| | d5be61b0b8 | |
| | f889ef7625 | |
| | a93bf4410d | |
| | 1c0764753a | |
| | c847209ac9 | |
| | 4f9d40c14f | |
| | 91926d24b7 | |
| | ef311c4859 | |
| | 82795d3817 | |
| | 49e28a5a00 | |
| | 01def2e329 | |
| | 2291be2b28 | |
| | c89ec7969f | |
| | 1506c19834 | |
| | a6fdc493b7 | |
| | 113067c6ab | |
| | 7b6828ab07 | |
| | d818c38dfe | |
| | 08b4e9796e | |
| | b55d573819 | |
| | 06b0e800a2 | |
| | 7bbaf05961 | |
| | 3b83279855 | |
| | 37164a826e | |
| | dd2a97e7a9 | |
| | e579006c4a | |
| | 031f19b6dd | |
| | 142b516749 | |
| | f2e73aa580 | |
README.md (57)
@@ -55,6 +55,11 @@ Read this in [English](docs/README.English.md) | [日本語](docs/README.Japanes

功能(⭐= 近期新增功能) | 描述
--- | ---
⭐[接入新模型](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B) | 百度[千帆](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu)与文心一言, 通义千问[Qwen](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary),上海AI-Lab[书生](https://github.com/InternLM/InternLM),讯飞[星火](https://xinghuo.xfyun.cn/),[LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf),[智谱GLM4](https://open.bigmodel.cn/),DALLE3, [DeepseekCoder](https://coder.deepseek.com/)
⭐支持mermaid图像渲染 | 支持让GPT生成[流程图](https://www.bilibili.com/video/BV18c41147H9/)、状态转移图、甘特图、饼状图、GitGraph等等(3.7版本)
⭐Arxiv论文精细翻译 ([Docker](https://github.com/binary-husky/gpt_academic/pkgs/container/gpt_academic_with_latex)) | [插件] 一键[以超高质量翻译arxiv论文](https://www.bilibili.com/video/BV1dz4y1v77A/),目前最好的论文翻译工具
⭐[实时语音对话输入](https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md) | [插件] 异步[监听音频](https://www.bilibili.com/video/BV1AV4y187Uy/),自动断句,自动寻找回答时机
⭐AutoGen多智能体插件 | [插件] 借助微软AutoGen,探索多Agent的智能涌现可能!
⭐虚空终端插件 | [插件] 能够使用自然语言直接调度本项目其他插件
润色、翻译、代码解释 | 一键润色、翻译、查找论文语法错误、解释代码
[自定义快捷键](https://www.bilibili.com/video/BV14s4y1E7jN) | 支持自定义快捷键
模块化设计 | 支持自定义强大的[插件](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions),插件支持[热更新](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
@@ -63,21 +68,16 @@ Read this in [English](docs/README.English.md) | [日本語](docs/README.Japanes

Latex全文[翻译](https://www.bilibili.com/video/BV1nk4y1Y7Js/)、[润色](https://www.bilibili.com/video/BV1FT411H7c5/) | [插件] 一键翻译或润色latex论文
批量注释生成 | [插件] 一键批量生成函数注释
Markdown[中英互译](https://www.bilibili.com/video/BV1yo4y157jV/) | [插件] 看到上面5种语言的[README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md)了吗?就是出自他的手笔
⭐支持mermaid图像渲染 | 支持让GPT生成[流程图](https://www.bilibili.com/video/BV18c41147H9/)、状态转移图、甘特图、饼状图、GitGraph等等(3.7版本)
[PDF论文全文翻译功能](https://www.bilibili.com/video/BV1KT411x7Wn) | [插件] PDF论文提取题目&摘要+翻译全文(多线程)
[Arxiv小助手](https://www.bilibili.com/video/BV1LM4y1279X) | [插件] 输入arxiv文章url即可一键翻译摘要+下载PDF
Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼写纠错+输出对照PDF
[谷歌学术统合小助手](https://www.bilibili.com/video/BV19L411U7ia) | [插件] 给定任意谷歌学术搜索页面URL,让gpt帮你[写relatedworks](https://www.bilibili.com/video/BV1GP411U7Az/)
互联网信息聚合+GPT | [插件] 一键[让GPT从互联网获取信息](https://www.bilibili.com/video/BV1om4y127ck)回答问题,让信息永不过时
⭐Arxiv论文精细翻译 ([Docker](https://github.com/binary-husky/gpt_academic/pkgs/container/gpt_academic_with_latex)) | [插件] 一键[以超高质量翻译arxiv论文](https://www.bilibili.com/video/BV1dz4y1v77A/),目前最好的论文翻译工具
⭐[实时语音对话输入](https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md) | [插件] 异步[监听音频](https://www.bilibili.com/video/BV1AV4y187Uy/),自动断句,自动寻找回答时机
公式/图片/表格显示 | 可以同时显示公式的[tex形式和渲染形式](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png),支持公式、代码高亮
⭐AutoGen多智能体插件 | [插件] 借助微软AutoGen,探索多Agent的智能涌现可能!
启动暗色[主题](https://github.com/binary-husky/gpt_academic/issues/173) | 在浏览器url后面添加```/?__theme=dark```可以切换dark主题
[多LLM模型](https://www.bilibili.com/video/BV1wT411p7yf)支持 | 同时被GPT3.5、GPT4、[清华ChatGLM2](https://github.com/THUDM/ChatGLM2-6B)、[复旦MOSS](https://github.com/OpenLMLab/MOSS)伺候的感觉一定会很不错吧?
更多LLM模型接入,支持[huggingface部署](https://huggingface.co/spaces/qingxu98/gpt-academic) | 加入Newbing接口(新必应),引入清华[Jittorllms](https://github.com/Jittor/JittorLLMs)支持[LLaMA](https://github.com/facebookresearch/llama)和[盘古α](https://openi.org.cn/pangu/)
⭐[void-terminal](https://github.com/binary-husky/void-terminal) pip包 | 脱离GUI,在Python中直接调用本项目的所有函数插件(开发中)
⭐虚空终端插件 | [插件] 能够使用自然语言直接调度本项目其他插件
更多新功能展示 (图像生成等) …… | 见本文档结尾处 ……
</div>
@@ -116,6 +116,25 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼

<br><br>

# Installation

```mermaid
flowchart TD
    A{"安装方法"} --> W1("I. 🔑直接运行 (Windows, Linux or MacOS)")
    W1 --> W11["1. Python pip包管理依赖"]
    W1 --> W12["2. Anaconda包管理依赖(推荐⭐)"]

    A --> W2["II. 🐳使用Docker (Windows, Linux or MacOS)"]

    W2 --> k1["1. 部署项目全部能力的大镜像(推荐⭐)"]
    W2 --> k2["2. 仅在线模型(GPT, GLM4等)镜像"]
    W2 --> k3["3. 在线模型 + Latex的大镜像"]

    A --> W4["IV. 🚀其他部署方法"]
    W4 --> C1["1. Windows/MacOS 一键安装运行脚本(推荐⭐)"]
    W4 --> C2["2. Huggingface, Sealos远程部署"]
    W4 --> C4["3. ... 其他 ..."]
```

### 安装方法I:直接运行 (Windows, Linux or MacOS)

1. 下载项目
@@ -129,7 +148,7 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼

在`config.py`中,配置API KEY等变量。[特殊网络环境设置方法](https://github.com/binary-husky/gpt_academic/issues/1)、[Wiki-项目配置说明](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)。

「 程序会优先检查是否存在名为`config_private.py`的私密配置文件,并用其中的配置覆盖`config.py`的同名配置。如您能理解以上读取逻辑,我们强烈建议您在`config.py`同路径下创建一个名为`config_private.py`的新配置文件,并使用`config_private.py`配置项目,以确保更新或其他用户无法轻易查看您的私有配置 」。
「 程序会优先检查是否存在名为`config_private.py`的私密配置文件,并用其中的配置覆盖`config.py`的同名配置。如您能理解以上读取逻辑,我们强烈建议您在`config.py`同路径下创建一个名为`config_private.py`的新配置文件,并使用`config_private.py`配置项目,从而确保自动更新时不会丢失配置 」。

「 支持通过`环境变量`配置项目,环境变量的书写格式参考`docker-compose.yml`文件或者我们的[Wiki页面](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)。配置读取优先级: `环境变量` > `config_private.py` > `config.py` 」。
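The override priority quoted above (`环境变量` > `config_private.py` > `config.py`) can be sketched as follows. `resolve_conf` and the sample keys are hypothetical stand-ins for illustration, not the project's actual loader:

```python
import os

# Hypothetical helper illustrating the documented priority:
# environment variable > config_private.py > config.py.
def resolve_conf(key, config, config_private):
    if key in os.environ:          # highest priority: environment variable
        return os.environ[key]
    if key in config_private:      # next: private override file
        return config_private[key]
    return config[key]             # fallback: shipped defaults

config = {"API_KEY": "default-key", "THEME": "Default"}    # like config.py
config_private = {"API_KEY": "my-secret-key"}              # like config_private.py

os.environ.pop("API_KEY", None)  # make sure no stray env var interferes
os.environ["THEME"] = "Chuanhu-Small-and-Beautiful"
assert resolve_conf("API_KEY", config, config_private) == "my-secret-key"
assert resolve_conf("THEME", config, config_private) == "Chuanhu-Small-and-Beautiful"
```

Keeping secrets in `config_private.py` (or the environment) also means a `git pull` that rewrites `config.py` cannot clobber them, which is exactly the rationale the new wording gives.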
@@ -358,6 +377,32 @@ GPT Academic开发者QQ群:`610599535`

- 某些浏览器翻译插件干扰此软件前端的运行
- 官方Gradio目前有很多兼容性问题,请**务必使用`requirement.txt`安装Gradio**

```mermaid
timeline LR
    title GPT-Academic项目发展历程
    section 2.x
        1.0~2.2: 基础功能: 引入模块化函数插件: 可折叠式布局: 函数插件支持热重载
        2.3~2.5: 增强多线程交互性: 新增PDF全文翻译功能: 新增输入区切换位置的功能: 自更新
        2.6: 重构了插件结构: 提高了交互性: 加入更多插件
    section 3.x
        3.0~3.1: 对chatglm支持: 对其他小型llm支持: 支持同时问询多个gpt模型: 支持多个apikey负载均衡
        3.2~3.3: 函数插件支持更多参数接口: 保存对话功能: 解读任意语言代码: 同时询问任意的LLM组合: 互联网信息综合功能
        3.4: 加入arxiv论文翻译: 加入latex论文批改功能
        3.44: 正式支持Azure: 优化界面易用性
        3.46: 自定义ChatGLM2微调模型: 实时语音对话
        3.49: 支持阿里达摩院通义千问: 上海AI-Lab书生: 讯飞星火: 支持百度千帆平台 & 文心一言
        3.50: 虚空终端: 支持插件分类: 改进UI: 设计新主题
        3.53: 动态选择不同界面主题: 提高稳定性: 解决多用户冲突问题
        3.55: 动态代码解释器: 重构前端界面: 引入悬浮窗口与菜单栏
        3.56: 动态追加基础功能按钮: 新汇报PDF汇总页面
        3.57: GLM3, 星火v3: 支持文心一言v4: 修复本地模型的并发BUG
        3.60: 引入AutoGen
        3.70: 引入Mermaid绘图: 实现GPT画脑图等功能
        3.80(TODO): 优化AutoGen插件主题: 设计衍生插件
```

### III:主题
可以通过修改`THEME`选项(config.py)变更主题
1. `Chuanhu-Small-and-Beautiful` [网址](https://github.com/GaiZhenbiao/ChuanhuChatGPT/)
@@ -3,18 +3,27 @@

# 'stop' 颜色对应 theme.py 中的 color_er
import importlib
from toolbox import clear_line_break
from toolbox import apply_gpt_academic_string_mask_langbased
from toolbox import build_gpt_academic_masked_string_langbased
from textwrap import dedent

def get_core_functions():
    return {

        "英语学术润色": {
            # [1*] 前缀,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等
            "Prefix": r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, "
                      r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. "
                      r"Firstly, you should provide the polished paragraph. "
                      r"Secondly, you should list all your modification and explain the reasons to do so in markdown table." + "\n\n",
            # [2*] 后缀,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来
        "学术语料润色": {
            # [1*] 前缀字符串,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等。
            # 这里填一个提示词字符串就行了,这里为了区分中英文情景搞复杂了一点
            "Prefix": build_gpt_academic_masked_string_langbased(
                text_show_english=
                r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, "
                r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. "
                r"Firstly, you should provide the polished paragraph. "
                r"Secondly, you should list all your modification and explain the reasons to do so in markdown table.",
                text_show_chinese=
                r"作为一名中文学术论文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性,"
                r"同时分解长句,减少重复,并提供改进建议。请先提供文本的更正版本,然后在markdown表格中列出修改的内容,并给出修改的理由:"
            ) + "\n\n",
            # [2*] 后缀字符串,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来
            "Suffix": r"",
            # [3] 按钮颜色 (可选参数,默认 secondary)
            "Color": r"secondary",
@@ -32,8 +41,10 @@ def get_core_functions():

            "Prefix": r"",
            # 后缀,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来
            "Suffix":
                # dedent() 函数用于去除多行字符串的缩进
                dedent("\n"+r'''
==============================

使用mermaid flowchart对以上文本进行总结,概括上述段落的内容以及内在逻辑关系,例如:

以下是对以上文本的总结,以mermaid flowchart的形式展示:
@@ -83,14 +94,22 @@ def get_core_functions():

        "学术英中互译": {
            "Prefix": r"I want you to act as a scientific English-Chinese translator, " +
                      r"I will provide you with some paragraphs in one language " +
                      r"and your task is to accurately and academically translate the paragraphs only into the other language. " +
                      r"Do not repeat the original provided paragraphs after translation. " +
                      r"You should use artificial intelligence tools, " +
                      r"such as natural language processing, and rhetorical knowledge " +
                      r"and experience about effective writing techniques to reply. " +
                      r"I'll give you my paragraphs as follows, tell me what language it is written in, and then translate:" + "\n\n",
            "Prefix": build_gpt_academic_masked_string_langbased(
                text_show_chinese=
                r"I want you to act as a scientific English-Chinese translator, "
                r"I will provide you with some paragraphs in one language "
                r"and your task is to accurately and academically translate the paragraphs only into the other language. "
                r"Do not repeat the original provided paragraphs after translation. "
                r"You should use artificial intelligence tools, "
                r"such as natural language processing, and rhetorical knowledge "
                r"and experience about effective writing techniques to reply. "
                r"I'll give you my paragraphs as follows, tell me what language it is written in, and then translate:",
                text_show_english=
                r"你是经验丰富的翻译,请把以下学术文章段落翻译成中文,"
                r"并同时充分考虑中文的语法、清晰、简洁和整体可读性,"
                r"必要时,你可以修改整个句子的顺序以确保翻译后的段落符合中文的语言习惯。"
                r"你需要翻译的文本如下:"
            ) + "\n\n",
            "Suffix": r"",
        },
@@ -140,7 +159,11 @@ def handle_core_functionality(additional_fn, inputs, history, chatbot):

    if "PreProcess" in core_functional[additional_fn]:
        if core_functional[additional_fn]["PreProcess"] is not None:
            inputs = core_functional[additional_fn]["PreProcess"](inputs)  # 获取预处理函数(如果有的话)
    inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
    # 为字符串加上上面定义的前缀和后缀。
    inputs = apply_gpt_academic_string_mask_langbased(
        string = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"],
        lang_reference = inputs,
    )
    if core_functional[additional_fn].get("AutoClearHistory", False):
        history = []
    return inputs, history
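The `handle_core_functionality` change wraps the user input with the entry's `Prefix` and `Suffix` after an optional `PreProcess` step. A minimal sketch of that data flow, with a hypothetical `handle` helper and the real language-based masking replaced by a plain passthrough:

```python
# Hypothetical simplified version of the wrapping step; the real code also
# runs apply_gpt_academic_string_mask_langbased on the result to keep only
# the language variant matching the user's input.
def handle(additional_fn, inputs, core_functional):
    spec = core_functional[additional_fn]
    if spec.get("PreProcess") is not None:
        inputs = spec["PreProcess"](inputs)          # optional preprocessing
    return spec["Prefix"] + inputs + spec["Suffix"]  # wrap with prefix/suffix

core_functional = {"英语学术润色": {"Prefix": "Polish: ", "Suffix": "", "PreProcess": None}}
assert handle("英语学术润色", "hello world", core_functional) == "Polish: hello world"
```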
@@ -32,10 +32,9 @@ def get_crazy_functions():

    from crazy_functions.理解PDF文档内容 import 理解PDF文档内容标准文件输入
    from crazy_functions.Latex全文润色 import Latex中文润色
    from crazy_functions.Latex全文润色 import Latex英文纠错
    from crazy_functions.Latex全文翻译 import Latex中译英
    from crazy_functions.Latex全文翻译 import Latex英译中
    from crazy_functions.批量Markdown翻译 import Markdown中译英
    from crazy_functions.虚空终端 import 虚空终端
    from crazy_functions.生成多种Mermaid图表 import 生成多种Mermaid图表

    function_plugins = {
        "虚空终端": {

@@ -71,6 +70,15 @@ def get_crazy_functions():

            "Info": "清除所有缓存文件,谨慎操作 | 不需要输入参数",
            "Function": HotReload(清除缓存),
        },
        "生成多种Mermaid图表(从当前对话或文件(.pdf/.md)中生产图表)": {
            "Group": "对话",
            "Color": "stop",
            "AsButton": False,
            "Info" : "基于当前对话或PDF生成多种Mermaid图表,图表类型由模型判断",
            "Function": HotReload(生成多种Mermaid图表),
            "AdvancedArgs": True,
            "ArgsReminder": "请输入图类型对应的数字,不输入则为模型自行判断:1-流程图,2-序列图,3-类图,4-饼图,5-甘特图,6-状态图,7-实体关系图,8-象限提示图,9-思维导图",
        },
        "批量总结Word文档": {
            "Group": "学术",
            "Color": "stop",
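The registry entries above all follow one pattern: a display name mapped to UI metadata plus a hot-reloadable callable. A hedged sketch of that pattern, with stand-ins for `HotReload` and the plugin body (both hypothetical here):

```python
# Stand-in for the project's hot-reload wrapper: a plain passthrough.
def HotReload(fn):
    return fn

# Stand-in plugin body (the real one generates Mermaid diagrams via the LLM).
def 生成多种Mermaid图表(*args, **kwargs):
    return "mermaid"

# One registry entry in the same shape as the diff above.
function_plugins = {
    "生成多种Mermaid图表": {
        "Group": "对话",
        "Color": "stop",
        "AsButton": False,      # shown in the dropdown, not as a button
        "AdvancedArgs": True,   # plugin accepts an extra argument string
        "Function": HotReload(生成多种Mermaid图表),
    },
}
assert function_plugins["生成多种Mermaid图表"]["Function"]() == "mermaid"
```

Because the callable sits behind the wrapper, the project can swap in a fresh implementation without restarting the UI, which is what the real `HotReload` provides.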
@@ -237,13 +245,7 @@ def get_crazy_functions():

            "Info": "对英文Latex项目全文进行润色处理 | 输入参数为路径或上传压缩包",
            "Function": HotReload(Latex英文润色),
        },
        "英文Latex项目全文纠错(输入路径或上传压缩包)": {
            "Group": "学术",
            "Color": "stop",
            "AsButton": False,  # 加入下拉菜单中
            "Info": "对英文Latex项目全文进行纠错处理 | 输入参数为路径或上传压缩包",
            "Function": HotReload(Latex英文纠错),
        },

        "中文Latex项目全文润色(输入路径或上传压缩包)": {
            "Group": "学术",
            "Color": "stop",

@@ -252,6 +254,14 @@ def get_crazy_functions():

            "Function": HotReload(Latex中文润色),
        },
        # 已经被新插件取代
        # "英文Latex项目全文纠错(输入路径或上传压缩包)": {
        #     "Group": "学术",
        #     "Color": "stop",
        #     "AsButton": False,  # 加入下拉菜单中
        #     "Info": "对英文Latex项目全文进行纠错处理 | 输入参数为路径或上传压缩包",
        #     "Function": HotReload(Latex英文纠错),
        # },
        # 已经被新插件取代
        # "Latex项目全文中译英(输入路径或上传压缩包)": {
        #     "Group": "学术",
        #     "Color": "stop",
@@ -523,6 +533,7 @@ def get_crazy_functions():

    try:
        from crazy_functions.Latex输出PDF结果 import Latex英文纠错加PDF对比
        from crazy_functions.Latex输出PDF结果 import Latex翻译中文并重新编译PDF

        function_plugins.update(
            {

@@ -533,13 +544,7 @@ def get_crazy_functions():

                    "AdvancedArgs": True,
                    "ArgsReminder": "如果有必要, 请在此处追加更细致的矫错指令(使用英文)。",
                    "Function": HotReload(Latex英文纠错加PDF对比),
                }
            }
        )
        from crazy_functions.Latex输出PDF结果 import Latex翻译中文并重新编译PDF

        function_plugins.update(
            {
                },
                "Arxiv论文精细翻译(输入arxivID)[需Latex]": {
                    "Group": "学术",
                    "Color": "stop",

@@ -550,11 +555,7 @@ def get_crazy_functions():

                    + 'If the term "agent" is used in this section, it should be translated to "智能体". ',
                    "Info": "Arixv论文精细翻译 | 输入参数arxiv论文的ID,比如1812.10695",
                    "Function": HotReload(Latex翻译中文并重新编译PDF),
                }
            }
        )
        function_plugins.update(
            {
                },
                "本地Latex论文精细翻译(上传Latex项目)[需Latex]": {
                    "Group": "学术",
                    "Color": "stop",
||||
@@ -137,7 +137,7 @@ def get_recent_file_prompt_support(chatbot):
|
||||
return path
|
||||
|
||||
@CatchException
|
||||
def 虚空终端CodeInterpreter(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
def 虚空终端CodeInterpreter(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
|
||||
"""
|
||||
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
|
||||
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
|
||||
@@ -145,7 +145,7 @@ def 虚空终端CodeInterpreter(txt, llm_kwargs, plugin_kwargs, chatbot, history
|
||||
chatbot 聊天显示框的句柄,用于显示给用户
|
||||
history 聊天历史,前情提要
|
||||
system_prompt 给gpt的静默提醒
|
||||
web_port 当前软件运行的端口号
|
||||
user_request 当前用户的请求信息(IP地址等)
|
||||
"""
|
||||
raise NotImplementedError
|
||||
|
||||
|
||||
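Most of the remaining hunks in this diff make the same mechanical change: the final plugin parameter `web_port` (the running port) becomes `user_request` (per-request metadata such as the client IP). A hypothetical minimal plugin showing the new signature:

```python
# Hypothetical minimal plugin in the post-diff signature; the last parameter
# is now user_request (request metadata such as the client IP), not web_port.
def 示例插件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    chatbot.append(("这是什么功能?", f"echo: {txt}"))  # show something to the user
    return chatbot

chatbot = []
示例插件("hi", {}, {}, chatbot, [], "", {"client_ip": "127.0.0.1"})
assert chatbot[0][1] == "echo: hi"
```

Since `@CatchException` and the caller pass parameters positionally, every plugin entry point has to be migrated in lockstep, which is why the same one-line change repeats across so many files.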
@@ -135,11 +135,11 @@ def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch

@CatchException
def Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
def Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",
        "对整个Latex项目进行润色。函数插件贡献者: Binary-Husky。(注意,此插件不调用Latex,如果有Latex环境,请使用“Latex英文纠错+高亮”插件)"])
        "对整个Latex项目进行润色。函数插件贡献者: Binary-Husky。(注意,此插件不调用Latex,如果有Latex环境,请使用「Latex英文纠错+高亮修正位置(需Latex)插件」"])
    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面

    # 尝试导入依赖,如果缺少依赖,则给出安装建议

@@ -173,7 +173,7 @@ def Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p

@CatchException
def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",

@@ -209,7 +209,7 @@ def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p

@CatchException
def Latex英文纠错(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
def Latex英文纠错(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",

@@ -106,7 +106,7 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch

@CatchException
def Latex英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
def Latex英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",

@@ -143,7 +143,7 @@ def Latex英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prom

@CatchException
def Latex中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
def Latex中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",
@@ -1,7 +1,7 @@

from toolbox import update_ui, trimmed_format_exc, get_conf, get_log_folder, promote_file_to_downloadzone
from toolbox import CatchException, report_exception, update_ui_lastest_msg, zip_result, gen_time_str
from functools import partial
import glob, os, requests, time
import glob, os, requests, time, tarfile
pj = os.path.join
ARXIV_CACHE_DIR = os.path.expanduser(f"~/arxiv_cache/")

@@ -104,7 +104,7 @@ def arxiv_download(chatbot, history, txt, allow_cache=True):

    if ('.' in txt) and ('/' not in txt) and is_float(txt[:10]):  # is arxiv ID
        txt = 'https://arxiv.org/abs/' + txt[:10]
    if not txt.startswith('https://arxiv.org'):
        return txt, None
        return txt, None  # 是本地文件,跳过下载

    # <-------------- inspect format ------------->
    chatbot.append([f"检测到arxiv文档连接", '尝试下载 ...'])

@@ -146,7 +146,7 @@ def arxiv_download(chatbot, history, txt, allow_cache=True):

@CatchException
def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # <-------------- information about this plugin ------------->
    chatbot.append([ "函数插件功能?",
        "对整个Latex项目进行纠错, 用latex编译为PDF对修正处做高亮。函数插件贡献者: Binary-Husky。注意事项: 目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。仅在Windows系统进行了测试,其他操作系统表现未知。"])

@@ -221,7 +221,7 @@ def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, histo

    # =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 插件主程序2 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

@CatchException
def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # <-------------- information about this plugin ------------->
    chatbot.append([
        "函数插件功能?",

@@ -250,7 +250,14 @@ def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot,

    # <-------------- clear history and read input ------------->
    history = []
    txt, arxiv_id = yield from arxiv_download(chatbot, history, txt, allow_cache)
    try:
        txt, arxiv_id = yield from arxiv_download(chatbot, history, txt, allow_cache)
    except tarfile.ReadError as e:
        yield from update_ui_lastest_msg(
            "无法自动下载该论文的Latex源码,请前往arxiv打开此论文下载页面,点other Formats,然后download source手动下载latex源码包。接下来调用本地Latex翻译插件即可。",
            chatbot=chatbot, history=history)
        return

    if txt.endswith('.pdf'):
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"发现已经存在翻译好的PDF文档")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
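The new `try/except` above falls back to a manual-download hint when the fetched arxiv source is not a readable tar archive. A self-contained sketch of that failure mode, with a hypothetical `extract_source` helper standing in for the download path inside `arxiv_download`:

```python
import io
import tarfile

# Hypothetical helper: try to open the downloaded bytes as a tar archive
# (mode "r:*" auto-detects compression); a corrupt or non-tar payload
# raises tarfile.ReadError, which the diff now catches.
def extract_source(raw_bytes):
    try:
        with tarfile.open(fileobj=io.BytesIO(raw_bytes), mode="r:*") as tar:
            return tar.getnames()
    except tarfile.ReadError:
        return None  # caller then asks the user to download the source manually

assert extract_source(b"this is not a tar archive") is None
```

Catching `tarfile.ReadError` specifically (rather than a bare `except`) keeps genuine bugs visible while turning the one expected failure into a friendly UI message.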
@@ -35,7 +35,11 @@ def gpt_academic_generate_oai_reply(

class AutoGenGeneral(PluginMultiprocessManager):
    def gpt_academic_print_override(self, user_proxy, message, sender):
        # ⭐⭐ run in subprocess
        self.child_conn.send(PipeCom("show", sender.name + "\n\n---\n\n" + message["content"]))
        try:
            print_msg = sender.name + "\n\n---\n\n" + message["content"]
        except:
            print_msg = sender.name + "\n\n---\n\n" + message
        self.child_conn.send(PipeCom("show", print_msg))

    def gpt_academic_get_human_input(self, user_proxy, message):
        # ⭐⭐ run in subprocess

@@ -62,33 +66,33 @@ class AutoGenGeneral(PluginMultiprocessManager):

    def exe_autogen(self, input):
        # ⭐⭐ run in subprocess
        input = input.content
        with ProxyNetworkActivate("AutoGen"):
            code_execution_config = {"work_dir": self.autogen_work_dir, "use_docker": self.use_docker}
            agents = self.define_agents()
            user_proxy = None
            assistant = None
            for agent_kwargs in agents:
                agent_cls = agent_kwargs.pop('cls')
                kwargs = {
                    'llm_config':self.llm_kwargs,
                    'code_execution_config':code_execution_config
                }
                kwargs.update(agent_kwargs)
                agent_handle = agent_cls(**kwargs)
                agent_handle._print_received_message = lambda a,b: self.gpt_academic_print_override(agent_kwargs, a, b)
                for d in agent_handle._reply_func_list:
                    if hasattr(d['reply_func'],'__name__') and d['reply_func'].__name__ == 'generate_oai_reply':
                        d['reply_func'] = gpt_academic_generate_oai_reply
                if agent_kwargs['name'] == 'user_proxy':
                    agent_handle.get_human_input = lambda a: self.gpt_academic_get_human_input(user_proxy, a)
                    user_proxy = agent_handle
                if agent_kwargs['name'] == 'assistant': assistant = agent_handle
            try:
                if user_proxy is None or assistant is None: raise Exception("用户代理或助理代理未定义")
        code_execution_config = {"work_dir": self.autogen_work_dir, "use_docker": self.use_docker}
        agents = self.define_agents()
        user_proxy = None
        assistant = None
        for agent_kwargs in agents:
            agent_cls = agent_kwargs.pop('cls')
            kwargs = {
                'llm_config':self.llm_kwargs,
                'code_execution_config':code_execution_config
            }
            kwargs.update(agent_kwargs)
            agent_handle = agent_cls(**kwargs)
            agent_handle._print_received_message = lambda a,b: self.gpt_academic_print_override(agent_kwargs, a, b)
            for d in agent_handle._reply_func_list:
                if hasattr(d['reply_func'],'__name__') and d['reply_func'].__name__ == 'generate_oai_reply':
                    d['reply_func'] = gpt_academic_generate_oai_reply
            if agent_kwargs['name'] == 'user_proxy':
                agent_handle.get_human_input = lambda a: self.gpt_academic_get_human_input(user_proxy, a)
                user_proxy = agent_handle
            if agent_kwargs['name'] == 'assistant': assistant = agent_handle
        try:
            if user_proxy is None or assistant is None: raise Exception("用户代理或助理代理未定义")
            with ProxyNetworkActivate("AutoGen"):
                user_proxy.initiate_chat(assistant, message=input)
            except Exception as e:
                tb_str = '```\n' + trimmed_format_exc() + '```'
                self.child_conn.send(PipeCom("done", "AutoGen 执行失败: \n\n" + tb_str))
        except Exception as e:
            tb_str = '```\n' + trimmed_format_exc() + '```'
            self.child_conn.send(PipeCom("done", "AutoGen 执行失败: \n\n" + tb_str))

    def subprocess_worker(self, child_conn):
        # ⭐⭐ run in subprocess
@@ -9,7 +9,7 @@ class PipeCom:

class PluginMultiprocessManager:
    def __init__(self, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    def __init__(self, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
        # ⭐ run in main process
        self.autogen_work_dir = os.path.join(get_log_folder("autogen"), gen_time_str())
        self.previous_work_dir_files = {}

@@ -18,7 +18,7 @@ class PluginMultiprocessManager:

        self.chatbot = chatbot
        self.history = history
        self.system_prompt = system_prompt
        # self.web_port = web_port
        # self.user_request = user_request
        self.alive = True
        self.use_docker = get_conf("AUTOGEN_USE_DOCKER")
        self.last_user_input = ""
@@ -32,7 +32,7 @@ def string_to_options(arguments):

    return args

@CatchException
def 微调数据集生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
def 微调数据集生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    """
    txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
    llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行

@@ -40,7 +40,7 @@ def 微调数据集生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst

    chatbot 聊天显示框的句柄,用于显示给用户
    history 聊天历史,前情提要
    system_prompt 给gpt的静默提醒
    web_port 当前软件运行的端口号
    user_request 当前用户的请求信息(IP地址等)
    """
    history = []  # 清空历史,以免输入溢出
    chatbot.append(("这是什么功能?", "[Local Message] 微调数据集生成"))

@@ -80,7 +80,7 @@ def 微调数据集生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst

@CatchException
def 启动微调(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
def 启动微调(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    """
    txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
    llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行

@@ -88,7 +88,7 @@ def 启动微调(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt

    chatbot 聊天显示框的句柄,用于显示给用户
    history 聊天历史,前情提要
    system_prompt 给gpt的静默提醒
    web_port 当前软件运行的端口号
    user_request 当前用户的请求信息(IP地址等)
    """
    import subprocess
    history = []  # 清空历史,以免输入溢出
@@ -284,8 +284,7 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(

        # 在前端打印些好玩的东西
        for thread_index, _ in enumerate(worker_done):
            print_something_really_funny = "[ ...`"+mutable[thread_index][0][-scroller_max_len:].\
                replace('\n', '').replace('`', '.').replace(
                    ' ', '.').replace('<br/>', '.....').replace('$', '.')+"`... ]"
                replace('\n', '').replace('`', '.').replace(' ', '.').replace('<br/>', '.....').replace('$', '.')+"`... ]"
            observe_win.append(print_something_really_funny)
        # 在前端打印些好玩的东西
        stat_str = ''.join([f'`{mutable[thread_index][2]}`: {obs}\n\n'
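The hunk above only reflows the `replace` chain onto one line; behavior is unchanged. A standalone sketch of what the chain does to a worker's output tail (the sample string and length here are made up):

```python
# Made-up sample input; the point is only the scrub chain from the diff:
# strip characters that would break the front-end's markdown status line.
scroller_max_len = 30
raw = "line1\nline2 with `code` and $math$ and <br/> tags"
tail = raw[-scroller_max_len:]
processed = (tail.replace('\n', '').replace('`', '.').replace(' ', '.')
                 .replace('<br/>', '.....').replace('$', '.'))
scrubbed = "[ ...`" + processed + "`... ]"

# Characters that would break the markdown rendering are gone from the tail:
assert '\n' not in processed and ' ' not in processed
assert '`' not in processed and '$' not in processed
```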
crazy_functions/diagram_fns/file_tree.py (new file, 122 lines)
@@ -0,0 +1,122 @@
|
||||
import os
|
||||
from textwrap import indent
|
||||
|
||||
class FileNode:
|
||||
def __init__(self, name):
|
||||
self.name = name
|
||||
self.children = []
|
||||
self.is_leaf = False
|
||||
self.level = 0
|
||||
self.parenting_ship = []
|
||||
self.comment = ""
|
||||
self.comment_maxlen_show = 50
|
||||
|
||||
@staticmethod
|
||||
def add_linebreaks_at_spaces(string, interval=10):
|
||||
return '\n'.join(string[i:i+interval] for i in range(0, len(string), interval))
|
||||
|
||||
def sanitize_comment(self, comment):
|
||||
if len(comment) > self.comment_maxlen_show: suf = '...'
|
||||
else: suf = ''
|
||||
comment = comment[:self.comment_maxlen_show]
|
||||
comment = comment.replace('\"', '').replace('`', '').replace('\n', '').replace('`', '').replace('$', '')
|
||||
comment = self.add_linebreaks_at_spaces(comment, 10)
|
||||
return '`' + comment + suf + '`'
|
||||
|
||||
def add_file(self, file_path, file_comment):
|
||||
directory_names, file_name = os.path.split(file_path)
|
||||
current_node = self
|
||||
level = 1
|
||||
if directory_names == "":
|
||||
new_node = FileNode(file_name)
|
||||
current_node.children.append(new_node)
|
||||
new_node.is_leaf = True
|
||||
new_node.comment = self.sanitize_comment(file_comment)
|
||||
new_node.level = level
|
||||
current_node = new_node
|
||||
else:
|
||||
dnamesplit = directory_names.split(os.sep)
|
||||
for i, directory_name in enumerate(dnamesplit):
|
||||
found_child = False
|
||||
level += 1
|
||||
for child in current_node.children:
|
||||
if child.name == directory_name:
|
||||
current_node = child
|
||||
found_child = True
|
||||
break
|
||||
if not found_child:
|
||||
new_node = FileNode(directory_name)
|
||||
current_node.children.append(new_node)
|
||||
new_node.level = level - 1
|
||||
current_node = new_node
|
||||
term = FileNode(file_name)
|
||||
term.level = level
|
||||
term.comment = self.sanitize_comment(file_comment)
|
````python
            term.is_leaf = True
            current_node.children.append(term)

    def print_files_recursively(self, level=0, code="R0"):
        print('    '*level + self.name + ' ' + str(self.is_leaf) + ' ' + str(self.level))
        for j, child in enumerate(self.children):
            child.print_files_recursively(level=level+1, code=code+str(j))
            self.parenting_ship.extend(child.parenting_ship)
            p1 = f"""{code}[\"🗎{self.name}\"]""" if self.is_leaf else f"""{code}[[\"📁{self.name}\"]]"""
            p2 = """ --> """
            p3 = f"""{code+str(j)}[\"🗎{child.name}\"]""" if child.is_leaf else f"""{code+str(j)}[[\"📁{child.name}\"]]"""
            edge_code = p1 + p2 + p3
            if edge_code in self.parenting_ship:
                continue
            self.parenting_ship.append(edge_code)
        if self.comment != "":
            pc1 = f"""{code}[\"🗎{self.name}\"]""" if self.is_leaf else f"""{code}[[\"📁{self.name}\"]]"""
            pc2 = f""" -.-x """
            pc3 = f"""C{code}[\"{self.comment}\"]:::Comment"""
            edge_code = pc1 + pc2 + pc3
            self.parenting_ship.append(edge_code)


MERMAID_TEMPLATE = r"""
```mermaid
flowchart LR
    %% <gpt_academic_hide_mermaid_code> 一个特殊标记,用于在生成mermaid图表时隐藏代码块
    classDef Comment stroke-dasharray: 5 5
    subgraph {graph_name}
{relationship}
    end
```
"""

def build_file_tree_mermaid_diagram(file_manifest, file_comments, graph_name):
    # Create the root node
    file_tree_struct = FileNode("root")
    # Build the tree structure
    for file_path, file_comment in zip(file_manifest, file_comments):
        file_tree_struct.add_file(file_path, file_comment)
    file_tree_struct.print_files_recursively()
    cc = "\n".join(file_tree_struct.parenting_ship)
    ccc = indent(cc, prefix=" "*8)   # `indent` 来自 textwrap 模块(在文件上方导入)
    return MERMAID_TEMPLATE.format(graph_name=graph_name, relationship=ccc)

if __name__ == "__main__":
    # File manifest
    file_manifest = [
        "cradle_void_terminal.ipynb",
        "tests/test_utils.py",
        "tests/test_plugins.py",
        "tests/test_llms.py",
        "config.py",
        "build/ChatGLM-6b-onnx-u8s8/chatglm-6b-int8-onnx-merged/model_weights_0.bin",
        "crazy_functions/latex_fns/latex_actions.py",
        "crazy_functions/latex_fns/latex_toolbox.py"
    ]
    file_comments = [
        "根据位置和名称,可能是一个模块的初始化文件根据位置和名称,可能是一个模块的初始化文件根据位置和名称,可能是一个模块的初始化文件",
        "包含一些用于文本处理和模型微调的函数和装饰器包含一些用于文本处理和模型微调的函数和装饰器包含一些用于文本处理和模型微调的函数和装饰器",
        "用于构建HTML报告的类和方法用于构建HTML报告的类和方法用于构建HTML报告的类和方法",
        "包含了用于文本切分的函数,以及处理PDF文件的示例代码包含了用于文本切分的函数,以及处理PDF文件的示例代码包含了用于文本切分的函数,以及处理PDF文件的示例代码",
        "用于解析和翻译PDF文件的功能和相关辅助函数用于解析和翻译PDF文件的功能和相关辅助函数用于解析和翻译PDF文件的功能和相关辅助函数",
        "是一个包的初始化文件,用于初始化包的属性和导入模块是一个包的初始化文件,用于初始化包的属性和导入模块是一个包的初始化文件,用于初始化包的属性和导入模块",
        "用于加载和分割文件中的文本的通用文件加载器用于加载和分割文件中的文本的通用文件加载器用于加载和分割文件中的文本的通用文件加载器",
        "包含了用于构建和管理向量数据库的函数和类包含了用于构建和管理向量数据库的函数和类包含了用于构建和管理向量数据库的函数和类",
    ]
    print(build_file_tree_mermaid_diagram(file_manifest, file_comments, "项目文件树"))
````
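The dedup-and-format pattern used by `print_files_recursively` and `build_file_tree_mermaid_diagram` above can be sketched in isolation. The template and edge strings below are simplified stand-ins (no markdown fence, no emoji node labels, no `Comment` class), not the module's real `MERMAID_TEMPLATE`:

```python
from textwrap import indent

# Simplified stand-in for MERMAID_TEMPLATE (assumption: the real template also
# wraps the edges in a mermaid code fence and defines a dashed Comment class).
TEMPLATE = "flowchart LR\n    subgraph {graph_name}\n{relationship}\n    end"

def tiny_diagram(edges, graph_name):
    # Keep only the first occurrence of each edge, mirroring the class's
    # membership check against self.parenting_ship.
    seen, uniq = set(), []
    for e in edges:
        if e not in seen:
            seen.add(e)
            uniq.append(e)
    return TEMPLATE.format(graph_name=graph_name,
                           relationship=indent("\n".join(uniq), " " * 8))

demo = tiny_diagram(
    ['R0[["root"]] --> R00["config.py"]',
     'R0[["root"]] --> R00["config.py"]',   # duplicate edge is dropped
     'R0[["root"]] --> R01[["tests"]]'],
    "项目文件树")
print(demo)
```

Deduplication matters here because a directory node re-emits its outgoing edges each time a deeper file is added under it.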
```diff
@@ -130,7 +130,7 @@ def get_name(_url_):


 @CatchException
-def 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):

     CRAZY_FUNCTION_INFO = "下载arxiv论文并翻译摘要,函数插件作者[binary-husky]。正在提取摘要并下载PDF文档……"
     import glob
```

```diff
@@ -5,7 +5,7 @@ from request_llms.bridge_all import predict_no_ui_long_connection
 from crazy_functions.game_fns.game_utils import get_code_block, is_same_thing

 @CatchException
-def 随机小游戏(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 随机小游戏(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     from crazy_functions.game_fns.game_interactive_story import MiniGame_ResumeStory
     # 清空历史
     history = []
```

```diff
@@ -23,7 +23,7 @@ def 随机小游戏(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_


 @CatchException
-def 随机小游戏1(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 随机小游戏1(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     from crazy_functions.game_fns.game_ascii_art import MiniGame_ASCII_Art
     # 清空历史
     history = []
```
```diff
@@ -3,7 +3,7 @@ from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive


 @CatchException
-def 交互功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 交互功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数, 如温度和top_p等, 一般原样传递下去就行
```

```diff
@@ -11,7 +11,7 @@ def 交互功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
     """
     history = []    # 清空历史,以免输入溢出
     chatbot.append(("这是什么功能?", "交互功能函数模板。在执行完成之后, 可以将自身的状态存储到cookie中, 等待用户的再次调用。"))
```

```diff
@@ -139,7 +139,7 @@ def get_recent_file_prompt_support(chatbot):
     return path

 @CatchException
-def 函数动态生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 函数动态生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
```

```diff
@@ -147,7 +147,7 @@ def 函数动态生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
     """

     # 清空历史
```
```diff
@@ -4,7 +4,7 @@ from .crazy_utils import input_clipping
 import copy, json

 @CatchException
-def 命令行助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 命令行助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt 输入栏用户输入的文本, 例如需要翻译的一段话, 再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数, 如温度和top_p等, 一般原样传递下去就行
```

```diff
@@ -12,7 +12,7 @@ def 命令行助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
     chatbot 聊天显示框的句柄, 用于显示给用户
     history 聊天历史, 前情提要
     system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
     """
     # 清空历史, 以免输入溢出
     history = []
```
```diff
@@ -93,7 +93,7 @@ def edit_image(llm_kwargs, prompt, image_path, resolution="1024x1024", model="da


 @CatchException
-def 图片生成_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 图片生成_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
```

```diff
@@ -101,7 +101,7 @@ def 图片生成_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
     """
     history = []    # 清空历史,以免输入溢出
     if prompt.strip() == "":
```

```diff
@@ -123,7 +123,7 @@ def 图片生成_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys


 @CatchException
-def 图片生成_DALLE3(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 图片生成_DALLE3(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = []    # 清空历史,以免输入溢出
     if prompt.strip() == "":
         chatbot.append((prompt, "[Local Message] 图像生成提示为空白,请在“输入区”输入图像生成提示。"))
```

```diff
@@ -209,7 +209,7 @@ class ImageEditState(GptAcademicState):
     return all([x['value'] is not None for x in self.req])

 @CatchException
-def 图片修改_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 图片修改_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     # 尚未完成
     history = []    # 清空历史
     state = ImageEditState.get_state(chatbot, ImageEditState)
```
```diff
@@ -21,7 +21,7 @@ def remove_model_prefix(llm):


 @CatchException
-def 多智能体终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 多智能体终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
```

```diff
@@ -29,7 +29,7 @@ def 多智能体终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
     """
     # 检查当前的模型是否符合要求
     supported_llms = [
```

```diff
@@ -37,7 +37,7 @@ def 多智能体终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
         'gpt-3.5-turbo-1106',
         "gpt-4",
         "gpt-4-32k",
-        'gpt-4-1106-preview',
+        'gpt-4-turbo-preview',
         "azure-gpt-3.5-turbo-16k",
         "azure-gpt-3.5-16k",
         "azure-gpt-4",
```

```diff
@@ -50,14 +50,7 @@ def 多智能体终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
         return
     if model_info[llm_kwargs['llm_model']]["endpoint"] is not None: # 如果不是本地模型,加载API_KEY
         llm_kwargs['api_key'] = select_api_key(llm_kwargs['api_key'], llm_kwargs['llm_model'])

-    # 检查当前的模型是否符合要求
-    API_URL_REDIRECT = get_conf('API_URL_REDIRECT')
-    if len(API_URL_REDIRECT) > 0:
-        chatbot.append([f"处理任务: {txt}", f"暂不支持中转."])
-        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-        return
-
     # 尝试导入依赖,如果缺少依赖,则给出安装建议
     try:
         import autogen
```

```diff
@@ -96,7 +89,7 @@ def 多智能体终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
     history = []
     chatbot.append(["正在启动: 多智能体终端", "插件动态生成, 执行开始, 作者 Microsoft & Binary-Husky."])
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-    executor = AutoGenMath(llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port)
+    executor = AutoGenMath(llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
     persistent_class_multi_user_manager.set(persistent_key, executor)
     exit_reason = yield from executor.main_process_ui_control(txt, create_or_resume="create")
```
```diff
@@ -69,7 +69,7 @@ def read_file_to_chat(chatbot, history, file_name):
     return chatbot, history

 @CatchException
-def 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
```

```diff
@@ -77,7 +77,7 @@ def 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
     """

     chatbot.append(("保存当前对话",
```

```diff
@@ -91,7 +91,7 @@ def hide_cwd(str):
     return str.replace(current_path, replace_path)

 @CatchException
-def 载入对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 载入对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
```

```diff
@@ -99,7 +99,7 @@ def 载入对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
     """
     from .crazy_utils import get_files_from_everything
     success, file_manifest, _ = get_files_from_everything(txt, type='.html')
```

```diff
@@ -126,7 +126,7 @@ def 载入对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
     return

 @CatchException
-def 删除所有本地对话历史记录(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 删除所有本地对话历史记录(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
```

```diff
@@ -134,7 +134,7 @@ def 删除所有本地对话历史记录(txt, llm_kwargs, plugin_kwargs, chatbot
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
     """

     import glob, os
```
```diff
@@ -79,7 +79,7 @@ def 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot


 @CatchException
-def 总结word文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 总结word文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     import glob, os

     # 基本信息:功能、贡献者
```

```diff
@@ -153,7 +153,7 @@ def get_files_from_everything(txt, preference=''):


 @CatchException
-def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     # 基本信息:功能、贡献者
     chatbot.append([
         "函数插件功能?",
```

```diff
@@ -193,7 +193,7 @@ def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p


 @CatchException
-def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     # 基本信息:功能、贡献者
     chatbot.append([
         "函数插件功能?",
```

```diff
@@ -226,7 +226,7 @@ def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p


 @CatchException
-def Markdown翻译指定语言(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def Markdown翻译指定语言(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     # 基本信息:功能、贡献者
     chatbot.append([
         "函数插件功能?",
```

```diff
@@ -101,7 +101,7 @@ do not have too much repetitive information, numerical values using the original


 @CatchException
-def 批量总结PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 批量总结PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     import glob, os

     # 基本信息:功能、贡献者
```

```diff
@@ -124,7 +124,7 @@ def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbo


 @CatchException
-def 批量总结PDF文档pdfminer(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 批量总结PDF文档pdfminer(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = []    # 清空历史,以免输入溢出
     import glob, os
```

```diff
@@ -48,7 +48,7 @@ def markdown_to_dict(article_content):


 @CatchException
-def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):

     disable_auto_promotion(chatbot)
     # 基本信息:功能、贡献者
```

```diff
@@ -10,7 +10,7 @@ import os


 @CatchException
-def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):

     disable_auto_promotion(chatbot)
     # 基本信息:功能、贡献者
```
```diff
@@ -1,6 +1,7 @@
-from toolbox import CatchException, update_ui, gen_time_str
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-from .crazy_utils import input_clipping
 import os
+from toolbox import CatchException, update_ui, gen_time_str, promote_file_to_downloadzone
+from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
+from crazy_functions.crazy_utils import input_clipping
+
 def inspect_dependency(chatbot, history):
     # 尝试导入依赖,如果缺少依赖,则给出安装建议
```

```diff
@@ -27,9 +28,10 @@ def eval_manim(code):
     class_name = get_class_name(code)

     try:
+        time_str = gen_time_str()
         subprocess.check_output([sys.executable, '-c', f"from gpt_log.MyAnimation import {class_name}; {class_name}().render()"])
-        shutil.move('media/videos/1080p60/{class_name}.mp4', f'gpt_log/{class_name}-{gen_time_str()}.mp4')
-        return f'gpt_log/{gen_time_str()}.mp4'
+        shutil.move(f'media/videos/1080p60/{class_name}.mp4', f'gpt_log/{class_name}-{time_str}.mp4')
+        return f'gpt_log/{time_str}.mp4'
     except subprocess.CalledProcessError as e:
         output = e.output.decode()
         print(f"Command returned non-zero exit status {e.returncode}: {output}.")
```
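Part of the `eval_manim` fix above is a plain-string/f-string repair: the old `shutil.move` source path was built from a non-f string, so `{class_name}` was never interpolated and the move targeted a literal `{class_name}.mp4` path. A minimal illustration:

```python
# Plain strings keep "{...}" verbatim; only f-strings interpolate variables.
class_name = "MyAnimation"
plain = 'media/videos/1080p60/{class_name}.mp4'    # the pre-fix bug
fixed = f'media/videos/1080p60/{class_name}.mp4'   # the post-fix behavior
print(plain)   # braces preserved literally
print(fixed)   # class name substituted into the path
```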
```diff
@@ -48,7 +50,7 @@ def get_code_block(reply):
     return matches[0].strip('python') # code block

 @CatchException
-def 动画生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 动画生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
```

```diff
@@ -56,7 +58,7 @@ def 动画生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
     """
     # 清空历史,以免输入溢出
     history = []
```

```diff
@@ -94,6 +96,8 @@ def 动画生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
     res = eval_manim(code)

     chatbot.append(("生成的视频文件路径", res))
+    if os.path.exists(res):
+        promote_file_to_downloadzone(res, chatbot=chatbot)
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新

 # 在这里放一些网上搜集的demo,辅助gpt生成代码
```

```diff
@@ -63,7 +63,7 @@ def 解析PDF(file_name, llm_kwargs, plugin_kwargs, chatbot, history, system_pro


 @CatchException
-def 理解PDF文档内容标准文件输入(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 理解PDF文档内容标准文件输入(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     import glob, os

     # 基本信息:功能、贡献者
```

```diff
@@ -36,7 +36,7 @@ def 生成函数注释(file_manifest, project_folder, llm_kwargs, plugin_kwargs,


 @CatchException
-def 批量生成函数注释(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 批量生成函数注释(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = []    # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
```
crazy_functions/生成多种Mermaid图表.py · 302 additions · Normal file

@@ -0,0 +1,302 @@
````python
from toolbox import CatchException, update_ui, report_exception
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from .crazy_utils import read_and_clean_pdf_text
import datetime

# 以下是每类图表的PROMPT
SELECT_PROMPT = """
“{subject}”
=============
以上是从文章中提取的摘要,将会使用这些摘要绘制图表。请你选择一个合适的图表类型:
1 流程图
2 序列图
3 类图
4 饼图
5 甘特图
6 状态图
7 实体关系图
8 象限提示图
不需要解释原因,仅需要输出单个不带任何标点符号的数字。
"""
# 没有思维导图!!!测试发现模型始终会优先选择思维导图

# 流程图
PROMPT_1 = """
请你给出围绕“{subject}”的逻辑关系图,使用mermaid语法,mermaid语法举例:
```mermaid
graph TD
    P(编程) --> L1(Python)
    P(编程) --> L2(C)
    P(编程) --> L3(C++)
    P(编程) --> L4(Javascript)
    P(编程) --> L5(PHP)
```
"""
# 序列图
PROMPT_2 = """
请你给出围绕“{subject}”的序列图,使用mermaid语法,mermaid语法举例:
```mermaid
sequenceDiagram
    participant A as 用户
    participant B as 系统
    A->>B: 登录请求
    B->>A: 登录成功
    A->>B: 获取数据
    B->>A: 返回数据
```
"""
# 类图
PROMPT_3 = """
请你给出围绕“{subject}”的类图,使用mermaid语法,mermaid语法举例:
```mermaid
classDiagram
    Class01 <|-- AveryLongClass : Cool
    Class03 *-- Class04
    Class05 o-- Class06
    Class07 .. Class08
    Class09 --> C2 : Where am i?
    Class09 --* C3
    Class09 --|> Class07
    Class07 : equals()
    Class07 : Object[] elementData
    Class01 : size()
    Class01 : int chimp
    Class01 : int gorilla
    Class08 <--> C2: Cool label
```
"""
# 饼图
PROMPT_4 = """
请你给出围绕“{subject}”的饼图,使用mermaid语法,mermaid语法举例:
```mermaid
pie title Pets adopted by volunteers
    "狗" : 386
    "猫" : 85
    "兔子" : 15
```
"""
# 甘特图
PROMPT_5 = """
请你给出围绕“{subject}”的甘特图,使用mermaid语法,mermaid语法举例:
```mermaid
gantt
    title 项目开发流程
    dateFormat  YYYY-MM-DD
    section 设计
    需求分析 :done, des1, 2024-01-06,2024-01-08
    原型设计 :active, des2, 2024-01-09, 3d
    UI设计 : des3, after des2, 5d
    section 开发
    前端开发 :2024-01-20, 10d
    后端开发 :2024-01-20, 10d
```
"""
# 状态图
PROMPT_6 = """
请你给出围绕“{subject}”的状态图,使用mermaid语法,mermaid语法举例:
```mermaid
stateDiagram-v2
    [*] --> Still
    Still --> [*]
    Still --> Moving
    Moving --> Still
    Moving --> Crash
    Crash --> [*]
```
"""
# 实体关系图
PROMPT_7 = """
请你给出围绕“{subject}”的实体关系图,使用mermaid语法,mermaid语法举例:
```mermaid
erDiagram
    CUSTOMER ||--o{ ORDER : places
    ORDER ||--|{ LINE-ITEM : contains
    CUSTOMER {
        string name
        string id
    }
    ORDER {
        string orderNumber
        date orderDate
        string customerID
    }
    LINE-ITEM {
        number quantity
        string productID
    }
```
"""
# 象限提示图
PROMPT_8 = """
请你给出围绕“{subject}”的象限图,使用mermaid语法,mermaid语法举例:
```mermaid
graph LR
    A[Hard skill] --> B(Programming)
    A[Hard skill] --> C(Design)
    D[Soft skill] --> E(Coordination)
    D[Soft skill] --> F(Communication)
```
"""
# 思维导图
PROMPT_9 = """
{subject}
==========
请给出上方内容的思维导图,充分考虑其之间的逻辑,使用mermaid语法,mermaid语法举例:
```mermaid
mindmap
  root((mindmap))
    Origins
      Long history
      ::icon(fa fa-book)
      Popularisation
        British popular psychology author Tony Buzan
    Research
      On effectiveness<br/>and features
      On Automatic creation
    Uses
      Creative techniques
      Strategic planning
      Argument mapping
    Tools
      Pen and paper
      Mermaid
```
"""

def 解析历史输入(history, llm_kwargs, chatbot, plugin_kwargs):
    ############################## <第 0 步,切割输入> ##################################
    # 借用PDF切割中的函数对文本进行切割
    TOKEN_LIMIT_PER_FRAGMENT = 2500
    txt = str(history).encode('utf-8', 'ignore').decode()   # avoid reading non-utf8 chars
    from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
    txt = breakdown_text_to_satisfy_token_limit(txt=txt, limit=TOKEN_LIMIT_PER_FRAGMENT, llm_model=llm_kwargs['llm_model'])
    ############################## <第 1 步,迭代地历遍整个文章,提取精炼信息> ##################################
    i_say_show_user = f'首先你从历史记录或文件中提取摘要。'; gpt_say = "[Local Message] 收到。" # 用户提示
    chatbot.append([i_say_show_user, gpt_say]); yield from update_ui(chatbot=chatbot, history=history) # 更新UI
    results = []
    MAX_WORD_TOTAL = 4096
    n_txt = len(txt)
    last_iteration_result = "从以下文本中提取摘要。"
    if n_txt >= 20: print('文章极长,不能达到预期效果')
    for i in range(n_txt):
        NUM_OF_WORD = MAX_WORD_TOTAL // n_txt
        i_say = f"Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {txt[i]}"
        i_say_show_user = f"[{i+1}/{n_txt}] Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {txt[i][:200]} ...."
        gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user,  # i_say=真正给chatgpt的提问, i_say_show_user=给用户看的提问
                                                                           llm_kwargs, chatbot,
                                                                           history=["The main content of the previous section is?", last_iteration_result],  # 迭代上一次的结果
                                                                           sys_prompt="Extracts the main content from the text section where it is located for graphing purposes, answer me with Chinese."  # 提示
                                                                           )
        results.append(gpt_say)
        last_iteration_result = gpt_say
    ############################## <第 2 步,根据整理的摘要选择图表类型> ##################################
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    gpt_say = plugin_kwargs.get("advanced_arg", "")   # 将图表类型参数赋值为插件参数
    results_txt = '\n'.join(results)                  # 合并摘要
    if gpt_say not in ['1', '2', '3', '4', '5', '6', '7', '8', '9']:   # 如插件参数不正确则使用对话模型判断
        i_say_show_user = f'接下来将判断适合的图表类型,如连续3次判断失败将会使用流程图进行绘制'; gpt_say = "[Local Message] 收到。" # 用户提示
        chatbot.append([i_say_show_user, gpt_say]); yield from update_ui(chatbot=chatbot, history=[]) # 更新UI
        i_say = SELECT_PROMPT.format(subject=results_txt)
        i_say_show_user = f'请判断适合使用的流程图类型,其中数字对应关系为:1-流程图,2-序列图,3-类图,4-饼图,5-甘特图,6-状态图,7-实体关系图,8-象限提示图。由于不管提供文本是什么,模型大概率认为"思维导图"最合适,因此思维导图仅能通过参数调用。'
        for i in range(3):
            gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
                inputs=i_say,
                inputs_show_user=i_say_show_user,
                llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
                sys_prompt=""
            )
            if gpt_say in ['1', '2', '3', '4', '5', '6', '7', '8', '9']:   # 判断返回是否正确
                break
        if gpt_say not in ['1', '2', '3', '4', '5', '6', '7', '8', '9']:
            gpt_say = '1'
    ############################## <第 3 步,根据选择的图表类型绘制图表> ##################################
    if gpt_say == '1':
        i_say = PROMPT_1.format(subject=results_txt)
    elif gpt_say == '2':
        i_say = PROMPT_2.format(subject=results_txt)
    elif gpt_say == '3':
        i_say = PROMPT_3.format(subject=results_txt)
    elif gpt_say == '4':
        i_say = PROMPT_4.format(subject=results_txt)
    elif gpt_say == '5':
        i_say = PROMPT_5.format(subject=results_txt)
    elif gpt_say == '6':
        i_say = PROMPT_6.format(subject=results_txt)
    elif gpt_say == '7':
        i_say = PROMPT_7.replace("{subject}", results_txt)   # 由于实体关系图用到了{}符号
    elif gpt_say == '8':
        i_say = PROMPT_8.format(subject=results_txt)
    elif gpt_say == '9':
        i_say = PROMPT_9.format(subject=results_txt)
    i_say_show_user = f'请根据判断结果绘制相应的图表。如需绘制思维导图请使用参数调用,同时过大的图表可能需要复制到在线编辑器中进行渲染。'
    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=i_say,
        inputs_show_user=i_say_show_user,
        llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
        sys_prompt="你精通使用mermaid语法来绘制图表,首先确保语法正确,其次避免在mermaid语法中使用不允许的字符,此外也应当充分考虑图表的可读性。"
    )
    history.append(gpt_say)
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新

def 输入区文件处理(txt):
    if txt == "": return False, txt
    success = True
    import glob
    from .crazy_utils import get_files_from_everything
    file_pdf, pdf_manifest, folder_pdf = get_files_from_everything(txt, '.pdf')
    file_md, md_manifest, folder_md = get_files_from_everything(txt, '.md')
    if len(pdf_manifest) == 0 and len(md_manifest) == 0:
        return False, txt   # 如输入区内容不是文件则直接返回输入区内容

    final_result = ""
    if file_pdf:
        for index, fp in enumerate(pdf_manifest):
            file_content, page_one = read_and_clean_pdf_text(fp) # (尝试)按照章节切割PDF
            file_content = file_content.encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
            final_result += "\n" + file_content
    if file_md:
        for index, fp in enumerate(md_manifest):
            with open(fp, 'r', encoding='utf-8', errors='replace') as f:
                file_content = f.read()
            file_content = file_content.encode('utf-8', 'ignore').decode()
            final_result += "\n" + file_content
    return True, final_result

@CatchException
def 生成多种Mermaid图表(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    """
    txt             输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
    llm_kwargs      gpt模型参数,如温度和top_p等,一般原样传递下去就行
    plugin_kwargs   插件模型的参数,用于灵活调整复杂功能的各种参数
    chatbot         聊天显示框的句柄,用于显示给用户
    history         聊天历史,前情提要
    system_prompt   给gpt的静默提醒
    web_port        当前软件运行的端口号
    """
    import os

    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",
        "根据当前聊天历史或文件中(文件内容优先)绘制多种mermaid图表,将会由对话模型首先判断适合的图表类型,随后绘制图表。\
        \n您也可以使用插件参数指定绘制的图表类型,函数插件贡献者: Menghuan1918"])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

    # 尝试导入依赖,如果缺少依赖,则给出安装建议
    try:
        import fitz
    except:
        report_exception(chatbot, history,
                         a = f"解析项目: {txt}",
                         b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return

    if os.path.exists(txt):   # 如输入区无内容则直接解析历史记录
        file_exist, txt = 输入区文件处理(txt)
    else:
        file_exist = False

    if file_exist: history = []   # 如输入区内容为文件则清空历史记录
    history.append(txt)           # 将解析后的txt传递加入到历史中

    yield from 解析历史输入(history, llm_kwargs, chatbot, plugin_kwargs)
````
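The new file fills most prompts with `str.format`, but fills `PROMPT_7` via `str.replace`, because the ER-diagram example contains literal braces. A small sketch of why; the template below is a shortened hypothetical stand-in, not the file's full prompt:

```python
# str.format treats every "{...}" as a replacement field, so a template with
# literal braces (mermaid erDiagram attribute blocks) raises; str.replace
# touches only the exact "{subject}" marker and is safe.
template = "围绕“{subject}”的实体关系图:\nerDiagram\n    CUSTOMER {\n        string name\n    }"

try:
    template.format(subject="订单系统")
    format_ok = True
except (KeyError, ValueError, IndexError):
    format_ok = False

replaced = template.replace("{subject}", "订单系统")
print(format_ok)   # format chokes on the literal braces
print(replaced)
```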
```diff
@@ -13,7 +13,7 @@ install_msg ="""
 """

 @CatchException
-def 知识库文件注入(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 知识库文件注入(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数, 如温度和top_p等, 一般原样传递下去就行
```

```diff
@@ -21,7 +21,7 @@ def 知识库文件注入(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
     """
     history = []    # 清空历史,以免输入溢出
```

```diff
@@ -84,7 +84,7 @@ def 知识库文件注入(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新

 @CatchException
-def 读取知识库作答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port=-1):
+def 读取知识库作答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request=-1):
     # resolve deps
     try:
         # from zh_langchain import construct_vector_store
```

```diff
@@ -55,7 +55,7 @@ def scrape_text(url, proxies) -> str:
     return text

 @CatchException
-def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
```

```diff
@@ -63,7 +63,7 @@ def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
     """
     history = []    # 清空历史,以免输入溢出
     chatbot.append((f"请结合互联网信息回答以下问题:{txt}",
```

```diff
@@ -55,7 +55,7 @@ def scrape_text(url, proxies) -> str:
     return text

 @CatchException
-def 连接bing搜索回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 连接bing搜索回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
```

```diff
@@ -63,7 +63,7 @@ def 连接bing搜索回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, histor
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
+    user_request 当前用户的请求信息(IP地址等)
     """
     history = []    # 清空历史,以免输入溢出
     chatbot.append((f"请结合互联网信息回答以下问题:{txt}",
```
||||
@@ -104,7 +104,7 @@ def analyze_intention_with_simple_rules(txt):

@CatchException
-def 虚空终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 虚空终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
disable_auto_promotion(chatbot=chatbot)
# 获取当前虚空终端状态
state = VoidTerminalState.get_state(chatbot)
@@ -121,7 +121,7 @@ def 虚空终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
state.set_state(chatbot=chatbot, key='has_provided_explaination', value=True)
state.unlock_plugin(chatbot=chatbot)
yield from update_ui(chatbot=chatbot, history=history)
-yield from 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port)
+yield from 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
return
else:
# 如果意图模糊,提示
@@ -133,7 +133,7 @@ def 虚空终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt

-def 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
history = []
chatbot.append(("虚空终端状态: ", f"正在执行任务: {txt}"))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

@@ -109,7 +109,7 @@ def ipynb解释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbo
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

@CatchException
-def 解析ipynb文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析ipynb文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
chatbot.append([
"函数插件功能?",
"对IPynb文件进行解析。Contributor: codycjy."])
@@ -83,7 +83,8 @@ def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
history=this_iteration_history_feed, # 迭代之前的分析
sys_prompt="你是一个程序架构分析师,正在分析一个项目的源代码。" + sys_prompt_additional)

-summary = "请用一句话概括这些文件的整体功能"
+diagram_code = make_diagram(this_iteration_files, result, this_iteration_history_feed)
+summary = "请用一句话概括这些文件的整体功能。\n\n" + diagram_code
summary_result = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=summary,
inputs_show_user=summary,
@@ -104,9 +105,12 @@ def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
chatbot.append(("完成了吗?", res))
yield from update_ui(chatbot=chatbot, history=history_to_return) # 刷新界面

+def make_diagram(this_iteration_files, result, this_iteration_history_feed):
+    from crazy_functions.diagram_fns.file_tree import build_file_tree_mermaid_diagram
+    return build_file_tree_mermaid_diagram(this_iteration_history_feed[0::2], this_iteration_history_feed[1::2], "项目示意图")
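The new `make_diagram` helper hands `build_file_tree_mermaid_diagram` two slices of the interleaved history feed: even indices are the questions sent to the model, odd indices are its answers. A minimal sketch of that even/odd split:

```python
# Sketch of the [0::2]/[1::2] split used by make_diagram above:
# this_iteration_history_feed interleaves [question, answer, question, answer, ...].
def split_history(history_feed):
    questions = history_feed[0::2]  # even indices: what was asked
    answers = history_feed[1::2]    # odd indices: what the model replied
    return questions, answers

feed = ["q1", "a1", "q2", "a2"]
print(split_history(feed))  # (['q1', 'q2'], ['a1', 'a2'])
```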

@CatchException
-def 解析项目本身(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析项目本身(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
history = [] # 清空历史,以免输入溢出
import glob
file_manifest = [f for f in glob.glob('./*.py')] + \

@@ -119,7 +123,7 @@ def 解析项目本身(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)

@CatchException
-def 解析一个Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
history = [] # 清空历史,以免输入溢出
import glob, os
if os.path.exists(txt):

@@ -137,7 +141,7 @@ def 解析一个Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)

@CatchException
-def 解析一个Matlab项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个Matlab项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
history = [] # 清空历史,以免输入溢出
import glob, os
if os.path.exists(txt):

@@ -155,7 +159,7 @@ def 解析一个Matlab项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)

@CatchException
-def 解析一个C项目的头文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个C项目的头文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
history = [] # 清空历史,以免输入溢出
import glob, os
if os.path.exists(txt):

@@ -175,7 +179,7 @@ def 解析一个C项目的头文件(txt, llm_kwargs, plugin_kwargs, chatbot, his
yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)

@CatchException
-def 解析一个C项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个C项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
history = [] # 清空历史,以免输入溢出
import glob, os
if os.path.exists(txt):

@@ -197,7 +201,7 @@ def 解析一个C项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system

@CatchException
-def 解析一个Java项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个Java项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
history = [] # 清空历史,以免输入溢出
import glob, os
if os.path.exists(txt):

@@ -219,7 +223,7 @@ def 解析一个Java项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys

@CatchException
-def 解析一个前端项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个前端项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
history = [] # 清空历史,以免输入溢出
import glob, os
if os.path.exists(txt):

@@ -248,7 +252,7 @@ def 解析一个前端项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s

@CatchException
-def 解析一个Golang项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个Golang项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
history = [] # 清空历史,以免输入溢出
import glob, os
if os.path.exists(txt):

@@ -269,7 +273,7 @@ def 解析一个Golang项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)

@CatchException
-def 解析一个Rust项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个Rust项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
history = [] # 清空历史,以免输入溢出
import glob, os
if os.path.exists(txt):

@@ -289,7 +293,7 @@ def 解析一个Rust项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys
yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)

@CatchException
-def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
history = [] # 清空历史,以免输入溢出
import glob, os
if os.path.exists(txt):

@@ -311,7 +315,7 @@ def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst

@CatchException
-def 解析一个CSharp项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个CSharp项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
history = [] # 清空历史,以免输入溢出
import glob, os
if os.path.exists(txt):

@@ -331,7 +335,7 @@ def 解析一个CSharp项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s

@CatchException
-def 解析任意code项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析任意code项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
txt_pattern = plugin_kwargs.get("advanced_arg")
txt_pattern = txt_pattern.replace(",", ",")
# 将要匹配的模式(例如: *.c, *.cpp, *.py, config.toml)
@@ -2,7 +2,7 @@ from toolbox import CatchException, update_ui, get_conf
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
import datetime
@CatchException
-def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
"""
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -10,7 +10,7 @@ def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
chatbot 聊天显示框的句柄,用于显示给用户
history 聊天历史,前情提要
system_prompt 给gpt的静默提醒
-web_port 当前软件运行的端口号
+user_request 当前用户的请求信息(IP地址等)
"""
history = [] # 清空历史,以免输入溢出
MULTI_QUERY_LLM_MODELS = get_conf('MULTI_QUERY_LLM_MODELS')

@@ -32,7 +32,7 @@ def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt

@CatchException
-def 同时问询_指定模型(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 同时问询_指定模型(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
"""
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -40,7 +40,7 @@ def 同时问询_指定模型(txt, llm_kwargs, plugin_kwargs, chatbot, history,
chatbot 聊天显示框的句柄,用于显示给用户
history 聊天历史,前情提要
system_prompt 给gpt的静默提醒
-web_port 当前软件运行的端口号
+user_request 当前用户的请求信息(IP地址等)
"""
history = [] # 清空历史,以免输入溢出

@@ -166,7 +166,7 @@ class InterviewAssistant(AliyunASR):

@CatchException
-def 语音助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 语音助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
# pip install -U openai-whisper
chatbot.append(["对话助手函数插件:使用时,双手离开鼠标键盘吧", "音频助手, 正在听您讲话(点击“停止”键可终止程序)..."])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

@@ -44,7 +44,7 @@ def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbo

@CatchException
-def 读文章写摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 读文章写摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
history = [] # 清空历史,以免输入溢出
import glob, os
if os.path.exists(txt):

@@ -132,7 +132,7 @@ def get_meta_information(url, chatbot, history):
return profile

@CatchException
-def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
disable_auto_promotion(chatbot=chatbot)
# 基本信息:功能、贡献者
chatbot.append([

@@ -11,7 +11,7 @@ import os

@CatchException
-def 猜你想问(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 猜你想问(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
if txt:
show_say = txt
prompt = txt+'\n回答完问题后,再列出用户可能提出的三个问题。'

@@ -32,7 +32,7 @@ def 猜你想问(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt

@CatchException
-def 清除缓存(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 清除缓存(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
chatbot.append(['清除本地缓存数据', '执行中. 删除数据'])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
@@ -1,19 +1,47 @@
from toolbox import CatchException, update_ui
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
+from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
import datetime

+高阶功能模板函数示意图 = f"""
+```mermaid
+flowchart TD
+%% <gpt_academic_hide_mermaid_code> 一个特殊标记,用于在生成mermaid图表时隐藏代码块
+subgraph 函数调用["函数调用过程"]
+AA["输入栏用户输入的文本(txt)"] --> BB["gpt模型参数(llm_kwargs)"]
+BB --> CC["插件模型参数(plugin_kwargs)"]
+CC --> DD["对话显示框的句柄(chatbot)"]
+DD --> EE["对话历史(history)"]
+EE --> FF["系统提示词(system_prompt)"]
+FF --> GG["当前用户信息(web_port)"]

+A["开始(查询5天历史事件)"]
+A --> B["获取当前月份和日期"]
+B --> C["生成历史事件查询提示词"]
+C --> D["调用大模型"]
+D --> E["更新界面"]
+E --> F["记录历史"]
+F --> |"下一天"| B
+end
+```
+"""

@CatchException
-def 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
"""
+# 高阶功能模板函数示意图:https://mermaid.live/edit#pako:eNptk1tvEkEYhv8KmattQpvlvOyFCcdeeaVXuoYssBwie8gyhCIlqVoLhrbbtAWNUpEGUkyMEDW2Fmn_DDOL_8LZHdOwxrnamX3f7_3mmZk6yKhZCfAgV1KrmYKoQ9fDuKC4yChX0nld1Aou1JzjznQ5fWmejh8LYHW6vG2a47YAnlCLNSIRolnenKBXI_zRIBrcuqRT890u7jZx7zMDt-AaMbnW1--5olGiz2sQjwfoQxsZL0hxplSSU0-rop4vrzmKR6O2JxYjHmwcL2Y_HDatVMkXlf86YzHbGY9bO5j8XE7O8Nsbc3iNB3ukL2SMcH-XIQBgWoVOZzxuOxOJOyc63EPGV6ZQLENVrznViYStTiaJ2vw2M2d9bByRnOXkgCnXylCSU5quyto_IcmkbdvctELmJ-j1ASW3uB3g5xOmKqVTmqr_Na3AtuS_dtBFm8H90XJyHkDDT7S9xXWb4HGmRChx64AOL5HRpUm411rM5uh4H78Z4V7fCZzytjZz2seto9XaNPFue07clLaVZF8UNLygJ-VES8lah_n-O-5Ozc7-77NzJ0-K0yr0ZYrmHdqAk50t2RbA4qq9uNohBASw7YpSgaRkLWCCAtxAlnRZLGbJba9bPwUAC5IsCYAnn1kpJ1ZKUACC0iBSsQLVBzUlA3ioVyQ3qGhZEUrxokiehAz4nFgqk1VNVABfB1uAD_g2_AGPl-W8nMcbCvsDblADfNCz4feyobDPy3rYEMtxwYYbPFNVUoHdCPmDHBv2cP4AMfrCbiBli-Q-3afv0X6WdsIjW2-10fgDy1SAig

txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
plugin_kwargs 插件模型的参数,用于灵活调整复杂功能的各种参数
chatbot 聊天显示框的句柄,用于显示给用户
history 聊天历史,前情提要
system_prompt 给gpt的静默提醒
-web_port 当前软件运行的端口号
+user_request 当前用户的请求信息(IP地址等)
"""
history = [] # 清空历史,以免输入溢出
-chatbot.append(("这是什么功能?", "[Local Message] 请注意,您正在调用一个[函数插件]的模板,该函数面向希望实现更多有趣功能的开发者,它可以作为创建新功能函数的模板(该函数只有20多行代码)。此外我们也提供可同步处理大量文件的多线程Demo供您参考。您若希望分享新的功能模组,请不吝PR!"))
+chatbot.append((
+"您正在调用插件:历史上的今天",
+"[Local Message] 请注意,您正在调用一个[函数插件]的模板,该函数面向希望实现更多有趣功能的开发者,它可以作为创建新功能函数的模板(该函数只有20多行代码)。此外我们也提供可同步处理大量文件的多线程Demo供您参考。您若希望分享新的功能模组,请不吝PR!" + 高阶功能模板函数示意图))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
for i in range(5):
currentMonth = (datetime.date.today() + datetime.timedelta(days=i)).month
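The template plugin loops over the next five days and feeds each month/day pair into its "on this day in history" prompt. A runnable sketch of that date arithmetic (with a fixed start date instead of `datetime.date.today()` so the output is reproducible):

```python
import datetime

# Sketch of the date loop above: for each of the next 5 days,
# extract the (month, day) pair that the plugin puts into its prompt.
def next_five_days(today):
    return [(d.month, d.day)
            for d in (today + datetime.timedelta(days=i) for i in range(5))]

print(next_five_days(datetime.date(2024, 2, 28)))
# 2024 is a leap year, so Feb 29 appears before Mar 1
```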

@@ -43,7 +71,7 @@ graph TD
```
"""
@CatchException
-def 测试图表渲染(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 测试图表渲染(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
"""
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -51,7 +79,7 @@ def 测试图表渲染(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
chatbot 聊天显示框的句柄,用于显示给用户
history 聊天历史,前情提要
system_prompt 给gpt的静默提醒
-web_port 当前软件运行的端口号
+user_request 当前用户的请求信息(IP地址等)
"""
history = [] # 清空历史,以免输入溢出
chatbot.append(("这是什么功能?", "一个测试mermaid绘制图表的功能,您可以在输入框中输入一些关键词,然后使用mermaid+llm绘制图表。"))

@@ -165,7 +165,7 @@ toolbox.py是一个工具类库,其中主要包含了一些函数装饰器和

3. read_file_to_chat(chatbot, history, file_name):从传入的文件中读取内容,解析出对话历史记录并更新聊天显示框。

-4. 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):一个主要函数,用于保存当前对话记录并提醒用户。如果用户希望加载历史记录,则调用read_file_to_chat()来更新聊天显示框。如果用户希望删除历史记录,调用删除所有本地对话历史记录()函数完成删除操作。
+4. 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):一个主要函数,用于保存当前对话记录并提醒用户。如果用户希望加载历史记录,则调用read_file_to_chat()来更新聊天显示框。如果用户希望删除历史记录,调用删除所有本地对话历史记录()函数完成删除操作。

## [19/48] 请对下面的程序文件做一个概述: crazy_functions\总结word文档.py

@@ -11,7 +11,7 @@
import tiktoken, copy
from functools import lru_cache
from concurrent.futures import ThreadPoolExecutor
-from toolbox import get_conf, trimmed_format_exc
+from toolbox import get_conf, trimmed_format_exc, apply_gpt_academic_string_mask

from .bridge_chatgpt import predict_no_ui_long_connection as chatgpt_noui
from .bridge_chatgpt import predict as chatgpt_ui
@@ -668,6 +668,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, obser
"""
import threading, time, copy

+inputs = apply_gpt_academic_string_mask(inputs, mode="show_llm")
model = llm_kwargs['llm_model']
n_model = 1
if '&' not in model:
@@ -741,6 +742,7 @@ def predict(inputs, llm_kwargs, *args, **kwargs):
additional_fn代表点击的哪个按钮,按钮见functional.py
"""

+inputs = apply_gpt_academic_string_mask(inputs, mode="show_llm")
method = model_info[llm_kwargs['llm_model']]["fn_with_ui"] # 如果这里报错,检查config中的AVAIL_LLM_MODELS选项
yield from method(inputs, llm_kwargs, *args, **kwargs)
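The hunks above insert `apply_gpt_academic_string_mask(inputs, mode="show_llm")` before every request, so text tagged for the UI never reaches the model. The real helper lives in `shared_utils.text_mask`; as a purely hypothetical illustration of the idea (not the repo's implementation), a message can carry per-audience segments that get filtered by mode:

```python
# Hypothetical illustration of string masking (NOT the repo's real
# apply_gpt_academic_string_mask): segments tagged for one audience
# are dropped when the string is prepared for the other.
def apply_mask(segments, mode):
    # segments: list of (text, audience), audience in {"all", "show_llm", "show_render"}
    return "".join(text for text, aud in segments if aud in ("all", mode))

msg = [("Translate this. ", "all"), ("[[ui-only note]]", "show_render")]
print(apply_mask(msg, "show_llm"))  # the LLM never sees the UI-only note
```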

@@ -21,8 +21,8 @@ class ZhipuRequestInstance():
response = zhipuai.model_api.sse_invoke(
model=ZHIPUAI_MODEL,
prompt=generate_message_payload(inputs, llm_kwargs, history, system_prompt),
-top_p=llm_kwargs['top_p'],
-temperature=llm_kwargs['temperature'],
+top_p=llm_kwargs['top_p']*0.7, # 智谱的API抽风,手动*0.7给做个线性变换
+temperature=llm_kwargs['temperature']*0.95, # 智谱的API抽风,手动*0.95给做个线性变换
)
for event in response.events():
if event.event == "add":
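The hunk above damps the UI's sampling sliders linearly before they reach the zhipu endpoint. A minimal sketch of that rescaling, with the factors 0.7 and 0.95 taken directly from the diff:

```python
# Sketch of the linear damping applied to the zhipu request above:
# top_p is scaled by 0.7 and temperature by 0.95 before the API call.
def rescale_sampling_params(top_p, temperature):
    return top_p * 0.7, temperature * 0.95

print(rescale_sampling_params(1.0, 1.0))  # (0.7, 0.95)
```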

@@ -4,62 +4,47 @@ import os
import math
from textwrap import dedent
from functools import lru_cache
-from pymdownx.superfences import fence_div_format, fence_code_format
+from pymdownx.superfences import fence_code_format
from latex2mathml.converter import convert as tex2mathml
from shared_utils.config_loader import get_conf as get_conf

pj = os.path.join
default_user_name = 'default_user'
+from shared_utils.text_mask import apply_gpt_academic_string_mask

markdown_extension_configs = {
-'mdx_math': {
-'enable_dollar_delimiter': True,
-'use_gitlab_delimiters': False,
+"mdx_math": {
+"enable_dollar_delimiter": True,
+"use_gitlab_delimiters": False,
},
}

code_highlight_configs = {
"pymdownx.superfences": {
-'css_class': 'codehilite',
+"css_class": "codehilite",
"custom_fences": [
-{
-'name': 'mermaid',
-'class': 'mermaid',
-'format': fence_code_format
-}
-]
+{"name": "mermaid", "class": "mermaid", "format": fence_code_format}
+],
},
"pymdownx.highlight": {
-'css_class': 'codehilite',
-'guess_lang': True,
+"css_class": "codehilite",
+"guess_lang": True,
# 'auto_title': True,
# 'linenums': True
-}
+},
},
}
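Later in this diff these two dicts are merged with a plain `{**a, **b}` spread and handed to `markdown.markdown(...)` as `extension_configs`. A self-contained sketch of that merge (with `None` standing in for `fence_code_format`, which needs the `pymdownx` package):

```python
# Sketch of how the two extension-config dicts above are combined:
# a plain dict merge, with mermaid registered as a custom superfence.
markdown_extension_configs = {
    "mdx_math": {"enable_dollar_delimiter": True, "use_gitlab_delimiters": False},
}
code_highlight_configs = {
    "pymdownx.superfences": {
        "css_class": "codehilite",
        "custom_fences": [
            # "format" is fence_code_format in the repo; None here to stay stdlib-only
            {"name": "mermaid", "class": "mermaid", "format": None}
        ],
    },
    "pymdownx.highlight": {"css_class": "codehilite", "guess_lang": True},
}
merged = {**markdown_extension_configs, **code_highlight_configs}
print(sorted(merged))  # ['mdx_math', 'pymdownx.highlight', 'pymdownx.superfences']
```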

-def text_divide_paragraph(text):
-"""
-将文本按照段落分隔符分割开,生成带有段落标签的HTML代码。
-"""
-pre = '<div class="markdown-body">'
-suf = '</div>'
-if text.startswith(pre) and text.endswith(suf):
-return text
-
-if '```' in text:
-# careful input
-return text
-elif '</div>' in text:
-# careful input
-return text
-else:
-# whatever input
-lines = text.split("\n")
-for i, line in enumerate(lines):
-lines[i] = lines[i].replace(" ", "&nbsp;")
-text = "</br>".join(lines)
-return pre + text + suf

+code_highlight_configs_block_mermaid = {
+"pymdownx.superfences": {
+"css_class": "codehilite",
+# "custom_fences": [
+#     {"name": "mermaid", "class": "mermaid", "format": fence_code_format}
+# ],
+},
+"pymdownx.highlight": {
+"css_class": "codehilite",
+"guess_lang": True,
+# 'auto_title': True,
+# 'linenums': True
+},
+}

def tex2mathml_catch_exception(content, *args, **kwargs):
try:
@@ -71,20 +56,20 @@ def tex2mathml_catch_exception(content, *args, **kwargs):

def replace_math_no_render(match):
content = match.group(1)
-if 'mode=display' in match.group(0):
-content = content.replace('\n', '</br>')
-return f"<font color=\"#00FF00\">$$</font><font color=\"#FF00FF\">{content}</font><font color=\"#00FF00\">$$</font>"
+if "mode=display" in match.group(0):
+content = content.replace("\n", "</br>")
+return f'<font color="#00FF00">$$</font><font color="#FF00FF">{content}</font><font color="#00FF00">$$</font>'
else:
-return f"<font color=\"#00FF00\">$</font><font color=\"#FF00FF\">{content}</font><font color=\"#00FF00\">$</font>"
+return f'<font color="#00FF00">$</font><font color="#FF00FF">{content}</font><font color="#00FF00">$</font>'


def replace_math_render(match):
content = match.group(1)
-if 'mode=display' in match.group(0):
-if '\\begin{aligned}' in content:
-content = content.replace('\\begin{aligned}', '\\begin{array}')
-content = content.replace('\\end{aligned}', '\\end{array}')
-content = content.replace('&', ' ')
+if "mode=display" in match.group(0):
+if "\\begin{aligned}" in content:
+content = content.replace("\\begin{aligned}", "\\begin{array}")
+content = content.replace("\\end{aligned}", "\\end{array}")
+content = content.replace("&", " ")
content = tex2mathml_catch_exception(content, display="block")
return content
else:
@@ -95,9 +80,11 @@ def markdown_bug_hunt(content):
"""
解决一个mdx_math的bug(单$包裹begin命令时多余<script>)
"""
-content = content.replace('<script type="math/tex">\n<script type="math/tex; mode=display">',
-'<script type="math/tex; mode=display">')
-content = content.replace('</script>\n</script>', '</script>')
+content = content.replace(
+'<script type="math/tex">\n<script type="math/tex; mode=display">',
+'<script type="math/tex; mode=display">',
+)
+content = content.replace("</script>\n</script>", "</script>")
return content

@@ -105,25 +92,29 @@ def is_equation(txt):
"""
判定是否为公式 | 测试1 写出洛伦兹定律,使用tex格式公式 测试2 给出柯西不等式,使用latex格式 测试3 写出麦克斯韦方程组
"""
-if '```' in txt and '```reference' not in txt: return False
-if '$' not in txt and '\\[' not in txt: return False
+if "```" in txt and "```reference" not in txt:
+return False
+if "$" not in txt and "\\[" not in txt:
+return False
mathpatterns = {
-r'(?<!\\|\$)(\$)([^\$]+)(\$)': {'allow_multi_lines': False}, # $...$
-r'(?<!\\)(\$\$)([^\$]+)(\$\$)': {'allow_multi_lines': True}, # $$...$$
-r'(?<!\\)(\\\[)(.+?)(\\\])': {'allow_multi_lines': False}, # \[...\]
+r"(?<!\\|\$)(\$)([^\$]+)(\$)": {"allow_multi_lines": False}, # $...$
+r"(?<!\\)(\$\$)([^\$]+)(\$\$)": {"allow_multi_lines": True}, # $$...$$
+r"(?<!\\)(\\\[)(.+?)(\\\])": {"allow_multi_lines": False}, # \[...\]
# r'(?<!\\)(\\\()(.+?)(\\\))': {'allow_multi_lines': False}, # \(...\)
# r'(?<!\\)(\\begin{([a-z]+?\*?)})(.+?)(\\end{\2})': {'allow_multi_lines': True}, # \begin...\end
# r'(?<!\\)(\$`)([^`]+)(`\$)': {'allow_multi_lines': False}, # $`...`$
}
matches = []
for pattern, property in mathpatterns.items():
-flags = re.ASCII | re.DOTALL if property['allow_multi_lines'] else re.ASCII
+flags = re.ASCII | re.DOTALL if property["allow_multi_lines"] else re.ASCII
matches.extend(re.findall(pattern, txt, flags))
-if len(matches) == 0: return False
+if len(matches) == 0:
+return False
contain_any_eq = False
-illegal_pattern = re.compile(r'[^\x00-\x7F]|echo')
+illegal_pattern = re.compile(r"[^\x00-\x7F]|echo")
for match in matches:
-if len(match) != 3: return False
+if len(match) != 3:
+return False
eq_canidate = match[1]
if illegal_pattern.search(eq_canidate):
return False
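The inline-math pattern from `is_equation` can be checked directly: each `re.findall` hit is a 3-tuple of opening delimiter, equation body, and closing delimiter, which is exactly what the `len(match) != 3` guard above assumes.

```python
import re

# The $...$ pattern from the mathpatterns dict above, run on a sample string.
pattern = r"(?<!\\|\$)(\$)([^\$]+)(\$)"
sample = r"Lorentz force: $F = q(E + v \times B)$ as tex"
matches = re.findall(pattern, sample, re.ASCII)
print(matches)  # one hit: ('$', 'F = q(E + v \\times B)', '$')
```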

@@ -134,27 +125,28 @@ def is_equation(txt):

def fix_markdown_indent(txt):
# fix markdown indent
-if (' - ' not in txt) or ('. ' not in txt):
+if (" - " not in txt) or (". " not in txt):
# do not need to fix, fast escape
return txt
# walk through the lines and fix non-standard indentation
lines = txt.split("\n")
-pattern = re.compile(r'^\s+-')
+pattern = re.compile(r"^\s+-")
activated = False
for i, line in enumerate(lines):
-if line.startswith('- ') or line.startswith('1. '):
+if line.startswith("- ") or line.startswith("1. "):
activated = True
if activated and pattern.match(line):
stripped_string = line.lstrip()
num_spaces = len(line) - len(stripped_string)
if (num_spaces % 4) == 3:
num_spaces_should_be = math.ceil(num_spaces / 4) * 4
-lines[i] = ' ' * num_spaces_should_be + stripped_string
-return '\n'.join(lines)
+lines[i] = " " * num_spaces_should_be + stripped_string
+return "\n".join(lines)
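The hunk above only reformats `fix_markdown_indent`; its behavior is unchanged: list items indented by 4k+3 spaces (a common LLM off-by-one) are bumped to the next multiple of 4. A runnable restatement:

```python
import math
import re

# Runnable restatement of fix_markdown_indent from the hunk above.
def fix_markdown_indent(txt):
    if (" - " not in txt) or (". " not in txt):
        return txt  # fast escape: nothing that looks like a nested list
    lines = txt.split("\n")
    pattern = re.compile(r"^\s+-")
    activated = False
    for i, line in enumerate(lines):
        if line.startswith("- ") or line.startswith("1. "):
            activated = True
        if activated and pattern.match(line):
            stripped = line.lstrip()
            num_spaces = len(line) - len(stripped)
            if (num_spaces % 4) == 3:
                lines[i] = " " * (math.ceil(num_spaces / 4) * 4) + stripped
    return "\n".join(lines)

print(fix_markdown_indent("1. item\n   - sub"))  # the 3-space indent becomes 4
```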

FENCED_BLOCK_RE = re.compile(
-dedent(r'''
+dedent(
+r"""
(?P<fence>^[ \t]*(?:~{3,}|`{3,}))[ ]*                      # opening fence
((\{(?P<attrs>[^\}\n]*)\})|                                # (optional {attrs} or
(\.?(?P<lang>[\w#.+-]*)[ ]*)?                              # optional (.)lang
@@ -162,16 +154,17 @@ FENCED_BLOCK_RE = re.compile(
\n                                                         # newline (end of opening fence)
(?P<code>.*?)(?<=\n)                                       # the code block
(?P=fence)[ ]*$                                            # closing fence
-'''),
-re.MULTILINE | re.DOTALL | re.VERBOSE
+"""
+),
+re.MULTILINE | re.DOTALL | re.VERBOSE,
)


def get_line_range(re_match_obj, txt):
start_pos, end_pos = re_match_obj.regs[0]
-num_newlines_before = txt[:start_pos+1].count('\n')
+num_newlines_before = txt[: start_pos + 1].count("\n")
line_start = num_newlines_before
-line_end = num_newlines_before + txt[start_pos:end_pos].count('\n')+1
+line_end = num_newlines_before + txt[start_pos:end_pos].count("\n") + 1
return line_start, line_end
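`get_line_range` converts a regex match's character span into a half-open line range, counting newlines before and inside the match. A runnable restatement with a small example:

```python
import re

# Runnable restatement of get_line_range above: convert a match's
# character span into (first line index, one-past-last line index).
def get_line_range(re_match_obj, txt):
    start_pos, end_pos = re_match_obj.regs[0]
    num_newlines_before = txt[: start_pos + 1].count("\n")
    line_start = num_newlines_before
    line_end = num_newlines_before + txt[start_pos:end_pos].count("\n") + 1
    return line_start, line_end

txt = "a\nb\nc\nd"
m = re.search(r"b\nc", txt)
print(get_line_range(m, txt))  # (1, 3): the match covers lines 1 and 2
```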

@@ -181,14 +174,16 @@ def fix_code_segment_indent(txt):
txt_tmp = txt
while True:
re_match_obj = FENCED_BLOCK_RE.search(txt_tmp)
-if not re_match_obj: break
-if len(lines) == 0: lines = txt.split("\n")
+if not re_match_obj:
+break
+if len(lines) == 0:
+lines = txt.split("\n")

# 清空 txt_tmp 对应的位置方便下次搜索
start_pos, end_pos = re_match_obj.regs[0]
-txt_tmp = txt_tmp[:start_pos] + ' '*(end_pos-start_pos) + txt_tmp[end_pos:]
+txt_tmp = txt_tmp[:start_pos] + " " * (end_pos - start_pos) + txt_tmp[end_pos:]
line_start, line_end = get_line_range(re_match_obj, txt)

# 获取公共缩进
shared_indent_cnt = 1e5
for i in range(line_start, line_end):
@@ -202,26 +197,26 @@ def fix_code_segment_indent(txt):
num_spaces_should_be = math.ceil(shared_indent_cnt / 4) * 4
for i in range(line_start, line_end):
add_n = num_spaces_should_be - shared_indent_cnt
-lines[i] = ' ' * add_n + lines[i]
-if not change_any: # 遇到第一个
+lines[i] = " " * add_n + lines[i]
+if not change_any:  # 遇到第一个
change_any = True

if change_any:
-return '\n'.join(lines)
+return "\n".join(lines)
else:
return txt

-@lru_cache(maxsize=128) # 使用 lru缓存 加快转换速度
+@lru_cache(maxsize=128)  # 使用 lru缓存 加快转换速度
def markdown_convertion(txt):
"""
将Markdown格式的文本转换为HTML格式。如果包含数学公式,则先将公式转换为HTML格式。
"""
pre = '<div class="markdown-body">'
-suf = '</div>'
+suf = "</div>"
if txt.startswith(pre) and txt.endswith(suf):
# print('警告,输入了已经经过转化的字符串,二次转化可能出问题')
-return txt # 已经被转化过,不需要再次转化
+return txt  # 已经被转化过,不需要再次转化

find_equation_pattern = r'<script type="math/tex(?:.*?)>(.*?)</script>'

@@ -229,18 +224,47 @@ def markdown_convertion(txt):
# txt = fix_code_segment_indent(txt)
if is_equation(txt): # 有$标识的公式符号,且没有代码段```的标识
# convert everything to html format
-split = markdown.markdown(text='---')
-convert_stage_1 = markdown.markdown(text=txt, extensions=['sane_lists', 'tables', 'mdx_math', 'pymdownx.superfences', 'pymdownx.highlight'],
-extension_configs={**markdown_extension_configs, **code_highlight_configs})
+split = markdown.markdown(text="---")
+convert_stage_1 = markdown.markdown(
+text=txt,
+extensions=[
+"sane_lists",
+"tables",
+"mdx_math",
+"pymdownx.superfences",
+"pymdownx.highlight",
+],
+extension_configs={**markdown_extension_configs, **code_highlight_configs},
+)
convert_stage_1 = markdown_bug_hunt(convert_stage_1)
# 1. convert to easy-to-copy tex (do not render math)
-convert_stage_2_1, n = re.subn(find_equation_pattern, replace_math_no_render, convert_stage_1, flags=re.DOTALL)
+convert_stage_2_1, n = re.subn(
+find_equation_pattern,
+replace_math_no_render,
+convert_stage_1,
+flags=re.DOTALL,
+)
# 2. convert to rendered equation
-convert_stage_2_2, n = re.subn(find_equation_pattern, replace_math_render, convert_stage_1, flags=re.DOTALL)
+convert_stage_2_2, n = re.subn(
+find_equation_pattern, replace_math_render, convert_stage_1, flags=re.DOTALL
+)
# cat them together
-return pre + convert_stage_2_1 + f'{split}' + convert_stage_2_2 + suf
+return pre + convert_stage_2_1 + f"{split}" + convert_stage_2_2 + suf
else:
-return pre + markdown.markdown(txt, extensions=['sane_lists', 'tables', 'pymdownx.superfences', 'pymdownx.highlight'], extension_configs=code_highlight_configs) + suf
+return (
+pre
++ markdown.markdown(
+txt,
+extensions=[
+"sane_lists",
+"tables",
+"pymdownx.superfences",
+"pymdownx.highlight",
+],
+extension_configs=code_highlight_configs,
+)
++ suf
+)


def close_up_code_segment_during_stream(gpt_reply):
@@ -254,20 +278,67 @@ def close_up_code_segment_during_stream(gpt_reply):
str: 返回一个新的字符串,将输出代码片段的“后面的```”补上。

"""
-if '```' not in gpt_reply:
+if "```" not in gpt_reply:
return gpt_reply
-if gpt_reply.endswith('```'):
+if gpt_reply.endswith("```"):
return gpt_reply

# 排除了以上两个情况,我们
-segments = gpt_reply.split('```')
+segments = gpt_reply.split("```")
n_mark = len(segments) - 1
if n_mark % 2 == 1:
-return gpt_reply + '\n```' # 输出代码片段中!
+return gpt_reply + "\n```"  # 输出代码片段中!
else:
return gpt_reply
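The logic of `close_up_code_segment_during_stream` is unchanged by the reformat above: while a reply is still streaming, an odd number of ``` fences means a code block is open, so a closing fence is appended for rendering. A runnable restatement:

```python
# Runnable restatement of close_up_code_segment_during_stream above.
def close_up_code_segment_during_stream(gpt_reply):
    if "```" not in gpt_reply:
        return gpt_reply
    if gpt_reply.endswith("```"):
        return gpt_reply
    # odd fence count => we are inside an unclosed code block
    n_mark = len(gpt_reply.split("```")) - 1
    if n_mark % 2 == 1:
        return gpt_reply + "\n```"
    return gpt_reply

print(close_up_code_segment_during_stream("```python\nprint(1)"))
```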

def special_render_issues_for_mermaid(text):
# 用不太优雅的方式处理一个core_functional.py中出现的mermaid渲染特例:
# 我不希望“总结绘制脑图”prompt中的mermaid渲染出来
@lru_cache(maxsize=1)
def get_special_case():
from core_functional import get_core_functions
special_case = get_core_functions()["总结绘制脑图"]["Suffix"]
return special_case
if text.endswith(get_special_case()): text = text.replace("```mermaid", "```")
return text


def compat_non_markdown_input(text):
"""
改善非markdown输入的显示效果,例如将空格转换为&nbsp;,将换行符转换为</br>等。
"""
if "```" in text:
# careful input:markdown输入
text = special_render_issues_for_mermaid(text)  # 处理特殊的渲染问题
return text
elif "</div>" in text:
# careful input:html输入
return text
else:
# whatever input:非markdown输入
lines = text.split("\n")
for i, line in enumerate(lines):
lines[i] = lines[i].replace(" ", "&nbsp;")  # 空格转换为&nbsp;
text = "</br>".join(lines)  # 换行符转换为</br>
return text
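The plain-text branch of `compat_non_markdown_input` is independent of the rest of the repo and can be exercised directly: spaces become `&nbsp;` and newlines become `</br>` so the chat UI preserves the original layout.

```python
# Runnable sketch of the "whatever input" branch of
# compat_non_markdown_input above (the markdown/html branches
# return the text unchanged and are omitted here).
def compat_plain_text(text):
    lines = text.split("\n")
    for i, line in enumerate(lines):
        lines[i] = line.replace(" ", "&nbsp;")  # keep consecutive spaces visible
    return "</br>".join(lines)            # keep line breaks visible

print(compat_plain_text("a b\nc"))  # a&nbsp;b</br>c
```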
|
||||
|
||||
|
||||
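The plain-text branch above reduces to two substitutions that preserve spacing and line breaks in HTML. A minimal sketch of the same idea (hypothetical helper name; it mirrors the source's use of `</br>`, which is nonstandard but is what the code emits):

```python
def plaintext_to_html(text: str) -> str:
    """Preserve spacing and line breaks when showing non-markdown text as HTML."""
    lines = text.split("\n")
    for i, line in enumerate(lines):
        lines[i] = lines[i].replace(" ", "&nbsp;")  # keep runs of spaces visible
    return "</br>".join(lines)  # turn newlines into explicit HTML breaks

print(plaintext_to_html("a  b\nc"))
```

Without the `&nbsp;` substitution, HTML would collapse consecutive spaces, losing alignment in pasted logs or code-like plain text.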
@lru_cache(maxsize=128)  # use lru cache
def simple_markdown_convertion(text):
    pre = '<div class="markdown-body">'
    suf = "</div>"
    if text.startswith(pre) and text.endswith(suf):
        return text  # already converted, no need to convert again
    text = compat_non_markdown_input(text)  # compatibility for non-markdown input
    text = markdown.markdown(
        text,
        extensions=["pymdownx.superfences", "tables", "pymdownx.highlight"],
        extension_configs=code_highlight_configs,
    )
    return pre + text + suf

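The startswith/endswith guard above is what makes the conversion safe to call repeatedly (the function may be re-entered on already-converted chat history). A small sketch of that idempotency pattern, with the same wrapper strings:

```python
PRE = '<div class="markdown-body">'
SUF = "</div>"

def wrap_once(text: str) -> str:
    # already-converted output is returned unchanged, so double-calling is safe
    if text.startswith(PRE) and text.endswith(SUF):
        return text
    return PRE + text + SUF

once = wrap_once("hello")
assert wrap_once(once) == once  # idempotent
```

Combined with `@lru_cache`, this also means repeated renders of the same history entry cost one dict lookup instead of a full markdown pass.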
def format_io(self, y):
    """
    Parse the input and output into HTML format. Paragraphize the input part of the last item
    in y, and convert the Markdown and math formulas in the output part into HTML.
@@ -275,13 +346,16 @@ def format_io(self, y):
    if y is None or y == []:
        return []
    i_ask, gpt_reply = y[-1]
    # the input part is too free-form, preprocess it a bit
    if i_ask is not None: i_ask = text_divide_paragraph(i_ask)
    i_ask = apply_gpt_academic_string_mask(i_ask, mode="show_render")
    gpt_reply = apply_gpt_academic_string_mask(gpt_reply, mode="show_render")
-    # when the code output is cut off halfway, try to patch up the trailing ```
-    if gpt_reply is not None: gpt_reply = close_up_code_segment_during_stream(gpt_reply)
+    # process
+    if gpt_reply is not None:
+        gpt_reply = close_up_code_segment_during_stream(gpt_reply)
    # handle the question and the output
    y[-1] = (
-        None if i_ask is None else markdown.markdown(i_ask, extensions=['pymdownx.superfences', 'tables', 'pymdownx.highlight'], extension_configs=code_highlight_configs),
-        None if gpt_reply is None else markdown_convertion(gpt_reply)
+        # input part
+        None if i_ask is None else simple_markdown_convertion(i_ask),
+        # output part
+        None if gpt_reply is None else markdown_convertion(gpt_reply),
    )
    return y

@@ -52,7 +52,7 @@ def get_plugin_default_kwargs():
    }
    chatbot = ChatBotWithCookies(llm_kwargs)

-    # txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port
+    # txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request
    DEFAULT_FN_GROUPS_kwargs = {
        "main_input": "./README.md",
        "llm_kwargs": llm_kwargs,
@@ -60,7 +60,7 @@ def get_plugin_default_kwargs():
        "chatbot_with_cookie": chatbot,
        "history": [],
        "system_prompt": "You are a good AI.",
-        "web_port": None,
+        "user_request": None,
    }
    return DEFAULT_FN_GROUPS_kwargs

107
shared_utils/text_mask.py
Normal file
@@ -0,0 +1,107 @@
import re
from functools import lru_cache

# The pattern below is compiled into a regex object stored in const_extract_re,
# so that later match/replace operations are fast. Notes on the special characters used:
# - . matches any single character.
# - * means the preceding character may appear zero or more times.
# - ? makes the match non-greedy here: in (.*?) it matches as few characters as possible,
#   stopping before the closing </show_llm> and </show_render> tags.
# - () parentheses denote a capture group.
# - So (.*?) captures the shortest stretch of text up to the nearest closing delimiter,
#   i.e. </show_llm> and </show_render>.

const_extract_re = re.compile(
    r"<gpt_academic_string_mask><show_llm>(.*?)</show_llm><show_render>(.*?)</show_render></gpt_academic_string_mask>"
)
const_extract_langbased_re = re.compile(
    r"<gpt_academic_string_mask><lang_english>(.*?)</lang_english><lang_chinese>(.*?)</lang_chinese></gpt_academic_string_mask>",
    flags=re.DOTALL,
)

@lru_cache(maxsize=128)
def apply_gpt_academic_string_mask(string, mode="show_all"):
    """
    When the string contains mask tags (<gpt_academic_string_mask><show_...>), process it
    according to who the string is intended for (the LLM, or the web renderer), and return
    the processed string.
    Diagram: https://mermaid.live/edit#pako:eNqlkUtLw0AUhf9KuOta0iaTplkIPlpduFJwoZEwJGNbzItpita2O6tF8QGKogXFtwu7cSHiq3-mk_oznFR8IYLgrGbuOd9hDrcCpmcR0GDW9ubNPKaBMDauuwI_A9M6YN-3y0bODwxsYos4BdMoBrTg5gwHF-d0mBH6-vqFQe58ed5m9XPW2uteX3Tubrj0ljLYcwxxR3h1zB43WeMs3G19yEM9uapDMe_NG9i2dagKw1Fee4c1D9nGEbtc-5n6HbNtJ8IyHOs8tbs7V2HrlDX2w2Y7XD_5haHEtQiNsOwfMVa_7TzsvrWIuJGo02qTrdwLk9gukQylHv3Afv1ML270s-HZUndrmW1tdA-WfvbM_jMFYuAQ6uCCxVdciTJ1CPLEITpo_GphypeouzXuw6XAmyi7JmgBLZEYlHwLB2S4gHMUO-9DH7tTnvf1CVoFFkBLSOk4QmlRTqpIlaWUHINyNFXjaQWpCYRURUKiWovBYo8X4ymEJFlECQUpqaQkJmuvWygPpg
    """
    if "<gpt_academic_string_mask>" not in string:  # No need to process
        return string

    if mode == "show_all":
        return string
    if mode == "show_llm":
        string = const_extract_re.sub(r"\1", string)
    elif mode == "show_render":
        string = const_extract_re.sub(r"\2", string)
    else:
        raise ValueError("Invalid mode")
    return string

@lru_cache(maxsize=128)
def build_gpt_academic_masked_string(text_show_llm="", text_show_render=""):
    """
    Build a mask-tagged string according to who each part is intended for
    (the LLM, or the web renderer).
    """
    return f"<gpt_academic_string_mask><show_llm>{text_show_llm}</show_llm><show_render>{text_show_render}</show_render></gpt_academic_string_mask>"

@lru_cache(maxsize=128)
def apply_gpt_academic_string_mask_langbased(string, lang_reference):
    """
    When the string contains mask tags (<gpt_academic_string_mask><lang_...>), pick the
    prompt matching the language of lang_reference, process the string and return it.
    For example, if lang_reference is English, only the English prompt is shown and the
    Chinese prompt is dropped.
    Examples:
        Input 1
            string = "Note, the lang_reference text is: <gpt_academic_string_mask><lang_english>English</lang_english><lang_chinese>Chinese</lang_chinese></gpt_academic_string_mask>"
            lang_reference = "hello world"
        Output 1
            "Note, the lang_reference text is: English"

        Input 2
            string = "Note, the lang_reference text is Chinese"  # no mask tag here, so nothing is processed
            lang_reference = "hello world"
        Output 2
            "Note, the lang_reference text is Chinese"  # returned unchanged
    """

    if "<gpt_academic_string_mask>" not in string:  # No need to process
        return string

    def contains_chinese(string):
        chinese_regex = re.compile(u'[\u4e00-\u9fff]+')
        return chinese_regex.search(string) is not None

    mode = "english" if not contains_chinese(lang_reference) else "chinese"
    if mode == "english":
        string = const_extract_langbased_re.sub(r"\1", string)
    elif mode == "chinese":
        string = const_extract_langbased_re.sub(r"\2", string)
    else:
        raise ValueError("Invalid mode")
    return string

@lru_cache(maxsize=128)
def build_gpt_academic_masked_string_langbased(text_show_english="", text_show_chinese=""):
    """
    Build a mask-tagged string whose visible prompt is later chosen by language.
    """
    return f"<gpt_academic_string_mask><lang_english>{text_show_english}</lang_english><lang_chinese>{text_show_chinese}</lang_chinese></gpt_academic_string_mask>"

if __name__ == "__main__":
    # Test
    input_string = (
        "你好\n"
        + build_gpt_academic_masked_string(text_show_llm="mermaid", text_show_render="")
        + "你好\n"
    )
    print(
        apply_gpt_academic_string_mask(input_string, "show_llm")
    )  # the mask resolves to "mermaid" in the LLM view
    print(
        apply_gpt_academic_string_mask(input_string, "show_render")
    )  # the mask resolves to "" in the render view
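The non-greedy (.*?) groups in the patterns above matter as soon as a string carries several masks. A small standalone check of the same substitution idea (simplified tag names for brevity, not the project's actual regex):

```python
import re

# simplified stand-in for const_extract_re: group 1 is the LLM view, group 2 the render view
mask_re = re.compile(r"<mask><llm>(.*?)</llm><render>(.*?)</render></mask>")

s = ("A<mask><llm>x</llm><render>y</render></mask>"
     "B<mask><llm>p</llm><render>q</render></mask>C")
# a greedy .* would swallow everything between the first <llm> and the last </render>;
# .*? keeps each mask separate, so both occurrences are rewritten independently
assert mask_re.sub(r"\1", s) == "AxBpC"  # what the LLM sees
assert mask_re.sub(r"\2", s) == "AyBqC"  # what the renderer sees
```

The same `sub(r"\1", ...)` / `sub(r"\2", ...)` pair is exactly how `apply_gpt_academic_string_mask` selects a view.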
296
themes/base64.mjs
Normal file
@@ -0,0 +1,296 @@
/**
 * base64.ts
 *
 * Licensed under the BSD 3-Clause License.
 * http://opensource.org/licenses/BSD-3-Clause
 *
 * References:
 * http://en.wikipedia.org/wiki/Base64
 *
 * @author Dan Kogai (https://github.com/dankogai)
 */
const version = '3.7.2';
/**
 * @deprecated use lowercase `version`.
 */
const VERSION = version;
const _hasatob = typeof atob === 'function';
const _hasbtoa = typeof btoa === 'function';
const _hasBuffer = typeof Buffer === 'function';
const _TD = typeof TextDecoder === 'function' ? new TextDecoder() : undefined;
const _TE = typeof TextEncoder === 'function' ? new TextEncoder() : undefined;
const b64ch = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=';
const b64chs = Array.prototype.slice.call(b64ch);
const b64tab = ((a) => {
    let tab = {};
    a.forEach((c, i) => tab[c] = i);
    return tab;
})(b64chs);
const b64re = /^(?:[A-Za-z\d+\/]{4})*?(?:[A-Za-z\d+\/]{2}(?:==)?|[A-Za-z\d+\/]{3}=?)?$/;
const _fromCC = String.fromCharCode.bind(String);
const _U8Afrom = typeof Uint8Array.from === 'function'
    ? Uint8Array.from.bind(Uint8Array)
    : (it, fn = (x) => x) => new Uint8Array(Array.prototype.slice.call(it, 0).map(fn));
const _mkUriSafe = (src) => src
    .replace(/=/g, '').replace(/[+\/]/g, (m0) => m0 == '+' ? '-' : '_');
const _tidyB64 = (s) => s.replace(/[^A-Za-z0-9\+\/]/g, '');
/**
 * polyfill version of `btoa`
 */
const btoaPolyfill = (bin) => {
    // console.log('polyfilled');
    let u32, c0, c1, c2, asc = '';
    const pad = bin.length % 3;
    for (let i = 0; i < bin.length;) {
        if ((c0 = bin.charCodeAt(i++)) > 255 ||
            (c1 = bin.charCodeAt(i++)) > 255 ||
            (c2 = bin.charCodeAt(i++)) > 255)
            throw new TypeError('invalid character found');
        u32 = (c0 << 16) | (c1 << 8) | c2;
        asc += b64chs[u32 >> 18 & 63]
            + b64chs[u32 >> 12 & 63]
            + b64chs[u32 >> 6 & 63]
            + b64chs[u32 & 63];
    }
    return pad ? asc.slice(0, pad - 3) + "===".substring(pad) : asc;
};
/**
 * does what `window.btoa` of web browsers do.
 * @param {String} bin binary string
 * @returns {string} Base64-encoded string
 */
const _btoa = _hasbtoa ? (bin) => btoa(bin)
    : _hasBuffer ? (bin) => Buffer.from(bin, 'binary').toString('base64')
        : btoaPolyfill;
const _fromUint8Array = _hasBuffer
    ? (u8a) => Buffer.from(u8a).toString('base64')
    : (u8a) => {
        // cf. https://stackoverflow.com/questions/12710001/how-to-convert-uint8-array-to-base64-encoded-string/12713326#12713326
        const maxargs = 0x1000;
        let strs = [];
        for (let i = 0, l = u8a.length; i < l; i += maxargs) {
            strs.push(_fromCC.apply(null, u8a.subarray(i, i + maxargs)));
        }
        return _btoa(strs.join(''));
    };
/**
 * converts a Uint8Array to a Base64 string.
 * @param {boolean} [urlsafe] URL-and-filename-safe a la RFC4648 §5
 * @returns {string} Base64 string
 */
const fromUint8Array = (u8a, urlsafe = false) => urlsafe ? _mkUriSafe(_fromUint8Array(u8a)) : _fromUint8Array(u8a);
// This trick is found broken https://github.com/dankogai/js-base64/issues/130
// const utob = (src: string) => unescape(encodeURIComponent(src));
// reverting to the good old fashioned regexp
const cb_utob = (c) => {
    if (c.length < 2) {
        var cc = c.charCodeAt(0);
        return cc < 0x80 ? c
            : cc < 0x800 ? (_fromCC(0xc0 | (cc >>> 6))
                + _fromCC(0x80 | (cc & 0x3f)))
                : (_fromCC(0xe0 | ((cc >>> 12) & 0x0f))
                    + _fromCC(0x80 | ((cc >>> 6) & 0x3f))
                    + _fromCC(0x80 | (cc & 0x3f)));
    }
    else {
        var cc = 0x10000
            + (c.charCodeAt(0) - 0xD800) * 0x400
            + (c.charCodeAt(1) - 0xDC00);
        return (_fromCC(0xf0 | ((cc >>> 18) & 0x07))
            + _fromCC(0x80 | ((cc >>> 12) & 0x3f))
            + _fromCC(0x80 | ((cc >>> 6) & 0x3f))
            + _fromCC(0x80 | (cc & 0x3f)));
    }
};
const re_utob = /[\uD800-\uDBFF][\uDC00-\uDFFFF]|[^\x00-\x7F]/g;
/**
 * @deprecated should have been internal use only.
 * @param {string} src UTF-8 string
 * @returns {string} UTF-16 string
 */
const utob = (u) => u.replace(re_utob, cb_utob);
//
const _encode = _hasBuffer
    ? (s) => Buffer.from(s, 'utf8').toString('base64')
    : _TE
        ? (s) => _fromUint8Array(_TE.encode(s))
        : (s) => _btoa(utob(s));
/**
 * converts a UTF-8-encoded string to a Base64 string.
 * @param {boolean} [urlsafe] if `true` make the result URL-safe
 * @returns {string} Base64 string
 */
const encode = (src, urlsafe = false) => urlsafe
    ? _mkUriSafe(_encode(src))
    : _encode(src);
/**
 * converts a UTF-8-encoded string to URL-safe Base64 RFC4648 §5.
 * @returns {string} Base64 string
 */
const encodeURI = (src) => encode(src, true);
// This trick is found broken https://github.com/dankogai/js-base64/issues/130
// const btou = (src: string) => decodeURIComponent(escape(src));
// reverting to the good old fashioned regexp
const re_btou = /[\xC0-\xDF][\x80-\xBF]|[\xE0-\xEF][\x80-\xBF]{2}|[\xF0-\xF7][\x80-\xBF]{3}/g;
const cb_btou = (cccc) => {
    switch (cccc.length) {
        case 4:
            var cp = ((0x07 & cccc.charCodeAt(0)) << 18)
                | ((0x3f & cccc.charCodeAt(1)) << 12)
                | ((0x3f & cccc.charCodeAt(2)) << 6)
                | (0x3f & cccc.charCodeAt(3)), offset = cp - 0x10000;
            return (_fromCC((offset >>> 10) + 0xD800)
                + _fromCC((offset & 0x3FF) + 0xDC00));
        case 3:
            return _fromCC(((0x0f & cccc.charCodeAt(0)) << 12)
                | ((0x3f & cccc.charCodeAt(1)) << 6)
                | (0x3f & cccc.charCodeAt(2)));
        default:
            return _fromCC(((0x1f & cccc.charCodeAt(0)) << 6)
                | (0x3f & cccc.charCodeAt(1)));
    }
};
/**
 * @deprecated should have been internal use only.
 * @param {string} src UTF-16 string
 * @returns {string} UTF-8 string
 */
const btou = (b) => b.replace(re_btou, cb_btou);
/**
 * polyfill version of `atob`
 */
const atobPolyfill = (asc) => {
    // console.log('polyfilled');
    asc = asc.replace(/\s+/g, '');
    if (!b64re.test(asc))
        throw new TypeError('malformed base64.');
    asc += '=='.slice(2 - (asc.length & 3));
    let u24, bin = '', r1, r2;
    for (let i = 0; i < asc.length;) {
        u24 = b64tab[asc.charAt(i++)] << 18
            | b64tab[asc.charAt(i++)] << 12
            | (r1 = b64tab[asc.charAt(i++)]) << 6
            | (r2 = b64tab[asc.charAt(i++)]);
        bin += r1 === 64 ? _fromCC(u24 >> 16 & 255)
            : r2 === 64 ? _fromCC(u24 >> 16 & 255, u24 >> 8 & 255)
                : _fromCC(u24 >> 16 & 255, u24 >> 8 & 255, u24 & 255);
    }
    return bin;
};
/**
 * does what `window.atob` of web browsers do.
 * @param {String} asc Base64-encoded string
 * @returns {string} binary string
 */
const _atob = _hasatob ? (asc) => atob(_tidyB64(asc))
    : _hasBuffer ? (asc) => Buffer.from(asc, 'base64').toString('binary')
        : atobPolyfill;
//
const _toUint8Array = _hasBuffer
    ? (a) => _U8Afrom(Buffer.from(a, 'base64'))
    : (a) => _U8Afrom(_atob(a), c => c.charCodeAt(0));
/**
 * converts a Base64 string to a Uint8Array.
 */
const toUint8Array = (a) => _toUint8Array(_unURI(a));
//
const _decode = _hasBuffer
    ? (a) => Buffer.from(a, 'base64').toString('utf8')
    : _TD
        ? (a) => _TD.decode(_toUint8Array(a))
        : (a) => btou(_atob(a));
const _unURI = (a) => _tidyB64(a.replace(/[-_]/g, (m0) => m0 == '-' ? '+' : '/'));
/**
 * converts a Base64 string to a UTF-8 string.
 * @param {String} src Base64 string. Both normal and URL-safe are supported
 * @returns {string} UTF-8 string
 */
const decode = (src) => _decode(_unURI(src));
/**
 * check if a value is a valid Base64 string
 * @param {String} src a value to check
 */
const isValid = (src) => {
    if (typeof src !== 'string')
        return false;
    const s = src.replace(/\s+/g, '').replace(/={0,2}$/, '');
    return !/[^\s0-9a-zA-Z\+/]/.test(s) || !/[^\s0-9a-zA-Z\-_]/.test(s);
};
//
const _noEnum = (v) => {
    return {
        value: v, enumerable: false, writable: true, configurable: true
    };
};
/**
 * extend String.prototype with relevant methods
 */
const extendString = function () {
    const _add = (name, body) => Object.defineProperty(String.prototype, name, _noEnum(body));
    _add('fromBase64', function () { return decode(this); });
    _add('toBase64', function (urlsafe) { return encode(this, urlsafe); });
    _add('toBase64URI', function () { return encode(this, true); });
    _add('toBase64URL', function () { return encode(this, true); });
    _add('toUint8Array', function () { return toUint8Array(this); });
};
/**
 * extend Uint8Array.prototype with relevant methods
 */
const extendUint8Array = function () {
    const _add = (name, body) => Object.defineProperty(Uint8Array.prototype, name, _noEnum(body));
    _add('toBase64', function (urlsafe) { return fromUint8Array(this, urlsafe); });
    _add('toBase64URI', function () { return fromUint8Array(this, true); });
    _add('toBase64URL', function () { return fromUint8Array(this, true); });
};
/**
 * extend Builtin prototypes with relevant methods
 */
const extendBuiltins = () => {
    extendString();
    extendUint8Array();
};
const gBase64 = {
    version: version,
    VERSION: VERSION,
    atob: _atob,
    atobPolyfill: atobPolyfill,
    btoa: _btoa,
    btoaPolyfill: btoaPolyfill,
    fromBase64: decode,
    toBase64: encode,
    encode: encode,
    encodeURI: encodeURI,
    encodeURL: encodeURI,
    utob: utob,
    btou: btou,
    decode: decode,
    isValid: isValid,
    fromUint8Array: fromUint8Array,
    toUint8Array: toUint8Array,
    extendString: extendString,
    extendUint8Array: extendUint8Array,
    extendBuiltins: extendBuiltins,
};
// makecjs:CUT //
export { version };
export { VERSION };
export { _atob as atob };
export { atobPolyfill };
export { _btoa as btoa };
export { btoaPolyfill };
export { decode as fromBase64 };
export { encode as toBase64 };
export { utob };
export { encode };
export { encodeURI };
export { encodeURI as encodeURL };
export { btou };
export { decode };
export { isValid };
export { fromUint8Array };
export { toUint8Array };
export { extendString };
export { extendUint8Array };
export { extendBuiltins };
// and finally,
export { gBase64 as Base64 };
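For orientation: the URL-safe variant this vendored js-base64 file implements (`_mkUriSafe`: `+` → `-`, `/` → `_`, padding `=` stripped) is the RFC 4648 §5 alphabet, which Python's standard library also exposes (though `urlsafe_b64encode` keeps the padding). A quick sketch:

```python
import base64

raw = b"\xfb\xff\xfe"  # bytes whose base64 digits need the +/ characters
std = base64.b64encode(raw).decode()          # standard alphabet, uses + and /
url = base64.urlsafe_b64encode(raw).decode()  # URL-safe alphabet, uses - and _
print(std, url)
```

This is why `mermaid_loader.js` can hand base64-encoded diagram state to the mermaid.live editor inside a URL without further escaping.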
21
themes/common.py
Normal file
@@ -0,0 +1,21 @@
from toolbox import get_conf
CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf("CODE_HIGHLIGHT", "ADD_WAIFU", "LAYOUT")

def get_common_html_javascript_code():
    js = "\n"
    for jsf in [
        "file=themes/common.js",
        "file=themes/mermaid.min.js",
        "file=themes/mermaid_loader.js",
    ]:
        js += f"""<script src="{jsf}"></script>\n"""

    # add Live2D
    if ADD_WAIFU:
        for jsf in [
            "file=docs/waifu_plugin/jquery.min.js",
            "file=docs/waifu_plugin/jquery-ui.min.js",
            "file=docs/waifu_plugin/autoload.js",
        ]:
            js += f"""<script src="{jsf}"></script>\n"""
    return js

@@ -67,22 +67,9 @@ def adjust_theme():
        button_cancel_text_color_dark="white",
    )

-    js = ""
-    for jsf in [
-        os.path.join(theme_dir, "common.js"),
-        os.path.join(theme_dir, "mermaid.min.js"),
-        os.path.join(theme_dir, "mermaid_loader.js"),
-    ]:
-        with open(jsf, "r", encoding="utf8") as f:
-            js += f"<script>{f.read()}</script>"
-
-    # add a cute Live2D mascot
-    if ADD_WAIFU:
-        js += """
-        <script src="file=docs/waifu_plugin/jquery.min.js"></script>
-        <script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
-        <script src="file=docs/waifu_plugin/autoload.js"></script>
-        """
+    from themes.common import get_common_html_javascript_code
+    js = get_common_html_javascript_code()

    if not hasattr(gr, "RawTemplateResponse"):
        gr.RawTemplateResponse = gr.routes.templates.TemplateResponse
    gradio_original_template_fn = gr.RawTemplateResponse

@@ -67,22 +67,8 @@ def adjust_theme():
        button_cancel_text_color_dark="white",
    )

-    js = ""
-    for jsf in [
-        os.path.join(theme_dir, "common.js"),
-        os.path.join(theme_dir, "mermaid.min.js"),
-        os.path.join(theme_dir, "mermaid_loader.js"),
-    ]:
-        with open(jsf, "r", encoding="utf8") as f:
-            js += f"<script>{f.read()}</script>"
-
-    # add a cute Live2D mascot
-    if ADD_WAIFU:
-        js += """
-        <script src="file=docs/waifu_plugin/jquery.min.js"></script>
-        <script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
-        <script src="file=docs/waifu_plugin/autoload.js"></script>
-        """
+    from themes.common import get_common_html_javascript_code
+    js = get_common_html_javascript_code()
    if not hasattr(gr, "RawTemplateResponse"):
        gr.RawTemplateResponse = gr.routes.templates.TemplateResponse
    gradio_original_template_fn = gr.RawTemplateResponse

@@ -31,23 +31,9 @@ def adjust_theme():
        THEME = THEME.lstrip("huggingface-")
        set_theme = set_theme.from_hub(THEME.lower())

-    js = ""
-    for jsf in [
-        os.path.join(theme_dir, "common.js"),
-        os.path.join(theme_dir, "mermaid.min.js"),
-        os.path.join(theme_dir, "mermaid_loader.js"),
-    ]:
-        with open(jsf, "r", encoding="utf8") as f:
-            js += f"<script>{f.read()}</script>"
-
-
-    # add a cute Live2D mascot
-    if ADD_WAIFU:
-        js += """
-        <script src="file=docs/waifu_plugin/jquery.min.js"></script>
-        <script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
-        <script src="file=docs/waifu_plugin/autoload.js"></script>
-        """
+    from themes.common import get_common_html_javascript_code
+    js = get_common_html_javascript_code()

    if not hasattr(gr, "RawTemplateResponse"):
        gr.RawTemplateResponse = gr.routes.templates.TemplateResponse
    gradio_original_template_fn = gr.RawTemplateResponse

@@ -76,22 +76,8 @@ def adjust_theme():
        chatbot_code_background_color_dark="*neutral_950",
    )

-    js = ""
-    for jsf in [
-        os.path.join(theme_dir, "common.js"),
-        os.path.join(theme_dir, "mermaid.min.js"),
-        os.path.join(theme_dir, "mermaid_loader.js"),
-    ]:
-        with open(jsf, "r", encoding="utf8") as f:
-            js += f"<script>{f.read()}</script>"
-
-    # add a cute Live2D mascot
-    if ADD_WAIFU:
-        js += """
-        <script src="file=docs/waifu_plugin/jquery.min.js"></script>
-        <script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
-        <script src="file=docs/waifu_plugin/autoload.js"></script>
-        """
+    from themes.common import get_common_html_javascript_code
+    js = get_common_html_javascript_code()

    with open(os.path.join(theme_dir, "green.js"), "r", encoding="utf8") as f:
        js += f"<script>{f.read()}</script>"

@@ -1,5 +1,5 @@
-import { deflate, inflate } from 'https://fastly.jsdelivr.net/gh/nodeca/pako@master/dist/pako.esm.mjs';
-import { toUint8Array, fromUint8Array, toBase64, fromBase64 } from 'https://cdn.jsdelivr.net/npm/js-base64@3.7.2/base64.mjs';
+import { deflate, inflate } from '/file=themes/pako.esm.mjs';
+import { toUint8Array, fromUint8Array, toBase64, fromBase64 } from '/file=themes/base64.mjs';

const base64Serde = {
    serialize: (state) => {

@@ -106,7 +106,7 @@ const uml = async className => {
        defaultConfig.theme = "dark"
    }

-    const Module = await import('./file=themes/mermaid_editor.js');
+    const Module = await import('/file=themes/mermaid_editor.js');

    function do_render(block, code, codeContent, cnt) {
        var rendered_content = mermaid.render(`_diagram_${cnt}`, code);
@@ -154,8 +154,16 @@ const uml = async className => {
        var block = blocks[i]
        ////////////// skip rendering if the code has not changed //////////////
        var code = getFromCode(block);
-        let codeContent = block.querySelector("code").textContent; // get the text content of the code element
-        let codePendingRenderElement = block.querySelector("code_pending_render"); // if a code_pending_render element already exists under the block, get it
+        let code_elem = block.querySelector("code");
+        let codeContent = code_elem.textContent; // get the text content of the code element
+
+        // if codeContent contains '<gpt_academic_hide_mermaid_code>', hide code_elem
+        if (codeContent.indexOf('<gpt_academic_hide_mermaid_code>') !== -1) {
+            code_elem.style.display = "none";
+        }
+
+        // if a code_pending_render element already exists under the block, get it
+        let codePendingRenderElement = block.querySelector("code_pending_render");
        if (codePendingRenderElement) { // a code_pending_render element already exists under the block
            codePendingRenderElement.style.display = "none";
            if (codePendingRenderElement.textContent !== codeContent) {

6877
themes/pako.esm.mjs
Normal file
File diff suppressed because it is too large
10
toolbox.py
@@ -10,6 +10,8 @@ import glob
from functools import wraps
from shared_utils.config_loader import get_conf
from shared_utils.config_loader import set_conf
from shared_utils.config_loader import set_multi_conf
from shared_utils.config_loader import read_single_conf_with_lru_cache
+from shared_utils.advanced_markdown_format import format_io
+from shared_utils.advanced_markdown_format import markdown_convertion
from shared_utils.key_pattern_manager import select_api_key
@@ -19,6 +21,10 @@ from shared_utils.connect_void_terminal import get_chat_handle
from shared_utils.connect_void_terminal import get_plugin_handle
from shared_utils.connect_void_terminal import get_plugin_default_kwargs
from shared_utils.connect_void_terminal import get_chat_default_kwargs
+from shared_utils.text_mask import apply_gpt_academic_string_mask
+from shared_utils.text_mask import build_gpt_academic_masked_string
+from shared_utils.text_mask import apply_gpt_academic_string_mask_langbased
+from shared_utils.text_mask import build_gpt_academic_masked_string_langbased

pj = os.path.join
default_user_name = "default_user"
@@ -67,7 +73,9 @@ class ChatBotWithCookies(list):

def ArgsGeneralWrapper(f):
    """
-    Decorator that regroups the input arguments, changing their order and structure.
+    The decorator ArgsGeneralWrapper regroups the input arguments, changing their order and structure.
+    It is the entry point for most feature calls.
+    Function diagram: https://mermaid.live/edit#pako:eNqNVFtPGkEY_StkntoEDQtLoTw0sWqapjQxVWPabmOm7AiEZZcsQ9QiiW012qixqdeqqIn10geBh6ZR8PJnmAWe-hc6l3VhrWnLEzNzzvnO953ZyYOYoSIQAWOaMR5LQBN7hvoU3UN_g5iu7imAXEyT4wUF3Pd0dT3y9KGYYUJsmK8V0GPGs0-QjkyojZgwk0Fm82C2dVghX08U8EaoOHjOfoEMU0XmADRhOksVWnNLjdpM82qFzB6S5Q_WWsUhuqCc3JtAsVR_OoMnhyZwXgHWwbS1d4gnsLVZJp-P6mfVxveqAgqC70Jz_pQCOGDKM5xFdNNPDdilF6uSU_hOYqu4a3MHYDZLDzq5fodrC3PWcEaFGPUaRiqJWK_W9g9rvRITa4dhy_0nw67SiePMp3oSR6PPn41DGgllkvkizYwsrmtaejTFd8V4yekGmT1zqrt4XGlAy8WTuiPULF01LksZvukSajfQQRAxmYi5S0D81sDcyzapVdn6sYFHkjhhGyel3frVQnvsnbR23lEjlhIlaOJiFPWzU5G4tfNJo8ejwp47-TbvJkKKZvmxA6SKo16oaazJysfG6klr9T0pbTW2ZqzlL_XaT8fYbQLXe4mSmvoCZXMaa7FePW6s7jVqK9bujvse3WFjY5_Z4KfsA4oiPY4T7Drvn1tLJTbG1to1qR79ulgk89-oJbvZzbIwJty6u20LOReWa9BvwserUd9s9MIKc3x5TUWEoAhUyJK5y85w_yG-dFu_R9waoU7K581y8W_qLle35-rG9Nxcrz8QHRsc0K-r9NViYRT36KsFvCCNzDRMqvSVyzOKAnACpZECIvSvCs2UAhS9QHEwh43BST0GItjMIS_I8e-sLwnj9A262cxA_ZVh0OUY1LJiDSJ5MAEiUijYLUtBORR6KElyQPaCSRDpksNSd8AfluSgHPaFC17wjrOlbgbzyyFf4IFPDvoD_sJvnkdK-g
    """
    def decorated(request: gradio.Request, cookies, max_length, llm_model, txt, txt2, top_p, temperature, chatbot, history, system_prompt, plugin_advanced_arg, *args):
        txt_passon = txt

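ArgsGeneralWrapper's job, regrouping the loose positional arguments coming from the UI before they reach the wrapped function, can be illustrated with a much smaller decorator. This is a hypothetical sketch of the pattern only; the names and the tiny three-argument signature are invented, not the project's actual interface:

```python
from functools import wraps

def args_regroup(f):
    """Collect loose UI inputs into one llm_kwargs dict before calling f."""
    @wraps(f)
    def decorated(txt, llm_model, temperature, *args):
        # repackage scattered positional inputs into a single structured dict
        llm_kwargs = {"llm_model": llm_model, "temperature": temperature}
        return f(txt, llm_kwargs, *args)
    return decorated

@args_regroup
def handler(txt, llm_kwargs, history):
    return txt, llm_kwargs["llm_model"], history

print(handler("hi", "gpt-4", 0.7, []))
```

The real wrapper does the same thing at larger scale (cookies, request, plugin args), which lets every feature handler share one tidy signature.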
4
version
@@ -1,5 +1,5 @@
{
-    "version": 3.70,
+    "version": 3.71,
    "show_feature": true,
-    "new_feature": "支持Mermaid绘图库(让大模型绘制脑图) <-> 支持Gemini-pro <-> 支持直接拖拽文件到上传区 <-> 支持将图片粘贴到输入区 <-> 修复若干隐蔽的内存BUG <-> 修复多用户冲突问题 <-> 接入Deepseek Coder <-> AutoGen多智能体插件测试版"
+    "new_feature": "用绘图功能增强部分插件 <-> 基础功能区支持自动切换中英提示词 <-> 支持Mermaid绘图库(让大模型绘制脑图) <-> 支持Gemini-pro <-> 支持直接拖拽文件到上传区 <-> 支持将图片粘贴到输入区"
}