Compare commits

29 Commits: `hongyi-zha` ... `production`
| SHA1 |
|---|
| ffb5655a23 |
| cb92ccb409 |
| cc4df91900 |
| 89707a1c58 |
| d539ad809e |
| 02b18ff67a |
| 6896b10be9 |
| 0ec5a8e5f8 |
| 79a0b687b8 |
| 70766cdd44 |
| 97f33b8bea |
| 7280ea17fd |
| 535a901991 |
| 56f42397b1 |
| aa7c47e821 |
| 62fb2794ec |
| 3121dee04a |
| cad541d8d7 |
| 9023aa6732 |
| 2d37b74a0c |
| fdc350cfe8 |
| 58c6d45d84 |
| 4cc6ff65ac |
| 8632413011 |
| 46e279b5dd |
| 25cf86dae6 |
| 19e202ddfd |
| 65dab46a28 |
| ecb473bc8b |
.github/ISSUE_TEMPLATE/bug_report.yml (vendored, 6 changes)

````diff
@@ -69,3 +69,9 @@ body:
     attributes:
       label: Terminal Traceback & Material to Help Reproduce Bugs | 终端traceback(如有) + 帮助我们复现的测试材料样本(如有)
      description: Terminal Traceback & Material to Help Reproduce Bugs | 终端traceback(如有) + 帮助我们复现的测试材料样本(如有)
````
.github/ISSUE_TEMPLATE/feature_request.yml (vendored, 5 changes)

````diff
@@ -21,3 +21,8 @@ body:
     attributes:
       label: Feature Request | 功能请求
       description: Feature Request | 功能请求
````
````diff
@@ -1,44 +0,0 @@
-# https://docs.github.com/en/actions/publishing-packages/publishing-docker-images#publishing-images-to-github-packages
-name: build-with-all-capacity-beta
-
-on:
-  push:
-    branches:
-      - 'master'
-
-env:
-  REGISTRY: ghcr.io
-  IMAGE_NAME: ${{ github.repository }}_with_all_capacity_beta
-
-jobs:
-  build-and-push-image:
-    runs-on: ubuntu-latest
-    permissions:
-      contents: read
-      packages: write
-
-    steps:
-      - name: Checkout repository
-        uses: actions/checkout@v3
-
-      - name: Log in to the Container registry
-        uses: docker/login-action@v2
-        with:
-          registry: ${{ env.REGISTRY }}
-          username: ${{ github.actor }}
-          password: ${{ secrets.GITHUB_TOKEN }}
-
-      - name: Extract metadata (tags, labels) for Docker
-        id: meta
-        uses: docker/metadata-action@v4
-        with:
-          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
-
-      - name: Build and push Docker image
-        uses: docker/build-push-action@v4
-        with:
-          context: .
-          push: true
-          file: docs/GithubAction+AllCapacityBeta
-          tags: ${{ steps.meta.outputs.tags }}
-          labels: ${{ steps.meta.outputs.labels }}
````
.gitignore (vendored, 1 change)

````diff
@@ -152,4 +152,3 @@ request_llms/moss
 media
 flagged
 request_llms/ChatGLM-6b-onnx-u8s8
-.pre-commit-config.yaml
````
````diff
@@ -18,6 +18,7 @@ WORKDIR /gpt

 # 安装大部分依赖,利用Docker缓存加速以后的构建 (以下三行,可以删除)
 COPY requirements.txt ./
+COPY ./docs/gradio-3.32.6-py3-none-any.whl ./docs/gradio-3.32.6-py3-none-any.whl
 RUN pip3 install -r requirements.txt

````
README.md (100 changes)

````diff
@@ -1,8 +1,8 @@
-> [!IMPORTANT]
-> 2024.1.18: 更新3.70版本,支持Mermaid绘图库(让大模型绘制脑图)
-> 2024.1.17: 恭迎GLM4,全力支持Qwen、GLM、DeepseekCoder等国内中文大语言基座模型!
-> 2024.1.17: 某些依赖包尚不兼容python 3.12,推荐python 3.11。
-> 2024.1.17: 安装依赖时,请选择`requirements.txt`中**指定的版本**。 安装命令:`pip install -r requirements.txt`。本项目完全开源免费,您可通过订阅[在线服务](https://github.com/binary-husky/gpt_academic/wiki/online)的方式鼓励本项目的发展。
+> **Caution**
+>
+> 2023.11.12: 某些依赖包尚不兼容python 3.12,推荐python 3.11。
+>
+> 2023.11.7: 安装依赖时,请选择`requirements.txt`中**指定的版本**。 安装命令:`pip install -r requirements.txt`。本项目开源免费,近期发现有人蔑视开源协议并利用本项目违规圈钱,请提高警惕,谨防上当受骗。

 <br>

@@ -42,11 +42,13 @@ If you like this project, please give it a Star.
 Read this in [English](docs/README.English.md) | [日本語](docs/README.Japanese.md) | [한국어](docs/README.Korean.md) | [Русский](docs/README.Russian.md) | [Français](docs/README.French.md). All translations have been provided by the project itself. To translate this project to arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).
 <br>

-> [!NOTE]
-> 1.本项目中每个文件的功能都在[自译解报告](https://github.com/binary-husky/gpt_academic/wiki/GPT‐Academic项目自译解报告)`self_analysis.md`详细说明。随着版本的迭代,您也可以随时自行点击相关函数插件,调用GPT重新生成项目的自我解析报告。常见问题请查阅wiki。
+> 1.请注意只有 **高亮** 标识的插件(按钮)才支持读取文件,部分插件位于插件区的**下拉菜单**中。另外我们以**最高优先级**欢迎和处理任何新插件的PR。
+>
+> 2.本项目中每个文件的功能都在[自译解报告](https://github.com/binary-husky/gpt_academic/wiki/GPT‐Academic项目自译解报告)`self_analysis.md`详细说明。随着版本的迭代,您也可以随时自行点击相关函数插件,调用GPT重新生成项目的自我解析报告。常见问题请查阅wiki。
 > [](#installation) [](https://github.com/binary-husky/gpt_academic/releases) [](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明) []([https://github.com/binary-husky/gpt_academic/wiki/项目配置说明](https://github.com/binary-husky/gpt_academic/wiki))
 >
-> 2.本项目兼容并鼓励尝试国内中文大语言基座模型如通义千问,智谱GLM等。支持多个api-key共存,可在配置文件中填写如`API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`。需要临时更换`API_KEY`时,在输入区输入临时的`API_KEY`然后回车键提交即可生效。
+> 3.本项目兼容并鼓励尝试国产大语言模型ChatGLM等。支持多个api-key共存,可在配置文件中填写如`API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`。需要临时更换`API_KEY`时,在输入区输入临时的`API_KEY`然后回车键提交即可生效。

 <br><br>

@@ -54,12 +56,7 @@ Read this in [English](docs/README.Japanes

 功能(⭐= 近期新增功能) | 描述
 --- | ---
-⭐[接入新模型](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B) | 百度[千帆](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu)与文心一言, 通义千问[Qwen](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary),上海AI-Lab[书生](https://github.com/InternLM/InternLM),讯飞[星火](https://xinghuo.xfyun.cn/),[LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf),[智谱GLM4](https://open.bigmodel.cn/),DALLE3, [DeepseekCoder](https://coder.deepseek.com/)
-⭐支持mermaid图像渲染 | 支持让GPT生成[流程图](https://www.bilibili.com/video/BV18c41147H9/)、状态转移图、甘特图、饼状图、GitGraph等等(3.7版本)
-⭐Arxiv论文精细翻译 ([Docker](https://github.com/binary-husky/gpt_academic/pkgs/container/gpt_academic_with_latex)) | [插件] 一键[以超高质量翻译arxiv论文](https://www.bilibili.com/video/BV1dz4y1v77A/),目前最好的论文翻译工具
-⭐[实时语音对话输入](https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md) | [插件] 异步[监听音频](https://www.bilibili.com/video/BV1AV4y187Uy/),自动断句,自动寻找回答时机
-⭐AutoGen多智能体插件 | [插件] 借助微软AutoGen,探索多Agent的智能涌现可能!
-⭐虚空终端插件 | [插件] 能够使用自然语言直接调度本项目其他插件
+⭐[接入新模型](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B) | 百度[千帆](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu)与文心一言, 通义千问[Qwen](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary),上海AI-Lab[书生](https://github.com/InternLM/InternLM),讯飞[星火](https://xinghuo.xfyun.cn/),[LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf),[智谱API](https://open.bigmodel.cn/),DALLE3, [DeepseekCoder](https://coder.deepseek.com/)
 润色、翻译、代码解释 | 一键润色、翻译、查找论文语法错误、解释代码
 [自定义快捷键](https://www.bilibili.com/video/BV14s4y1E7jN) | 支持自定义快捷键
 模块化设计 | 支持自定义强大的[插件](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions),插件支持[热更新](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
@@ -68,16 +65,22 @@ Read this in [English](docs/README.Japanes
 Latex全文[翻译](https://www.bilibili.com/video/BV1nk4y1Y7Js/)、[润色](https://www.bilibili.com/video/BV1FT411H7c5/) | [插件] 一键翻译或润色latex论文
 批量注释生成 | [插件] 一键批量生成函数注释
 Markdown[中英互译](https://www.bilibili.com/video/BV1yo4y157jV/) | [插件] 看到上面5种语言的[README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md)了吗?就是出自他的手笔
+chat分析报告生成 | [插件] 运行后自动生成总结汇报
 [PDF论文全文翻译功能](https://www.bilibili.com/video/BV1KT411x7Wn) | [插件] PDF论文提取题目&摘要+翻译全文(多线程)
 [Arxiv小助手](https://www.bilibili.com/video/BV1LM4y1279X) | [插件] 输入arxiv文章url即可一键翻译摘要+下载PDF
 Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼写纠错+输出对照PDF
 [谷歌学术统合小助手](https://www.bilibili.com/video/BV19L411U7ia) | [插件] 给定任意谷歌学术搜索页面URL,让gpt帮你[写relatedworks](https://www.bilibili.com/video/BV1GP411U7Az/)
 互联网信息聚合+GPT | [插件] 一键[让GPT从互联网获取信息](https://www.bilibili.com/video/BV1om4y127ck)回答问题,让信息永不过时
+⭐Arxiv论文精细翻译 ([Docker](https://github.com/binary-husky/gpt_academic/pkgs/container/gpt_academic_with_latex)) | [插件] 一键[以超高质量翻译arxiv论文](https://www.bilibili.com/video/BV1dz4y1v77A/),目前最好的论文翻译工具
+⭐[实时语音对话输入](https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md) | [插件] 异步[监听音频](https://www.bilibili.com/video/BV1AV4y187Uy/),自动断句,自动寻找回答时机
 公式/图片/表格显示 | 可以同时显示公式的[tex形式和渲染形式](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png),支持公式、代码高亮
+⭐AutoGen多智能体插件 | [插件] 借助微软AutoGen,探索多Agent的智能涌现可能!
 启动暗色[主题](https://github.com/binary-husky/gpt_academic/issues/173) | 在浏览器url后面添加```/?__theme=dark```可以切换dark主题
 [多LLM模型](https://www.bilibili.com/video/BV1wT411p7yf)支持 | 同时被GPT3.5、GPT4、[清华ChatGLM2](https://github.com/THUDM/ChatGLM2-6B)、[复旦MOSS](https://github.com/OpenLMLab/MOSS)伺候的感觉一定会很不错吧?
+⭐ChatGLM2微调模型 | 支持加载ChatGLM2微调模型,提供ChatGLM2微调辅助插件
 更多LLM模型接入,支持[huggingface部署](https://huggingface.co/spaces/qingxu98/gpt-academic) | 加入Newbing接口(新必应),引入清华[Jittorllms](https://github.com/Jittor/JittorLLMs)支持[LLaMA](https://github.com/facebookresearch/llama)和[盘古α](https://openi.org.cn/pangu/)
 ⭐[void-terminal](https://github.com/binary-husky/void-terminal) pip包 | 脱离GUI,在Python中直接调用本项目的所有函数插件(开发中)
+⭐虚空终端插件 | [插件] 能够使用自然语言直接调度本项目其他插件
 更多新功能展示 (图像生成等) …… | 见本文档结尾处 ……
 </div>

@@ -108,7 +111,7 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼
 <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
 </div>

-- 多种大语言模型混合调用(ChatGLM + OpenAI-GPT3.5 + GPT4)
+- 多种大语言模型混合调用(ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
 <div align="center">
 <img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
 </div>
@@ -116,25 +119,6 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼
 <br><br>

 # Installation
-
-```mermaid
-flowchart TD
-    A{"安装方法"} --> W1("I. 🔑直接运行 (Windows, Linux or MacOS)")
-    W1 --> W11["1. Python pip包管理依赖"]
-    W1 --> W12["2. Anaconda包管理依赖(推荐⭐)"]
-
-    A --> W2["II. 🐳使用Docker (Windows, Linux or MacOS)"]
-
-    W2 --> k1["1. 部署项目全部能力的大镜像(推荐⭐)"]
-    W2 --> k2["2. 仅在线模型(GPT, GLM4等)镜像"]
-    W2 --> k3["3. 在线模型 + Latex的大镜像"]
-
-    A --> W4["IV. 🚀其他部署方法"]
-    W4 --> C1["1. Windows/MacOS 一键安装运行脚本(推荐⭐)"]
-    W4 --> C2["2. Huggingface, Sealos远程部署"]
-    W4 --> C4["3. ... 其他 ..."]
-```
-
 ### 安装方法I:直接运行 (Windows, Linux or MacOS)

 1. 下载项目
@@ -148,7 +132,7 @@ flowchart TD

 在`config.py`中,配置API KEY等变量。[特殊网络环境设置方法](https://github.com/binary-husky/gpt_academic/issues/1)、[Wiki-项目配置说明](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)。

-「 程序会优先检查是否存在名为`config_private.py`的私密配置文件,并用其中的配置覆盖`config.py`的同名配置。如您能理解以上读取逻辑,我们强烈建议您在`config.py`同路径下创建一个名为`config_private.py`的新配置文件,并使用`config_private.py`配置项目,从而确保自动更新时不会丢失配置 」。
+「 程序会优先检查是否存在名为`config_private.py`的私密配置文件,并用其中的配置覆盖`config.py`的同名配置。如您能理解以上读取逻辑,我们强烈建议您在`config.py`同路径下创建一个名为`config_private.py`的新配置文件,并使用`config_private.py`配置项目,以确保更新或其他用户无法轻易查看您的私有配置 」。

 「 支持通过`环境变量`配置项目,环境变量的书写格式参考`docker-compose.yml`文件或者我们的[Wiki页面](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)。配置读取优先级: `环境变量` > `config_private.py` > `config.py` 」。

@@ -168,10 +152,10 @@ flowchart TD
 <details><summary>如果需要支持清华ChatGLM2/复旦MOSS/RWKV作为后端,请点击展开此处</summary>
 <p>

-【可选步骤】如果需要支持清华ChatGLM3/复旦MOSS作为后端,需要额外安装更多依赖(前提条件:熟悉Python + 用过Pytorch + 电脑配置够强):
+【可选步骤】如果需要支持清华ChatGLM2/复旦MOSS作为后端,需要额外安装更多依赖(前提条件:熟悉Python + 用过Pytorch + 电脑配置够强):

 ```sh
-# 【可选步骤I】支持清华ChatGLM3。清华ChatGLM备注:如果遇到"Call ChatGLM fail 不能正常加载ChatGLM的参数" 错误,参考如下: 1:以上默认安装的为torch+cpu版,使用cuda需要卸载torch重新安装torch+cuda; 2:如因本机配置不够无法加载模型,可以修改request_llm/bridge_chatglm.py中的模型精度, 将 AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) 都修改为 AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
+# 【可选步骤I】支持清华ChatGLM2。清华ChatGLM备注:如果遇到"Call ChatGLM fail 不能正常加载ChatGLM的参数" 错误,参考如下: 1:以上默认安装的为torch+cpu版,使用cuda需要卸载torch重新安装torch+cuda; 2:如因本机配置不够无法加载模型,可以修改request_llm/bridge_chatglm.py中的模型精度, 将 AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) 都修改为 AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
 python -m pip install -r request_llms/requirements_chatglm.txt

 # 【可选步骤II】支持复旦MOSS
@@ -213,7 +197,7 @@ pip install peft
 docker-compose up
 ```

-1. 仅ChatGPT + GLM4 + 文心一言+spark等在线模型(推荐大多数人选择)
+1. 仅ChatGPT+文心一言+spark等在线模型(推荐大多数人选择)
 [](https://github.com/binary-husky/gpt_academic/actions/workflows/build-without-local-llms.yml)
 [](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-latex.yml)
 [](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml)
@@ -225,7 +209,7 @@ pip install peft

 P.S. 如果需要依赖Latex的插件功能,请见Wiki。另外,您也可以直接使用方案4或者方案0获取Latex功能。

-2. ChatGPT + GLM3 + MOSS + LLAMA2 + 通义千问(需要熟悉[Nvidia Docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#installing-on-ubuntu-and-debian)运行时)
+2. ChatGPT + ChatGLM2 + MOSS + LLAMA2 + 通义千问(需要熟悉[Nvidia Docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#installing-on-ubuntu-and-debian)运行时)
 [](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-chatglm.yml)

 ``` sh
@@ -324,9 +308,9 @@ Tip:不指定文件直接点击 `载入对话历史存档` 可以查看历史h
 <img src="https://github.com/binary-husky/gpt_academic/assets/96192199/bc7ab234-ad90-48a0-8d62-f703d9e74665" width="500" >
 </div>

-8. 基于mermaid的流图、脑图绘制
+8. OpenAI音频解析与总结
 <div align="center">
-<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/c518b82f-bd53-46e2-baf5-ad1b081c1da4" width="500" >
+<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/709ccf95-3aee-498a-934a-e1c22d3d5d5b" width="500" >
 </div>

 9. Latex全文校对纠错
@@ -343,8 +327,8 @@ Tip:不指定文件直接点击 `载入对话历史存档` 可以查看历史h


 ### II:版本:
-- version 3.80(TODO): 优化AutoGen插件主题并设计一系列衍生插件
-- version 3.70: 引入Mermaid绘图,实现GPT画脑图等功能
+- version 3.70(todo): 优化AutoGen插件主题并设计一系列衍生插件
 - version 3.60: 引入AutoGen作为新一代插件的基石
 - version 3.57: 支持GLM3,星火v3,文心一言v4,修复本地模型的并发BUG
 - version 3.56: 支持动态追加基础功能按钮,新汇报PDF汇总页面
@@ -377,32 +361,6 @@ GPT Academic开发者QQ群:`610599535`
 - 某些浏览器翻译插件干扰此软件前端的运行
 - 官方Gradio目前有很多兼容性问题,请**务必使用`requirement.txt`安装Gradio**
-
-```mermaid
-timeline LR
-    title GPT-Academic项目发展历程
-    section 2.x
-        1.0~2.2: 基础功能: 引入模块化函数插件: 可折叠式布局: 函数插件支持热重载
-        2.3~2.5: 增强多线程交互性: 新增PDF全文翻译功能: 新增输入区切换位置的功能: 自更新
-        2.6: 重构了插件结构: 提高了交互性: 加入更多插件
-    section 3.x
-        3.0~3.1: 对chatglm支持: 对其他小型llm支持: 支持同时问询多个gpt模型: 支持多个apikey负载均衡
-        3.2~3.3: 函数插件支持更多参数接口: 保存对话功能: 解读任意语言代码: 同时询问任意的LLM组合: 互联网信息综合功能
-        3.4: 加入arxiv论文翻译: 加入latex论文批改功能
-        3.44: 正式支持Azure: 优化界面易用性
-        3.46: 自定义ChatGLM2微调模型: 实时语音对话
-        3.49: 支持阿里达摩院通义千问: 上海AI-Lab书生: 讯飞星火: 支持百度千帆平台 & 文心一言
-        3.50: 虚空终端: 支持插件分类: 改进UI: 设计新主题
-        3.53: 动态选择不同界面主题: 提高稳定性: 解决多用户冲突问题
-        3.55: 动态代码解释器: 重构前端界面: 引入悬浮窗口与菜单栏
-        3.56: 动态追加基础功能按钮: 新汇报PDF汇总页面
-        3.57: GLM3, 星火v3: 支持文心一言v4: 修复本地模型的并发BUG
-        3.60: 引入AutoGen
-        3.70: 引入Mermaid绘图: 实现GPT画脑图等功能
-        3.80(TODO): 优化AutoGen插件主题: 设计衍生插件
-
-```

 ### III:主题
 可以通过修改`THEME`选项(config.py)变更主题
 1. `Chuanhu-Small-and-Beautiful` [网址](https://github.com/GaiZhenbiao/ChuanhuChatGPT/)
@@ -412,8 +370,8 @@ timeline LR

 1. `master` 分支: 主分支,稳定版
 2. `frontier` 分支: 开发分支,测试版
-3. 如何[接入其他大模型](request_llms/README.md)
-4. 访问GPT-Academic的[在线服务并支持我们](https://github.com/binary-husky/gpt_academic/wiki/online)
+3. 如何接入其他大模型:[接入其他大模型](request_llms/README.md)

 ### V:参考与学习

````
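The README hunks above keep one configuration rule intact on both sides: values are resolved with priority `环境变量` > `config_private.py` > `config.py`, and `config_private.py` overrides same-named entries in `config.py`. Below is a minimal sketch of that lookup order, assuming only that `config.py` (and optionally `config_private.py`) are importable; `read_single_conf` is an illustrative name, not the project's actual toolbox API:

```python
import importlib
import os

def read_single_conf(name: str, default=None):
    """Resolve one config item: environment variable > config_private.py > config.py."""
    # 1. Environment variables take top priority.
    if name in os.environ:
        return os.environ[name]
    # 2. config_private.py (if present) overrides same-named entries in config.py,
    #    so repository updates never clobber user settings.
    try:
        private = importlib.import_module("config_private")
        if hasattr(private, name):
            return getattr(private, name)
    except ModuleNotFoundError:
        pass
    # 3. Fall back to the tracked defaults in config.py.
    public = importlib.import_module("config")
    return getattr(public, name, default)

print(read_single_conf("LLM_MODEL", default="gpt-3.5-turbo"))
```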
config.py (37 changes)

````diff
@@ -89,14 +89,11 @@ DEFAULT_FN_GROUPS = ['对话', '编程', '学术', '智能体']
 LLM_MODEL = "gpt-3.5-turbo" # 可选 ↓↓↓
 AVAIL_LLM_MODELS = ["gpt-3.5-turbo-1106","gpt-4-1106-preview","gpt-4-vision-preview",
                     "gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5",
+                    "api2d-gpt-3.5-turbo", 'api2d-gpt-3.5-turbo-16k',
                     "gpt-4", "gpt-4-32k", "azure-gpt-4", "api2d-gpt-4",
-                    "gemini-pro", "chatglm3", "claude-2", "zhipuai"]
-# P.S. 其他可用的模型还包括 [
-# "moss", "qwen-turbo", "qwen-plus", "qwen-max"
-# "zhipuai", "qianfan", "deepseekcoder", "llama2", "qwen-local", "gpt-3.5-turbo-0613",
-# "gpt-3.5-turbo-16k-0613", "gpt-3.5-random", "api2d-gpt-3.5-turbo", 'api2d-gpt-3.5-turbo-16k',
-# "spark", "sparkv2", "sparkv3", "chatglm_onnx", "claude-1-100k", "claude-2", "internlm", "jittorllms_pangualpha", "jittorllms_llama"
-# ]
+                    "chatglm3", "moss", "claude-2"]
+# P.S. 其他可用的模型还包括 ["zhipuai", "qianfan", "deepseekcoder", "llama2", "qwen", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-random"
+# "spark", "sparkv2", "sparkv3", "chatglm_onnx", "claude-1-100k", "claude-2", "internlm", "jittorllms_pangualpha", "jittorllms_llama"]


 # 定义界面上“询问多个GPT模型”插件应该使用哪些模型,请从AVAIL_LLM_MODELS中选择,并在不同模型之间用`&`间隔,例如"gpt-3.5-turbo&chatglm3&azure-gpt-4"
@@ -106,11 +103,7 @@ MULTI_QUERY_LLM_MODELS = "gpt-3.5-turbo&chatglm3"
 # 选择本地模型变体(只有当AVAIL_LLM_MODELS包含了对应本地模型时,才会起作用)
 # 如果你选择Qwen系列的模型,那么请在下面的QWEN_MODEL_SELECTION中指定具体的模型
 # 也可以是具体的模型路径
-QWEN_LOCAL_MODEL_SELECTION = "Qwen/Qwen-1_8B-Chat-Int8"
+QWEN_MODEL_SELECTION = "Qwen/Qwen-1_8B-Chat-Int8"


-# 接入通义千问在线大模型 https://dashscope.console.aliyun.com/
-DASHSCOPE_API_KEY = "" # 阿里灵积云API_KEY
-
-
 # 百度千帆(LLM_MODEL="qianfan")
@@ -195,13 +188,7 @@ XFYUN_API_KEY = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"

 # 接入智谱大模型
 ZHIPUAI_API_KEY = ""
-ZHIPUAI_MODEL = "glm-4" # 可选 "glm-3-turbo" "glm-4"
+ZHIPUAI_MODEL = "chatglm_turbo"


-# # 火山引擎YUNQUE大模型
-# YUNQUE_SECRET_KEY = ""
-# YUNQUE_ACCESS_KEY = ""
-# YUNQUE_MODEL = ""
-
-
 # Claude API KEY
@@ -212,10 +199,6 @@ ANTHROPIC_API_KEY = ""
 CUSTOM_API_KEY_PATTERN = ""


-# Google Gemini API-Key
-GEMINI_API_KEY = ''
-
-
 # HUGGINGFACE的TOKEN,下载LLAMA时起作用 https://huggingface.co/docs/hub/security-tokens
 HUGGINGFACE_ACCESS_TOKEN = "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"

@@ -301,12 +284,6 @@ NUM_CUSTOM_BASIC_BTN = 4
 │   ├── ZHIPUAI_API_KEY
 │   └── ZHIPUAI_MODEL
 │
-├── "qwen-turbo" 等通义千问大模型
-│   └── DASHSCOPE_API_KEY
-│
-├── "Gemini"
-│   └── GEMINI_API_KEY
-│
 └── "newbing" Newbing接口不再稳定,不推荐使用
     ├── NEWBING_STYLE
     └── NEWBING_COOKIES
@@ -323,7 +300,7 @@ NUM_CUSTOM_BASIC_BTN = 4
 ├── "jittorllms_pangualpha"
 ├── "jittorllms_llama"
 ├── "deepseekcoder"
-├── "qwen-local"
+├── "qwen"
 ├── RWKV的支持见Wiki
 └── "llama2"

````
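The config.py comment above specifies that the "询问多个GPT模型" plugin takes models joined with `&`, each of which must come from `AVAIL_LLM_MODELS`. A small self-contained sketch of that convention, with the lists trimmed to a few entries for brevity:

```python
# Trimmed values; the full lists live in config.py above.
AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "chatglm3", "azure-gpt-4", "gpt-4"]
MULTI_QUERY_LLM_MODELS = "gpt-3.5-turbo&chatglm3&azure-gpt-4"

# Split on "&" and verify every requested model is actually available.
models = MULTI_QUERY_LLM_MODELS.split("&")
unknown = [m for m in models if m not in AVAIL_LLM_MODELS]
assert not unknown, f"not in AVAIL_LLM_MODELS: {unknown}"
print(models)  # ['gpt-3.5-turbo', 'chatglm3', 'azure-gpt-4']
```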
````diff
@@ -3,69 +3,30 @@
 # 'stop' 颜色对应 theme.py 中的 color_er
 import importlib
 from toolbox import clear_line_break
-from toolbox import apply_gpt_academic_string_mask_langbased
-from toolbox import build_gpt_academic_masked_string_langbased
-from textwrap import dedent

 def get_core_functions():
     return {
-        "学术语料润色": {
-            # [1*] 前缀字符串,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等。
-            # 这里填一个提示词字符串就行了,这里为了区分中英文情景搞复杂了一点
-            "Prefix": build_gpt_academic_masked_string_langbased(
-                text_show_english=
-                r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, "
-                r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. "
-                r"Firstly, you should provide the polished paragraph. "
-                r"Secondly, you should list all your modification and explain the reasons to do so in markdown table.",
-                text_show_chinese=
-                r"作为一名中文学术论文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性,"
-                r"同时分解长句,减少重复,并提供改进建议。请先提供文本的更正版本,然后在markdown表格中列出修改的内容,并给出修改的理由:"
-            ) + "\n\n",
-            # [2*] 后缀字符串,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来
-            "Suffix": r"",
-            # [3] 按钮颜色 (可选参数,默认 secondary)
-            "Color": r"secondary",
-            # [4] 按钮是否可见 (可选参数,默认 True,即可见)
-            "Visible": True,
-            # [5] 是否在触发时清除历史 (可选参数,默认 False,即不处理之前的对话历史)
-            "AutoClearHistory": False,
-            # [6] 文本预处理 (可选参数,默认 None,举例:写个函数移除所有的换行符)
-            "PreProcess": None,
-        },
-
-
-        "总结绘制脑图": {
+        "英语学术润色": {
             # 前缀,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等
-            "Prefix": r"",
+            "Prefix": r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, " +
+                      r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. " +
+                      r"Firstly, you should provide the polished paragraph. "
+                      r"Secondly, you should list all your modification and explain the reasons to do so in markdown table." + "\n\n",
             # 后缀,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来
-            "Suffix":
-                # dedent() 函数用于去除多行字符串的缩进
-                dedent("\n"+r'''
-==============================
-
-使用mermaid flowchart对以上文本进行总结,概括上述段落的内容以及内在逻辑关系,例如:
-
-以下是对以上文本的总结,以mermaid flowchart的形式展示:
-```mermaid
-flowchart LR
-    A["节点名1"] --> B("节点名2")
-    B --> C{"节点名3"}
-    C --> D["节点名4"]
-    C --> |"箭头名1"| E["节点名5"]
-    C --> |"箭头名2"| F["节点名6"]
-```
-
-警告:
-(1)使用中文
-(2)节点名字使用引号包裹,如["Laptop"]
-(3)`|` 和 `"`之间不要存在空格
-(4)根据情况选择flowchart LR(从左到右)或者flowchart TD(从上到下)
-'''),
+            "Suffix": r"",
+            # 按钮颜色 (默认 secondary)
+            "Color": r"secondary",
+            # 按钮是否可见 (默认 True,即可见)
+            "Visible": True,
+            # 是否在触发时清除历史 (默认 False,即不处理之前的对话历史)
+            "AutoClearHistory": False
         },
+        "中文学术润色": {
+            "Prefix": r"作为一名中文学术论文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性," +
+                      r"同时分解长句,减少重复,并提供改进建议。请只提供文本的更正版本,避免包括解释。请编辑以下文本" + "\n\n",
+            "Suffix": r"",
+        },

         "查找语法错误": {
             "Prefix": r"Help me ensure that the grammar and the spelling is correct. "
                       r"Do not try to polish the text, if no mistake is found, tell me that this paragraph is good. "
@@ -85,60 +46,41 @@ def get_core_functions():
             "Suffix": r"",
             "PreProcess": clear_line_break, # 预处理:清除换行符
         },

         "中译英": {
             "Prefix": r"Please translate following sentence to English:" + "\n\n",
             "Suffix": r"",
         },
-
-        "学术英中互译": {
-            "Prefix": build_gpt_academic_masked_string_langbased(
-                text_show_chinese=
-                r"I want you to act as a scientific English-Chinese translator, "
-                r"I will provide you with some paragraphs in one language "
-                r"and your task is to accurately and academically translate the paragraphs only into the other language. "
-                r"Do not repeat the original provided paragraphs after translation. "
-                r"You should use artificial intelligence tools, "
-                r"such as natural language processing, and rhetorical knowledge "
-                r"and experience about effective writing techniques to reply. "
-                r"I'll give you my paragraphs as follows, tell me what language it is written in, and then translate:",
-                text_show_english=
-                r"你是经验丰富的翻译,请把以下学术文章段落翻译成中文,"
-                r"并同时充分考虑中文的语法、清晰、简洁和整体可读性,"
-                r"必要时,你可以修改整个句子的顺序以确保翻译后的段落符合中文的语言习惯。"
-                r"你需要翻译的文本如下:"
-            ) + "\n\n",
-            "Suffix": r"",
-        },
+        "学术中英互译": {
+            "Prefix": r"I want you to act as a scientific English-Chinese translator, " +
+                      r"I will provide you with some paragraphs in one language " +
+                      r"and your task is to accurately and academically translate the paragraphs only into the other language. " +
+                      r"Do not repeat the original provided paragraphs after translation. " +
+                      r"You should use artificial intelligence tools, " +
+                      r"such as natural language processing, and rhetorical knowledge " +
+                      r"and experience about effective writing techniques to reply. " +
+                      r"I'll give you my paragraphs as follows, tell me what language it is written in, and then translate:" + "\n\n",
+            "Suffix": "",
+            "Color": "secondary",
+        },

         "英译中": {
             "Prefix": r"翻译成地道的中文:" + "\n\n",
             "Suffix": r"",
             "Visible": False,
         },

         "找图片": {
-            "Prefix": r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL,"
+            "Prefix": r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL," +
                       r"然后请使用Markdown格式封装,并且不要有反斜线,不要用代码块。现在,请按以下描述给我发送图片:" + "\n\n",
             "Suffix": r"",
             "Visible": False,
         },

         "解释代码": {
             "Prefix": r"请解释以下代码:" + "\n```\n",
             "Suffix": "\n```\n",
         },

         "参考文献转Bib": {
-            "Prefix": r"Here are some bibliography items, please transform them into bibtex style."
-                      r"Note that, reference styles maybe more than one kind, you should transform each item correctly."
-                      r"Items need to be transformed:" + "\n\n",
+            "Prefix": r"Here are some bibliography items, please transform them into bibtex style." +
+                      r"Note that, reference styles maybe more than one kind, you should transform each item correctly." +
+                      r"Items need to be transformed:",
             "Visible": False,
             "Suffix": r"",
         }
@@ -156,18 +98,8 @@ def handle_core_functionality(additional_fn, inputs, history, chatbot):
         return inputs, history
     else:
         # 预制功能
-        if "PreProcess" in core_functional[additional_fn]:
-            if core_functional[additional_fn]["PreProcess"] is not None:
-                inputs = core_functional[additional_fn]["PreProcess"](inputs)  # 获取预处理函数(如果有的话)
-        # 为字符串加上上面定义的前缀和后缀。
-        inputs = apply_gpt_academic_string_mask_langbased(
-            string = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"],
-            lang_reference = inputs,
-        )
+        if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs)  # 获取预处理函数(如果有的话)
+        inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
         if core_functional[additional_fn].get("AutoClearHistory", False):
             history = []
         return inputs, history
-
-if __name__ == "__main__":
-    t = get_core_functions()["总结绘制脑图"]
-    print(t["Prefix"] + t["Suffix"])
````
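Both versions of `handle_core_functionality` in the hunk above do the same two things with a core-function entry: run the optional `PreProcess` hook on the input, then wrap the input in the entry's `Prefix` and `Suffix` (the newer side additionally applies a language-based mask). A condensed, runnable sketch of that flow; `apply_core_function` and the stand-in entry are illustrative, not project code:

```python
core_functional = {
    # Stand-in entry using the schema from the diff: Prefix/Suffix wrap the
    # input, PreProcess optionally rewrites it first.
    "解释代码": {
        "Prefix": "请解释以下代码:" + "\n```\n",
        "Suffix": "\n```\n",
        "PreProcess": None,
    },
}

def apply_core_function(name: str, inputs: str) -> str:
    entry = core_functional[name]
    pre = entry.get("PreProcess")
    if pre is not None:
        inputs = pre(inputs)  # e.g. clear_line_break strips newlines first
    # Both the old and new handle_core_functionality end with this wrapping step.
    return entry["Prefix"] + inputs + entry["Suffix"]

print(apply_core_function("解释代码", "print('hello')"))
```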
````diff
@@ -32,122 +32,115 @@ def get_crazy_functions():
     from crazy_functions.理解PDF文档内容 import 理解PDF文档内容标准文件输入
     from crazy_functions.Latex全文润色 import Latex中文润色
     from crazy_functions.Latex全文润色 import Latex英文纠错
+    from crazy_functions.Latex全文翻译 import Latex中译英
+    from crazy_functions.Latex全文翻译 import Latex英译中
     from crazy_functions.批量Markdown翻译 import Markdown中译英
     from crazy_functions.虚空终端 import 虚空终端
-    from crazy_functions.生成多种Mermaid图表 import 生成多种Mermaid图表

     function_plugins = {
         "虚空终端": {
             "Group": "对话|编程|学术|智能体",
             "Color": "stop",
             "AsButton": True,
-            "Function": HotReload(虚空终端),
+            "Function": HotReload(虚空终端)
         },
         "解析整个Python项目": {
             "Group": "编程",
             "Color": "stop",
             "AsButton": True,
             "Info": "解析一个Python项目的所有源文件(.py) | 输入参数为路径",
-            "Function": HotReload(解析一个Python项目),
+            "Function": HotReload(解析一个Python项目)
         },
         "载入对话历史存档(先上传存档或输入路径)": {
             "Group": "对话",
             "Color": "stop",
             "AsButton": False,
             "Info": "载入对话历史存档 | 输入参数为路径",
-            "Function": HotReload(载入对话历史存档),
+            "Function": HotReload(载入对话历史存档)
         },
         "删除所有本地对话历史记录(谨慎操作)": {
             "Group": "对话",
             "AsButton": False,
             "Info": "删除所有本地对话历史记录,谨慎操作 | 不需要输入参数",
-            "Function": HotReload(删除所有本地对话历史记录),
+            "Function": HotReload(删除所有本地对话历史记录)
         },
         "清除所有缓存文件(谨慎操作)": {
             "Group": "对话",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "清除所有缓存文件,谨慎操作 | 不需要输入参数",
-            "Function": HotReload(清除缓存),
+            "Function": HotReload(清除缓存)
-        },
-        "生成多种Mermaid图表(从当前对话或文件(.pdf/.md)中生产图表)": {
-            "Group": "对话",
-            "Color": "stop",
-            "AsButton": False,
-            "Info" : "基于当前对话或PDF生成多种Mermaid图表,图表类型由模型判断",
-            "Function": HotReload(生成多种Mermaid图表),
-            "AdvancedArgs": True,
-            "ArgsReminder": "请输入图类型对应的数字,不输入则为模型自行判断:1-流程图,2-序列图,3-类图,4-饼图,5-甘特图,6-状态图,7-实体关系图,8-象限提示图,9-思维导图",
         },
         "批量总结Word文档": {
             "Group": "学术",
             "Color": "stop",
             "AsButton": True,
             "Info": "批量总结word文档 | 输入参数为路径",
-            "Function": HotReload(总结word文档),
+            "Function": HotReload(总结word文档)
         },
         "解析整个Matlab项目": {
             "Group": "编程",
             "Color": "stop",
             "AsButton": False,
             "Info": "解析一个Matlab项目的所有源文件(.m) | 输入参数为路径",
-            "Function": HotReload(解析一个Matlab项目),
+            "Function": HotReload(解析一个Matlab项目)
         },
         "解析整个C++项目头文件": {
             "Group": "编程",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "解析一个C++项目的所有头文件(.h/.hpp) | 输入参数为路径",
-            "Function": HotReload(解析一个C项目的头文件),
+            "Function": HotReload(解析一个C项目的头文件)
         },
         "解析整个C++项目(.cpp/.hpp/.c/.h)": {
             "Group": "编程",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "解析一个C++项目的所有源文件(.cpp/.hpp/.c/.h)| 输入参数为路径",
-            "Function": HotReload(解析一个C项目),
+            "Function": HotReload(解析一个C项目)
         },
         "解析整个Go项目": {
             "Group": "编程",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "解析一个Go项目的所有源文件 | 输入参数为路径",
-            "Function": HotReload(解析一个Golang项目),
+            "Function": HotReload(解析一个Golang项目)
         },
         "解析整个Rust项目": {
             "Group": "编程",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "解析一个Rust项目的所有源文件 | 输入参数为路径",
-            "Function": HotReload(解析一个Rust项目),
+            "Function": HotReload(解析一个Rust项目)
         },
         "解析整个Java项目": {
             "Group": "编程",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "解析一个Java项目的所有源文件 | 输入参数为路径",
-            "Function": HotReload(解析一个Java项目),
+            "Function": HotReload(解析一个Java项目)
         },
         "解析整个前端项目(js,ts,css等)": {
             "Group": "编程",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "解析一个前端项目的所有源文件(js,ts,css等) | 输入参数为路径",
-            "Function": HotReload(解析一个前端项目),
+            "Function": HotReload(解析一个前端项目)
         },
         "解析整个Lua项目": {
             "Group": "编程",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "解析一个Lua项目的所有源文件 | 输入参数为路径",
-            "Function": HotReload(解析一个Lua项目),
+            "Function": HotReload(解析一个Lua项目)
         },
         "解析整个CSharp项目": {
             "Group": "编程",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "解析一个CSharp项目的所有源文件 | 输入参数为路径",
-            "Function": HotReload(解析一个CSharp项目),
+            "Function": HotReload(解析一个CSharp项目)
         },
         "解析Jupyter Notebook文件": {
             "Group": "编程",
````
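Every plugin touched in the hunk above follows one registration schema: `Group`, `Color`, `AsButton`, and `Info` place the entry in the UI, `Function` holds the callable, and `AdvancedArgs`/`ArgsReminder` opt into the advanced-argument input box. A minimal, hypothetical entry showing that shape; `demo_plugin` is a stand-in, and the real project wraps callables in `HotReload(...)`:

```python
def demo_plugin(txt, *args, **kwargs):
    # A do-nothing plugin body standing in for the real HotReload-wrapped callables.
    return f"received: {txt}"

function_plugins = {
    "示例插件(演示用)": {
        "Group": "编程",          # which group/tab the entry belongs to
        "Color": "stop",          # the button color used throughout this diff
        "AsButton": False,        # False places the plugin in the dropdown menu
        "Info": "演示插件 | 输入参数为任意文本",
        "AdvancedArgs": True,     # opt into the advanced-argument input area
        "ArgsReminder": "高级参数输入区的提示文字",
        "Function": demo_plugin,  # the project registers HotReload(插件函数) here
    },
}

print(function_plugins["示例插件(演示用)"]["Function"]("hello"))
```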
````diff
@@ -163,104 +156,103 @@ def get_crazy_functions():
             "Color": "stop",
             "AsButton": False,
             "Info": "读取Tex论文并写摘要 | 输入参数为路径",
-            "Function": HotReload(读文章写摘要),
+            "Function": HotReload(读文章写摘要)
         },
         "翻译README或MD": {
             "Group": "编程",
             "Color": "stop",
             "AsButton": True,
             "Info": "将Markdown翻译为中文 | 输入参数为路径或URL",
-            "Function": HotReload(Markdown英译中),
+            "Function": HotReload(Markdown英译中)
         },
         "翻译Markdown或README(支持Github链接)": {
             "Group": "编程",
             "Color": "stop",
             "AsButton": False,
             "Info": "将Markdown或README翻译为中文 | 输入参数为路径或URL",
-            "Function": HotReload(Markdown英译中),
+            "Function": HotReload(Markdown英译中)
         },
         "批量生成函数注释": {
             "Group": "编程",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "批量生成函数的注释 | 输入参数为路径",
-            "Function": HotReload(批量生成函数注释),
+            "Function": HotReload(批量生成函数注释)
         },
         "保存当前的对话": {
             "Group": "对话",
             "AsButton": True,
             "Info": "保存当前的对话 | 不需要输入参数",
-            "Function": HotReload(对话历史存档),
+            "Function": HotReload(对话历史存档)
         },
         "[多线程Demo]解析此项目本身(源码自译解)": {
             "Group": "对话|编程",
             "AsButton": False, # 加入下拉菜单中
             "Info": "多线程解析并翻译此项目的源码 | 不需要输入参数",
-            "Function": HotReload(解析项目本身),
+            "Function": HotReload(解析项目本身)
         },
         "历史上的今天": {
             "Group": "对话",
             "AsButton": True,
             "Info": "查看历史上的今天事件 (这是一个面向开发者的插件Demo) | 不需要输入参数",
-            "Function": HotReload(高阶功能模板函数),
+            "Function": HotReload(高阶功能模板函数)
         },
         "精准翻译PDF论文": {
             "Group": "学术",
             "Color": "stop",
             "AsButton": True,
             "Info": "精准翻译PDF论文为中文 | 输入参数为路径",
-            "Function": HotReload(批量翻译PDF文档),
+            "Function": HotReload(批量翻译PDF文档)
         },
         "询问多个GPT模型": {
             "Group": "对话",
             "Color": "stop",
             "AsButton": True,
-            "Function": HotReload(同时问询),
+            "Function": HotReload(同时问询)
         },
         "批量总结PDF文档": {
             "Group": "学术",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "批量总结PDF文档的内容 | 输入参数为路径",
-            "Function": HotReload(批量总结PDF文档),
+            "Function": HotReload(批量总结PDF文档)
         },
         "谷歌学术检索助手(输入谷歌学术搜索页url)": {
             "Group": "学术",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "使用谷歌学术检索助手搜索指定URL的结果 | 输入参数为谷歌学术搜索页的URL",
-            "Function": HotReload(谷歌检索小助手),
+            "Function": HotReload(谷歌检索小助手)
         },
         "理解PDF文档内容 (模仿ChatPDF)": {
             "Group": "学术",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "理解PDF文档的内容并进行回答 | 输入参数为路径",
-            "Function": HotReload(理解PDF文档内容标准文件输入),
+            "Function": HotReload(理解PDF文档内容标准文件输入)
         },
         "英文Latex项目全文润色(输入路径或上传压缩包)": {
             "Group": "学术",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "对英文Latex项目全文进行润色处理 | 输入参数为路径或上传压缩包",
-            "Function": HotReload(Latex英文润色),
+            "Function": HotReload(Latex英文润色)
+        },
+        "英文Latex项目全文纠错(输入路径或上传压缩包)": {
+            "Group": "学术",
+            "Color": "stop",
+            "AsButton": False, # 加入下拉菜单中
+            "Info": "对英文Latex项目全文进行纠错处理 | 输入参数为路径或上传压缩包",
+            "Function": HotReload(Latex英文纠错)
         },

         "中文Latex项目全文润色(输入路径或上传压缩包)": {
             "Group": "学术",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "对中文Latex项目全文进行润色处理 | 输入参数为路径或上传压缩包",
-            "Function": HotReload(Latex中文润色),
+            "Function": HotReload(Latex中文润色)
         },
-        # 已经被新插件取代
-        # "英文Latex项目全文纠错(输入路径或上传压缩包)": {
-        #     "Group": "学术",
-        #     "Color": "stop",
-        #     "AsButton": False, # 加入下拉菜单中
-        #     "Info": "对英文Latex项目全文进行纠错处理 | 输入参数为路径或上传压缩包",
-        #     "Function": HotReload(Latex英文纠错),
-        # },
         # 已经被新插件取代
         # "Latex项目全文中译英(输入路径或上传压缩包)": {
         #     "Group": "学术",
@@ -269,6 +261,7 @@ def get_crazy_functions():
         #     "Info": "对Latex项目全文进行中译英处理 | 输入参数为路径或上传压缩包",
         #     "Function": HotReload(Latex中译英)
         # },
+
         # 已经被新插件取代
         # "Latex项目全文英译中(输入路径或上传压缩包)": {
         #     "Group": "学术",
````
````diff
@@ -277,153 +270,130 @@ def get_crazy_functions():
         #     "Info": "对Latex项目全文进行英译中处理 | 输入参数为路径或上传压缩包",
         #     "Function": HotReload(Latex英译中)
         # },

         "批量Markdown中译英(输入路径或上传压缩包)": {
             "Group": "编程",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "批量将Markdown文件中文翻译为英文 | 输入参数为路径或上传压缩包",
-            "Function": HotReload(Markdown中译英),
+            "Function": HotReload(Markdown中译英)
         },
     }

     # -=--=- 尚未充分测试的实验性插件 & 需要额外依赖的插件 -=--=-
     try:
         from crazy_functions.下载arxiv论文翻译摘要 import 下载arxiv论文并翻译摘要
-        function_plugins.update(
-            {
+        function_plugins.update({
             "一键下载arxiv论文并翻译摘要(先在input输入编号,如1812.10695)": {
                 "Group": "学术",
                 "Color": "stop",
                 "AsButton": False, # 加入下拉菜单中
                 # "Info": "下载arxiv论文并翻译摘要 | 输入参数为arxiv编号如1812.10695",
-                "Function": HotReload(下载arxiv论文并翻译摘要),
+                "Function": HotReload(下载arxiv论文并翻译摘要)
             }
-            }
-        )
+        })
     except:
         print(trimmed_format_exc())
-        print("Load function plugin failed")
+        print('Load function plugin failed')

     try:
         from crazy_functions.联网的ChatGPT import 连接网络回答问题
-        function_plugins.update(
-            {
+        function_plugins.update({
             "连接网络回答问题(输入问题后点击该插件,需要访问谷歌)": {
                 "Group": "对话",
                 "Color": "stop",
                 "AsButton": False, # 加入下拉菜单中
                 # "Info": "连接网络回答问题(需要访问谷歌)| 输入参数是一个问题",
-                "Function": HotReload(连接网络回答问题),
+                "Function": HotReload(连接网络回答问题)
             }
-            }
-        )
+        })
         from crazy_functions.联网的ChatGPT_bing版 import 连接bing搜索回答问题
-        function_plugins.update(
-            {
+        function_plugins.update({
             "连接网络回答问题(中文Bing版,输入问题后点击该插件)": {
                 "Group": "对话",
                 "Color": "stop",
                 "AsButton": False, # 加入下拉菜单中
                 "Info": "连接网络回答问题(需要访问中文Bing)| 输入参数是一个问题",
-                "Function": HotReload(连接bing搜索回答问题),
+                "Function": HotReload(连接bing搜索回答问题)
             }
-            }
-        )
+        })
     except:
         print(trimmed_format_exc())
-        print("Load function plugin failed")
+        print('Load function plugin failed')

     try:
         from crazy_functions.解析项目源代码 import 解析任意code项目
-        function_plugins.update(
-            {
+        function_plugins.update({
             "解析项目源代码(手动指定和筛选源代码文件类型)": {
                 "Group": "编程",
                 "Color": "stop",
                 "AsButton": False,
                 "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
-                "ArgsReminder": '输入时用逗号隔开, *代表通配符, 加了^代表不匹配; 不输入代表全部匹配。例如: "*.c, ^*.cpp, config.toml, ^*.toml"', # 高级参数输入区的显示提示
-                "Function": HotReload(解析任意code项目),
+                "ArgsReminder": "输入时用逗号隔开, *代表通配符, 加了^代表不匹配; 不输入代表全部匹配。例如: \"*.c, ^*.cpp, config.toml, ^*.toml\"", # 高级参数输入区的显示提示
+                "Function": HotReload(解析任意code项目)
             },
-            }
-        )
+        })
     except:
         print(trimmed_format_exc())
-        print("Load function plugin failed")
+        print('Load function plugin failed')

     try:
         from crazy_functions.询问多个大语言模型 import 同时问询_指定模型
-        function_plugins.update(
-            {
+        function_plugins.update({
             "询问多个GPT模型(手动指定询问哪些模型)": {
                 "Group": "对话",
                 "Color": "stop",
                 "AsButton": False,
                 "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
-                "ArgsReminder": "支持任意数量的llm接口,用&符号分隔。例如chatglm&gpt-3.5-turbo&gpt-4", # 高级参数输入区的显示提示
-                "Function": HotReload(同时问询_指定模型),
+                "ArgsReminder": "支持任意数量的llm接口,用&符号分隔。例如chatglm&gpt-3.5-turbo&api2d-gpt-4", # 高级参数输入区的显示提示
+                "Function": HotReload(同时问询_指定模型)
             },
-            }
-        )
+        })
     except:
         print(trimmed_format_exc())
-        print("Load function plugin failed")
+        print('Load function plugin failed')

     try:
         from crazy_functions.图片生成 import 图片生成_DALLE2, 图片生成_DALLE3, 图片修改_DALLE2
-        function_plugins.update(
-            {
-            "图片生成_DALLE2 (先切换模型到gpt-*)": {
+        function_plugins.update({
+            "图片生成_DALLE2 (先切换模型到openai或api2d)": {
                 "Group": "对话",
                 "Color": "stop",
                 "AsButton": False,
                 "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
                 "ArgsReminder": "在这里输入分辨率, 如1024x1024(默认),支持 256x256, 512x512, 1024x1024", # 高级参数输入区的显示提示
                 "Info": "使用DALLE2生成图片 | 输入参数字符串,提供图像的内容",
-                "Function": HotReload(图片生成_DALLE2),
+                "Function": HotReload(图片生成_DALLE2)
             },
-            }
-        )
-        function_plugins.update(
-            {
-            "图片生成_DALLE3 (先切换模型到gpt-*)": {
+        })
+        function_plugins.update({
+            "图片生成_DALLE3 (先切换模型到openai或api2d)": {
                 "Group": "对话",
                 "Color": "stop",
                 "AsButton": False,
                 "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
                 "ArgsReminder": "在这里输入自定义参数「分辨率-质量(可选)-风格(可选)」, 参数示例「1024x1024-hd-vivid」 || 分辨率支持 「1024x1024」(默认) /「1792x1024」/「1024x1792」 || 质量支持 「-standard」(默认) /「-hd」 || 风格支持 「-vivid」(默认) /「-natural」", # 高级参数输入区的显示提示
                 "Info": "使用DALLE3生成图片 | 输入参数字符串,提供图像的内容",
-                "Function": HotReload(图片生成_DALLE3),
+                "Function": HotReload(图片生成_DALLE3)
             },
-            }
-        )
-        function_plugins.update(
-            {
-            "图片修改_DALLE2 (先切换模型到gpt-*)": {
+        })
+        function_plugins.update({
+            "图片修改_DALLE2 (先切换模型到openai或api2d)": {
                 "Group": "对话",
                 "Color": "stop",
                 "AsButton": False,
                 "AdvancedArgs": False, # 调用时,唤起高级参数输入区(默认False)
                 # "Info": "使用DALLE2修改图片 | 输入参数字符串,提供图像的内容",
-                "Function": HotReload(图片修改_DALLE2),
+                "Function": HotReload(图片修改_DALLE2)
             },
-            }
-        )
+        })
     except:
         print(trimmed_format_exc())
-        print("Load function plugin failed")
+        print('Load function plugin failed')

     try:
         from crazy_functions.总结音视频 import 总结音视频
-        function_plugins.update(
-            {
+        function_plugins.update({
             "批量总结音视频(输入路径或上传压缩包)": {
                 "Group": "对话",
                 "Color": "stop",
@@ -431,246 +401,203 @@ def get_crazy_functions():
                 "AdvancedArgs": True,
                 "ArgsReminder": "调用openai api 使用whisper-1模型, 目前支持的格式:mp4, m4a, wav, mpga, mpeg, mp3。此处可以输入解析提示,例如:解析为简体中文(默认)。",
                 "Info": "批量总结音频或视频 | 输入参数为路径",
-                "Function": HotReload(总结音视频),
+                "Function": HotReload(总结音视频)
             }
-            }
-        )
+        })
     except:
         print(trimmed_format_exc())
-        print("Load function plugin failed")
+        print('Load function plugin failed')

     try:
         from crazy_functions.数学动画生成manim import 动画生成
-        function_plugins.update(
-            {
+        function_plugins.update({
             "数学动画生成(Manim)": {
                 "Group": "对话",
                 "Color": "stop",
                 "AsButton": False,
                 "Info": "按照自然语言描述生成一个动画 | 输入参数是一段话",
-                "Function": HotReload(动画生成),
+                "Function": HotReload(动画生成)
             }
-            }
-        )
+        })
     except:
         print(trimmed_format_exc())
-        print("Load function plugin failed")
+        print('Load function plugin failed')

     try:
         from crazy_functions.批量Markdown翻译 import Markdown翻译指定语言
-        function_plugins.update(
-            {
+        function_plugins.update({
             "Markdown翻译(指定翻译成何种语言)": {
                 "Group": "编程",
                 "Color": "stop",
                 "AsButton": False,
                 "AdvancedArgs": True,
                 "ArgsReminder": "请输入要翻译成哪种语言,默认为Chinese。",
-                "Function": HotReload(Markdown翻译指定语言),
+                "Function": HotReload(Markdown翻译指定语言)
             }
-            }
-        )
+        })
     except:
         print(trimmed_format_exc())
-        print("Load function plugin failed")
+        print('Load function plugin failed')

     try:
         from crazy_functions.知识库问答 import 知识库文件注入
-        function_plugins.update(
+        function_plugins.update({
````
|
|
||||||
{
|
|
||||||
"构建知识库(先上传文件素材,再运行此插件)": {
|
"构建知识库(先上传文件素材,再运行此插件)": {
|
||||||
"Group": "对话",
|
"Group": "对话",
|
||||||
"Color": "stop",
|
"Color": "stop",
|
||||||
"AsButton": False,
|
"AsButton": False,
|
||||||
"AdvancedArgs": True,
|
"AdvancedArgs": True,
|
||||||
"ArgsReminder": "此处待注入的知识库名称id, 默认为default。文件进入知识库后可长期保存。可以通过再次调用本插件的方式,向知识库追加更多文档。",
|
"ArgsReminder": "此处待注入的知识库名称id, 默认为default。文件进入知识库后可长期保存。可以通过再次调用本插件的方式,向知识库追加更多文档。",
|
||||||
"Function": HotReload(知识库文件注入),
|
"Function": HotReload(知识库文件注入)
|
||||||
}
|
}
|
||||||
}
|
})
|
||||||
)
|
|
||||||
except:
|
except:
|
||||||
print(trimmed_format_exc())
|
print(trimmed_format_exc())
|
||||||
print("Load function plugin failed")
|
print('Load function plugin failed')
|
||||||
|
|
||||||
try:
|
try:
|
||||||
from crazy_functions.知识库问答 import 读取知识库作答
|
from crazy_functions.知识库问答 import 读取知识库作答
|
||||||
|
function_plugins.update({
|
||||||
function_plugins.update(
|
|
||||||
{
|
|
||||||
"知识库文件注入(构建知识库后,再运行此插件)": {
|
"知识库文件注入(构建知识库后,再运行此插件)": {
|
||||||
"Group": "对话",
|
"Group": "对话",
|
||||||
"Color": "stop",
|
"Color": "stop",
|
||||||
"AsButton": False,
|
"AsButton": False,
|
||||||
"AdvancedArgs": True,
|
"AdvancedArgs": True,
|
||||||
"ArgsReminder": "待提取的知识库名称id, 默认为default, 您需要构建知识库后再运行此插件。",
|
"ArgsReminder": "待提取的知识库名称id, 默认为default, 您需要构建知识库后再运行此插件。",
|
||||||
"Function": HotReload(读取知识库作答),
|
"Function": HotReload(读取知识库作答)
|
||||||
}
|
}
|
||||||
}
|
})
|
||||||
)
|
|
||||||
except:
|
except:
|
||||||
print(trimmed_format_exc())
|
print(trimmed_format_exc())
|
||||||
print("Load function plugin failed")
|
print('Load function plugin failed')
|
||||||
|
|
||||||
try:
|
try:
|
||||||
from crazy_functions.交互功能函数模板 import 交互功能模板函数
|
from crazy_functions.交互功能函数模板 import 交互功能模板函数
|
||||||
|
function_plugins.update({
|
||||||
function_plugins.update(
|
|
||||||
{
|
|
||||||
"交互功能模板Demo函数(查找wallhaven.cc的壁纸)": {
|
"交互功能模板Demo函数(查找wallhaven.cc的壁纸)": {
|
||||||
"Group": "对话",
|
"Group": "对话",
|
||||||
"Color": "stop",
|
"Color": "stop",
|
||||||
"AsButton": False,
|
"AsButton": False,
|
||||||
"Function": HotReload(交互功能模板函数),
|
"Function": HotReload(交互功能模板函数)
|
||||||
}
|
}
|
||||||
}
|
})
|
||||||
)
|
|
||||||
except:
|
except:
|
||||||
print(trimmed_format_exc())
|
print(trimmed_format_exc())
|
||||||
print("Load function plugin failed")
|
print('Load function plugin failed')
|
||||||
|
|
||||||
try:
|
try:
|
||||||
from crazy_functions.Latex输出PDF结果 import Latex英文纠错加PDF对比
|
from crazy_functions.Latex输出PDF结果 import Latex英文纠错加PDF对比
|
||||||
from crazy_functions.Latex输出PDF结果 import Latex翻译中文并重新编译PDF
|
function_plugins.update({
|
||||||
|
|
||||||
function_plugins.update(
|
|
||||||
{
|
|
||||||
"Latex英文纠错+高亮修正位置 [需Latex]": {
|
"Latex英文纠错+高亮修正位置 [需Latex]": {
|
||||||
"Group": "学术",
|
"Group": "学术",
|
||||||
"Color": "stop",
|
"Color": "stop",
|
||||||
"AsButton": False,
|
"AsButton": False,
|
||||||
"AdvancedArgs": True,
|
"AdvancedArgs": True,
|
||||||
"ArgsReminder": "如果有必要, 请在此处追加更细致的矫错指令(使用英文)。",
|
"ArgsReminder": "如果有必要, 请在此处追加更细致的矫错指令(使用英文)。",
|
||||||
"Function": HotReload(Latex英文纠错加PDF对比),
|
"Function": HotReload(Latex英文纠错加PDF对比)
|
||||||
},
|
}
|
||||||
|
})
|
||||||
|
from crazy_functions.Latex输出PDF结果 import Latex翻译中文并重新编译PDF
|
||||||
|
function_plugins.update({
|
||||||
"Arxiv论文精细翻译(输入arxivID)[需Latex]": {
|
"Arxiv论文精细翻译(输入arxivID)[需Latex]": {
|
||||||
"Group": "学术",
|
"Group": "学术",
|
||||||
"Color": "stop",
|
"Color": "stop",
|
||||||
"AsButton": False,
|
"AsButton": False,
|
||||||
"AdvancedArgs": True,
|
"AdvancedArgs": True,
|
||||||
"ArgsReminder": "如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
|
"ArgsReminder":
|
||||||
+ "例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
|
"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 " +
|
||||||
+ 'If the term "agent" is used in this section, it should be translated to "智能体". ',
|
"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: " +
|
||||||
|
'If the term "agent" is used in this section, it should be translated to "智能体". ',
|
||||||
"Info": "Arixv论文精细翻译 | 输入参数arxiv论文的ID,比如1812.10695",
|
"Info": "Arixv论文精细翻译 | 输入参数arxiv论文的ID,比如1812.10695",
|
||||||
"Function": HotReload(Latex翻译中文并重新编译PDF),
|
"Function": HotReload(Latex翻译中文并重新编译PDF)
|
||||||
},
|
}
|
||||||
|
})
|
||||||
|
function_plugins.update({
|
||||||
"本地Latex论文精细翻译(上传Latex项目)[需Latex]": {
|
"本地Latex论文精细翻译(上传Latex项目)[需Latex]": {
|
||||||
"Group": "学术",
|
"Group": "学术",
|
||||||
"Color": "stop",
|
"Color": "stop",
|
||||||
"AsButton": False,
|
"AsButton": False,
|
||||||
"AdvancedArgs": True,
|
"AdvancedArgs": True,
|
||||||
"ArgsReminder": "如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
|
"ArgsReminder":
|
||||||
+ "例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
|
"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 " +
|
||||||
+ 'If the term "agent" is used in this section, it should be translated to "智能体". ',
|
"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: " +
|
||||||
|
'If the term "agent" is used in this section, it should be translated to "智能体". ',
|
||||||
"Info": "本地Latex论文精细翻译 | 输入参数是路径",
|
"Info": "本地Latex论文精细翻译 | 输入参数是路径",
|
||||||
"Function": HotReload(Latex翻译中文并重新编译PDF),
|
"Function": HotReload(Latex翻译中文并重新编译PDF)
|
||||||
}
|
}
|
||||||
}
|
})
|
||||||
)
|
|
||||||
except:
|
except:
|
||||||
print(trimmed_format_exc())
|
print(trimmed_format_exc())
|
||||||
print("Load function plugin failed")
|
print('Load function plugin failed')
|
||||||
|
|
||||||
try:
|
try:
|
||||||
from toolbox import get_conf
|
from toolbox import get_conf
|
||||||
|
ENABLE_AUDIO = get_conf('ENABLE_AUDIO')
|
||||||
ENABLE_AUDIO = get_conf("ENABLE_AUDIO")
|
|
||||||
if ENABLE_AUDIO:
|
if ENABLE_AUDIO:
|
||||||
from crazy_functions.语音助手 import 语音助手
|
from crazy_functions.语音助手 import 语音助手
|
||||||
|
function_plugins.update({
|
||||||
function_plugins.update(
|
|
||||||
{
|
|
||||||
"实时语音对话": {
|
"实时语音对话": {
|
||||||
"Group": "对话",
|
"Group": "对话",
|
||||||
"Color": "stop",
|
"Color": "stop",
|
||||||
"AsButton": True,
|
"AsButton": True,
|
||||||
"Info": "这是一个时刻聆听着的语音对话助手 | 没有输入参数",
|
"Info": "这是一个时刻聆听着的语音对话助手 | 没有输入参数",
|
||||||
"Function": HotReload(语音助手),
|
"Function": HotReload(语音助手)
|
||||||
}
|
}
|
||||||
}
|
})
|
||||||
)
|
|
||||||
except:
|
except:
|
||||||
print(trimmed_format_exc())
|
print(trimmed_format_exc())
|
||||||
print("Load function plugin failed")
|
print('Load function plugin failed')
|
||||||
|
|
||||||
try:
|
try:
|
||||||
from crazy_functions.批量翻译PDF文档_NOUGAT import 批量翻译PDF文档
|
from crazy_functions.批量翻译PDF文档_NOUGAT import 批量翻译PDF文档
|
||||||
|
function_plugins.update({
|
||||||
function_plugins.update(
|
|
||||||
{
|
|
||||||
"精准翻译PDF文档(NOUGAT)": {
|
"精准翻译PDF文档(NOUGAT)": {
|
||||||
"Group": "学术",
|
"Group": "学术",
|
||||||
"Color": "stop",
|
"Color": "stop",
|
||||||
"AsButton": False,
|
"AsButton": False,
|
||||||
"Function": HotReload(批量翻译PDF文档),
|
"Function": HotReload(批量翻译PDF文档)
|
||||||
}
|
}
|
||||||
}
|
})
|
||||||
)
|
|
||||||
except:
|
except:
|
||||||
print(trimmed_format_exc())
|
print(trimmed_format_exc())
|
||||||
print("Load function plugin failed")
|
print('Load function plugin failed')
|
||||||
|
|
||||||
try:
|
try:
|
||||||
from crazy_functions.函数动态生成 import 函数动态生成
|
from crazy_functions.函数动态生成 import 函数动态生成
|
||||||
|
function_plugins.update({
|
||||||
function_plugins.update(
|
|
||||||
{
|
|
||||||
"动态代码解释器(CodeInterpreter)": {
|
"动态代码解释器(CodeInterpreter)": {
|
||||||
"Group": "智能体",
|
"Group": "智能体",
|
||||||
"Color": "stop",
|
"Color": "stop",
|
||||||
"AsButton": False,
|
"AsButton": False,
|
||||||
"Function": HotReload(函数动态生成),
|
"Function": HotReload(函数动态生成)
|
||||||
}
|
}
|
||||||
}
|
})
|
||||||
)
|
|
||||||
except:
|
except:
|
||||||
print(trimmed_format_exc())
|
print(trimmed_format_exc())
|
||||||
print("Load function plugin failed")
|
print('Load function plugin failed')
|
||||||
|
|
||||||
try:
|
try:
|
||||||
from crazy_functions.多智能体 import 多智能体终端
|
from crazy_functions.多智能体 import 多智能体终端
|
||||||
|
function_plugins.update({
|
||||||
function_plugins.update(
|
|
||||||
{
|
|
||||||
"AutoGen多智能体终端(仅供测试)": {
|
"AutoGen多智能体终端(仅供测试)": {
|
||||||
"Group": "智能体",
|
"Group": "智能体",
|
||||||
"Color": "stop",
|
"Color": "stop",
|
||||||
"AsButton": False,
|
"AsButton": False,
|
||||||
"Function": HotReload(多智能体终端),
|
"Function": HotReload(多智能体终端)
|
||||||
}
|
}
|
||||||
}
|
})
|
||||||
)
|
|
||||||
except:
|
except:
|
||||||
print(trimmed_format_exc())
|
print(trimmed_format_exc())
|
||||||
print("Load function plugin failed")
|
print('Load function plugin failed')
|
||||||
|
|
||||||
try:
|
|
||||||
from crazy_functions.互动小游戏 import 随机小游戏
|
|
||||||
|
|
||||||
function_plugins.update(
|
|
||||||
{
|
|
||||||
"随机互动小游戏(仅供测试)": {
|
|
||||||
"Group": "智能体",
|
|
||||||
"Color": "stop",
|
|
||||||
"AsButton": False,
|
|
||||||
"Function": HotReload(随机小游戏),
|
|
||||||
}
|
|
||||||
}
|
|
||||||
)
|
|
||||||
except:
|
|
||||||
print(trimmed_format_exc())
|
|
||||||
print("Load function plugin failed")
|
|
||||||
|
|
||||||
# try:
|
# try:
|
||||||
# from crazy_functions.高级功能函数模板 import 测试图表渲染
|
# from crazy_functions.互动小游戏 import 随机小游戏
|
||||||
# function_plugins.update({
|
# function_plugins.update({
|
||||||
# "绘制逻辑关系(测试图表渲染)": {
|
# "随机小游戏": {
|
||||||
# "Group": "智能体",
|
# "Group": "智能体",
|
||||||
# "Color": "stop",
|
# "Color": "stop",
|
||||||
# "AsButton": True,
|
# "AsButton": True,
|
||||||
# "Function": HotReload(测试图表渲染)
|
# "Function": HotReload(随机小游戏)
|
||||||
# }
|
# }
|
||||||
# })
|
# })
|
||||||
# except:
|
# except:
|
||||||
@@ -691,6 +618,8 @@ def get_crazy_functions():
|
|||||||
# except:
|
# except:
|
||||||
# print('Load function plugin failed')
|
# print('Load function plugin failed')
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
"""
|
"""
|
||||||
设置默认值:
|
设置默认值:
|
||||||
- 默认 Group = 对话
|
- 默认 Group = 对话
|
||||||
@@ -700,12 +629,12 @@ def get_crazy_functions():
|
|||||||
"""
|
"""
|
||||||
for name, function_meta in function_plugins.items():
|
for name, function_meta in function_plugins.items():
|
||||||
if "Group" not in function_meta:
|
if "Group" not in function_meta:
|
||||||
function_plugins[name]["Group"] = "对话"
|
function_plugins[name]["Group"] = '对话'
|
||||||
if "AsButton" not in function_meta:
|
if "AsButton" not in function_meta:
|
||||||
function_plugins[name]["AsButton"] = True
|
function_plugins[name]["AsButton"] = True
|
||||||
if "AdvancedArgs" not in function_meta:
|
if "AdvancedArgs" not in function_meta:
|
||||||
function_plugins[name]["AdvancedArgs"] = False
|
function_plugins[name]["AdvancedArgs"] = False
|
||||||
if "Color" not in function_meta:
|
if "Color" not in function_meta:
|
||||||
function_plugins[name]["Color"] = "secondary"
|
function_plugins[name]["Color"] = 'secondary'
|
||||||
|
|
||||||
return function_plugins
|
return function_plugins
|
||||||
|
|||||||
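Every entry registered above follows the same plugin schema, and the loop just before `return function_plugins` fills in defaults for omitted keys (`Group="对话"`, `AsButton=True`, `AdvancedArgs=False`, `Color="secondary"`). For reference, a minimal registration sketch — the plugin name and function here are hypothetical placeholders, not entries from this diff:

```python
# Minimal sketch of the plugin schema used throughout get_crazy_functions().
# "示例插件" and 示例插件函数 are hypothetical placeholders.
function_plugins.update({
    "示例插件": {
        "Group": "对话",          # button group; defaults to "对话" if omitted
        "Color": "stop",          # button color; defaults to "secondary"
        "AsButton": False,        # render as a top-level button; defaults to True
        "AdvancedArgs": True,     # show the advanced-argument input box; defaults to False
        "ArgsReminder": "高级参数的提示文字",   # hint shown in that box
        "Function": HotReload(示例插件函数),    # hot-reloadable entry point
    }
})
```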
@@ -137,7 +137,7 @@ def get_recent_file_prompt_support(chatbot):
    return path

@CatchException
-def 虚空终端CodeInterpreter(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 虚空终端CodeInterpreter(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    """
    txt             输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
    llm_kwargs      gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -145,7 +145,7 @@ def 虚空终端CodeInterpreter(txt, llm_kwargs, plugin_kwargs, chatbot, history
    chatbot         聊天显示框的句柄,用于显示给用户
    history         聊天历史,前情提要
    system_prompt   给gpt的静默提醒
-    user_request    当前用户的请求信息(IP地址等)
+    web_port        当前软件运行的端口号
    """
    raise NotImplementedError
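The docstring above spells out the uniform plugin-function interface used throughout this diff (the newer side passes `user_request`, the older side `web_port`). A minimal working sketch under that interface — the plugin itself is hypothetical:

```python
from toolbox import CatchException, update_ui

@CatchException
def 示例插件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # Echo the user input back to the chat window as a minimal demonstration.
    chatbot.append(["函数插件功能?", f"收到输入:{txt}"])
    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
```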
@@ -26,8 +26,8 @@ class PaperFileGroup():
                self.sp_file_index.append(index)
                self.sp_file_tag.append(self.file_paths[index])
            else:
-                from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
-                segments = breakdown_text_to_satisfy_token_limit(file_content, max_token_limit)
+                from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
+                segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit)
                for j, segment in enumerate(segments):
                    self.sp_file_contents.append(segment)
                    self.sp_file_index.append(index)
@@ -135,11 +135,11 @@ def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch


@CatchException
-def Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",
-        "对整个Latex项目进行润色。函数插件贡献者: Binary-Husky。(注意,此插件不调用Latex,如果有Latex环境,请使用「Latex英文纠错+高亮修正位置(需Latex)插件」"])
+        "对整个Latex项目进行润色。函数插件贡献者: Binary-Husky。(注意,此插件不调用Latex,如果有Latex环境,请使用“Latex英文纠错+高亮”插件)"])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

    # 尝试导入依赖,如果缺少依赖,则给出安装建议
@@ -173,7 +173,7 @@ def Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p


@CatchException
-def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",
@@ -209,7 +209,7 @@ def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p


@CatchException
-def Latex英文纠错(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def Latex英文纠错(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",
@@ -26,8 +26,8 @@ class PaperFileGroup():
                self.sp_file_index.append(index)
                self.sp_file_tag.append(self.file_paths[index])
            else:
-                from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
-                segments = breakdown_text_to_satisfy_token_limit(file_content, max_token_limit)
+                from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
+                segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit)
                for j, segment in enumerate(segments):
                    self.sp_file_contents.append(segment)
                    self.sp_file_index.append(index)
@@ -106,7 +106,7 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch


@CatchException
-def Latex英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def Latex英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",
@@ -143,7 +143,7 @@ def Latex英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prom


@CatchException
-def Latex中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def Latex中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",
@@ -1,11 +1,11 @@
from toolbox import update_ui, trimmed_format_exc, get_conf, get_log_folder, promote_file_to_downloadzone
from toolbox import CatchException, report_exception, update_ui_lastest_msg, zip_result, gen_time_str
from functools import partial
-import glob, os, requests, time, tarfile
+import glob, os, requests, time
pj = os.path.join
ARXIV_CACHE_DIR = os.path.expanduser(f"~/arxiv_cache/")

-# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- 工具函数 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+# =================================== 工具函数 ===============================================
# 专业词汇声明  = 'If the term "agent" is used in this section, it should be translated to "智能体". '
def switch_prompt(pfg, mode, more_requirement):
    """
@@ -104,7 +104,7 @@ def arxiv_download(chatbot, history, txt, allow_cache=True):
    if ('.' in txt) and ('/' not in txt) and is_float(txt[:10]): # is arxiv ID
        txt = 'https://arxiv.org/abs/' + txt[:10]
    if not txt.startswith('https://arxiv.org'):
-        return txt, None # 是本地文件,跳过下载
+        return txt, None

    # <-------------- inspect format ------------->
    chatbot.append([f"检测到arxiv文档连接", '尝试下载 ...'])
@@ -142,11 +142,11 @@ def arxiv_download(chatbot, history, txt, allow_cache=True):
        from toolbox import extract_archive
        extract_archive(file_path=dst, dest_dir=extract_dst)
        return extract_dst, arxiv_id
-# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 插件主程序1 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
+# ========================================= 插件主程序1 =====================================================


@CatchException
-def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    # <-------------- information about this plugin ------------->
    chatbot.append([ "函数插件功能?",
        "对整个Latex项目进行纠错, 用latex编译为PDF对修正处做高亮。函数插件贡献者: Binary-Husky。注意事项: 目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。仅在Windows系统进行了测试,其他操作系统表现未知。"])
@@ -218,10 +218,10 @@ def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, histo
    # <-------------- we are done ------------->
    return success

-# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 插件主程序2 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
+# ========================================= 插件主程序2 =====================================================

@CatchException
-def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    # <-------------- information about this plugin ------------->
    chatbot.append([
        "函数插件功能?",
@@ -250,14 +250,7 @@ def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot,

    # <-------------- clear history and read input ------------->
    history = []
-    try:
-        txt, arxiv_id = yield from arxiv_download(chatbot, history, txt, allow_cache)
-    except tarfile.ReadError as e:
-        yield from update_ui_lastest_msg(
-            "无法自动下载该论文的Latex源码,请前往arxiv打开此论文下载页面,点other Formats,然后download source手动下载latex源码包。接下来调用本地Latex翻译插件即可。",
-            chatbot=chatbot, history=history)
-        return
+    txt, arxiv_id = yield from arxiv_download(chatbot, history, txt, allow_cache)

    if txt.endswith('.pdf'):
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"发现已经存在翻译好的PDF文档")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
@@ -35,11 +35,7 @@ def gpt_academic_generate_oai_reply(
class AutoGenGeneral(PluginMultiprocessManager):
    def gpt_academic_print_override(self, user_proxy, message, sender):
        # ⭐⭐ run in subprocess
-        try:
-            print_msg = sender.name + "\n\n---\n\n" + message["content"]
-        except:
-            print_msg = sender.name + "\n\n---\n\n" + message
-        self.child_conn.send(PipeCom("show", print_msg))
+        self.child_conn.send(PipeCom("show", sender.name + "\n\n---\n\n" + message["content"]))

    def gpt_academic_get_human_input(self, user_proxy, message):
        # ⭐⭐ run in subprocess
@@ -66,6 +62,7 @@ class AutoGenGeneral(PluginMultiprocessManager):
    def exe_autogen(self, input):
        # ⭐⭐ run in subprocess
        input = input.content
+        with ProxyNetworkActivate("AutoGen"):
        code_execution_config = {"work_dir": self.autogen_work_dir, "use_docker": self.use_docker}
        agents = self.define_agents()
        user_proxy = None
@@ -88,7 +85,6 @@ class AutoGenGeneral(PluginMultiprocessManager):
            if agent_kwargs['name'] == 'assistant': assistant = agent_handle
        try:
            if user_proxy is None or assistant is None: raise Exception("用户代理或助理代理未定义")
-            with ProxyNetworkActivate("AutoGen"):
-                user_proxy.initiate_chat(assistant, message=input)
+            user_proxy.initiate_chat(assistant, message=input)
        except Exception as e:
            tb_str = '```\n' + trimmed_format_exc() + '```'
@@ -9,7 +9,7 @@ class PipeCom:


class PluginMultiprocessManager:
-    def __init__(self, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+    def __init__(self, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        # ⭐ run in main process
        self.autogen_work_dir = os.path.join(get_log_folder("autogen"), gen_time_str())
        self.previous_work_dir_files = {}
@@ -18,7 +18,7 @@ class PluginMultiprocessManager:
        self.chatbot = chatbot
        self.history = history
        self.system_prompt = system_prompt
-        # self.user_request = user_request
+        # self.web_port = web_port
        self.alive = True
        self.use_docker = get_conf("AUTOGEN_USE_DOCKER")
        self.last_user_input = ""
@@ -32,7 +32,7 @@ def string_to_options(arguments):
    return args

@CatchException
-def 微调数据集生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 微调数据集生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    """
    txt             输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
    llm_kwargs      gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -40,7 +40,7 @@ def 微调数据集生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
    chatbot         聊天显示框的句柄,用于显示给用户
    history         聊天历史,前情提要
    system_prompt   给gpt的静默提醒
-    user_request    当前用户的请求信息(IP地址等)
+    web_port        当前软件运行的端口号
    """
    history = []    # 清空历史,以免输入溢出
    chatbot.append(("这是什么功能?", "[Local Message] 微调数据集生成"))
@@ -80,7 +80,7 @@ def 微调数据集生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst


@CatchException
-def 启动微调(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 启动微调(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    """
    txt             输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
    llm_kwargs      gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -88,7 +88,7 @@ def 启动微调(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
    chatbot         聊天显示框的句柄,用于显示给用户
    history         聊天历史,前情提要
    system_prompt   给gpt的静默提醒
-    user_request    当前用户的请求信息(IP地址等)
+    web_port        当前软件运行的端口号
    """
    import subprocess
    history = []    # 清空历史,以免输入溢出
@@ -139,8 +139,6 @@ def can_multi_process(llm):
    if llm.startswith('gpt-'): return True
    if llm.startswith('api2d-'): return True
    if llm.startswith('azure-'): return True
-    if llm.startswith('spark'): return True
-    if llm.startswith('zhipuai'): return True
    return False

def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
@@ -284,7 +282,8 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
        # 在前端打印些好玩的东西
        for thread_index, _ in enumerate(worker_done):
            print_something_really_funny = "[ ...`"+mutable[thread_index][0][-scroller_max_len:].\
-                replace('\n', '').replace('`', '.').replace(' ', '.').replace('<br/>', '.....').replace('$', '.')+"`... ]"
+                replace('\n', '').replace('`', '.').replace(
+                    ' ', '.').replace('<br/>', '.....').replace('$', '.')+"`... ]"
            observe_win.append(print_something_really_funny)
        # 在前端打印些好玩的东西
        stat_str = ''.join([f'`{mutable[thread_index][2]}`: {obs}\n\n'
@@ -313,6 +312,95 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
    return gpt_response_collection


+def breakdown_txt_to_satisfy_token_limit(txt, get_token_fn, limit):
+    def cut(txt_tocut, must_break_at_empty_line):  # 递归
+        if get_token_fn(txt_tocut) <= limit:
+            return [txt_tocut]
+        else:
+            lines = txt_tocut.split('\n')
+            estimated_line_cut = limit / get_token_fn(txt_tocut) * len(lines)
+            estimated_line_cut = int(estimated_line_cut)
+            for cnt in reversed(range(estimated_line_cut)):
+                if must_break_at_empty_line:
+                    if lines[cnt] != "":
+                        continue
+                print(cnt)
+                prev = "\n".join(lines[:cnt])
+                post = "\n".join(lines[cnt:])
+                if get_token_fn(prev) < limit:
+                    break
+            if cnt == 0:
+                raise RuntimeError("存在一行极长的文本!")
+            # print(len(post))
+            # 列表递归接龙
+            result = [prev]
+            result.extend(cut(post, must_break_at_empty_line))
+            return result
+    try:
+        return cut(txt, must_break_at_empty_line=True)
+    except RuntimeError:
+        return cut(txt, must_break_at_empty_line=False)
+
+
+def force_breakdown(txt, limit, get_token_fn):
+    """
+    当无法用标点、空行分割时,我们用最暴力的方法切割
+    """
+    for i in reversed(range(len(txt))):
+        if get_token_fn(txt[:i]) < limit:
+            return txt[:i], txt[i:]
+    return "Tiktoken未知错误", "Tiktoken未知错误"
+
+def breakdown_txt_to_satisfy_token_limit_for_pdf(txt, get_token_fn, limit):
+    # 递归
+    def cut(txt_tocut, must_break_at_empty_line, break_anyway=False):
+        if get_token_fn(txt_tocut) <= limit:
+            return [txt_tocut]
+        else:
+            lines = txt_tocut.split('\n')
+            estimated_line_cut = limit / get_token_fn(txt_tocut) * len(lines)
+            estimated_line_cut = int(estimated_line_cut)
+            cnt = 0
+            for cnt in reversed(range(estimated_line_cut)):
+                if must_break_at_empty_line:
+                    if lines[cnt] != "":
+                        continue
+                prev = "\n".join(lines[:cnt])
+                post = "\n".join(lines[cnt:])
+                if get_token_fn(prev) < limit:
+                    break
+            if cnt == 0:
+                if break_anyway:
+                    prev, post = force_breakdown(txt_tocut, limit, get_token_fn)
+                else:
+                    raise RuntimeError(f"存在一行极长的文本!{txt_tocut}")
+            # print(len(post))
+            # 列表递归接龙
+            result = [prev]
+            result.extend(cut(post, must_break_at_empty_line, break_anyway=break_anyway))
+            return result
+    try:
+        # 第1次尝试,将双空行(\n\n)作为切分点
+        return cut(txt, must_break_at_empty_line=True)
+    except RuntimeError:
+        try:
+            # 第2次尝试,将单空行(\n)作为切分点
+            return cut(txt, must_break_at_empty_line=False)
+        except RuntimeError:
+            try:
+                # 第3次尝试,将英文句号(.)作为切分点
+                res = cut(txt.replace('.', '。\n'), must_break_at_empty_line=False) # 这个中文的句号是故意的,作为一个标识而存在
+                return [r.replace('。\n', '.') for r in res]
+            except RuntimeError as e:
+                try:
+                    # 第4次尝试,将中文句号(。)作为切分点
+                    res = cut(txt.replace('。', '。。\n'), must_break_at_empty_line=False)
+                    return [r.replace('。。\n', '。') for r in res]
+                except RuntimeError as e:
+                    # 第5次尝试,没办法了,随便切一下敷衍吧
+                    return cut(txt, must_break_at_empty_line=False, break_anyway=True)
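The splitter re-added above cuts a long text into chunks that satisfy a token budget, retrying with progressively cruder break points (blank lines, newlines, sentence ends, then brute force). A usage sketch — the tokenizer lookup mirrors the `model_info` pattern used by `LatexPaperFileGroup` later in this diff; `long_text` is a placeholder and the exact `encode()` signature is an assumption:

```python
from request_llms.bridge_all import model_info

enc = model_info["gpt-3.5-turbo"]['tokenizer']
get_token_num = lambda txt: len(enc.encode(txt))  # assumed tiktoken-style tokenizer

# Split into chunks of at most 1024 tokens each.
segments = breakdown_txt_to_satisfy_token_limit_for_pdf(long_text, get_token_num, 1024)
```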
def read_and_clean_pdf_text(fp):
    """
@@ -465,9 +553,6 @@ def read_and_clean_pdf_text(fp):
            return True
        else:
            return False
-    # 对于某些PDF会有第一个段落就以小写字母开头,为了避免索引错误将其更改为大写
-    if starts_with_lowercase_word(meta_txt[0]):
-        meta_txt[0] = meta_txt[0].capitalize()
    for _ in range(100):
        for index, block_txt in enumerate(meta_txt):
            if starts_with_lowercase_word(block_txt):
@@ -546,6 +631,7 @@ def get_files_from_everything(txt, type): # type='.md'



@Singleton
class nougat_interface():
    def __init__(self):
@@ -1,122 +0,0 @@
-import os
-from textwrap import indent
-
-class FileNode:
-    def __init__(self, name):
-        self.name = name
-        self.children = []
-        self.is_leaf = False
-        self.level = 0
-        self.parenting_ship = []
-        self.comment = ""
-        self.comment_maxlen_show = 50
-
-    @staticmethod
-    def add_linebreaks_at_spaces(string, interval=10):
-        return '\n'.join(string[i:i+interval] for i in range(0, len(string), interval))
-
-    def sanitize_comment(self, comment):
-        if len(comment) > self.comment_maxlen_show: suf = '...'
-        else: suf = ''
-        comment = comment[:self.comment_maxlen_show]
-        comment = comment.replace('\"', '').replace('`', '').replace('\n', '').replace('`', '').replace('$', '')
-        comment = self.add_linebreaks_at_spaces(comment, 10)
-        return '`' + comment + suf + '`'
-
-    def add_file(self, file_path, file_comment):
-        directory_names, file_name = os.path.split(file_path)
-        current_node = self
-        level = 1
-        if directory_names == "":
-            new_node = FileNode(file_name)
-            current_node.children.append(new_node)
-            new_node.is_leaf = True
-            new_node.comment = self.sanitize_comment(file_comment)
-            new_node.level = level
-            current_node = new_node
-        else:
-            dnamesplit = directory_names.split(os.sep)
-            for i, directory_name in enumerate(dnamesplit):
-                found_child = False
-                level += 1
-                for child in current_node.children:
-                    if child.name == directory_name:
-                        current_node = child
-                        found_child = True
-                        break
-                if not found_child:
-                    new_node = FileNode(directory_name)
-                    current_node.children.append(new_node)
-                    new_node.level = level - 1
-                    current_node = new_node
-            term = FileNode(file_name)
-            term.level = level
-            term.comment = self.sanitize_comment(file_comment)
-            term.is_leaf = True
-            current_node.children.append(term)
-
-    def print_files_recursively(self, level=0, code="R0"):
-        print(' '*level + self.name + ' ' + str(self.is_leaf) + ' ' + str(self.level))
-        for j, child in enumerate(self.children):
-            child.print_files_recursively(level=level+1, code=code+str(j))
-            self.parenting_ship.extend(child.parenting_ship)
-            p1 = f"""{code}[\"🗎{self.name}\"]""" if self.is_leaf else f"""{code}[[\"📁{self.name}\"]]"""
-            p2 = """ --> """
-            p3 = f"""{code+str(j)}[\"🗎{child.name}\"]""" if child.is_leaf else f"""{code+str(j)}[[\"📁{child.name}\"]]"""
-            edge_code = p1 + p2 + p3
-            if edge_code in self.parenting_ship:
-                continue
-            self.parenting_ship.append(edge_code)
-        if self.comment != "":
-            pc1 = f"""{code}[\"🗎{self.name}\"]""" if self.is_leaf else f"""{code}[[\"📁{self.name}\"]]"""
-            pc2 = f""" -.-x """
-            pc3 = f"""C{code}[\"{self.comment}\"]:::Comment"""
-            edge_code = pc1 + pc2 + pc3
-            self.parenting_ship.append(edge_code)
-
-
-MERMAID_TEMPLATE = r"""
-```mermaid
-flowchart LR
-    %% <gpt_academic_hide_mermaid_code> 一个特殊标记,用于在生成mermaid图表时隐藏代码块
-    classDef Comment stroke-dasharray: 5 5
-    subgraph {graph_name}
-{relationship}
-    end
-```
-"""
-
-def build_file_tree_mermaid_diagram(file_manifest, file_comments, graph_name):
-    # Create the root node
-    file_tree_struct = FileNode("root")
-    # Build the tree structure
-    for file_path, file_comment in zip(file_manifest, file_comments):
-        file_tree_struct.add_file(file_path, file_comment)
-    file_tree_struct.print_files_recursively()
-    cc = "\n".join(file_tree_struct.parenting_ship)
-    ccc = indent(cc, prefix=" "*8)
-    return MERMAID_TEMPLATE.format(graph_name=graph_name, relationship=ccc)
-
-if __name__ == "__main__":
-    # File manifest
-    file_manifest = [
-        "cradle_void_terminal.ipynb",
-        "tests/test_utils.py",
-        "tests/test_plugins.py",
-        "tests/test_llms.py",
-        "config.py",
-        "build/ChatGLM-6b-onnx-u8s8/chatglm-6b-int8-onnx-merged/model_weights_0.bin",
-        "crazy_functions/latex_fns/latex_actions.py",
-        "crazy_functions/latex_fns/latex_toolbox.py"
-    ]
-    file_comments = [
-        "根据位置和名称,可能是一个模块的初始化文件根据位置和名称,可能是一个模块的初始化文件根据位置和名称,可能是一个模块的初始化文件",
-        "包含一些用于文本处理和模型微调的函数和装饰器包含一些用于文本处理和模型微调的函数和装饰器包含一些用于文本处理和模型微调的函数和装饰器",
-        "用于构建HTML报告的类和方法用于构建HTML报告的类和方法用于构建HTML报告的类和方法",
-        "包含了用于文本切分的函数,以及处理PDF文件的示例代码包含了用于文本切分的函数,以及处理PDF文件的示例代码包含了用于文本切分的函数,以及处理PDF文件的示例代码",
-        "用于解析和翻译PDF文件的功能和相关辅助函数用于解析和翻译PDF文件的功能和相关辅助函数用于解析和翻译PDF文件的功能和相关辅助函数",
-        "是一个包的初始化文件,用于初始化包的属性和导入模块是一个包的初始化文件,用于初始化包的属性和导入模块是一个包的初始化文件,用于初始化包的属性和导入模块",
-        "用于加载和分割文件中的文本的通用文件加载器用于加载和分割文件中的文本的通用文件加载器用于加载和分割文件中的文本的通用文件加载器",
-        "包含了用于构建和管理向量数据库的函数和类包含了用于构建和管理向量数据库的函数和类包含了用于构建和管理向量数据库的函数和类",
-    ]
-    print(build_file_tree_mermaid_diagram(file_manifest, file_comments, "项目文件树"))
@@ -1,42 +0,0 @@
-from toolbox import CatchException, update_ui, update_ui_lastest_msg
-from crazy_functions.multi_stage.multi_stage_utils import GptAcademicGameBaseState
-from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-from request_llms.bridge_all import predict_no_ui_long_connection
-from crazy_functions.game_fns.game_utils import get_code_block, is_same_thing
-import random
-
-
-class MiniGame_ASCII_Art(GptAcademicGameBaseState):
-    def step(self, prompt, chatbot, history):
-        if self.step_cnt == 0:
-            chatbot.append(["我画你猜(动物)", "请稍等..."])
-        else:
-            if prompt.strip() == 'exit':
-                self.delete_game = True
-                yield from update_ui_lastest_msg(lastmsg=f"谜底是{self.obj},游戏结束。", chatbot=chatbot, history=history, delay=0.)
-                return
-            chatbot.append([prompt, ""])
-        yield from update_ui(chatbot=chatbot, history=history)
-
-        if self.step_cnt == 0:
-            self.lock_plugin(chatbot)
-            self.cur_task = 'draw'
-
-        if self.cur_task == 'draw':
-            avail_obj = ["狗","猫","鸟","鱼","老鼠","蛇"]
-            self.obj = random.choice(avail_obj)
-            inputs = "I want to play a game called Guess the ASCII art. You can draw the ASCII art and I will try to guess it. " + \
-                f"This time you draw a {self.obj}. Note that you must not indicate what you have draw in the text, and you should only produce the ASCII art wrapped by ```. "
-            raw_res = predict_no_ui_long_connection(inputs=inputs, llm_kwargs=self.llm_kwargs, history=[], sys_prompt="")
-            self.cur_task = 'identify user guess'
-            res = get_code_block(raw_res)
-            history += ['', f'the answer is {self.obj}', inputs, res]
-            yield from update_ui_lastest_msg(lastmsg=res, chatbot=chatbot, history=history, delay=0.)
-
-        elif self.cur_task == 'identify user guess':
-            if is_same_thing(self.obj, prompt, self.llm_kwargs):
-                self.delete_game = True
-                yield from update_ui_lastest_msg(lastmsg="你猜对了!", chatbot=chatbot, history=history, delay=0.)
-            else:
-                self.cur_task = 'identify user guess'
-                yield from update_ui_lastest_msg(lastmsg="猜错了,再试试,输入“exit”获取答案。", chatbot=chatbot, history=history, delay=0.)
@@ -1,212 +0,0 @@
-prompts_hs = """ 请以“{headstart}”为开头,编写一个小说的第一幕。
-
-- 尽量短,不要包含太多情节,因为你接下来将会与用户互动续写下面的情节,要留出足够的互动空间。
-- 出现人物时,给出人物的名字。
-- 积极地运用环境描写、人物描写等手法,让读者能够感受到你的故事世界。
-- 积极地运用修辞手法,比如比喻、拟人、排比、对偶、夸张等等。
-- 字数要求:第一幕的字数少于300字,且少于2个段落。
-"""
-
-prompts_interact = """ 小说的前文回顾:
-「
-{previously_on_story}
-」
-
-你是一个作家,根据以上的情节,给出4种不同的后续剧情发展方向,每个发展方向都精明扼要地用一句话说明。稍后,我将在这4个选择中,挑选一种剧情发展。
-
-输出格式例如:
-1. 后续剧情发展1
-2. 后续剧情发展2
-3. 后续剧情发展3
-4. 后续剧情发展4
-"""
-
-
-prompts_resume = """小说的前文回顾:
-「
-{previously_on_story}
-」
-
-你是一个作家,我们正在互相讨论,确定后续剧情的发展。
-在以下的剧情发展中,
-「
-{choice}
-」
-我认为更合理的是:{user_choice}。
-请在前文的基础上(不要重复前文),围绕我选定的剧情情节,编写小说的下一幕。
-
-- 禁止杜撰不符合我选择的剧情。
-- 尽量短,不要包含太多情节,因为你接下来将会与用户互动续写下面的情节,要留出足够的互动空间。
-- 不要重复前文。
-- 出现人物时,给出人物的名字。
-- 积极地运用环境描写、人物描写等手法,让读者能够感受到你的故事世界。
-- 积极地运用修辞手法,比如比喻、拟人、排比、对偶、夸张等等。
-- 小说的下一幕字数少于300字,且少于2个段落。
-"""
-
-
-prompts_terminate = """小说的前文回顾:
-「
-{previously_on_story}
-」
-
-你是一个作家,我们正在互相讨论,确定后续剧情的发展。
-现在,故事该结束了,我认为最合理的故事结局是:{user_choice}。
-
-请在前文的基础上(不要重复前文),编写小说的最后一幕。
-
-- 不要重复前文。
-- 出现人物时,给出人物的名字。
-- 积极地运用环境描写、人物描写等手法,让读者能够感受到你的故事世界。
-- 积极地运用修辞手法,比如比喻、拟人、排比、对偶、夸张等等。
-- 字数要求:最后一幕的字数少于1000字。
-"""
-
-
-from toolbox import CatchException, update_ui, update_ui_lastest_msg
-from crazy_functions.multi_stage.multi_stage_utils import GptAcademicGameBaseState
-from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-from request_llms.bridge_all import predict_no_ui_long_connection
-from crazy_functions.game_fns.game_utils import get_code_block, is_same_thing
-import random
-
-
-class MiniGame_ResumeStory(GptAcademicGameBaseState):
-    story_headstart = [
-        '先行者知道,他现在是全宇宙中唯一的一个人了。',
-        '深夜,一个年轻人穿过天安门广场向纪念堂走去。在二十二世纪编年史中,计算机把他的代号定为M102。',
-        '他知道,这最后一课要提前讲了。又一阵剧痛从肝部袭来,几乎使他晕厥过去。',
-        '在距地球五万光年的远方,在银河系的中心,一场延续了两万年的星际战争已接近尾声。那里的太空中渐渐隐现出一个方形区域,仿佛灿烂的群星的背景被剪出一个方口。',
-        '伊依一行三人乘坐一艘游艇在南太平洋上做吟诗航行,他们的目的地是南极,如果几天后能顺利到达那里,他们将钻出地壳去看诗云。',
-        '很多人生来就会莫名其妙地迷上一样东西,仿佛他的出生就是要和这东西约会似的,正是这样,圆圆迷上了肥皂泡。'
-    ]
-
-    def begin_game_step_0(self, prompt, chatbot, history):
-        # init game at step 0
-        self.headstart = random.choice(self.story_headstart)
-        self.story = []
-        chatbot.append(["互动写故事", f"这次的故事开头是:{self.headstart}"])
-        self.sys_prompt_ = '你是一个想象力丰富的杰出作家。正在与你的朋友互动,一起写故事,因此你每次写的故事段落应少于300字(结局除外)。'
-
-    def generate_story_image(self, story_paragraph):
-        try:
-            from crazy_functions.图片生成 import gen_image
-            prompt_ = predict_no_ui_long_connection(inputs=story_paragraph, llm_kwargs=self.llm_kwargs, history=[], sys_prompt='你需要根据用户给出的小说段落,进行简短的环境描写。要求:80字以内。')
-            image_url, image_path = gen_image(self.llm_kwargs, prompt_, '512x512', model="dall-e-2", quality='standard', style='natural')
-            return f'<br/><div align="center"><img src="file={image_path}"></div>'
-        except:
-            return ''
-
-    def step(self, prompt, chatbot, history):
-
-        """
-        首先,处理游戏初始化等特殊情况
-        """
-        if self.step_cnt == 0:
-            self.begin_game_step_0(prompt, chatbot, history)
-            self.lock_plugin(chatbot)
-            self.cur_task = 'head_start'
-        else:
-            if prompt.strip() == 'exit' or prompt.strip() == '结束剧情':
-                # should we terminate game here?
-                self.delete_game = True
-                yield from update_ui_lastest_msg(lastmsg=f"游戏结束。", chatbot=chatbot, history=history, delay=0.)
-                return
-            if '剧情收尾' in prompt:
-                self.cur_task = 'story_terminate'
-            # # well, game resumes
-            # chatbot.append([prompt, ""])
-        # update ui, don't keep the user waiting
-        yield from update_ui(chatbot=chatbot, history=history)
-
-        """
-        处理游戏的主体逻辑
-        """
-        if self.cur_task == 'head_start':
-            """
-            这是游戏的第一步
-            """
-            inputs_ = prompts_hs.format(headstart=self.headstart)
-            history_ = []
-            story_paragraph = yield from request_gpt_model_in_new_thread_with_ui_alive(
-                inputs_, '故事开头', self.llm_kwargs,
-                chatbot, history_, self.sys_prompt_
-            )
-            self.story.append(story_paragraph)
-            # # 配图
-            yield from update_ui_lastest_msg(lastmsg=story_paragraph + '<br/>正在生成插图中 ...', chatbot=chatbot, history=history, delay=0.)
-            yield from update_ui_lastest_msg(lastmsg=story_paragraph + '<br/>'+ self.generate_story_image(story_paragraph), chatbot=chatbot, history=history, delay=0.)
-
-            # # 构建后续剧情引导
-            previously_on_story = ""
-            for s in self.story:
-                previously_on_story += s + '\n'
-            inputs_ = prompts_interact.format(previously_on_story=previously_on_story)
-            history_ = []
-            self.next_choices = yield from request_gpt_model_in_new_thread_with_ui_alive(
-                inputs_, '请在以下几种故事走向中,选择一种(当然,您也可以选择给出其他故事走向):', self.llm_kwargs,
-                chatbot,
-                history_,
-                self.sys_prompt_
-            )
-            self.cur_task = 'user_choice'
-
-        elif self.cur_task == 'user_choice':
-            """
-            根据用户的提示,确定故事的下一步
-            """
-            if '请在以下几种故事走向中,选择一种' in chatbot[-1][0]: chatbot.pop(-1)
-            previously_on_story = ""
-            for s in self.story:
-                previously_on_story += s + '\n'
-            inputs_ = prompts_resume.format(previously_on_story=previously_on_story, choice=self.next_choices, user_choice=prompt)
-            history_ = []
-            story_paragraph = yield from request_gpt_model_in_new_thread_with_ui_alive(
-                inputs_, f'下一段故事(您的选择是:{prompt})。', self.llm_kwargs,
-                chatbot, history_, self.sys_prompt_
-            )
-            self.story.append(story_paragraph)
-            # # 配图
-            yield from update_ui_lastest_msg(lastmsg=story_paragraph + '<br/>正在生成插图中 ...', chatbot=chatbot, history=history, delay=0.)
-            yield from update_ui_lastest_msg(lastmsg=story_paragraph + '<br/>'+ self.generate_story_image(story_paragraph), chatbot=chatbot, history=history, delay=0.)
-
-            # # 构建后续剧情引导
-            previously_on_story = ""
-            for s in self.story:
-                previously_on_story += s + '\n'
-            inputs_ = prompts_interact.format(previously_on_story=previously_on_story)
-            history_ = []
-            self.next_choices = yield from request_gpt_model_in_new_thread_with_ui_alive(
-                inputs_,
-                '请在以下几种故事走向中,选择一种。当然,您也可以给出您心中的其他故事走向。另外,如果您希望剧情立即收尾,请输入剧情走向,并以“剧情收尾”四个字提示程序。', self.llm_kwargs,
-                chatbot,
-                history_,
-                self.sys_prompt_
-            )
-            self.cur_task = 'user_choice'
-
-        elif self.cur_task == 'story_terminate':
-            """
-            根据用户的提示,确定故事的结局
-            """
-            previously_on_story = ""
-            for s in self.story:
-                previously_on_story += s + '\n'
-            inputs_ = prompts_terminate.format(previously_on_story=previously_on_story, user_choice=prompt)
-            history_ = []
-            story_paragraph = yield from request_gpt_model_in_new_thread_with_ui_alive(
-                inputs_, f'故事收尾(您的选择是:{prompt})。', self.llm_kwargs,
-                chatbot, history_, self.sys_prompt_
-            )
-            # # 配图
-            yield from update_ui_lastest_msg(lastmsg=story_paragraph + '<br/>正在生成插图中 ...', chatbot=chatbot, history=history, delay=0.)
-            yield from update_ui_lastest_msg(lastmsg=story_paragraph + '<br/>'+ self.generate_story_image(story_paragraph), chatbot=chatbot, history=history, delay=0.)
-
-            # terminate game
-            self.delete_game = True
-            return
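Both deleted minigames drive the same state machine inherited from `GptAcademicGameBaseState`: `step_cnt` distinguishes the first call, `lock_plugin()` routes follow-up user input back into the game, `cur_task` selects the branch, and `delete_game = True` ends the session. A minimal hypothetical game under those assumptions (the base-class internals are inferred from the usage above, not from this diff):

```python
from crazy_functions.multi_stage.multi_stage_utils import GptAcademicGameBaseState
from toolbox import update_ui

class MiniGame_Echo(GptAcademicGameBaseState):
    # Hypothetical minimal game: echo input until the user types 'exit'.
    def step(self, prompt, chatbot, history):
        if self.step_cnt == 0:
            self.lock_plugin(chatbot)   # subsequent inputs re-enter this game
            chatbot.append(["回声小游戏", "输入任意文字,输入 exit 结束。"])
        elif prompt.strip() == 'exit':
            self.delete_game = True     # release the plugin lock, game over
            chatbot.append([prompt, "游戏结束。"])
        else:
            chatbot.append([prompt, prompt])  # echo
        yield from update_ui(chatbot=chatbot, history=history)
```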
@@ -1,37 +0,0 @@
|
|||||||
import platform
|
|
||||||
import pickle
|
|
||||||
import multiprocessing
|
|
||||||
|
|
||||||
def run_in_subprocess_wrapper_func(v_args):
|
|
||||||
func, args, kwargs, return_dict, exception_dict = pickle.loads(v_args)
|
|
||||||
import sys
|
|
||||||
try:
|
|
||||||
result = func(*args, **kwargs)
|
|
||||||
return_dict['result'] = result
|
|
||||||
except Exception as e:
|
|
||||||
exc_info = sys.exc_info()
|
|
||||||
exception_dict['exception'] = exc_info
|
|
||||||
|
|
||||||
def run_in_subprocess_with_timeout(func, timeout=60):
|
|
||||||
if platform.system() == 'Linux':
|
|
||||||
def wrapper(*args, **kwargs):
|
|
||||||
return_dict = multiprocessing.Manager().dict()
|
|
||||||
exception_dict = multiprocessing.Manager().dict()
|
|
||||||
v_args = pickle.dumps((func, args, kwargs, return_dict, exception_dict))
|
|
||||||
process = multiprocessing.Process(target=run_in_subprocess_wrapper_func, args=(v_args,))
|
|
||||||
process.start()
|
|
||||||
process.join(timeout)
|
|
||||||
if process.is_alive():
|
|
||||||
process.terminate()
|
|
||||||
raise TimeoutError(f'功能单元{str(func)}未能在规定时间内完成任务')
|
|
||||||
process.close()
|
|
||||||
if 'exception' in exception_dict:
|
|
||||||
# ooops, the subprocess ran into an exception
|
|
||||||
exc_info = exception_dict['exception']
|
|
||||||
raise exc_info[1].with_traceback(exc_info[2])
|
|
||||||
if 'result' in return_dict.keys():
|
|
||||||
# If the subprocess ran successfully, return the result
|
|
||||||
return return_dict['result']
|
|
||||||
return wrapper
|
|
||||||
else:
|
|
||||||
return func
|
|
||||||
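The deleted helper above turns any function into a watchdog-guarded call: on Linux it pickles the function and its arguments, runs them in a child process, terminates the child after `timeout` seconds, and re-raises any pickled exception in the parent; on other platforms it returns the function unchanged. A usage sketch — the wrapped function is hypothetical:

```python
def compile_latex(folder):
    ...  # some long-running, possibly hanging work

safe_compile = run_in_subprocess_with_timeout(compile_latex, timeout=60)
try:
    result = safe_compile("/tmp/workdir")  # executed in a child process on Linux
except TimeoutError as err:
    print(err)  # the child exceeded 60 s and was terminated
```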
@@ -175,6 +175,7 @@ class LatexPaperFileGroup():
         self.sp_file_contents = []
         self.sp_file_index = []
         self.sp_file_tag = []

         # count_token
         from request_llms.bridge_all import model_info
         enc = model_info["gpt-3.5-turbo"]['tokenizer']
@@ -191,12 +192,13 @@ class LatexPaperFileGroup():
                 self.sp_file_index.append(index)
                 self.sp_file_tag.append(self.file_paths[index])
             else:
-                from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
-                segments = breakdown_text_to_satisfy_token_limit(file_content, max_token_limit)
+                from ..crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
+                segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit)
                 for j, segment in enumerate(segments):
                     self.sp_file_contents.append(segment)
                     self.sp_file_index.append(index)
                     self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.tex")
+        print('Segmentation: done')

     def merge_result(self):
         self.file_result = ["" for _ in range(len(self.file_paths))]
@@ -402,7 +404,7 @@ def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_f
         result_pdf = pj(work_folder_modified, f'merge_diff.pdf')    # get pdf path
         promote_file_to_downloadzone(result_pdf, rename_file=None, chatbot=chatbot)  # promote file to web UI
     if modified_pdf_success:
-        yield from update_ui_lastest_msg(f'转化PDF编译已经成功, 正在尝试生成对比PDF, 请稍候 ...', chatbot, history)    # 刷新Gradio前端界面
+        yield from update_ui_lastest_msg(f'转化PDF编译已经成功, 即将退出 ...', chatbot, history)    # 刷新Gradio前端界面
         result_pdf = pj(work_folder_modified, f'{main_file_modified}.pdf')  # get pdf path
         origin_pdf = pj(work_folder_original, f'{main_file_original}.pdf')  # get pdf path
         if os.path.exists(pj(work_folder, '..', 'translation')):
@@ -1,18 +1,15 @@
 import os, shutil
 import re
 import numpy as np

 PRESERVE = 0
 TRANSFORM = 1

 pj = os.path.join

-class LinkedListNode:
+class LinkedListNode():
     """
     Linked List Node
     """

     def __init__(self, string, preserve=True) -> None:
         self.string = string
         self.preserve = preserve
@@ -21,22 +18,19 @@ class LinkedListNode:
         # self.begin_line = 0
         # self.begin_char = 0


 def convert_to_linklist(text, mask):
     root = LinkedListNode("", preserve=True)
     current_node = root
     for c, m, i in zip(text, mask, range(len(text))):
-        if (m == PRESERVE and current_node.preserve) or (
-            m == TRANSFORM and not current_node.preserve
-        ):
+        if (m==PRESERVE and current_node.preserve) \
+            or (m==TRANSFORM and not current_node.preserve):
             # add
             current_node.string += c
         else:
-            current_node.next = LinkedListNode(c, preserve=(m == PRESERVE))
+            current_node.next = LinkedListNode(c, preserve=(m==PRESERVE))
             current_node = current_node.next
     return root


 def post_process(root):
     # 修复括号
     node = root
@@ -44,24 +38,21 @@ def post_process(root):
         string = node.string
         if node.preserve:
             node = node.next
-            if node is None:
-                break
+            if node is None: break
             continue

         def break_check(string):
             str_stack = [""]  # (lv, index)
             for i, c in enumerate(string):
-                if c == "{":
-                    str_stack.append("{")
-                elif c == "}":
+                if c == '{':
+                    str_stack.append('{')
+                elif c == '}':
                     if len(str_stack) == 1:
-                        print("stack fix")
+                        print('stack fix')
                         return i
                     str_stack.pop(-1)
                 else:
                     str_stack[-1] += c
             return -1

         bp = break_check(string)

         if bp == -1:
@@ -78,66 +69,51 @@ def post_process(root):
         node.next = q

         node = node.next
-        if node is None:
-            break
+        if node is None: break

     # 屏蔽空行和太短的句子
     node = root
     while True:
-        if len(node.string.strip("\n").strip("")) == 0:
-            node.preserve = True
-        if len(node.string.strip("\n").strip("")) < 42:
-            node.preserve = True
+        if len(node.string.strip('\n').strip(''))==0: node.preserve = True
+        if len(node.string.strip('\n').strip(''))<42: node.preserve = True
         node = node.next
-        if node is None:
-            break
+        if node is None: break
     node = root
     while True:
         if node.next and node.preserve and node.next.preserve:
             node.string += node.next.string
             node.next = node.next.next
         node = node.next
-        if node is None:
-            break
+        if node is None: break

     # 将前后断行符脱离
     node = root
     prev_node = None
     while True:
         if not node.preserve:
-            lstriped_ = node.string.lstrip().lstrip("\n")
-            if (
-                (prev_node is not None)
-                and (prev_node.preserve)
-                and (len(lstriped_) != len(node.string))
-            ):
-                prev_node.string += node.string[: -len(lstriped_)]
+            lstriped_ = node.string.lstrip().lstrip('\n')
+            if (prev_node is not None) and (prev_node.preserve) and (len(lstriped_)!=len(node.string)):
+                prev_node.string += node.string[:-len(lstriped_)]
             node.string = lstriped_
-            rstriped_ = node.string.rstrip().rstrip("\n")
-            if (
-                (node.next is not None)
-                and (node.next.preserve)
-                and (len(rstriped_) != len(node.string))
-            ):
-                node.next.string = node.string[len(rstriped_) :] + node.next.string
+            rstriped_ = node.string.rstrip().rstrip('\n')
+            if (node.next is not None) and (node.next.preserve) and (len(rstriped_)!=len(node.string)):
+                node.next.string = node.string[len(rstriped_):] + node.next.string
             node.string = rstriped_
-        # =-=-=
+        # =====
         prev_node = node
         node = node.next
-        if node is None:
-            break
+        if node is None: break

     # 标注节点的行数范围
     node = root
     n_line = 0
     expansion = 2
     while True:
-        n_l = node.string.count("\n")
-        node.range = [n_line - expansion, n_line + n_l + expansion]  # 失败时,扭转的范围
-        n_line = n_line + n_l
+        n_l = node.string.count('\n')
+        node.range = [n_line-expansion, n_line+n_l+expansion] # 失败时,扭转的范围
+        n_line = n_line+n_l
         node = node.next
-        if node is None:
-            break
+        if node is None: break
     return root


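To make the mask logic in the hunk above concrete: convert_to_linklist folds a per-character PRESERVE/TRANSFORM mask into a linked list of runs, which post_process then repairs and annotates. A toy, self-contained sketch of that folding step (Node and to_linklist are illustrative names, not the project's API):

```python
import numpy as np

PRESERVE, TRANSFORM = 0, 1

class Node:
    # Minimal stand-in for LinkedListNode: consecutive characters that share
    # a mask value are merged into a single node.
    def __init__(self, string, preserve):
        self.string, self.preserve, self.next = string, preserve, None

def to_linklist(text, mask):
    root = node = Node("", True)
    for c, m in zip(text, mask):
        if (m == PRESERVE) == node.preserve:
            node.string += c                      # same state: extend run
        else:
            node.next = Node(c, m == PRESERVE)    # state flipped: new run
            node = node.next
    return root

text = r"intro \begin{equation}x=1\end{equation} outro"
mask = np.full(len(text), TRANSFORM, dtype=np.uint8)
mask[6:39] = PRESERVE                             # pretend the equation was masked
node = to_linklist(text, mask).next
while node:
    print(node.preserve, repr(node.string))
    node = node.next
```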
@@ -155,14 +131,12 @@ def set_forbidden_text(text, mask, pattern, flags=0):
     you can mask out (mask = PRESERVE so that text become untouchable for GPT)
     everything between "\begin{equation}" and "\end{equation}"
     """
-    if isinstance(pattern, list):
-        pattern = "|".join(pattern)
+    if isinstance(pattern, list): pattern = '|'.join(pattern)
     pattern_compile = re.compile(pattern, flags)
     for res in pattern_compile.finditer(text):
-        mask[res.span()[0] : res.span()[1]] = PRESERVE
+        mask[res.span()[0]:res.span()[1]] = PRESERVE
     return text, mask


 def reverse_forbidden_text(text, mask, pattern, flags=0, forbid_wrapper=True):
     """
     Move area out of preserve area (make text editable for GPT)
@@ -170,19 +144,17 @@ def reverse_forbidden_text(text, mask, pattern, flags=0, forbid_wrapper=True):
     e.g.
     \begin{abstract} blablablablablabla. \end{abstract}
     """
-    if isinstance(pattern, list):
-        pattern = "|".join(pattern)
+    if isinstance(pattern, list): pattern = '|'.join(pattern)
     pattern_compile = re.compile(pattern, flags)
     for res in pattern_compile.finditer(text):
         if not forbid_wrapper:
-            mask[res.span()[0] : res.span()[1]] = TRANSFORM
+            mask[res.span()[0]:res.span()[1]] = TRANSFORM
         else:
-            mask[res.regs[0][0] : res.regs[1][0]] = PRESERVE  # '\\begin{abstract}'
-            mask[res.regs[1][0] : res.regs[1][1]] = TRANSFORM  # abstract
-            mask[res.regs[1][1] : res.regs[0][1]] = PRESERVE  # abstract
+            mask[res.regs[0][0]: res.regs[1][0]] = PRESERVE  # '\\begin{abstract}'
+            mask[res.regs[1][0]: res.regs[1][1]] = TRANSFORM  # abstract
+            mask[res.regs[1][1]: res.regs[0][1]] = PRESERVE  # abstract
     return text, mask


 def set_forbidden_text_careful_brace(text, mask, pattern, flags=0):
     """
     Add a preserve text area in this paper (text become untouchable for GPT).
@@ -194,22 +166,16 @@ def set_forbidden_text_careful_brace(text, mask, pattern, flags=0):
     for res in pattern_compile.finditer(text):
         brace_level = -1
         p = begin = end = res.regs[0][0]
-        for _ in range(1024 * 16):
-            if text[p] == "}" and brace_level == 0:
-                break
-            elif text[p] == "}":
-                brace_level -= 1
-            elif text[p] == "{":
-                brace_level += 1
+        for _ in range(1024*16):
+            if text[p] == '}' and brace_level == 0: break
+            elif text[p] == '}': brace_level -= 1
+            elif text[p] == '{': brace_level += 1
             p += 1
-        end = p + 1
+        end = p+1
         mask[begin:end] = PRESERVE
     return text, mask


-def reverse_forbidden_text_careful_brace(
-    text, mask, pattern, flags=0, forbid_wrapper=True
-):
+def reverse_forbidden_text_careful_brace(text, mask, pattern, flags=0, forbid_wrapper=True):
     """
     Move area out of preserve area (make text editable for GPT)
     count the number of the braces so as to catch compelete text area.
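A hedged usage sketch of the masking helper above, matching the equation example its own docstring gives (the call site shown here is an assumption for illustration; the real callers live elsewhere in latex_fns):

```python
import re
import numpy as np

# Assumes set_forbidden_text and PRESERVE/TRANSFORM from this module are in scope.
text = r"Some prose. \begin{equation} E = mc^2 \end{equation} More prose."
mask = np.full(len(text), TRANSFORM, dtype=np.uint8)

# Mask everything between \begin{equation} and \end{equation} so the LLM
# never rewrites display math.
text, mask = set_forbidden_text(
    text, mask,
    r"\\begin\{equation\}(.*?)\\end\{equation\}",
    flags=re.DOTALL,
)
```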
@@ -220,66 +186,47 @@ def reverse_forbidden_text_careful_brace(
     for res in pattern_compile.finditer(text):
         brace_level = 0
         p = begin = end = res.regs[1][0]
-        for _ in range(1024 * 16):
-            if text[p] == "}" and brace_level == 0:
-                break
-            elif text[p] == "}":
-                brace_level -= 1
-            elif text[p] == "{":
-                brace_level += 1
+        for _ in range(1024*16):
+            if text[p] == '}' and brace_level == 0: break
+            elif text[p] == '}': brace_level -= 1
+            elif text[p] == '{': brace_level += 1
             p += 1
         end = p
         mask[begin:end] = TRANSFORM
         if forbid_wrapper:
-            mask[res.regs[0][0] : begin] = PRESERVE
-            mask[end : res.regs[0][1]] = PRESERVE
+            mask[res.regs[0][0]:begin] = PRESERVE
+            mask[end:res.regs[0][1]] = PRESERVE
     return text, mask


 def set_forbidden_text_begin_end(text, mask, pattern, flags=0, limit_n_lines=42):
     """
     Find all \begin{} ... \end{} text block that with less than limit_n_lines lines.
     Add it to preserve area
     """
     pattern_compile = re.compile(pattern, flags)

     def search_with_line_limit(text, mask):
         for res in pattern_compile.finditer(text):
             cmd = res.group(1)  # begin{what}
             this = res.group(2)  # content between begin and end
-            this_mask = mask[res.regs[2][0] : res.regs[2][1]]
-            white_list = [
-                "document",
-                "abstract",
-                "lemma",
-                "definition",
-                "sproof",
-                "em",
-                "emph",
-                "textit",
-                "textbf",
-                "itemize",
-                "enumerate",
-            ]
-            if (cmd in white_list) or this.count(
-                "\n"
-            ) >= limit_n_lines:  # use a magical number 42
+            this_mask = mask[res.regs[2][0]:res.regs[2][1]]
+            white_list = ['document', 'abstract', 'lemma', 'definition', 'sproof',
+                          'em', 'emph', 'textit', 'textbf', 'itemize', 'enumerate']
+            if (cmd in white_list) or this.count('\n') >= limit_n_lines:  # use a magical number 42
                 this, this_mask = search_with_line_limit(this, this_mask)
-                mask[res.regs[2][0] : res.regs[2][1]] = this_mask
+                mask[res.regs[2][0]:res.regs[2][1]] = this_mask
             else:
-                mask[res.regs[0][0] : res.regs[0][1]] = PRESERVE
+                mask[res.regs[0][0]:res.regs[0][1]] = PRESERVE
         return text, mask

     return search_with_line_limit(text, mask)



 """
 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 Latex Merge File
 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 """


 def find_main_tex_file(file_manifest, mode):
     """
     在多Tex文档中,寻找主文件,必须包含documentclass,返回找到的第一个。
@@ -287,36 +234,27 @@ def find_main_tex_file(file_manifest, mode):
     """
     canidates = []
     for texf in file_manifest:
-        if os.path.basename(texf).startswith("merge"):
+        if os.path.basename(texf).startswith('merge'):
             continue
-        with open(texf, "r", encoding="utf8", errors="ignore") as f:
+        with open(texf, 'r', encoding='utf8', errors='ignore') as f:
             file_content = f.read()
-        if r"\documentclass" in file_content:
+        if r'\documentclass' in file_content:
             canidates.append(texf)
         else:
             continue

     if len(canidates) == 0:
-        raise RuntimeError("无法找到一个主Tex文件(包含documentclass关键字)")
+        raise RuntimeError('无法找到一个主Tex文件(包含documentclass关键字)')
     elif len(canidates) == 1:
         return canidates[0]
     else:  # if len(canidates) >= 2 通过一些Latex模板中常见(但通常不会出现在正文)的单词,对不同latex源文件扣分,取评分最高者返回
         canidates_score = []
         # 给出一些判定模板文档的词作为扣分项
-        unexpected_words = [
-            "\\LaTeX",
-            "manuscript",
-            "Guidelines",
-            "font",
-            "citations",
-            "rejected",
-            "blind review",
-            "reviewers",
-        ]
-        expected_words = ["\\input", "\\ref", "\\cite"]
+        unexpected_words = ['\LaTeX', 'manuscript', 'Guidelines', 'font', 'citations', 'rejected', 'blind review', 'reviewers']
+        expected_words = ['\input', '\ref', '\cite']
         for texf in canidates:
             canidates_score.append(0)
-            with open(texf, "r", encoding="utf8", errors="ignore") as f:
+            with open(texf, 'r', encoding='utf8', errors='ignore') as f:
                 file_content = f.read()
                 file_content = rm_comments(file_content)
             for uw in unexpected_words:
@@ -328,7 +266,6 @@ def find_main_tex_file(file_manifest, mode):
         select = np.argmax(canidates_score)  # 取评分最高者返回
         return canidates[select]


 def rm_comments(main_file):
     new_file_remove_comment_lines = []
     for l in main_file.splitlines():
@@ -337,39 +274,30 @@ def rm_comments(main_file):
             pass
         else:
             new_file_remove_comment_lines.append(l)
-    main_file = "\n".join(new_file_remove_comment_lines)
+    main_file = '\n'.join(new_file_remove_comment_lines)
     # main_file = re.sub(r"\\include{(.*?)}", r"\\input{\1}", main_file)  # 将 \include 命令转换为 \input 命令
-    main_file = re.sub(r"(?<!\\)%.*", "", main_file)  # 使用正则表达式查找半行注释, 并替换为空字符串
+    main_file = re.sub(r'(?<!\\)%.*', '', main_file)  # 使用正则表达式查找半行注释, 并替换为空字符串
     return main_file


 def find_tex_file_ignore_case(fp):
     dir_name = os.path.dirname(fp)
     base_name = os.path.basename(fp)
     # 如果输入的文件路径是正确的
-    if os.path.isfile(pj(dir_name, base_name)):
-        return pj(dir_name, base_name)
+    if os.path.isfile(pj(dir_name, base_name)): return pj(dir_name, base_name)
     # 如果不正确,试着加上.tex后缀试试
-    if not base_name.endswith(".tex"):
-        base_name += ".tex"
-    if os.path.isfile(pj(dir_name, base_name)):
-        return pj(dir_name, base_name)
+    if not base_name.endswith('.tex'): base_name+='.tex'
+    if os.path.isfile(pj(dir_name, base_name)): return pj(dir_name, base_name)
     # 如果还找不到,解除大小写限制,再试一次
     import glob
-    for f in glob.glob(dir_name + "/*.tex"):
+    for f in glob.glob(dir_name+'/*.tex'):
         base_name_s = os.path.basename(fp)
         base_name_f = os.path.basename(f)
-        if base_name_s.lower() == base_name_f.lower():
-            return f
+        if base_name_s.lower() == base_name_f.lower(): return f
         # 试着加上.tex后缀试试
-        if not base_name_s.endswith(".tex"):
-            base_name_s += ".tex"
-        if base_name_s.lower() == base_name_f.lower():
-            return f
+        if not base_name_s.endswith('.tex'): base_name_s+='.tex'
+        if base_name_s.lower() == base_name_f.lower(): return f
     return None


 def merge_tex_files_(project_foler, main_file, mode):
     """
     Merge Tex project recrusively
@@ -381,18 +309,18 @@ def merge_tex_files_(project_foler, main_file, mode):
         fp_ = find_tex_file_ignore_case(fp)
         if fp_:
             try:
-                with open(fp_, "r", encoding="utf-8", errors="replace") as fx:
-                    c = fx.read()
+                with open(fp_, 'r', encoding='utf-8', errors='replace') as fx: c = fx.read()
             except:
                 c = f"\n\nWarning from GPT-Academic: LaTex source file is missing!\n\n"
         else:
-            raise RuntimeError(f"找不到{fp},Tex源文件缺失!")
+            raise RuntimeError(f'找不到{fp},Tex源文件缺失!')
         c = merge_tex_files_(project_foler, c, mode)
-        main_file = main_file[: s.span()[0]] + c + main_file[s.span()[1] :]
+        main_file = main_file[:s.span()[0]] + c + main_file[s.span()[1]:]
     return main_file


 def find_title_and_abs(main_file):

     def extract_abstract_1(text):
         pattern = r"\\abstract\{(.*?)\}"
         match = re.search(pattern, text, re.DOTALL)
@@ -434,30 +362,21 @@ def merge_tex_files(project_foler, main_file, mode):
     main_file = merge_tex_files_(project_foler, main_file, mode)
     main_file = rm_comments(main_file)

-    if mode == "translate_zh":
+    if mode == 'translate_zh':
         # find paper documentclass
-        pattern = re.compile(r"\\documentclass.*\n")
+        pattern = re.compile(r'\\documentclass.*\n')
         match = pattern.search(main_file)
         assert match is not None, "Cannot find documentclass statement!"
         position = match.end()
-        add_ctex = "\\usepackage{ctex}\n"
-        add_url = "\\usepackage{url}\n" if "{url}" not in main_file else ""
+        add_ctex = '\\usepackage{ctex}\n'
+        add_url = '\\usepackage{url}\n' if '{url}' not in main_file else ''
         main_file = main_file[:position] + add_ctex + add_url + main_file[position:]
         # fontset=windows
         import platform
-        main_file = re.sub(
-            r"\\documentclass\[(.*?)\]{(.*?)}",
-            r"\\documentclass[\1,fontset=windows,UTF8]{\2}",
-            main_file,
-        )
-        main_file = re.sub(
-            r"\\documentclass{(.*?)}",
-            r"\\documentclass[fontset=windows,UTF8]{\1}",
-            main_file,
-        )
+        main_file = re.sub(r"\\documentclass\[(.*?)\]{(.*?)}", r"\\documentclass[\1,fontset=windows,UTF8]{\2}",main_file)
+        main_file = re.sub(r"\\documentclass{(.*?)}", r"\\documentclass[fontset=windows,UTF8]{\1}",main_file)
         # find paper abstract
-        pattern_opt1 = re.compile(r"\\begin\{abstract\}.*\n")
+        pattern_opt1 = re.compile(r'\\begin\{abstract\}.*\n')
         pattern_opt2 = re.compile(r"\\abstract\{(.*?)\}", flags=re.DOTALL)
         match_opt1 = pattern_opt1.search(main_file)
         match_opt2 = pattern_opt2.search(main_file)
@@ -466,9 +385,7 @@ def merge_tex_files(project_foler, main_file, mode):
             main_file = insert_abstract(main_file)
         match_opt1 = pattern_opt1.search(main_file)
         match_opt2 = pattern_opt2.search(main_file)
-        assert (match_opt1 is not None) or (
-            match_opt2 is not None
-        ), "Cannot find paper abstract section!"
+        assert (match_opt1 is not None) or (match_opt2 is not None), "Cannot find paper abstract section!"
     return main_file


@@ -478,7 +395,6 @@ The GPT-Academic program cannot find abstract section in this paper.
 \end{abstract}
 """


 def insert_abstract(tex_content):
     if "\\maketitle" in tex_content:
         # find the position of "\maketitle"
@@ -486,13 +402,7 @@ def insert_abstract(tex_content):
         # find the nearest ending line
         end_line_index = tex_content.find("\n", find_index)
         # insert "abs_str" on the next line
-        modified_tex = (
-            tex_content[: end_line_index + 1]
-            + "\n\n"
-            + insert_missing_abs_str
-            + "\n\n"
-            + tex_content[end_line_index + 1 :]
-        )
+        modified_tex = tex_content[:end_line_index+1] + '\n\n' + insert_missing_abs_str + '\n\n' + tex_content[end_line_index+1:]
         return modified_tex
     elif r"\begin{document}" in tex_content:
         # find the position of "\maketitle"
@@ -500,25 +410,16 @@ def insert_abstract(tex_content):
         # find the nearest ending line
         end_line_index = tex_content.find("\n", find_index)
         # insert "abs_str" on the next line
-        modified_tex = (
-            tex_content[: end_line_index + 1]
-            + "\n\n"
-            + insert_missing_abs_str
-            + "\n\n"
-            + tex_content[end_line_index + 1 :]
-        )
+        modified_tex = tex_content[:end_line_index+1] + '\n\n' + insert_missing_abs_str + '\n\n' + tex_content[end_line_index+1:]
         return modified_tex
     else:
         return tex_content


 """
 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 Post process
 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 """


 def mod_inbraket(match):
     """
     为啥chatgpt会把cite里面的逗号换成中文逗号呀
@@ -527,12 +428,11 @@ def mod_inbraket(match):
     cmd = match.group(1)
     str_to_modify = match.group(2)
     # modify the matched string
-    str_to_modify = str_to_modify.replace(":", ":")  # 前面是中文冒号,后面是英文冒号
-    str_to_modify = str_to_modify.replace(",", ",")  # 前面是中文逗号,后面是英文逗号
+    str_to_modify = str_to_modify.replace(':', ':')  # 前面是中文冒号,后面是英文冒号
+    str_to_modify = str_to_modify.replace(',', ',')  # 前面是中文逗号,后面是英文逗号
     # str_to_modify = 'BOOM'
     return "\\" + cmd + "{" + str_to_modify + "}"


 def fix_content(final_tex, node_string):
     """
     Fix common GPT errors to increase success rate
@@ -544,9 +444,9 @@ def fix_content(final_tex, node_string):

     if "Traceback" in final_tex and "[Local Message]" in final_tex:
         final_tex = node_string  # 出问题了,还原原文
-    if node_string.count("\\begin") != final_tex.count("\\begin"):
+    if node_string.count('\\begin') != final_tex.count('\\begin'):
         final_tex = node_string  # 出问题了,还原原文
-    if node_string.count("\_") > 0 and node_string.count("\_") > final_tex.count("\_"):
+    if node_string.count('\_') > 0 and node_string.count('\_') > final_tex.count('\_'):
         # walk and replace any _ without \
         final_tex = re.sub(r"(?<!\\)_", "\\_", final_tex)

@@ -554,32 +454,24 @@ def fix_content(final_tex, node_string):
         # this function count the number of { and }
         brace_level = 0
         for c in string:
-            if c == "{":
-                brace_level += 1
-            elif c == "}":
-                brace_level -= 1
+            if c == "{": brace_level += 1
+            elif c == "}": brace_level -= 1
         return brace_level

     def join_most(tex_t, tex_o):
         # this function join translated string and original string when something goes wrong
         p_t = 0
         p_o = 0

         def find_next(string, chars, begin):
             p = begin
             while p < len(string):
-                if string[p] in chars:
-                    return p, string[p]
+                if string[p] in chars: return p, string[p]
                 p += 1
             return None, None

         while True:
-            res1, char = find_next(tex_o, ["{", "}"], p_o)
-            if res1 is None:
-                break
+            res1, char = find_next(tex_o, ['{','}'], p_o)
+            if res1 is None: break
             res2, char = find_next(tex_t, [char], p_t)
-            if res2 is None:
-                break
+            if res2 is None: break
             p_o = res1 + 1
             p_t = res2 + 1
         return tex_t[:p_t] + tex_o[p_o:]
@@ -589,13 +481,9 @@ def fix_content(final_tex, node_string):
         final_tex = join_most(final_tex, node_string)
     return final_tex


 def compile_latex_with_timeout(command, cwd, timeout=60):
     import subprocess
-    process = subprocess.Popen(
-        command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwd
-    )
+    process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwd)
     try:
         stdout, stderr = process.communicate(timeout=timeout)
     except subprocess.TimeoutExpired:
@@ -605,52 +493,43 @@ def compile_latex_with_timeout(command, cwd, timeout=60):
         return False
     return True


 def run_in_subprocess_wrapper_func(func, args, kwargs, return_dict, exception_dict):
     import sys

     try:
         result = func(*args, **kwargs)
-        return_dict["result"] = result
+        return_dict['result'] = result
     except Exception as e:
         exc_info = sys.exc_info()
-        exception_dict["exception"] = exc_info
+        exception_dict['exception'] = exc_info


 def run_in_subprocess(func):
     import multiprocessing

     def wrapper(*args, **kwargs):
         return_dict = multiprocessing.Manager().dict()
         exception_dict = multiprocessing.Manager().dict()
-        process = multiprocessing.Process(
-            target=run_in_subprocess_wrapper_func,
-            args=(func, args, kwargs, return_dict, exception_dict),
-        )
+        process = multiprocessing.Process(target=run_in_subprocess_wrapper_func,
+                                          args=(func, args, kwargs, return_dict, exception_dict))
         process.start()
         process.join()
         process.close()
-        if "exception" in exception_dict:
+        if 'exception' in exception_dict:
             # ooops, the subprocess ran into an exception
-            exc_info = exception_dict["exception"]
+            exc_info = exception_dict['exception']
             raise exc_info[1].with_traceback(exc_info[2])
-        if "result" in return_dict.keys():
+        if 'result' in return_dict.keys():
             # If the subprocess ran successfully, return the result
-            return return_dict["result"]
+            return return_dict['result']

     return wrapper


 def _merge_pdfs(pdf1_path, pdf2_path, output_path):
     import PyPDF2  # PyPDF2这个库有严重的内存泄露问题,把它放到子进程中运行,从而方便内存的释放

     Percent = 0.95
     # raise RuntimeError('PyPDF2 has a serious memory leak problem, please use other tools to merge PDF files.')
     # Open the first PDF file
-    with open(pdf1_path, "rb") as pdf1_file:
+    with open(pdf1_path, 'rb') as pdf1_file:
         pdf1_reader = PyPDF2.PdfFileReader(pdf1_file)
     # Open the second PDF file
-    with open(pdf2_path, "rb") as pdf2_file:
+    with open(pdf2_path, 'rb') as pdf2_file:
         pdf2_reader = PyPDF2.PdfFileReader(pdf2_file)
     # Create a new PDF file to store the merged pages
     output_writer = PyPDF2.PdfFileWriter()
@@ -670,25 +549,14 @@ def _merge_pdfs(pdf1_path, pdf2_path, output_path):
             page2 = PyPDF2.PageObject.createBlankPage(pdf1_reader)
         # Create a new empty page with double width
         new_page = PyPDF2.PageObject.createBlankPage(
-            width=int(
-                int(page1.mediaBox.getWidth())
-                + int(page2.mediaBox.getWidth()) * Percent
-            ),
-            height=max(page1.mediaBox.getHeight(), page2.mediaBox.getHeight()),
+            width = int(int(page1.mediaBox.getWidth()) + int(page2.mediaBox.getWidth()) * Percent),
+            height = max(page1.mediaBox.getHeight(), page2.mediaBox.getHeight())
         )
         new_page.mergeTranslatedPage(page1, 0, 0)
-        new_page.mergeTranslatedPage(
-            page2,
-            int(
-                int(page1.mediaBox.getWidth())
-                - int(page2.mediaBox.getWidth()) * (1 - Percent)
-            ),
-            0,
-        )
+        new_page.mergeTranslatedPage(page2, int(int(page1.mediaBox.getWidth())-int(page2.mediaBox.getWidth())* (1-Percent)), 0)
         output_writer.addPage(new_page)
     # Save the merged PDF file
-    with open(output_path, "wb") as output_file:
+    with open(output_path, 'wb') as output_file:
         output_writer.write(output_file)


 merge_pdfs = run_in_subprocess(_merge_pdfs)  # PyPDF2这个库有严重的内存泄露问题,把它放到子进程中运行,从而方便内存的释放
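One environment note on the _merge_pdfs hunk: both sides use the PyPDF2 1.x names (PdfFileReader, PdfFileWriter, PageObject.createBlankPage, mergeTranslatedPage), which were deprecated in PyPDF2 2.x and removed in 3.0. A defensive guard sketch (an assumption, not part of the diff):

```python
# Sketch: fail fast if the installed PyPDF2 no longer ships the 1.x API
# that _merge_pdfs above depends on.
import PyPDF2

major = int(PyPDF2.__version__.split(".")[0])
if major >= 3:
    raise RuntimeError(
        "PyPDF2 >= 3.0 removed PdfFileReader/PdfFileWriter; "
        "install an older PyPDF2 (e.g. <3.0) for the merge code above."
    )
```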
@@ -1,125 +0,0 @@
-from crazy_functions.ipc_fns.mp import run_in_subprocess_with_timeout
-
-def force_breakdown(txt, limit, get_token_fn):
-    """ 当无法用标点、空行分割时,我们用最暴力的方法切割
-    """
-    for i in reversed(range(len(txt))):
-        if get_token_fn(txt[:i]) < limit:
-            return txt[:i], txt[i:]
-    return "Tiktoken未知错误", "Tiktoken未知错误"
-
-
-def maintain_storage(remain_txt_to_cut, remain_txt_to_cut_storage):
-    """ 为了加速计算,我们采样一个特殊的手段。当 remain_txt_to_cut > `_max` 时, 我们把 _max 后的文字转存至 remain_txt_to_cut_storage
-        当 remain_txt_to_cut < `_min` 时,我们再把 remain_txt_to_cut_storage 中的部分文字取出
-    """
-    _min = int(5e4)
-    _max = int(1e5)
-    # print(len(remain_txt_to_cut), len(remain_txt_to_cut_storage))
-    if len(remain_txt_to_cut) < _min and len(remain_txt_to_cut_storage) > 0:
-        remain_txt_to_cut = remain_txt_to_cut + remain_txt_to_cut_storage
-        remain_txt_to_cut_storage = ""
-    if len(remain_txt_to_cut) > _max:
-        remain_txt_to_cut_storage = remain_txt_to_cut[_max:] + remain_txt_to_cut_storage
-        remain_txt_to_cut = remain_txt_to_cut[:_max]
-    return remain_txt_to_cut, remain_txt_to_cut_storage
-
-
-def cut(limit, get_token_fn, txt_tocut, must_break_at_empty_line, break_anyway=False):
-    """ 文本切分
-    """
-    res = []
-    total_len = len(txt_tocut)
-    fin_len = 0
-    remain_txt_to_cut = txt_tocut
-    remain_txt_to_cut_storage = ""
-    # 为了加速计算,我们采样一个特殊的手段。当 remain_txt_to_cut > `_max` 时, 我们把 _max 后的文字转存至 remain_txt_to_cut_storage
-    remain_txt_to_cut, remain_txt_to_cut_storage = maintain_storage(remain_txt_to_cut, remain_txt_to_cut_storage)
-
-    while True:
-        if get_token_fn(remain_txt_to_cut) <= limit:
-            # 如果剩余文本的token数小于限制,那么就不用切了
-            res.append(remain_txt_to_cut); fin_len+=len(remain_txt_to_cut)
-            break
-        else:
-            # 如果剩余文本的token数大于限制,那么就切
-            lines = remain_txt_to_cut.split('\n')
-
-            # 估计一个切分点
-            estimated_line_cut = limit / get_token_fn(remain_txt_to_cut) * len(lines)
-            estimated_line_cut = int(estimated_line_cut)
-
-            # 开始查找合适切分点的偏移(cnt)
-            cnt = 0
-            for cnt in reversed(range(estimated_line_cut)):
-                if must_break_at_empty_line:
-                    # 首先尝试用双空行(\n\n)作为切分点
-                    if lines[cnt] != "":
-                        continue
-                prev = "\n".join(lines[:cnt])
-                post = "\n".join(lines[cnt:])
-                if get_token_fn(prev) < limit:
-                    break
-
-            if cnt == 0:
-                # 如果没有找到合适的切分点
-                if break_anyway:
-                    # 是否允许暴力切分
-                    prev, post = force_breakdown(remain_txt_to_cut, limit, get_token_fn)
-                else:
-                    # 不允许直接报错
-                    raise RuntimeError(f"存在一行极长的文本!{remain_txt_to_cut}")
-
-            # 追加列表
-            res.append(prev); fin_len+=len(prev)
-            # 准备下一次迭代
-            remain_txt_to_cut = post
-            remain_txt_to_cut, remain_txt_to_cut_storage = maintain_storage(remain_txt_to_cut, remain_txt_to_cut_storage)
-            process = fin_len/total_len
-            print(f'正在文本切分 {int(process*100)}%')
-            if len(remain_txt_to_cut.strip()) == 0:
-                break
-    return res
-
-
-def breakdown_text_to_satisfy_token_limit_(txt, limit, llm_model="gpt-3.5-turbo"):
-    """ 使用多种方式尝试切分文本,以满足 token 限制
-    """
-    from request_llms.bridge_all import model_info
-    enc = model_info[llm_model]['tokenizer']
-    def get_token_fn(txt): return len(enc.encode(txt, disallowed_special=()))
-    try:
-        # 第1次尝试,将双空行(\n\n)作为切分点
-        return cut(limit, get_token_fn, txt, must_break_at_empty_line=True)
-    except RuntimeError:
-        try:
-            # 第2次尝试,将单空行(\n)作为切分点
-            return cut(limit, get_token_fn, txt, must_break_at_empty_line=False)
-        except RuntimeError:
-            try:
-                # 第3次尝试,将英文句号(.)作为切分点
-                res = cut(limit, get_token_fn, txt.replace('.', '。\n'), must_break_at_empty_line=False) # 这个中文的句号是故意的,作为一个标识而存在
-                return [r.replace('。\n', '.') for r in res]
-            except RuntimeError as e:
-                try:
-                    # 第4次尝试,将中文句号(。)作为切分点
-                    res = cut(limit, get_token_fn, txt.replace('。', '。。\n'), must_break_at_empty_line=False)
-                    return [r.replace('。。\n', '。') for r in res]
-                except RuntimeError as e:
-                    # 第5次尝试,没办法了,随便切一下吧
-                    return cut(limit, get_token_fn, txt, must_break_at_empty_line=False, break_anyway=True)
-
-breakdown_text_to_satisfy_token_limit = run_in_subprocess_with_timeout(breakdown_text_to_satisfy_token_limit_, timeout=60)
-
-if __name__ == '__main__':
-    from crazy_functions.crazy_utils import read_and_clean_pdf_text
-    file_content, page_one = read_and_clean_pdf_text("build/assets/at.pdf")
-
-    from request_llms.bridge_all import model_info
-    for i in range(5):
-        file_content += file_content
-
-    print(len(file_content))
-    TOKEN_LIMIT_PER_FRAGMENT = 2500
-    res = breakdown_text_to_satisfy_token_limit(file_content, TOKEN_LIMIT_PER_FRAGMENT)
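A usage sketch for the deleted splitter, grounded in its own __main__ block above; the sample text is hypothetical, and the import only resolves on the side of this compare where crazy_functions/pdf_fns/breakdown_txt.py still exists:

```python
# Sketch: split an over-long document into fragments that each fit the model's
# token budget, falling back from double newlines to ever-cruder split points.
from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit

TOKEN_LIMIT_PER_FRAGMENT = 2500
long_text = "paragraph one\n\nparagraph two\n\n" * 2000   # hypothetical input

fragments = breakdown_text_to_satisfy_token_limit(long_text, TOKEN_LIMIT_PER_FRAGMENT)
print(len(fragments), "fragments")
```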
@@ -74,7 +74,7 @@ def produce_report_markdown(gpt_response_collection, meta, paper_meta_info, chat

 def translate_pdf(article_dict, llm_kwargs, chatbot, fp, generated_conclusion_files, TOKEN_LIMIT_PER_FRAGMENT, DST_LANG):
     from crazy_functions.pdf_fns.report_gen_html import construct_html
-    from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
+    from crazy_functions.crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
     from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
     from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency

@@ -116,7 +116,7 @@ def translate_pdf(article_dict, llm_kwargs, chatbot, fp, generated_conclusion_fi
         # find a smooth token limit to achieve even seperation
         count = int(math.ceil(raw_token_num / TOKEN_LIMIT_PER_FRAGMENT))
         token_limit_smooth = raw_token_num // count + count
-        return breakdown_text_to_satisfy_token_limit(txt, limit=token_limit_smooth, llm_model=llm_kwargs['llm_model'])
+        return breakdown_txt_to_satisfy_token_limit_for_pdf(txt, get_token_fn=get_token_num, limit=token_limit_smooth)

     for section in article_dict.get('sections'):
         if len(section['text']) == 0: continue
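The smoothing arithmetic in the @@ -116 hunk deserves a worked example: with raw_token_num = 6000 and TOKEN_LIMIT_PER_FRAGMENT = 2500, count = ceil(6000/2500) = 3 and token_limit_smooth = 6000//3 + 3 = 2003, so the text is cut into three near-equal fragments instead of 2500 + 2500 + 1000. A standalone sketch of the same computation:

```python
import math

def smooth_token_limit(raw_token_num: int, hard_limit: int) -> int:
    # Same arithmetic as translate_pdf above: spread tokens evenly across the
    # minimum number of fragments instead of filling all but the last one.
    count = int(math.ceil(raw_token_num / hard_limit))
    return raw_token_num // count + count

print(smooth_token_limit(6000, 2500))   # -> 2003
```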
@@ -130,7 +130,7 @@ def get_name(_url_):


 @CatchException
-def 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):

     CRAZY_FUNCTION_INFO = "下载arxiv论文并翻译摘要,函数插件作者[binary-husky]。正在提取摘要并下载PDF文档……"
     import glob
@@ -3,28 +3,47 @@ from crazy_functions.multi_stage.multi_stage_utils import GptAcademicGameBaseSta
 from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
 from request_llms.bridge_all import predict_no_ui_long_connection
 from crazy_functions.game_fns.game_utils import get_code_block, is_same_thing
+import random

-@CatchException
-def 随机小游戏(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
-    from crazy_functions.game_fns.game_interactive_story import MiniGame_ResumeStory
-    # 清空历史
-    history = []
-    # 选择游戏
-    cls = MiniGame_ResumeStory
-    # 如果之前已经初始化了游戏实例,则继续该实例;否则重新初始化
-    state = cls.sync_state(chatbot,
-                           llm_kwargs,
-                           cls,
-                           plugin_name='MiniGame_ResumeStory',
-                           callback_fn='crazy_functions.互动小游戏->随机小游戏',
-                           lock_plugin=True
-                           )
-    yield from state.continue_game(prompt, chatbot, history)
+class MiniGame_ASCII_Art(GptAcademicGameBaseState):
+    def step(self, prompt, chatbot, history):
+        if self.step_cnt == 0:
+            chatbot.append(["我画你猜(动物)", "请稍等..."])
+        else:
+            if prompt.strip() == 'exit':
+                self.delete_game = True
+                yield from update_ui_lastest_msg(lastmsg=f"谜底是{self.obj},游戏结束。", chatbot=chatbot, history=history, delay=0.)
+                return
+            chatbot.append([prompt, ""])
+        yield from update_ui(chatbot=chatbot, history=history)
+
+        if self.step_cnt == 0:
+            self.lock_plugin(chatbot)
+            self.cur_task = 'draw'
+
+        if self.cur_task == 'draw':
+            avail_obj = ["狗","猫","鸟","鱼","老鼠","蛇"]
+            self.obj = random.choice(avail_obj)
+            inputs = "I want to play a game called Guess the ASCII art. You can draw the ASCII art and I will try to guess it. " + f"This time you draw a {self.obj}. Note that you must not indicate what you have draw in the text, and you should only produce the ASCII art wrapped by ```. "
+            raw_res = predict_no_ui_long_connection(inputs=inputs, llm_kwargs=self.llm_kwargs, history=[], sys_prompt="")
+            self.cur_task = 'identify user guess'
+            res = get_code_block(raw_res)
+            history += ['', f'the answer is {self.obj}', inputs, res]
+            yield from update_ui_lastest_msg(lastmsg=res, chatbot=chatbot, history=history, delay=0.)
+
+        elif self.cur_task == 'identify user guess':
+            if is_same_thing(self.obj, prompt, self.llm_kwargs):
+                self.delete_game = True
+                yield from update_ui_lastest_msg(lastmsg="你猜对了!", chatbot=chatbot, history=history, delay=0.)
+            else:
+                self.cur_task = 'identify user guess'
+                yield from update_ui_lastest_msg(lastmsg="猜错了,再试试,输入“exit”获取答案。", chatbot=chatbot, history=history, delay=0.)


 @CatchException
-def 随机小游戏1(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
-    from crazy_functions.game_fns.game_ascii_art import MiniGame_ASCII_Art
+def 随机小游戏(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     # 清空历史
     history = []
     # 选择游戏
@@ -34,7 +53,7 @@ def 随机小游戏1(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system
                            llm_kwargs,
                            cls,
                            plugin_name='MiniGame_ASCII_Art',
-                           callback_fn='crazy_functions.互动小游戏->随机小游戏1',
+                           callback_fn='crazy_functions.互动小游戏->随机小游戏',
                            lock_plugin=True
                            )
     yield from state.continue_game(prompt, chatbot, history)
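The head side inlines MiniGame_ASCII_Art as a small state machine: cur_task flips from 'draw' to 'identify user guess', and delete_game releases the plugin lock. A minimal, illustrative reduction of that pattern (TwoPhaseGame is a made-up class, not the project's API):

```python
class TwoPhaseGame:
    # Illustrative reduction of the cur_task pattern above: phase 1 produces a
    # puzzle, every later call judges the user's guess until it matches.
    def __init__(self, answer: str):
        self.cur_task = "draw"
        self.answer = answer
        self.done = False                        # analogous to delete_game

    def step(self, prompt: str) -> str:
        if self.cur_task == "draw":
            self.cur_task = "identify user guess"
            return "Here is the puzzle, take a guess."
        if prompt.strip() in ("exit", self.answer):
            self.done = True
            return f"The answer was {self.answer}."
        return "Wrong, try again (type 'exit' for the answer)."

game = TwoPhaseGame("cat")
print(game.step(""))
print(game.step("dog"))
print(game.step("cat"))
```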
@@ -3,7 +3,7 @@ from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive


 @CatchException
-def 交互功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 交互功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     """
     txt             输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs      gpt模型参数, 如温度和top_p等, 一般原样传递下去就行
@@ -11,7 +11,7 @@ def 交互功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
     chatbot         聊天显示框的句柄,用于显示给用户
     history         聊天历史,前情提要
     system_prompt   给gpt的静默提醒
-    user_request    当前用户的请求信息(IP地址等)
+    web_port        当前软件运行的端口号
     """
     history = []    # 清空历史,以免输入溢出
     chatbot.append(("这是什么功能?", "交互功能函数模板。在执行完成之后, 可以将自身的状态存储到cookie中, 等待用户的再次调用。"))
@@ -139,7 +139,7 @@ def get_recent_file_prompt_support(chatbot):
     return path

 @CatchException
-def 函数动态生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 函数动态生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     """
     txt             输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs      gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -147,7 +147,7 @@ def 函数动态生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
     chatbot         聊天显示框的句柄,用于显示给用户
     history         聊天历史,前情提要
     system_prompt   给gpt的静默提醒
-    user_request    当前用户的请求信息(IP地址等)
+    web_port        当前软件运行的端口号
     """

     # 清空历史
@@ -4,7 +4,7 @@ from .crazy_utils import input_clipping
 import copy, json

 @CatchException
-def 命令行助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 命令行助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     """
     txt             输入栏用户输入的文本, 例如需要翻译的一段话, 再例如一个包含了待处理文件的路径
     llm_kwargs      gpt模型参数, 如温度和top_p等, 一般原样传递下去就行
@@ -12,7 +12,7 @@ def 命令行助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
     chatbot         聊天显示框的句柄, 用于显示给用户
     history         聊天历史, 前情提要
     system_prompt   给gpt的静默提醒
-    user_request    当前用户的请求信息(IP地址等)
+    web_port        当前软件运行的端口号
     """
     # 清空历史, 以免输入溢出
     history = []
@@ -93,7 +93,7 @@ def edit_image(llm_kwargs, prompt, image_path, resolution="1024x1024", model="da
|
|||||||
|
|
 
 @CatchException
-def 图片生成_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 图片生成_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -101,14 +101,10 @@ def 图片生成_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    user_request 当前用户的请求信息(IP地址等)
+    web_port 当前软件运行的端口号
     """
     history = []    # 清空历史,以免输入溢出
-    if prompt.strip() == "":
-        chatbot.append((prompt, "[Local Message] 图像生成提示为空白,请在“输入区”输入图像生成提示。"))
-        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面 界面更新
-        return
-    chatbot.append(("您正在调用“图像生成”插件。", "[Local Message] 生成图像, 请先把模型切换至gpt-*。如果中文Prompt效果不理想, 请尝试英文Prompt。正在处理中 ....."))
+    chatbot.append(("您正在调用“图像生成”插件。", "[Local Message] 生成图像, 请先把模型切换至gpt-*或者api2d-*。如果中文Prompt效果不理想, 请尝试英文Prompt。正在处理中 ....."))
     yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面 由于请求gpt需要一段时间,我们先及时地做一次界面更新
     if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
     resolution = plugin_kwargs.get("advanced_arg", '1024x1024')
@@ -123,13 +119,9 @@ def 图片生成_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys
 
 
 @CatchException
-def 图片生成_DALLE3(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 图片生成_DALLE3(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     history = []    # 清空历史,以免输入溢出
-    if prompt.strip() == "":
-        chatbot.append((prompt, "[Local Message] 图像生成提示为空白,请在“输入区”输入图像生成提示。"))
-        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面 界面更新
-        return
-    chatbot.append(("您正在调用“图像生成”插件。", "[Local Message] 生成图像, 请先把模型切换至gpt-*。如果中文Prompt效果不理想, 请尝试英文Prompt。正在处理中 ....."))
+    chatbot.append(("您正在调用“图像生成”插件。", "[Local Message] 生成图像, 请先把模型切换至gpt-*或者api2d-*。如果中文Prompt效果不理想, 请尝试英文Prompt。正在处理中 ....."))
     yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面 由于请求gpt需要一段时间,我们先及时地做一次界面更新
     if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
     resolution_arg = plugin_kwargs.get("advanced_arg", '1024x1024-standard-vivid').lower()
@@ -209,7 +201,7 @@ class ImageEditState(GptAcademicState):
         return all([x['value'] is not None for x in self.req])
 
 @CatchException
-def 图片修改_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 图片修改_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     # 尚未完成
     history = []    # 清空历史
     state = ImageEditState.get_state(chatbot, ImageEditState)
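Every hunk in the file above makes the same mechanical change: the last parameter of each plugin entry point is user_request (当前用户的请求信息, IP地址等) on one branch and web_port (当前软件运行的端口号) on the other, while the rest of the signature stays fixed. Below is a minimal sketch of the shared entry-point convention these hunks touch; it assumes only what the docstrings above state, and the plugin name 示例插件 is hypothetical.

    from toolbox import CatchException, update_ui  # the same imports the plugins above rely on

    @CatchException
    def 示例插件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        # txt: user input from the input field; llm_kwargs: model params (temperature, top_p, ...)
        # plugin_kwargs: plugin parameters; chatbot: UI handle; history: prior turns
        # system_prompt: silent system hint; web_port: port of the running app
        # (web_port is renamed user_request, the requester's info, on the other branch)
        history = []  # clear history so earlier turns cannot overflow the model input
        chatbot.append((txt, "[Local Message] 收到。"))
        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI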
@@ -21,7 +21,7 @@ def remove_model_prefix(llm):
 
 
 @CatchException
-def 多智能体终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 多智能体终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -29,7 +29,7 @@ def 多智能体终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    user_request 当前用户的请求信息(IP地址等)
+    web_port 当前软件运行的端口号
     """
     # 检查当前的模型是否符合要求
     supported_llms = [
@@ -37,7 +37,7 @@ def 多智能体终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
         'gpt-3.5-turbo-1106',
         "gpt-4",
         "gpt-4-32k",
-        'gpt-4-turbo-preview',
+        'gpt-4-1106-preview',
         "azure-gpt-3.5-turbo-16k",
         "azure-gpt-3.5-16k",
         "azure-gpt-4",
@@ -51,6 +51,13 @@ def 多智能体终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
     if model_info[llm_kwargs['llm_model']]["endpoint"] is not None: # 如果不是本地模型,加载API_KEY
         llm_kwargs['api_key'] = select_api_key(llm_kwargs['api_key'], llm_kwargs['llm_model'])
 
+    # 检查当前的模型是否符合要求
+    API_URL_REDIRECT = get_conf('API_URL_REDIRECT')
+    if len(API_URL_REDIRECT) > 0:
+        chatbot.append([f"处理任务: {txt}", f"暂不支持中转."])
+        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
+        return
+
     # 尝试导入依赖,如果缺少依赖,则给出安装建议
     try:
         import autogen
@@ -89,7 +96,7 @@ def 多智能体终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
         history = []
         chatbot.append(["正在启动: 多智能体终端", "插件动态生成, 执行开始, 作者 Microsoft & Binary-Husky."])
         yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
-        executor = AutoGenMath(llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
+        executor = AutoGenMath(llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port)
         persistent_class_multi_user_manager.set(persistent_key, executor)
         exit_reason = yield from executor.main_process_ui_control(txt, create_or_resume="create")
 
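The seven added lines guard 多智能体终端 against configured endpoint redirection: get_conf('API_URL_REDIRECT') yields the redirect mapping, and any non-empty mapping aborts the plugin before autogen is even imported. A standalone sketch of the same guard follows; it assumes only that get_conf returns an empty mapping when no redirect (中转) is configured.

    def redirect_guard(txt, chatbot, history, get_conf, update_ui):
        # Refuse to run when any API URL redirection is configured,
        # mirroring the lines added in the hunk above.
        API_URL_REDIRECT = get_conf('API_URL_REDIRECT')
        if len(API_URL_REDIRECT) > 0:
            chatbot.append([f"处理任务: {txt}", f"暂不支持中转."])
            yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
            return True   # tell the caller to bail out
        return False

A plugin generator would consume it as aborted = yield from redirect_guard(...), returning immediately when aborted is true.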
@@ -69,7 +69,7 @@ def read_file_to_chat(chatbot, history, file_name):
     return chatbot, history
 
 @CatchException
-def 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -77,7 +77,7 @@ def 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    user_request 当前用户的请求信息(IP地址等)
+    web_port 当前软件运行的端口号
     """
 
     chatbot.append(("保存当前对话",
@@ -91,7 +91,7 @@ def hide_cwd(str):
     return str.replace(current_path, replace_path)
 
 @CatchException
-def 载入对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 载入对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -99,7 +99,7 @@ def 载入对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    user_request 当前用户的请求信息(IP地址等)
+    web_port 当前软件运行的端口号
     """
     from .crazy_utils import get_files_from_everything
     success, file_manifest, _ = get_files_from_everything(txt, type='.html')
@@ -126,7 +126,7 @@ def 载入对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
         return
 
 @CatchException
-def 删除所有本地对话历史记录(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 删除所有本地对话历史记录(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -134,7 +134,7 @@ def 删除所有本地对话历史记录(txt, llm_kwargs, plugin_kwargs, chatbot
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    user_request 当前用户的请求信息(IP地址等)
+    web_port 当前软件运行的端口号
     """
 
     import glob, os
@@ -29,12 +29,17 @@ def 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot
         except:
             raise RuntimeError('请先将.doc文档转换为.docx文档。')
 
+        print(file_content)
         # private_upload里面的文件名在解压zip后容易出现乱码(rar和7z格式正常),故可以只分析文章内容,不输入文件名
-        from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
+        from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
         from request_llms.bridge_all import model_info
         max_token = model_info[llm_kwargs['llm_model']]['max_token']
         TOKEN_LIMIT_PER_FRAGMENT = max_token * 3 // 4
-        paper_fragments = breakdown_text_to_satisfy_token_limit(txt=file_content, limit=TOKEN_LIMIT_PER_FRAGMENT, llm_model=llm_kwargs['llm_model'])
+        paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
+            txt=file_content,
+            get_token_fn=model_info[llm_kwargs['llm_model']]['token_cnt'],
+            limit=TOKEN_LIMIT_PER_FRAGMENT
+        )
         this_paper_history = []
         for i, paper_frag in enumerate(paper_fragments):
             i_say = f'请对下面的文章片段用中文做概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{paper_frag}```'
@@ -79,7 +84,7 @@ def 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot
 
 
 @CatchException
-def 总结word文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 总结word文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     import glob, os
 
     # 基本信息:功能、贡献者
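Both sides of the 解析docx hunk split a long document into model-sized fragments; they differ only in how the token counter is supplied. The newer helper resolves the tokenizer from the model name internally, while the older helper takes an explicit counting callback. A sketch of the two call shapes, with file_content and the limit as placeholders:

    # Newer API: the helper resolves the tokenizer from llm_model itself.
    from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
    fragments = breakdown_text_to_satisfy_token_limit(
        txt=file_content, limit=2500, llm_model="gpt-3.5-turbo")

    # Older API (the side this branch keeps): the caller supplies get_token_fn.
    from crazy_functions.crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
    from request_llms.bridge_all import model_info
    enc = model_info["gpt-3.5-turbo"]['tokenizer']
    def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
    fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
        txt=file_content, get_token_fn=get_token_num, limit=2500)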
@@ -28,8 +28,8 @@ class PaperFileGroup():
                 self.sp_file_index.append(index)
                 self.sp_file_tag.append(self.file_paths[index])
             else:
-                from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
-                segments = breakdown_text_to_satisfy_token_limit(file_content, max_token_limit)
+                from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
+                segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit)
                 for j, segment in enumerate(segments):
                     self.sp_file_contents.append(segment)
                     self.sp_file_index.append(index)
@@ -153,7 +153,7 @@ def get_files_from_everything(txt, preference=''):
 
 
 @CatchException
-def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     # 基本信息:功能、贡献者
     chatbot.append([
         "函数插件功能?",
@@ -193,7 +193,7 @@ def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
 
 
 @CatchException
-def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     # 基本信息:功能、贡献者
     chatbot.append([
         "函数插件功能?",
@@ -226,7 +226,7 @@ def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
 
 
 @CatchException
-def Markdown翻译指定语言(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def Markdown翻译指定语言(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     # 基本信息:功能、贡献者
     chatbot.append([
         "函数插件功能?",
@@ -20,9 +20,14 @@ def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot,
 
     TOKEN_LIMIT_PER_FRAGMENT = 2500
 
-    from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
-    paper_fragments = breakdown_text_to_satisfy_token_limit(txt=file_content, limit=TOKEN_LIMIT_PER_FRAGMENT, llm_model=llm_kwargs['llm_model'])
-    page_one_fragments = breakdown_text_to_satisfy_token_limit(txt=str(page_one), limit=TOKEN_LIMIT_PER_FRAGMENT//4, llm_model=llm_kwargs['llm_model'])
+    from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
+    from request_llms.bridge_all import model_info
+    enc = model_info["gpt-3.5-turbo"]['tokenizer']
+    def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
+    paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
+        txt=file_content, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT)
+    page_one_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
+        txt=str(page_one), get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT//4)
     # 为了更好的效果,我们剥离Introduction之后的部分(如果有)
     paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]
 
@@ -101,7 +106,7 @@ do not have too much repetitive information, numerical values using the original
 
 
 @CatchException
-def 批量总结PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 批量总结PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     import glob, os
 
     # 基本信息:功能、贡献者
@@ -124,7 +124,7 @@ def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbo
 
 
 @CatchException
-def 批量总结PDF文档pdfminer(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 批量总结PDF文档pdfminer(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     history = []    # 清空历史,以免输入溢出
     import glob, os
 
@@ -48,7 +48,7 @@ def markdown_to_dict(article_content):
 
 
 @CatchException
-def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
 
     disable_auto_promotion(chatbot)
     # 基本信息:功能、贡献者
@@ -10,7 +10,7 @@ import os
 
 
 @CatchException
-def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
 
     disable_auto_promotion(chatbot)
     # 基本信息:功能、贡献者
@@ -91,9 +91,14 @@ def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot,
     page_one = str(page_one).encode('utf-8', 'ignore').decode()   # avoid reading non-utf8 chars
 
     # 递归地切割PDF文件
-    from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
-    paper_fragments = breakdown_text_to_satisfy_token_limit(txt=file_content, limit=TOKEN_LIMIT_PER_FRAGMENT, llm_model=llm_kwargs['llm_model'])
-    page_one_fragments = breakdown_text_to_satisfy_token_limit(txt=page_one, limit=TOKEN_LIMIT_PER_FRAGMENT//4, llm_model=llm_kwargs['llm_model'])
+    from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
+    from request_llms.bridge_all import model_info
+    enc = model_info["gpt-3.5-turbo"]['tokenizer']
+    def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
+    paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
+        txt=file_content, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT)
+    page_one_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
+        txt=page_one, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT//4)
 
     # 为了更好的效果,我们剥离Introduction之后的部分(如果有)
     paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]
@@ -1,7 +1,6 @@
-import os
-from toolbox import CatchException, update_ui, gen_time_str, promote_file_to_downloadzone
-from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-from crazy_functions.crazy_utils import input_clipping
+from toolbox import CatchException, update_ui, gen_time_str
+from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
+from .crazy_utils import input_clipping
 
 def inspect_dependency(chatbot, history):
     # 尝试导入依赖,如果缺少依赖,则给出安装建议
@@ -28,10 +27,9 @@ def eval_manim(code):
     class_name = get_class_name(code)
 
     try:
-        time_str = gen_time_str()
         subprocess.check_output([sys.executable, '-c', f"from gpt_log.MyAnimation import {class_name}; {class_name}().render()"])
-        shutil.move(f'media/videos/1080p60/{class_name}.mp4', f'gpt_log/{class_name}-{time_str}.mp4')
-        return f'gpt_log/{time_str}.mp4'
+        shutil.move('media/videos/1080p60/{class_name}.mp4', f'gpt_log/{class_name}-{gen_time_str()}.mp4')
+        return f'gpt_log/{gen_time_str()}.mp4'
     except subprocess.CalledProcessError as e:
         output = e.output.decode()
         print(f"Command returned non-zero exit status {e.returncode}: {output}.")
@@ -50,7 +48,7 @@ def get_code_block(reply):
     return matches[0].strip('python') # code block
 
 @CatchException
-def 动画生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 动画生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
    llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -58,7 +56,7 @@ def 动画生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    user_request 当前用户的请求信息(IP地址等)
+    web_port 当前软件运行的端口号
     """
     # 清空历史,以免输入溢出
     history = []
@@ -96,8 +94,6 @@ def 动画生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
         res = eval_manim(code)
 
         chatbot.append(("生成的视频文件路径", res))
-        if os.path.exists(res):
-            promote_file_to_downloadzone(res, chatbot=chatbot)
     yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面 # 界面更新
 
 # 在这里放一些网上搜集的demo,辅助gpt生成代码
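One detail of the eval_manim hunk is worth noting. The side that keeps time_str = gen_time_str() computes the timestamp once and reuses it for both the shutil.move target and the return value; the other side calls gen_time_str() twice, so if the clock ticks between the two calls the function returns a path that was never created, and its move source also lost the f prefix, leaving {class_name} in the path literally. On both sides the returned path drops the {class_name}- prefix that the move target uses. A sketch of the single-evaluation pattern with the paths kept consistent; move_render_output is a hypothetical helper distilled from the hunk:

    import shutil
    from toolbox import gen_time_str

    def move_render_output(class_name: str) -> str:
        time_str = gen_time_str()  # evaluate exactly once
        dst = f'gpt_log/{class_name}-{time_str}.mp4'
        shutil.move(f'media/videos/1080p60/{class_name}.mp4', dst)  # real f-string
        return dst  # return the same path the file was moved to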
@@ -18,9 +18,14 @@ def 解析PDF(file_name, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
 
     TOKEN_LIMIT_PER_FRAGMENT = 2500
 
-    from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
-    paper_fragments = breakdown_text_to_satisfy_token_limit(txt=file_content, limit=TOKEN_LIMIT_PER_FRAGMENT, llm_model=llm_kwargs['llm_model'])
-    page_one_fragments = breakdown_text_to_satisfy_token_limit(txt=str(page_one), limit=TOKEN_LIMIT_PER_FRAGMENT//4, llm_model=llm_kwargs['llm_model'])
+    from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
+    from request_llms.bridge_all import model_info
+    enc = model_info["gpt-3.5-turbo"]['tokenizer']
+    def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
+    paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
+        txt=file_content, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT)
+    page_one_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
+        txt=str(page_one), get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT//4)
     # 为了更好的效果,我们剥离Introduction之后的部分(如果有)
     paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]
 
@@ -40,7 +45,7 @@ def 解析PDF(file_name, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
     for i in range(n_fragment):
         NUM_OF_WORD = MAX_WORD_TOTAL // n_fragment
         i_say = f"Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {paper_fragments[i]}"
-        i_say_show_user = f"[{i+1}/{n_fragment}] Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {paper_fragments[i][:200]} ...."
+        i_say_show_user = f"[{i+1}/{n_fragment}] Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {paper_fragments[i][:200]}"
         gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user,  # i_say=真正给chatgpt的提问, i_say_show_user=给用户看的提问
                                                                            llm_kwargs, chatbot,
                                                                            history=["The main idea of the previous section is?", last_iteration_result], # 迭代上一次的结果
@@ -63,7 +68,7 @@ def 解析PDF(file_name, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
 
 
 @CatchException
-def 理解PDF文档内容标准文件输入(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 理解PDF文档内容标准文件输入(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     import glob, os
 
     # 基本信息:功能、贡献者
@@ -36,7 +36,7 @@ def 生成函数注释(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
 
 
 @CatchException
-def 批量生成函数注释(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 批量生成函数注释(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     history = []    # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
@@ -1,302 +0,0 @@
-from toolbox import CatchException, update_ui, report_exception
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-from .crazy_utils import read_and_clean_pdf_text
-import datetime
-
-#以下是每类图表的PROMPT
-SELECT_PROMPT = """
-“{subject}”
-=============
-以上是从文章中提取的摘要,将会使用这些摘要绘制图表。请你选择一个合适的图表类型:
-1 流程图
-2 序列图
-3 类图
-4 饼图
-5 甘特图
-6 状态图
-7 实体关系图
-8 象限提示图
-不需要解释原因,仅需要输出单个不带任何标点符号的数字。
-"""
-#没有思维导图!!!测试发现模型始终会优先选择思维导图
-#流程图
-PROMPT_1 = """
-请你给出围绕“{subject}”的逻辑关系图,使用mermaid语法,mermaid语法举例:
-```mermaid
-graph TD
-    P(编程) --> L1(Python)
-    P(编程) --> L2(C)
-    P(编程) --> L3(C++)
-    P(编程) --> L4(Javascipt)
-    P(编程) --> L5(PHP)
-```
-"""
-#序列图
-PROMPT_2 = """
-请你给出围绕“{subject}”的序列图,使用mermaid语法,mermaid语法举例:
-```mermaid
-sequenceDiagram
-    participant A as 用户
-    participant B as 系统
-    A->>B: 登录请求
-    B->>A: 登录成功
-    A->>B: 获取数据
-    B->>A: 返回数据
-```
-"""
-#类图
-PROMPT_3 = """
-请你给出围绕“{subject}”的类图,使用mermaid语法,mermaid语法举例:
-```mermaid
-classDiagram
-    Class01 <|-- AveryLongClass : Cool
-    Class03 *-- Class04
-    Class05 o-- Class06
-    Class07 .. Class08
-    Class09 --> C2 : Where am i?
-    Class09 --* C3
-    Class09 --|> Class07
-    Class07 : equals()
-    Class07 : Object[] elementData
-    Class01 : size()
-    Class01 : int chimp
-    Class01 : int gorilla
-    Class08 <--> C2: Cool label
-```
-"""
-#饼图
-PROMPT_4 = """
-请你给出围绕“{subject}”的饼图,使用mermaid语法,mermaid语法举例:
-```mermaid
-pie title Pets adopted by volunteers
-    "狗" : 386
-    "猫" : 85
-    "兔子" : 15
-```
-"""
-#甘特图
-PROMPT_5 = """
-请你给出围绕“{subject}”的甘特图,使用mermaid语法,mermaid语法举例:
-```mermaid
-gantt
-    title 项目开发流程
-    dateFormat  YYYY-MM-DD
-    section 设计
-    需求分析 :done, des1, 2024-01-06,2024-01-08
-    原型设计 :active, des2, 2024-01-09, 3d
-    UI设计 : des3, after des2, 5d
-    section 开发
-    前端开发 :2024-01-20, 10d
-    后端开发 :2024-01-20, 10d
-```
-"""
-#状态图
-PROMPT_6 = """
-请你给出围绕“{subject}”的状态图,使用mermaid语法,mermaid语法举例:
-```mermaid
-stateDiagram-v2
-    [*] --> Still
-    Still --> [*]
-    Still --> Moving
-    Moving --> Still
-    Moving --> Crash
-    Crash --> [*]
-```
-"""
-#实体关系图
-PROMPT_7 = """
-请你给出围绕“{subject}”的实体关系图,使用mermaid语法,mermaid语法举例:
-```mermaid
-erDiagram
-    CUSTOMER ||--o{ ORDER : places
-    ORDER ||--|{ LINE-ITEM : contains
-    CUSTOMER {
-        string name
-        string id
-    }
-    ORDER {
-        string orderNumber
-        date orderDate
-        string customerID
-    }
-    LINE-ITEM {
-        number quantity
-        string productID
-    }
-```
-"""
-#象限提示图
-PROMPT_8 = """
-请你给出围绕“{subject}”的象限图,使用mermaid语法,mermaid语法举例:
-```mermaid
-graph LR
-    A[Hard skill] --> B(Programming)
-    A[Hard skill] --> C(Design)
-    D[Soft skill] --> E(Coordination)
-    D[Soft skill] --> F(Communication)
-```
-"""
-#思维导图
-PROMPT_9 = """
-{subject}
-==========
-请给出上方内容的思维导图,充分考虑其之间的逻辑,使用mermaid语法,mermaid语法举例:
-```mermaid
-mindmap
-  root((mindmap))
-    Origins
-      Long history
-      ::icon(fa fa-book)
-      Popularisation
-        British popular psychology author Tony Buzan
-    Research
-      On effectiveness<br/>and features
-      On Automatic creation
-        Uses
-          Creative techniques
-          Strategic planning
-          Argument mapping
-    Tools
-      Pen and paper
-      Mermaid
-```
-"""
-
-def 解析历史输入(history,llm_kwargs,chatbot,plugin_kwargs):
-    ############################## <第 0 步,切割输入> ##################################
-    # 借用PDF切割中的函数对文本进行切割
-    TOKEN_LIMIT_PER_FRAGMENT = 2500
-    txt = str(history).encode('utf-8', 'ignore').decode()   # avoid reading non-utf8 chars
-    from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
-    txt = breakdown_text_to_satisfy_token_limit(txt=txt, limit=TOKEN_LIMIT_PER_FRAGMENT, llm_model=llm_kwargs['llm_model'])
-    ############################## <第 1 步,迭代地历遍整个文章,提取精炼信息> ##################################
-    i_say_show_user = f'首先你从历史记录或文件中提取摘要。'; gpt_say = "[Local Message] 收到。" # 用户提示
-    chatbot.append([i_say_show_user, gpt_say]); yield from update_ui(chatbot=chatbot, history=history) # 更新UI
-    results = []
-    MAX_WORD_TOTAL = 4096
-    n_txt = len(txt)
-    last_iteration_result = "从以下文本中提取摘要。"
-    if n_txt >= 20: print('文章极长,不能达到预期效果')
-    for i in range(n_txt):
-        NUM_OF_WORD = MAX_WORD_TOTAL // n_txt
-        i_say = f"Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {txt[i]}"
-        i_say_show_user = f"[{i+1}/{n_txt}] Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {txt[i][:200]} ...."
-        gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user,  # i_say=真正给chatgpt的提问, i_say_show_user=给用户看的提问
-                                                                           llm_kwargs, chatbot,
-                                                                           history=["The main content of the previous section is?", last_iteration_result], # 迭代上一次的结果
-                                                                           sys_prompt="Extracts the main content from the text section where it is located for graphing purposes, answer me with Chinese."  # 提示
-                                                                           )
-        results.append(gpt_say)
-        last_iteration_result = gpt_say
-    ############################## <第 2 步,根据整理的摘要选择图表类型> ##################################
-    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
-    gpt_say = plugin_kwargs.get("advanced_arg", "")             #将图表类型参数赋值为插件参数
-    results_txt = '\n'.join(results)                            #合并摘要
-    if gpt_say not in ['1','2','3','4','5','6','7','8','9']:    #如插件参数不正确则使用对话模型判断
-        i_say_show_user = f'接下来将判断适合的图表类型,如连续3次判断失败将会使用流程图进行绘制'; gpt_say = "[Local Message] 收到。" # 用户提示
-        chatbot.append([i_say_show_user, gpt_say]); yield from update_ui(chatbot=chatbot, history=[]) # 更新UI
-        i_say = SELECT_PROMPT.format(subject=results_txt)
-        i_say_show_user = f'请判断适合使用的流程图类型,其中数字对应关系为:1-流程图,2-序列图,3-类图,4-饼图,5-甘特图,6-状态图,7-实体关系图,8-象限提示图。由于不管提供文本是什么,模型大概率认为"思维导图"最合适,因此思维导图仅能通过参数调用。'
-        for i in range(3):
-            gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
-                inputs=i_say,
-                inputs_show_user=i_say_show_user,
-                llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
-                sys_prompt=""
-            )
-            if gpt_say in ['1','2','3','4','5','6','7','8','9']:    #判断返回是否正确
-                break
-        if gpt_say not in ['1','2','3','4','5','6','7','8','9']:
-            gpt_say = '1'
-    ############################## <第 3 步,根据选择的图表类型绘制图表> ##################################
-    if gpt_say == '1':
-        i_say = PROMPT_1.format(subject=results_txt)
-    elif gpt_say == '2':
-        i_say = PROMPT_2.format(subject=results_txt)
-    elif gpt_say == '3':
-        i_say = PROMPT_3.format(subject=results_txt)
-    elif gpt_say == '4':
-        i_say = PROMPT_4.format(subject=results_txt)
-    elif gpt_say == '5':
-        i_say = PROMPT_5.format(subject=results_txt)
-    elif gpt_say == '6':
-        i_say = PROMPT_6.format(subject=results_txt)
-    elif gpt_say == '7':
-        i_say = PROMPT_7.replace("{subject}", results_txt)  #由于实体关系图用到了{}符号
-    elif gpt_say == '8':
-        i_say = PROMPT_8.format(subject=results_txt)
-    elif gpt_say == '9':
-        i_say = PROMPT_9.format(subject=results_txt)
-    i_say_show_user = f'请根据判断结果绘制相应的图表。如需绘制思维导图请使用参数调用,同时过大的图表可能需要复制到在线编辑器中进行渲染。'
-    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
-        inputs=i_say,
-        inputs_show_user=i_say_show_user,
-        llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
-        sys_prompt="你精通使用mermaid语法来绘制图表,首先确保语法正确,其次避免在mermaid语法中使用不允许的字符,此外也应当分考虑图表的可读性。"
-    )
-    history.append(gpt_say)
-    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
-
-def 输入区文件处理(txt):
-    if txt == "": return False, txt
-    success = True
-    import glob
-    from .crazy_utils import get_files_from_everything
-    file_pdf,pdf_manifest,folder_pdf = get_files_from_everything(txt, '.pdf')
-    file_md,md_manifest,folder_md = get_files_from_everything(txt, '.md')
-    if len(pdf_manifest) == 0 and len(md_manifest) == 0:
-        return False, txt   #如输入区内容不是文件则直接返回输入区内容
-
-    final_result = ""
-    if file_pdf:
-        for index, fp in enumerate(pdf_manifest):
-            file_content, page_one = read_and_clean_pdf_text(fp) # (尝试)按照章节切割PDF
-            file_content = file_content.encode('utf-8', 'ignore').decode()   # avoid reading non-utf8 chars
-            final_result += "\n" + file_content
-    if file_md:
-        for index, fp in enumerate(md_manifest):
-            with open(fp, 'r', encoding='utf-8', errors='replace') as f:
-                file_content = f.read()
-            file_content = file_content.encode('utf-8', 'ignore').decode()
-            final_result += "\n" + file_content
-    return True, final_result
-
-@CatchException
-def 生成多种Mermaid图表(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
-    """
-    txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
-    llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
-    plugin_kwargs 插件模型的参数,用于灵活调整复杂功能的各种参数
-    chatbot 聊天显示框的句柄,用于显示给用户
-    history 聊天历史,前情提要
-    system_prompt 给gpt的静默提醒
-    web_port 当前软件运行的端口号
-    """
-    import os
-
-    # 基本信息:功能、贡献者
-    chatbot.append([
-        "函数插件功能?",
-        "根据当前聊天历史或文件中(文件内容优先)绘制多种mermaid图表,将会由对话模型首先判断适合的图表类型,随后绘制图表。\
-        \n您也可以使用插件参数指定绘制的图表类型,函数插件贡献者: Menghuan1918"])
-    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-
-    # 尝试导入依赖,如果缺少依赖,则给出安装建议
-    try:
-        import fitz
-    except:
-        report_exception(chatbot, history,
-            a = f"解析项目: {txt}",
-            b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。")
-        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-        return
-
-    if os.path.exists(txt):     #如输入区无内容则直接解析历史记录
-        file_exist, txt = 输入区文件处理(txt)
-    else:
-        file_exist = False
-
-    if file_exist : history = []    #如输入区内容为文件则清空历史记录
-    history.append(txt)     #将解析后的txt传递加入到历史中
-
-    yield from 解析历史输入(history,llm_kwargs,chatbot,plugin_kwargs)
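The deleted plugin selects one of nine prompt templates with an if/elif chain keyed on a single digit returned by the model (or passed as the plugin argument). The same mapping can be written as a dict dispatch; the sketch below is illustrative rather than code from the repository, and it keeps the str.replace special case because PROMPT_7's erDiagram example contains literal braces that would break str.format:

    PROMPTS = {'1': PROMPT_1, '2': PROMPT_2, '3': PROMPT_3, '4': PROMPT_4,
               '5': PROMPT_5, '6': PROMPT_6, '8': PROMPT_8, '9': PROMPT_9}

    def build_prompt(choice, results_txt):
        if choice == '7':
            # erDiagram attribute blocks use literal {}, so format() would
            # raise KeyError; plain replacement is required here.
            return PROMPT_7.replace("{subject}", results_txt)
        return PROMPTS.get(choice, PROMPT_1).format(subject=results_txt)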
@@ -13,7 +13,7 @@ install_msg ="""
 """
 
 @CatchException
-def 知识库文件注入(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 知识库文件注入(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数, 如温度和top_p等, 一般原样传递下去就行
@@ -21,7 +21,7 @@ def 知识库文件注入(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    user_request 当前用户的请求信息(IP地址等)
+    web_port 当前软件运行的端口号
     """
     history = []    # 清空历史,以免输入溢出
 
@@ -84,7 +84,7 @@ def 知识库文件注入(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
 
 @CatchException
-def 读取知识库作答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request=-1):
+def 读取知识库作答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port=-1):
     # resolve deps
     try:
         # from zh_langchain import construct_vector_store
@@ -55,7 +55,7 @@ def scrape_text(url, proxies) -> str:
     return text
 
 @CatchException
-def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -63,7 +63,7 @@ def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    user_request 当前用户的请求信息(IP地址等)
+    web_port 当前软件运行的端口号
     """
     history = []    # 清空历史,以免输入溢出
     chatbot.append((f"请结合互联网信息回答以下问题:{txt}",
@@ -55,7 +55,7 @@ def scrape_text(url, proxies) -> str:
     return text
 
 @CatchException
-def 连接bing搜索回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 连接bing搜索回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -63,7 +63,7 @@ def 连接bing搜索回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, histor
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    user_request 当前用户的请求信息(IP地址等)
+    web_port 当前软件运行的端口号
     """
     history = []    # 清空历史,以免输入溢出
     chatbot.append((f"请结合互联网信息回答以下问题:{txt}",
@@ -104,7 +104,7 @@ def analyze_intention_with_simple_rules(txt):
 
 
 @CatchException
-def 虚空终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 虚空终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     disable_auto_promotion(chatbot=chatbot)
     # 获取当前虚空终端状态
     state = VoidTerminalState.get_state(chatbot)
@@ -121,7 +121,7 @@ def 虚空终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
             state.set_state(chatbot=chatbot, key='has_provided_explaination', value=True)
             state.unlock_plugin(chatbot=chatbot)
             yield from update_ui(chatbot=chatbot, history=history)
-            yield from 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
+            yield from 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port)
             return
         else:
             # 如果意图模糊,提示
@@ -133,7 +133,7 @@ def 虚空终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
 
 
 
-def 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     history = []
     chatbot.append(("虚空终端状态: ", f"正在执行任务: {txt}"))
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
@@ -12,6 +12,13 @@ class PaperFileGroup():
         self.sp_file_index = []
         self.sp_file_tag = []
 
+        # count_token
+        from request_llms.bridge_all import model_info
+        enc = model_info["gpt-3.5-turbo"]['tokenizer']
+        def get_token_num(txt): return len(
+            enc.encode(txt, disallowed_special=()))
+        self.get_token_num = get_token_num
+
     def run_file_split(self, max_token_limit=1900):
         """
         将长文本分离开来
@@ -22,8 +29,9 @@ class PaperFileGroup():
                 self.sp_file_index.append(index)
                 self.sp_file_tag.append(self.file_paths[index])
             else:
-                from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
-                segments = breakdown_text_to_satisfy_token_limit(file_content, max_token_limit)
+                from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
+                segments = breakdown_txt_to_satisfy_token_limit_for_pdf(
+                    file_content, self.get_token_num, max_token_limit)
                 for j, segment in enumerate(segments):
                     self.sp_file_contents.append(segment)
                     self.sp_file_index.append(index)
@@ -109,7 +117,7 @@ def ipynb解释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbo
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
 
 @CatchException
-def 解析ipynb文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 解析ipynb文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     chatbot.append([
         "函数插件功能?",
         "对IPynb文件进行解析。Contributor: codycjy."])
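The constructor block added to PaperFileGroup binds a token counter to the instance: it borrows the gpt-3.5-turbo tokenizer from the model registry and wraps it in a closure stored as self.get_token_num. The same closure can be reproduced standalone with tiktoken; treating the registry entry as a tiktoken encoder is an assumption here, though the encode(..., disallowed_special=()) call matches tiktoken's signature:

    import tiktoken  # assumed dependency; the registry tokenizer exposes the same interface

    enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

    def get_token_num(txt: str) -> int:
        # disallowed_special=() makes special-token text count as ordinary
        # text instead of raising an error
        return len(enc.encode(txt, disallowed_special=()))

    assert get_token_num("hello world") > 0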
@@ -83,8 +83,7 @@ def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
                                                history=this_iteration_history_feed,   # 迭代之前的分析
                                                sys_prompt="你是一个程序架构分析师,正在分析一个项目的源代码。" + sys_prompt_additional)
 
-        diagram_code = make_diagram(this_iteration_files, result, this_iteration_history_feed)
-        summary = "请用一句话概括这些文件的整体功能。\n\n" + diagram_code
+        summary = "请用一句话概括这些文件的整体功能"
        summary_result = yield from request_gpt_model_in_new_thread_with_ui_alive(
             inputs=summary,
             inputs_show_user=summary,
@@ -105,12 +104,9 @@ def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
     chatbot.append(("完成了吗?", res))
     yield from update_ui(chatbot=chatbot, history=history_to_return) # 刷新界面
 
-def make_diagram(this_iteration_files, result, this_iteration_history_feed):
-    from crazy_functions.diagram_fns.file_tree import build_file_tree_mermaid_diagram
-    return build_file_tree_mermaid_diagram(this_iteration_history_feed[0::2], this_iteration_history_feed[1::2], "项目示意图")
 
 @CatchException
-def 解析项目本身(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 解析项目本身(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     history = []    # 清空历史,以免输入溢出
     import glob
     file_manifest = [f for f in glob.glob('./*.py')] + \
@@ -123,7 +119,7 @@ def 解析项目本身(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
     yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
 
 @CatchException
-def 解析一个Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 解析一个Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     history = []    # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
@@ -141,7 +137,7 @@ def 解析一个Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
     yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
 
 @CatchException
-def 解析一个Matlab项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 解析一个Matlab项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     history = []    # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
@@ -159,7 +155,7 @@ def 解析一个Matlab项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
     yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
 
 @CatchException
-def 解析一个C项目的头文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 解析一个C项目的头文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     history = []    # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
@@ -179,7 +175,7 @@ def 解析一个C项目的头文件(txt, llm_kwargs, plugin_kwargs, chatbot, his
     yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
 
 @CatchException
-def 解析一个C项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 解析一个C项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     history = []    # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
@@ -201,7 +197,7 @@ def 解析一个C项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system
 
 
 @CatchException
-def 解析一个Java项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 解析一个Java项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     history = []    # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
@@ -223,7 +219,7 @@ def 解析一个Java项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys
 
 
 @CatchException
-def 解析一个前端项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 解析一个前端项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     history = []    # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
@@ -252,7 +248,7 @@ def 解析一个前端项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
 
 
 @CatchException
-def 解析一个Golang项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 解析一个Golang项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     history = []    # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
@@ -273,7 +269,7 @@ def 解析一个Golang项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
     yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
 
 @CatchException
-def 解析一个Rust项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 解析一个Rust项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     history = []    # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
@@ -293,7 +289,7 @@ def 解析一个Rust项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys
     yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
 
 @CatchException
-def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     history = []    # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
@@ -315,7 +311,7 @@ def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
 
 
 @CatchException
-def 解析一个CSharp项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 解析一个CSharp项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     history = []    # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
@@ -335,7 +331,7 @@ def 解析一个CSharp项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
 
 
 @CatchException
-def 解析任意code项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 解析任意code项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     txt_pattern = plugin_kwargs.get("advanced_arg")
     txt_pattern = txt_pattern.replace(",", ",")
     # 将要匹配的模式(例如: *.c, *.cpp, *.py, config.toml)
@@ -2,7 +2,7 @@ from toolbox import CatchException, update_ui, get_conf
 from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
 import datetime
 @CatchException
-def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -10,7 +10,7 @@ def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    user_request 当前用户的请求信息(IP地址等)
+    web_port 当前软件运行的端口号
     """
     history = []    # 清空历史,以免输入溢出
    MULTI_QUERY_LLM_MODELS = get_conf('MULTI_QUERY_LLM_MODELS')
@@ -32,7 +32,7 @@ def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
 
 
 @CatchException
-def 同时问询_指定模型(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 同时问询_指定模型(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     """
     txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
@@ -40,7 +40,7 @@ def 同时问询_指定模型(txt, llm_kwargs, plugin_kwargs, chatbot, history,
     chatbot 聊天显示框的句柄,用于显示给用户
     history 聊天历史,前情提要
     system_prompt 给gpt的静默提醒
-    user_request 当前用户的请求信息(IP地址等)
+    web_port 当前软件运行的端口号
     """
     history = []    # 清空历史,以免输入溢出
 
@@ -166,7 +166,7 @@ class InterviewAssistant(AliyunASR):
 
 
 @CatchException
-def 语音助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 语音助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     # pip install -U openai-whisper
     chatbot.append(["对话助手函数插件:使用时,双手离开鼠标键盘吧", "音频助手, 正在听您讲话(点击“停止”键可终止程序)..."])
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
@@ -44,7 +44,7 @@ def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbo
 
 
 @CatchException
-def 读文章写摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 读文章写摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     history = []    # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
@@ -132,7 +132,7 @@ def get_meta_information(url, chatbot, history):
     return profile
 
 @CatchException
-def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     disable_auto_promotion(chatbot=chatbot)
     # 基本信息:功能、贡献者
     chatbot.append([
@@ -11,7 +11,7 @@ import os
|
|||||||
|
|
||||||
|
|
||||||
@CatchException
|
@CatchException
|
||||||
def 猜你想问(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
|
def 猜你想问(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||||
if txt:
|
if txt:
|
||||||
show_say = txt
|
show_say = txt
|
||||||
prompt = txt+'\n回答完问题后,再列出用户可能提出的三个问题。'
|
prompt = txt+'\n回答完问题后,再列出用户可能提出的三个问题。'
|
||||||
@@ -32,7 +32,7 @@ def 猜你想问(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
|
|||||||
|
|
||||||
|
|
||||||
@CatchException
|
@CatchException
|
||||||
def 清除缓存(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
|
def 清除缓存(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||||
chatbot.append(['清除本地缓存数据', '执行中. 删除数据'])
|
chatbot.append(['清除本地缓存数据', '执行中. 删除数据'])
|
||||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||||
|
|
||||||
|
|||||||
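All of the hunks above touch the same shared plugin entry point: every function plugin in this project takes the same seven positional arguments, and the comparison only renames the last one (`user_request` carrying caller info on one branch, `web_port` carrying the serving port on production). A minimal sketch of a conforming plugin, assuming `CatchException` and `update_ui` behave as they do elsewhere in this diff; the plugin name and reply text are illustrative:

```python
# Minimal plugin sketch following the shared signature shown in the hunks above.
from toolbox import CatchException, update_ui

@CatchException
def 示例插件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    history = []  # plugins typically reset history to avoid input overflow
    chatbot.append([txt, "插件已收到请求,正在处理..."])
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
```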
@@ -1,47 +1,19 @@
 from toolbox import CatchException, update_ui
-from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
+from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
 import datetime

-高阶功能模板函数示意图 = f"""
-```mermaid
-flowchart TD
-    %% <gpt_academic_hide_mermaid_code> 一个特殊标记,用于在生成mermaid图表时隐藏代码块
-    subgraph 函数调用["函数调用过程"]
-        AA["输入栏用户输入的文本(txt)"] --> BB["gpt模型参数(llm_kwargs)"]
-        BB --> CC["插件模型参数(plugin_kwargs)"]
-        CC --> DD["对话显示框的句柄(chatbot)"]
-        DD --> EE["对话历史(history)"]
-        EE --> FF["系统提示词(system_prompt)"]
-        FF --> GG["当前用户信息(web_port)"]
-
-        A["开始(查询5天历史事件)"]
-        A --> B["获取当前月份和日期"]
-        B --> C["生成历史事件查询提示词"]
-        C --> D["调用大模型"]
-        D --> E["更新界面"]
-        E --> F["记录历史"]
-        F --> |"下一天"| B
-    end
-```
-"""

 @CatchException
-def 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+def 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     """
-    # 高阶功能模板函数示意图:https://mermaid.live/edit#pako:eNptk1tvEkEYhv8KmattQpvlvOyFCcdeeaVXuoYssBwie8gyhCIlqVoLhrbbtAWNUpEGUkyMEDW2Fmn_DDOL_8LZHdOwxrnamX3f7_3mmZk6yKhZCfAgV1KrmYKoQ9fDuKC4yChX0nld1Aou1JzjznQ5fWmejh8LYHW6vG2a47YAnlCLNSIRolnenKBXI_zRIBrcuqRT890u7jZx7zMDt-AaMbnW1--5olGiz2sQjwfoQxsZL0hxplSSU0-rop4vrzmKR6O2JxYjHmwcL2Y_HDatVMkXlf86YzHbGY9bO5j8XE7O8Nsbc3iNB3ukL2SMcH-XIQBgWoVOZzxuOxOJOyc63EPGV6ZQLENVrznViYStTiaJ2vw2M2d9bByRnOXkgCnXylCSU5quyto_IcmkbdvctELmJ-j1ASW3uB3g5xOmKqVTmqr_Na3AtuS_dtBFm8H90XJyHkDDT7S9xXWb4HGmRChx64AOL5HRpUm411rM5uh4H78Z4V7fCZzytjZz2seto9XaNPFue07clLaVZF8UNLygJ-VES8lah_n-O-5Ozc7-77NzJ0-K0yr0ZYrmHdqAk50t2RbA4qq9uNohBASw7YpSgaRkLWCCAtxAlnRZLGbJba9bPwUAC5IsCYAnn1kpJ1ZKUACC0iBSsQLVBzUlA3ioVyQ3qGhZEUrxokiehAz4nFgqk1VNVABfB1uAD_g2_AGPl-W8nMcbCvsDblADfNCz4feyobDPy3rYEMtxwYYbPFNVUoHdCPmDHBv2cP4AMfrCbiBli-Q-3afv0X6WdsIjW2-10fgDy1SAig
-
     txt             输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
     llm_kwargs      gpt模型参数,如温度和top_p等,一般原样传递下去就行
     plugin_kwargs   插件模型的参数,用于灵活调整复杂功能的各种参数
     chatbot         聊天显示框的句柄,用于显示给用户
     history         聊天历史,前情提要
     system_prompt   给gpt的静默提醒
-    user_request    当前用户的请求信息(IP地址等)
+    web_port        当前软件运行的端口号
     """
     history = []    # 清空历史,以免输入溢出
-    chatbot.append((
-        "您正在调用插件:历史上的今天",
-        "[Local Message] 请注意,您正在调用一个[函数插件]的模板,该函数面向希望实现更多有趣功能的开发者,它可以作为创建新功能函数的模板(该函数只有20多行代码)。此外我们也提供可同步处理大量文件的多线程Demo供您参考。您若希望分享新的功能模组,请不吝PR!" + 高阶功能模板函数示意图))
+    chatbot.append(("这是什么功能?", "[Local Message] 请注意,您正在调用一个[函数插件]的模板,该函数面向希望实现更多有趣功能的开发者,它可以作为创建新功能函数的模板(该函数只有20多行代码)。此外我们也提供可同步处理大量文件的多线程Demo供您参考。您若希望分享新的功能模组,请不吝PR!"))
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
     for i in range(5):
         currentMonth = (datetime.date.today() + datetime.timedelta(days=i)).month

@@ -55,45 +27,3 @@ def 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
         chatbot[-1] = (i_say, gpt_say)
         history.append(i_say);history.append(gpt_say)
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
-
-
-PROMPT = """
-请你给出围绕“{subject}”的逻辑关系图,使用mermaid语法,mermaid语法举例:
-```mermaid
-graph TD
-    P(编程) --> L1(Python)
-    P(编程) --> L2(C)
-    P(编程) --> L3(C++)
-    P(编程) --> L4(Javascipt)
-    P(编程) --> L5(PHP)
-```
-"""
-@CatchException
-def 测试图表渲染(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
-    """
-    txt             输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
-    llm_kwargs      gpt模型参数,如温度和top_p等,一般原样传递下去就行
-    plugin_kwargs   插件模型的参数,用于灵活调整复杂功能的各种参数
-    chatbot         聊天显示框的句柄,用于显示给用户
-    history         聊天历史,前情提要
-    system_prompt   给gpt的静默提醒
-    user_request    当前用户的请求信息(IP地址等)
-    """
-    history = []    # 清空历史,以免输入溢出
-    chatbot.append(("这是什么功能?", "一个测试mermaid绘制图表的功能,您可以在输入框中输入一些关键词,然后使用mermaid+llm绘制图表。"))
-    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
-
-    if txt == "": txt = "空白的输入栏" # 调皮一下
-
-    i_say_show_user = f'请绘制有关“{txt}”的逻辑关系图。'
-    i_say = PROMPT.format(subject=txt)
-    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
-        inputs=i_say,
-        inputs_show_user=i_say_show_user,
-        llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
-        sys_prompt=""
-    )
-    history.append(i_say); history.append(gpt_say)
-    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
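The surviving 高阶功能模板函数 body shows the core mechanic the mermaid diagram was illustrating: iterate over the next five calendar days and build an "on this day in history" prompt from each date. A minimal sketch of that loop, derived from the `datetime.timedelta(days=i)` line above; the prompt wording is illustrative:

```python
# Sketch of the template's daily-history loop; prompt text is illustrative.
import datetime

def daily_history_prompts(days=5):
    for i in range(days):
        day = datetime.date.today() + datetime.timedelta(days=i)
        yield f"历史中哪些事件发生在{day.month}月{day.day}日?"
```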
@@ -229,3 +229,4 @@ services:
       # 不使用代理网络拉取最新代码
       command: >
         bash -c "python3 -u main.py"
+

@@ -1 +1,2 @@
 # 此Dockerfile不再维护,请前往docs/GithubAction+ChatGLM+Moss
+
@@ -1,53 +0,0 @@
-# docker build -t gpt-academic-all-capacity -f docs/GithubAction+AllCapacity --network=host --build-arg http_proxy=http://localhost:10881 --build-arg https_proxy=http://localhost:10881 .
-# docker build -t gpt-academic-all-capacity -f docs/GithubAction+AllCapacityBeta --network=host .
-# docker run -it --net=host gpt-academic-all-capacity bash
-
-# 从NVIDIA源,从而支持显卡(检查宿主的nvidia-smi中的cuda版本必须>=11.3)
-FROM fuqingxu/11.3.1-runtime-ubuntu20.04-with-texlive:latest
-
-# use python3 as the system default python
-WORKDIR /gpt
-RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.8
-
-# # 非必要步骤,更换pip源 (以下三行,可以删除)
-# RUN echo '[global]' > /etc/pip.conf && \
-#     echo 'index-url = https://mirrors.aliyun.com/pypi/simple/' >> /etc/pip.conf && \
-#     echo 'trusted-host = mirrors.aliyun.com' >> /etc/pip.conf
-
-# 下载pytorch
-RUN python3 -m pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
-# 准备pip依赖
-RUN python3 -m pip install openai numpy arxiv rich
-RUN python3 -m pip install colorama Markdown pygments pymupdf
-RUN python3 -m pip install python-docx moviepy pdfminer
-RUN python3 -m pip install zh_langchain==0.2.1 pypinyin
-RUN python3 -m pip install rarfile py7zr
-RUN python3 -m pip install aliyun-python-sdk-core==2.13.3 pyOpenSSL webrtcvad scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git
-# 下载分支
-WORKDIR /gpt
-RUN git clone --depth=1 https://github.com/binary-husky/gpt_academic.git
-WORKDIR /gpt/gpt_academic
-RUN git clone --depth=1 https://github.com/OpenLMLab/MOSS.git request_llms/moss
-
-RUN python3 -m pip install -r requirements.txt
-RUN python3 -m pip install -r request_llms/requirements_moss.txt
-RUN python3 -m pip install -r request_llms/requirements_qwen.txt
-RUN python3 -m pip install -r request_llms/requirements_chatglm.txt
-RUN python3 -m pip install -r request_llms/requirements_newbing.txt
-RUN python3 -m pip install nougat-ocr
-
-# 预热Tiktoken模块
-RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
-
-# 安装知识库插件的额外依赖
-RUN apt-get update && apt-get install libgl1 -y
-RUN pip3 install transformers protobuf langchain sentence-transformers faiss-cpu nltk beautifulsoup4 bitsandbytes tabulate icetk --upgrade
-RUN pip3 install unstructured[all-docs] --upgrade
-RUN python3 -c 'from check_proxy import warm_up_vectordb; warm_up_vectordb()'
-RUN rm -rf /usr/local/lib/python3.8/dist-packages/tests
-
-# COPY .cache /root/.cache
-# COPY config_private.py config_private.py
-# 启动
-CMD ["python3", "-u", "main.py"]
@@ -17,10 +17,10 @@ RUN apt-get update && apt-get install libgl1 -y
 RUN pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cpu
 RUN pip3 install transformers protobuf langchain sentence-transformers faiss-cpu nltk beautifulsoup4 bitsandbytes tabulate icetk --upgrade
 RUN pip3 install unstructured[all-docs] --upgrade
-RUN python3 -c 'from check_proxy import warm_up_vectordb; warm_up_vectordb()'

 # 可选步骤,用于预热模块
 RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
+RUN python3 -c 'from check_proxy import warm_up_vectordb; warm_up_vectordb()'

 # 启动
 CMD ["python3", "-u", "main.py"]
@@ -341,3 +341,4 @@ https://github.com/oobabooga/one-click-installers
 # المزيد:
 https://github.com/gradio-app/gradio
 https://github.com/fghrsh/live2d_demo
+

@@ -355,3 +355,4 @@ https://github.com/oobabooga/one-click-installers
 # More:
 https://github.com/gradio-app/gradio
 https://github.com/fghrsh/live2d_demo
+

@@ -354,3 +354,4 @@ https://github.com/oobabooga/one-click-installers
 # Plus:
 https://github.com/gradio-app/gradio
 https://github.com/fghrsh/live2d_demo
+

@@ -361,3 +361,4 @@ https://github.com/oobabooga/one-click-installers
 # Weitere:
 https://github.com/gradio-app/gradio
 https://github.com/fghrsh/live2d_demo
+

@@ -358,3 +358,4 @@ https://github.com/oobabooga/one-click-installers
 # Altre risorse:
 https://github.com/gradio-app/gradio
 https://github.com/fghrsh/live2d_demo
+

@@ -342,3 +342,4 @@ https://github.com/oobabooga/one-click-installers
 # その他:
 https://github.com/gradio-app/gradio
 https://github.com/fghrsh/live2d_demo
+

@@ -361,3 +361,4 @@ https://github.com/oobabooga/one-click-installers
 # 더보기:
 https://github.com/gradio-app/gradio
 https://github.com/fghrsh/live2d_demo
+

@@ -355,3 +355,4 @@ https://github.com/oobabooga/instaladores-de-um-clique
 # Mais:
 https://github.com/gradio-app/gradio
 https://github.com/fghrsh/live2d_demo
+

@@ -358,3 +358,4 @@ https://github.com/oobabooga/one-click-installers
 # Больше:
 https://github.com/gradio-app/gradio
 https://github.com/fghrsh/live2d_demo
+
BIN docs/gradio-3.32.6-py3-none-any.whl (new file)
Binary file not shown.
@@ -165,7 +165,7 @@ toolbox.py是一个工具类库,其中主要包含了一些函数装饰器和

 3. read_file_to_chat(chatbot, history, file_name):从传入的文件中读取内容,解析出对话历史记录并更新聊天显示框。

-4. 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):一个主要函数,用于保存当前对话记录并提醒用户。如果用户希望加载历史记录,则调用read_file_to_chat()来更新聊天显示框。如果用户希望删除历史记录,调用删除所有本地对话历史记录()函数完成删除操作。
+4. 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):一个主要函数,用于保存当前对话记录并提醒用户。如果用户希望加载历史记录,则调用read_file_to_chat()来更新聊天显示框。如果用户希望删除历史记录,调用删除所有本地对话历史记录()函数完成删除操作。

 ## [19/48] 请对下面的程序文件做一个概述: crazy_functions\总结word文档.py
@@ -7,27 +7,13 @@ sample = """
 """
 import re

 def preprocess_newbing_out(s):
-    pattern = r"\^(\d+)\^"  # 匹配^数字^
-    pattern2 = r"\[(\d+)\]"  # 匹配^数字^
-
-    def sub(m):
-        return "\\[" + m.group(1) + "\\]"  # 将匹配到的数字作为替换值
-
+    pattern = r'\^(\d+)\^' # 匹配^数字^
+    pattern2 = r'\[(\d+)\]' # 匹配^数字^
+    sub = lambda m: '\['+m.group(1)+'\]' # 将匹配到的数字作为替换值
     result = re.sub(pattern, sub, s) # 替换操作
-    if "[1]" in result:
-        result += (
-            '<br/><hr style="border-top: dotted 1px #44ac5c;"><br/><small>'
-            + "<br/>".join(
-                [
-                    re.sub(pattern2, sub, r)
-                    for r in result.split("\n")
-                    if r.startswith("[")
-                ]
-            )
-            + "</small>"
-        )
+    if '[1]' in result:
+        result += '<br/><hr style="border-top: dotted 1px #44ac5c;"><br/><small>' + "<br/>".join([re.sub(pattern2, sub, r) for r in result.split('\n') if r.startswith('[')]) + '</small>'
     return result
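Beyond the quote-style churn, the production side collapses the `sub` helper into a lambda; both versions do the same thing, rewriting NewBing's `^n^` citation markers into escaped `\[n\]` footnote references. A quick demonstration, assuming `preprocess_newbing_out` as defined above (the input string is invented):

```python
# What preprocess_newbing_out does to NewBing-style citations.
s = "Python很流行^1^,也很易学^2^"
print(preprocess_newbing_out(s))
# Citations become escaped "\[1\]" markers; a <small> footnote block is
# appended whenever "[1]" appears in the rewritten text.
```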
@@ -42,39 +28,37 @@ def close_up_code_segment_during_stream(gpt_reply):
         str: 返回一个新的字符串,将输出代码片段的“后面的```”补上。

     """
-    if "```" not in gpt_reply:
+    if '```' not in gpt_reply:
         return gpt_reply
-    if gpt_reply.endswith("```"):
+    if gpt_reply.endswith('```'):
         return gpt_reply

     # 排除了以上两个情况,我们
-    segments = gpt_reply.split("```")
+    segments = gpt_reply.split('```')
     n_mark = len(segments) - 1
     if n_mark % 2 == 1:
         # print('输出代码片段中!')
-        return gpt_reply + "\n```"
+        return gpt_reply+'\n```'
     else:
         return gpt_reply


 import markdown
 from latex2mathml.converter import convert as tex2mathml
+from functools import wraps, lru_cache

 def markdown_convertion(txt):
     """
     将Markdown格式的文本转换为HTML格式。如果包含数学公式,则先将公式转换为HTML格式。
     """
     pre = '<div class="markdown-body">'
-    suf = "</div>"
+    suf = '</div>'
     if txt.startswith(pre) and txt.endswith(suf):
         # print('警告,输入了已经经过转化的字符串,二次转化可能出问题')
         return txt # 已经被转化过,不需要再次转化

     markdown_extension_configs = {
-        "mdx_math": {
-            "enable_dollar_delimiter": True,
-            "use_gitlab_delimiters": False,
+        'mdx_math': {
+            'enable_dollar_delimiter': True,
+            'use_gitlab_delimiters': False,
         },
     }
     find_equation_pattern = r'<script type="math/tex(?:.*?)>(.*?)</script>'
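Quote style aside, `close_up_code_segment_during_stream` implements a simple invariant: an odd number of ``` fences means a code block is still open mid-stream, so it appends one closing fence. A quick illustration, assuming the function as shown above:

```python
# Odd fence count -> the streamed reply is inside an open code block.
partial = "Here is the fix:\n```python\nprint('hi')"
closed = close_up_code_segment_during_stream(partial)
print(closed.endswith("```"))  # True: "\n```" was appended for rendering
```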
@@ -88,19 +72,19 @@ def markdown_convertion(txt):

     def replace_math_no_render(match):
         content = match.group(1)
-        if "mode=display" in match.group(0):
-            content = content.replace("\n", "</br>")
-            return f'<font color="#00FF00">$$</font><font color="#FF00FF">{content}</font><font color="#00FF00">$$</font>'
+        if 'mode=display' in match.group(0):
+            content = content.replace('\n', '</br>')
+            return f"<font color=\"#00FF00\">$$</font><font color=\"#FF00FF\">{content}</font><font color=\"#00FF00\">$$</font>"
         else:
-            return f'<font color="#00FF00">$</font><font color="#FF00FF">{content}</font><font color="#00FF00">$</font>'
+            return f"<font color=\"#00FF00\">$</font><font color=\"#FF00FF\">{content}</font><font color=\"#00FF00\">$</font>"

     def replace_math_render(match):
         content = match.group(1)
-        if "mode=display" in match.group(0):
-            if "\\begin{aligned}" in content:
-                content = content.replace("\\begin{aligned}", "\\begin{array}")
-                content = content.replace("\\end{aligned}", "\\end{array}")
-                content = content.replace("&", " ")
+        if 'mode=display' in match.group(0):
+            if '\\begin{aligned}' in content:
+                content = content.replace('\\begin{aligned}', '\\begin{array}')
+                content = content.replace('\\end{aligned}', '\\end{array}')
+                content = content.replace('&', ' ')
             content = tex2mathml_catch_exception(content, display="block")
             return content
         else:
@@ -110,58 +94,37 @@ def markdown_convertion(txt):
         """
         解决一个mdx_math的bug(单$包裹begin命令时多余<script>)
         """
-        content = content.replace(
-            '<script type="math/tex">\n<script type="math/tex; mode=display">',
-            '<script type="math/tex; mode=display">',
-        )
-        content = content.replace("</script>\n</script>", "</script>")
+        content = content.replace('<script type="math/tex">\n<script type="math/tex; mode=display">', '<script type="math/tex; mode=display">')
+        content = content.replace('</script>\n</script>', '</script>')
         return content

-    if ("$" in txt) and ("```" not in txt):  # 有$标识的公式符号,且没有代码段```的标识
+    if ('$' in txt) and ('```' not in txt):  # 有$标识的公式符号,且没有代码段```的标识
         # convert everything to html format
-        split = markdown.markdown(text="---")
-        convert_stage_1 = markdown.markdown(
-            text=txt,
-            extensions=["mdx_math", "fenced_code", "tables", "sane_lists"],
-            extension_configs=markdown_extension_configs,
-        )
+        split = markdown.markdown(text='---')
+        convert_stage_1 = markdown.markdown(text=txt, extensions=['mdx_math', 'fenced_code', 'tables', 'sane_lists'], extension_configs=markdown_extension_configs)
         convert_stage_1 = markdown_bug_hunt(convert_stage_1)
         # re.DOTALL: Make the '.' special character match any character at all, including a newline; without this flag, '.' will match anything except a newline. Corresponds to the inline flag (?s).
         # 1. convert to easy-to-copy tex (do not render math)
-        convert_stage_2_1, n = re.subn(
-            find_equation_pattern,
-            replace_math_no_render,
-            convert_stage_1,
-            flags=re.DOTALL,
-        )
+        convert_stage_2_1, n = re.subn(find_equation_pattern, replace_math_no_render, convert_stage_1, flags=re.DOTALL)
         # 2. convert to rendered equation
-        convert_stage_2_2, n = re.subn(
-            find_equation_pattern, replace_math_render, convert_stage_1, flags=re.DOTALL
-        )
+        convert_stage_2_2, n = re.subn(find_equation_pattern, replace_math_render, convert_stage_1, flags=re.DOTALL)
         # cat them together
-        return pre + convert_stage_2_1 + f"{split}" + convert_stage_2_2 + suf
+        return pre + convert_stage_2_1 + f'{split}' + convert_stage_2_2 + suf
     else:
-        return (
-            pre
-            + markdown.markdown(
-                txt, extensions=["fenced_code", "codehilite", "tables", "sane_lists"]
-            )
-            + suf
-        )
+        return pre + markdown.markdown(txt, extensions=['fenced_code', 'codehilite', 'tables', 'sane_lists']) + suf


 sample = preprocess_newbing_out(sample)
 sample = close_up_code_segment_during_stream(sample)
 sample = markdown_convertion(sample)
-with open("tmp.html", "w", encoding="utf8") as f:
-    f.write(
-        """
+with open('tmp.html', 'w', encoding='utf8') as f:
+    f.write("""
 <head>
     <title>My Website</title>
     <link rel="stylesheet" type="text/css" href="style.css">
 </head>
-"""
-    )
+""")
     f.write(sample)
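The branch logic in `markdown_convertion` is the part worth remembering: text containing `$` but no code fences goes through the math pipeline twice, once as copyable TeX and once rendered to MathML, with an `---` divider between the two renderings; everything else takes the plain markdown-to-HTML path. A usage sketch, assuming the function as in this file:

```python
# Math-bearing input takes the dual-render path; plain text does not.
html = markdown_convertion("质能方程: $E = mc^2$")
print(html.startswith('<div class="markdown-body">'))  # True: wrapped in pre/suf
```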
@@ -2863,7 +2863,7 @@
 "加载API_KEY": "Loading API_KEY",
 "协助您编写代码": "Assist you in writing code",
 "我可以为您提供以下服务": "I can provide you with the following services",
-"排队中请稍候 ...": "Please wait in line ...",
+"排队中请稍后 ...": "Please wait in line ...",
 "建议您使用英文提示词": "It is recommended to use English prompts",
 "不能支撑AutoGen运行": "Cannot support AutoGen operation",
 "帮助您解决编程问题": "Help you solve programming problems",
@@ -61,3 +61,4 @@ VI 两种音频监听模式切换时,需要刷新页面才有效。
 VII 非localhost运行+非https情况下无法打开录音功能的坑:https://blog.csdn.net/weixin_39461487/article/details/109594434

 ## 5.点击函数插件区“实时音频采集” 或者其他音频交互功能
+

111 main.py
@@ -1,25 +1,14 @@
 import os; os.environ['no_proxy'] = '*' # 避免代理网络产生意外污染
-help_menu_description = \
-"""Github源代码开源和更新[地址🚀](https://github.com/binary-husky/gpt_academic),
-感谢热情的[开发者们❤️](https://github.com/binary-husky/gpt_academic/graphs/contributors).
-</br></br>常见问题请查阅[项目Wiki](https://github.com/binary-husky/gpt_academic/wiki),
-如遇到Bug请前往[Bug反馈](https://github.com/binary-husky/gpt_academic/issues).
-</br></br>普通对话使用说明: 1. 输入问题; 2. 点击提交
-</br></br>基础功能区使用说明: 1. 输入文本; 2. 点击任意基础功能区按钮
-</br></br>函数插件区使用说明: 1. 输入路径/问题, 或者上传文件; 2. 点击任意函数插件区按钮
-</br></br>虚空终端使用说明: 点击虚空终端, 然后根据提示输入指令, 再次点击虚空终端
-</br></br>如何保存对话: 点击保存当前的对话按钮
-</br></br>如何语音对话: 请阅读Wiki
-</br></br>如何临时更换API_KEY: 在输入区输入临时API_KEY后提交(网页刷新后失效)"""
+import pickle
+import base64

 def main():
     import gradio as gr
-    if gr.__version__ not in ['3.32.6', '3.32.7']:
+    if gr.__version__ not in ['3.32.6']:
         raise ModuleNotFoundError("使用项目内置Gradio获取最优体验! 请运行 `pip install -r requirements.txt` 指令安装内置Gradio及其他依赖, 详情信息见requirements.txt.")
     from request_llms.bridge_all import predict
     from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf, ArgsGeneralWrapper, load_chat_cookies, DummyWith
-    # 建议您复制一个config_private.py放自己的秘密, 如API和代理网址
+    # 建议您复制一个config_private.py放自己的秘密, 如API和代理网址, 避免不小心传github被别人看到
     proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION = get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION')
     CHATBOT_HEIGHT, LAYOUT, AVAIL_LLM_MODELS, AUTO_CLEAR_TXT = get_conf('CHATBOT_HEIGHT', 'LAYOUT', 'AVAIL_LLM_MODELS', 'AUTO_CLEAR_TXT')
     ENABLE_AUDIO, AUTO_CLEAR_TXT, PATH_LOGGING, AVAIL_THEMES, THEME = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT', 'PATH_LOGGING', 'AVAIL_THEMES', 'THEME')
@@ -29,10 +18,20 @@ def main():
     # 如果WEB_PORT是-1, 则随机选取WEB端口
     PORT = find_free_port() if WEB_PORT <= 0 else WEB_PORT
     from check_proxy import get_current_version
-    from themes.theme import adjust_theme, advanced_css, theme_declaration
-    from themes.theme import js_code_for_css_changing, js_code_for_darkmode_init, js_code_for_toggle_darkmode, js_code_for_persistent_cookie_init
-    from themes.theme import load_dynamic_theme, to_cookie_str, from_cookie_str, init_cookie
+    from themes.theme import adjust_theme, advanced_css, theme_declaration, load_dynamic_theme
     title_html = f"<h1 align=\"center\">GPT 学术优化 {get_current_version()}</h1>{theme_declaration}"
+    description = "Github源代码开源和更新[地址🚀](https://github.com/binary-husky/gpt_academic), "
+    description += "感谢热情的[开发者们❤️](https://github.com/binary-husky/gpt_academic/graphs/contributors)."
+    description += "</br></br>常见问题请查阅[项目Wiki](https://github.com/binary-husky/gpt_academic/wiki), "
+    description += "如遇到Bug请前往[Bug反馈](https://github.com/binary-husky/gpt_academic/issues)."
+    description += "</br></br>普通对话使用说明: 1. 输入问题; 2. 点击提交"
+    description += "</br></br>基础功能区使用说明: 1. 输入文本; 2. 点击任意基础功能区按钮"
+    description += "</br></br>函数插件区使用说明: 1. 输入路径/问题, 或者上传文件; 2. 点击任意函数插件区按钮"
+    description += "</br></br>虚空终端使用说明: 点击虚空终端, 然后根据提示输入指令, 再次点击虚空终端"
+    description += "</br></br>如何保存对话: 点击保存当前的对话按钮"
+    description += "</br></br>如何语音对话: 请阅读Wiki"
+    description += "</br></br>如何临时更换API_KEY: 在输入区输入临时API_KEY后提交(网页刷新后失效)"

     # 问询记录, python 版本建议3.9+(越新越好)
     import logging, uuid
@@ -139,17 +138,17 @@ def main():
                 with gr.Row():
                     switchy_bt = gr.Button(r"请先从插件列表中选择", variant="secondary").style(size="sm")
             with gr.Row():
-                with gr.Accordion("点击展开“文件下载区”。", open=False) as area_file_up:
+                with gr.Accordion("点击展开“文件上传区”。上传本地文件/压缩包供函数插件调用。", open=False) as area_file_up:
                     file_upload = gr.Files(label="任何文件, 推荐上传压缩文件(zip, tar)", file_count="multiple", elem_id="elem_upload")

-    with gr.Floating(init_x="0%", init_y="0%", visible=True, width=None, drag="forbidden", elem_id="tooltip"):
+    with gr.Floating(init_x="0%", init_y="0%", visible=True, width=None, drag="forbidden"):
         with gr.Row():
             with gr.Tab("上传文件", elem_id="interact-panel"):
                 gr.Markdown("请上传本地文件/压缩包供“函数插件区”功能调用。请注意: 上传文件后会自动把输入区修改为相应路径。")
                 file_upload_2 = gr.Files(label="任何文件, 推荐上传压缩文件(zip, tar)", file_count="multiple", elem_id="elem_upload_float")

-            with gr.Tab("更换模型", elem_id="interact-panel"):
+            with gr.Tab("更换模型 & Prompt", elem_id="interact-panel"):
                 md_dropdown = gr.Dropdown(AVAIL_LLM_MODELS, value=LLM_MODEL, label="更换LLM模型/请求源").style(container=False)
                 top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.01,interactive=True, label="Top-p (nucleus sampling)",)
                 temperature = gr.Slider(minimum=-0, maximum=2.0, value=1.0, step=0.01, interactive=True, label="Temperature",)
@@ -161,11 +160,18 @@ def main():
                 checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "浮动输入区", "输入清除键", "插件参数区"],
                     value=["基础功能区", "函数插件区"], label="显示/隐藏功能区", elem_id='cbs').style(container=False)
                 checkboxes_2 = gr.CheckboxGroup(["自定义菜单"],
-                    value=[], label="显示/隐藏自定义菜单", elem_id='cbsc').style(container=False)
+                    value=[], label="显示/隐藏自定义菜单", elem_id='cbs').style(container=False)
                 dark_mode_btn = gr.Button("切换界面明暗 ☀", variant="secondary").style(size="sm")
-                dark_mode_btn.click(None, None, None, _js=js_code_for_toggle_darkmode)
+                dark_mode_btn.click(None, None, None, _js="""() => {
+                    if (document.querySelectorAll('.dark').length) {
+                        document.querySelectorAll('.dark').forEach(el => el.classList.remove('dark'));
+                    } else {
+                        document.querySelector('body').classList.add('dark');
+                    }
+                }""",
+                )
             with gr.Tab("帮助", elem_id="interact-panel"):
-                gr.Markdown(help_menu_description)
+                gr.Markdown(description)

     with gr.Floating(init_x="20%", init_y="50%", visible=False, width="40%", drag="top") as area_input_secondary:
         with gr.Accordion("浮动输入区", open=True, elem_id="input-panel2"):
@@ -180,6 +186,16 @@ def main():
             stopBtn2 = gr.Button("停止", variant="secondary"); stopBtn2.style(size="sm")
             clearBtn2 = gr.Button("清除", variant="secondary", visible=False); clearBtn2.style(size="sm")

+    def to_cookie_str(d):
+        # Pickle the dictionary and encode it as a string
+        pickled_dict = pickle.dumps(d)
+        cookie_value = base64.b64encode(pickled_dict).decode('utf-8')
+        return cookie_value
+
+    def from_cookie_str(c):
+        # Decode the base64-encoded string and unpickle it into a dictionary
+        pickled_dict = base64.b64decode(c.encode('utf-8'))
+        return pickle.loads(pickled_dict)
+
     with gr.Floating(init_x="20%", init_y="50%", visible=False, width="40%", drag="top") as area_customize:
         with gr.Accordion("自定义菜单", open=True, elem_id="edit-panel"):
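The production branch inlines these cookie helpers rather than importing them from themes.theme. The mechanism is just pickle plus base64, so the round trip is symmetric; a quick check, assuming the two functions above (note that unpickling cookie data is only reasonable here because the cookie is written and read by the same application):

```python
# Round-trip sanity check for the inlined cookie helpers above.
state = {"customize_fn_overwrite": {}, "theme": "default"}  # illustrative payload
encoded = to_cookie_str(state)           # pickle -> base64 text, storable in a cookie
assert from_cookie_str(encoded) == state
```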
@@ -236,11 +252,10 @@ def main():
                 else: ret.update({predefined_btns[k]: gr.update(visible=True, value=v['Title'])})
                 return ret

-            basic_fn_load.click(reflesh_btn, [persistent_cookie, cookies], [cookies, *customize_btns.values(), *predefined_btns.values()])
+            basic_fn_load.click(reflesh_btn, [persistent_cookie, cookies],[cookies, *customize_btns.values(), *predefined_btns.values()])
             h = basic_fn_confirm.click(assign_btn, [persistent_cookie, cookies, basic_btn_dropdown, basic_fn_title, basic_fn_prefix, basic_fn_suffix],
                 [persistent_cookie, cookies, *customize_btns.values(), *predefined_btns.values()])
-            # save persistent cookie
-            h.then(None, [persistent_cookie], None, _js="""(persistent_cookie)=>{setCookie("persistent_cookie", persistent_cookie, 5);}""")
+            h.then(None, [persistent_cookie], None, _js="""(persistent_cookie)=>{setCookie("persistent_cookie", persistent_cookie, 5);}""") # save persistent cookie

     # 功能区显示开关与功能区的互动
     def fn_area_visibility(a):
@@ -290,8 +305,8 @@ def main():
         click_handle = btn.click(fn=ArgsGeneralWrapper(predict), inputs=[*input_combo, gr.State(True), gr.State(btn.value)], outputs=output_combo)
         cancel_handles.append(click_handle)
     # 文件上传区,接收文件后与chatbot的互动
-    file_upload.upload(on_file_uploaded, [file_upload, chatbot, txt, txt2, checkboxes, cookies], [chatbot, txt, txt2, cookies]).then(None, None, None, _js=r"()=>{toast_push('上传完毕 ...'); cancel_loading_status();}")
-    file_upload_2.upload(on_file_uploaded, [file_upload_2, chatbot, txt, txt2, checkboxes, cookies], [chatbot, txt, txt2, cookies]).then(None, None, None, _js=r"()=>{toast_push('上传完毕 ...'); cancel_loading_status();}")
+    file_upload.upload(on_file_uploaded, [file_upload, chatbot, txt, txt2, checkboxes, cookies], [chatbot, txt, txt2, cookies])
+    file_upload_2.upload(on_file_uploaded, [file_upload_2, chatbot, txt, txt2, checkboxes, cookies], [chatbot, txt, txt2, cookies])
    # 函数插件-固定按钮区
    for k in plugins:
        if not plugins[k].get("AsButton", True): continue
@@ -327,7 +342,18 @@ def main():
         None,
         [secret_css],
         None,
-        _js=js_code_for_css_changing
+        _js="""(css) => {
+            var existingStyles = document.querySelectorAll("style[data-loaded-css]");
+            for (var i = 0; i < existingStyles.length; i++) {
+                var style = existingStyles[i];
+                style.parentNode.removeChild(style);
+            }
+            var styleElement = document.createElement('style');
+            styleElement.setAttribute('data-loaded-css', css);
+            styleElement.innerHTML = css;
+            document.head.appendChild(styleElement);
+        }
+        """
     )
     # 随变按钮的回调函数注册
     def route(request: gr.Request, k, *args, **kwargs):
@@ -359,10 +385,27 @@ def main():
             rad.feed(cookies['uuid'].hex, audio)
         audio_mic.stream(deal_audio, inputs=[audio_mic, cookies])

+    def init_cookie(cookies, chatbot):
+        # 为每一位访问的用户赋予一个独一无二的uuid编码
+        cookies.update({'uuid': uuid.uuid4()})
+        return cookies
     demo.load(init_cookie, inputs=[cookies, chatbot], outputs=[cookies])
-    darkmode_js = js_code_for_darkmode_init
-    demo.load(None, inputs=None, outputs=[persistent_cookie], _js=js_code_for_persistent_cookie_init)
+    darkmode_js = """(dark) => {
+        dark = dark == "True";
+        if (document.querySelectorAll('.dark').length) {
+            if (!dark){
+                document.querySelectorAll('.dark').forEach(el => el.classList.remove('dark'));
+            }
+        } else {
+            if (dark){
+                document.querySelector('body').classList.add('dark');
+            }
+        }
+    }"""
+    load_cookie_js = """(persistent_cookie) => {
+        return getCookie("persistent_cookie");
+    }"""
+    demo.load(None, inputs=None, outputs=[persistent_cookie], _js=load_cookie_js)
     demo.load(None, inputs=[dark_mode], outputs=None, _js=darkmode_js) # 配置暗色主题或亮色主题
     demo.load(None, inputs=[gr.Textbox(LAYOUT, visible=False)], outputs=None, _js='(LAYOUT)=>{GptAcademicJavaScriptInit(LAYOUT);}')
@@ -375,7 +418,7 @@ def main():

     def auto_updates(): time.sleep(0); auto_update()
     def open_browser(): time.sleep(2); webbrowser.open_new_tab(f"http://localhost:{PORT}")
-    def warm_up_mods(): time.sleep(6); warm_up_modules()
+    def warm_up_mods(): time.sleep(4); warm_up_modules()

     threading.Thread(target=auto_updates, name="self-upgrade", daemon=True).start() # 查看自动更新
     threading.Thread(target=open_browser, name="open-browser", daemon=True).start() # 打开浏览器页面
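The startup side work (self-update check, browser launch, module warm-up) runs on daemon threads with staggered sleeps so the Gradio server is never blocked; the only change in this hunk is the warm-up delay. A minimal sketch of that pattern with placeholder tasks:

```python
# Staggered daemon-thread startup pattern used by main(); tasks are illustrative.
import threading, time

def delayed(seconds, task):
    def runner():
        time.sleep(seconds)  # let the web server come up first
        task()
    threading.Thread(target=runner, daemon=True).start()

delayed(0, lambda: print("check for updates"))
delayed(2, lambda: print("open browser tab"))
delayed(4, lambda: print("warm up modules"))
```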
@@ -352,9 +352,9 @@ def step_1_core_key_translate():
             chinese_core_keys_norepeat_mapping.update({k:cached_translation[k]})
         chinese_core_keys_norepeat_mapping = dict(sorted(chinese_core_keys_norepeat_mapping.items(), key=lambda x: -len(x[0])))

-    # =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+    # ===============================================
     # copy
-    # =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+    # ===============================================
     def copy_source_code():

         from toolbox import get_conf

@@ -367,9 +367,9 @@ def step_1_core_key_translate():
        shutil.copytree('./', backup_dir, ignore=lambda x, y: blacklist)
    copy_source_code()

-    # =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+    # ===============================================
     # primary key replace
-    # =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+    # ===============================================
     directory_path = f'./multi-language/{LANG}/'
     for root, dirs, files in os.walk(directory_path):
         for file in files:

@@ -389,9 +389,9 @@ def step_1_core_key_translate():

 def step_2_core_key_translate():

-    # =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
+    # =================================================================================================
     # step2
-    # =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
+    # =================================================================================================

     def load_string(strings, string_input):
         string_ = string_input.strip().strip(',').strip().strip('.').strip()

@@ -492,9 +492,9 @@ def step_2_core_key_translate():
     cached_translation.update(read_map_from_json(language=LANG_STD))
     cached_translation = dict(sorted(cached_translation.items(), key=lambda x: -len(x[0])))

-    # =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+    # ===============================================
     # literal key replace
-    # =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+    # ===============================================
     directory_path = f'./multi-language/{LANG}/'
     for root, dirs, files in os.walk(directory_path):
         for file in files:
@@ -11,7 +11,7 @@
 import tiktoken, copy
 from functools import lru_cache
 from concurrent.futures import ThreadPoolExecutor
-from toolbox import get_conf, trimmed_format_exc, apply_gpt_academic_string_mask
+from toolbox import get_conf, trimmed_format_exc

 from .bridge_chatgpt import predict_no_ui_long_connection as chatgpt_noui
 from .bridge_chatgpt import predict as chatgpt_ui

@@ -28,9 +28,6 @@ from .bridge_chatglm3 import predict as chatglm3_ui
 from .bridge_qianfan import predict_no_ui_long_connection as qianfan_noui
 from .bridge_qianfan import predict as qianfan_ui

-from .bridge_google_gemini import predict as genai_ui
-from .bridge_google_gemini import predict_no_ui_long_connection as genai_noui
-
 colors = ['#FF00FF', '#00FFFF', '#FF0000', '#990099', '#009999', '#990044']

 class LazyloadTiktoken(object):
@@ -249,22 +246,6 @@ model_info = {
         "tokenizer": tokenizer_gpt35,
         "token_cnt": get_token_num_gpt35,
     },
-    "gemini-pro": {
-        "fn_with_ui": genai_ui,
-        "fn_without_ui": genai_noui,
-        "endpoint": None,
-        "max_token": 1024 * 32,
-        "tokenizer": tokenizer_gpt35,
-        "token_cnt": get_token_num_gpt35,
-    },
-    "gemini-pro-vision": {
-        "fn_with_ui": genai_ui,
-        "fn_without_ui": genai_noui,
-        "endpoint": None,
-        "max_token": 1024 * 32,
-        "tokenizer": tokenizer_gpt35,
-        "token_cnt": get_token_num_gpt35,
-    },
 }

 # -=-=-=-=-=-=- api2d 对齐支持 -=-=-=-=-=-=-
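`model_info` is the project's dispatch table: each model name maps to a UI and a non-UI predict function plus tokenizer metadata, and `predict()`/`predict_no_ui_long_connection()` simply look the entry up by `llm_kwargs['llm_model']`. A stripped-down sketch of that registry pattern; the field names follow the diff, everything else is illustrative:

```python
# Registry-dispatch sketch mirroring model_info's shape; handlers illustrative.
def demo_noui(inputs, llm_kwargs, history, sys_prompt, observe_window=None):
    return f"[noui] {inputs}"

registry = {
    "demo-model": {
        "fn_without_ui": demo_noui,   # non-streaming entry point
        "endpoint": None,             # None for locally-served models
        "max_token": 4096,
    },
}

def predict_no_ui(model, inputs):
    # KeyError here is the "check AVAIL_LLM_MODELS in config" failure mode.
    return registry[model]["fn_without_ui"](inputs, {}, [], "")
```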
@@ -450,14 +431,14 @@ if "chatglm_onnx" in AVAIL_LLM_MODELS:
         })
     except:
         print(trimmed_format_exc())
-if "qwen-local" in AVAIL_LLM_MODELS:
+if "qwen" in AVAIL_LLM_MODELS:
     try:
-        from .bridge_qwen_local import predict_no_ui_long_connection as qwen_local_noui
-        from .bridge_qwen_local import predict as qwen_local_ui
+        from .bridge_qwen import predict_no_ui_long_connection as qwen_noui
+        from .bridge_qwen import predict as qwen_ui
         model_info.update({
-            "qwen-local": {
-                "fn_with_ui": qwen_local_ui,
-                "fn_without_ui": qwen_local_noui,
+            "qwen": {
+                "fn_with_ui": qwen_ui,
+                "fn_without_ui": qwen_noui,
                 "endpoint": None,
                 "max_token": 4096,
                 "tokenizer": tokenizer_gpt35,
@@ -466,32 +447,16 @@ if "qwen-local" in AVAIL_LLM_MODELS:
         })
     except:
         print(trimmed_format_exc())
-if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-max" in AVAIL_LLM_MODELS: # zhipuai
+if "chatgpt_website" in AVAIL_LLM_MODELS: # 接入一些逆向工程https://github.com/acheong08/ChatGPT-to-API/
     try:
-        from .bridge_qwen import predict_no_ui_long_connection as qwen_noui
-        from .bridge_qwen import predict as qwen_ui
+        from .bridge_chatgpt_website import predict_no_ui_long_connection as chatgpt_website_noui
+        from .bridge_chatgpt_website import predict as chatgpt_website_ui
         model_info.update({
-            "qwen-turbo": {
-                "fn_with_ui": qwen_ui,
-                "fn_without_ui": qwen_noui,
-                "endpoint": None,
-                "max_token": 6144,
-                "tokenizer": tokenizer_gpt35,
-                "token_cnt": get_token_num_gpt35,
-            },
-            "qwen-plus": {
-                "fn_with_ui": qwen_ui,
-                "fn_without_ui": qwen_noui,
-                "endpoint": None,
-                "max_token": 30720,
-                "tokenizer": tokenizer_gpt35,
-                "token_cnt": get_token_num_gpt35,
-            },
-            "qwen-max": {
-                "fn_with_ui": qwen_ui,
-                "fn_without_ui": qwen_noui,
-                "endpoint": None,
-                "max_token": 28672,
+            "chatgpt_website": {
+                "fn_with_ui": chatgpt_website_ui,
+                "fn_without_ui": chatgpt_website_noui,
+                "endpoint": openai_endpoint,
+                "max_token": 4096,
                 "tokenizer": tokenizer_gpt35,
                 "token_cnt": get_token_num_gpt35,
             }
@@ -594,23 +559,6 @@ if "deepseekcoder" in AVAIL_LLM_MODELS: # deepseekcoder
         })
     except:
         print(trimmed_format_exc())
-# if "skylark" in AVAIL_LLM_MODELS:
-#     try:
-#         from .bridge_skylark2 import predict_no_ui_long_connection as skylark_noui
-#         from .bridge_skylark2 import predict as skylark_ui
-#         model_info.update({
-#             "skylark": {
-#                 "fn_with_ui": skylark_ui,
-#                 "fn_without_ui": skylark_noui,
-#                 "endpoint": None,
-#                 "max_token": 4096,
-#                 "tokenizer": tokenizer_gpt35,
-#                 "token_cnt": get_token_num_gpt35,
-#             }
-#         })
-#     except:
-#         print(trimmed_format_exc())

 # <-- 用于定义和切换多个azure模型 -->
 AZURE_CFG_ARRAY = get_conf("AZURE_CFG_ARRAY")
@@ -668,7 +616,6 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, obser
     """
     import threading, time, copy

-    inputs = apply_gpt_academic_string_mask(inputs, mode="show_llm")
     model = llm_kwargs['llm_model']
     n_model = 1
     if '&' not in model:

@@ -742,7 +689,6 @@ def predict(inputs, llm_kwargs, *args, **kwargs):
     additional_fn代表点击的哪个按钮,按钮见functional.py
     """

-    inputs = apply_gpt_academic_string_mask(inputs, mode="show_llm")
     method = model_info[llm_kwargs['llm_model']]["fn_with_ui"]  # 如果这里报错,检查config中的AVAIL_LLM_MODELS选项
     yield from method(inputs, llm_kwargs, *args, **kwargs)
@@ -28,6 +28,12 @@ proxies, TIMEOUT_SECONDS, MAX_RETRY, API_ORG, AZURE_CFG_ARRAY = \
 timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check proxy settings in config.py.' + \
                   '网络错误,检查代理服务器是否可用,以及代理设置的格式是否正确,格式须是[协议]://[地址]:[端口],缺一不可。'

+def report_invalid_key(key):
+    if get_conf("BLOCK_INVALID_APIKEY"):
+        # 实验性功能,自动检测并屏蔽失效的KEY,请勿使用
+        from request_llms.key_manager import ApiKeyManager
+        api_key = ApiKeyManager().add_key_to_blacklist(key)
+
 def get_full_error(chunk, stream_response):
     """
     获取完整的从Openai返回的报错
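The added `report_invalid_key` is gated behind the `BLOCK_INVALID_APIKEY` config flag and delegates to `ApiKeyManager`, whose implementation is not shown in this diff. A hypothetical minimal sketch of what such a blacklist manager could look like; the class shape below is assumed, not taken from the repository:

```python
# Hypothetical sketch of a key blacklist; the real ApiKeyManager in
# request_llms/key_manager.py may differ.
class ApiKeyManagerSketch:
    _blacklist = set()  # shared across instances for the process lifetime

    def add_key_to_blacklist(self, key):
        self._blacklist.add(key)
        return key

    def select_avail_key(self, keys):
        # Prefer keys that have not been reported dead yet.
        avail = [k for k in keys if k not in self._blacklist]
        if not avail:
            raise RuntimeError("All API keys are blacklisted.")
        return avail[0]
```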
@@ -51,8 +57,7 @@ def decode_chunk(chunk):
         chunkjson = json.loads(chunk_decoded[6:])
         has_choices = 'choices' in chunkjson
         if has_choices: choice_valid = (len(chunkjson['choices']) > 0)
-        if has_choices and choice_valid: has_content = ("content" in chunkjson['choices'][0]["delta"])
-        if has_content: has_content = (chunkjson['choices'][0]["delta"]["content"] is not None)
+        if has_choices and choice_valid: has_content = "content" in chunkjson['choices'][0]["delta"]
         if has_choices and choice_valid: has_role = "role" in chunkjson['choices'][0]["delta"]
     except:
         pass
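`decode_chunk` peels the `data: ` prefix off each server-sent-event line and probes the first choice's delta; the newer side of this diff additionally rejects deltas whose content is an explicit null. A standalone illustration of the same parse, with an invented JSON payload:

```python
# Parsing one OpenAI-style SSE chunk, as decode_chunk does above.
import json

chunk_decoded = 'data: {"choices": [{"delta": {"content": "Hel"}}]}'
payload = json.loads(chunk_decoded[6:])        # strip the "data: " prefix
delta = payload["choices"][0]["delta"]
has_content = "content" in delta and delta["content"] is not None
print(has_content, delta.get("content"))       # True Hel
```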
@@ -83,7 +88,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
         用于负责跨越线程传递已经输出的部分,大部分时候仅仅为了fancy的视觉效果,留空即可。observe_window[0]:观测窗。observe_window[1]:看门狗
     """
     watch_dog_patience = 5 # 看门狗的耐心, 设置5秒即可
-    headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt=sys_prompt, stream=True)
+    headers, payload, api_key = generate_payload(inputs, llm_kwargs, history, system_prompt=sys_prompt, stream=True)
     retry = 0
     while True:
         try:
@@ -102,25 +107,22 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
     result = ''
     json_data = None
     while True:
-        try: chunk = next(stream_response)
+        try: chunk = next(stream_response).decode()
         except StopIteration:
             break
         except requests.exceptions.ConnectionError:
-            chunk = next(stream_response) # 失败了,重试一次?再失败就没办法了。
-        chunk_decoded, chunkjson, has_choices, choice_valid, has_content, has_role = decode_chunk(chunk)
-        if len(chunk_decoded)==0: continue
-        if not chunk_decoded.startswith('data:'):
-            error_msg = get_full_error(chunk, stream_response).decode()
+            chunk = next(stream_response).decode() # 失败了,重试一次?再失败就没办法了。
+        if len(chunk)==0: continue
+        if not chunk.startswith('data:'):
+            error_msg = get_full_error(chunk.encode('utf8'), stream_response).decode()
             if "reduce the length" in error_msg:
                 raise ConnectionAbortedError("OpenAI拒绝了请求:" + error_msg)
             else:
+                if "API key has been deactivated" in error_msg: report_invalid_key(api_key)
+                elif "exceeded your current quota" in error_msg: report_invalid_key(api_key)
                 raise RuntimeError("OpenAI拒绝了请求:" + error_msg)
-        if ('data: [DONE]' in chunk_decoded): break # api2d 正常完成
-        # 提前读取一些信息 (用于判断异常)
-        if has_choices and not choice_valid:
-            # 一些垃圾第三方接口的出现这样的错误
-            continue
-        json_data = chunkjson['choices'][0]
+        if ('data: [DONE]' in chunk): break # api2d 正常完成
+        json_data = json.loads(chunk.lstrip('data:'))['choices'][0]
         delta = json_data["delta"]
         if len(delta) == 0: break
         if "role" in delta: continue
@@ -180,7 +182,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
         time.sleep(2)

     try:
-        headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt, stream)
+        headers, payload, api_key = generate_payload(inputs, llm_kwargs, history, system_prompt, stream)
     except RuntimeError as e:
         chatbot[-1] = (inputs, f"您提供的api-key不满足要求,不包含任何可用于{llm_kwargs['llm_model']}的api-key。您可能选择了错误的模型或请求源。")
         yield from update_ui(chatbot=chatbot, history=history, msg="api-key不满足要求") # 刷新界面
@@ -228,7 +230,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
                     yield from update_ui(chatbot=chatbot, history=history, msg="检测到有缺陷的非OpenAI官方接口,建议选择更稳定的接口。")
                     break
                 # in all other cases, return the error directly
-                chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
+                chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg, api_key)
                 yield from update_ui(chatbot=chatbot, history=history, msg="非OpenAI官方接口返回了错误:" + chunk.decode()) # refresh the UI
                 return

@@ -244,9 +246,6 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
         if has_choices and not choice_valid:
             # some shoddy third-party endpoints produce this kind of error
             continue
-        if ('data: [DONE]' not in chunk_decoded) and len(chunk_decoded) > 0 and (chunkjson is None):
-            # something strange was passed in
-            raise ValueError(f'无法读取以下数据,请检查配置。\n\n{chunk_decoded}')
         # the former is API2D's stop condition, the latter is OpenAI's
         if ('data: [DONE]' in chunk_decoded) or (len(chunkjson['choices'][0]["delta"]) == 0):
             # judged to be the end of the stream; gpt_replying_buffer is fully written
@@ -273,12 +272,12 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
                 chunk = get_full_error(chunk, stream_response)
                 chunk_decoded = chunk.decode()
                 error_msg = chunk_decoded
-                chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
+                chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg, api_key)
                 yield from update_ui(chatbot=chatbot, history=history, msg="Json异常" + error_msg) # refresh the UI
                 print(error_msg)
                 return

-def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg):
+def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg, api_key=""):
     from .bridge_all import model_info
     openai_website = ' 请登录OpenAI查看详情 https://platform.openai.com/signup'
     if "reduce the length" in error_msg:
@@ -289,15 +288,15 @@ def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
     elif "does not exist" in error_msg:
         chatbot[-1] = (chatbot[-1][0], f"[Local Message] Model {llm_kwargs['llm_model']} does not exist. 模型不存在, 或者您没有获得体验资格.")
     elif "Incorrect API key" in error_msg:
-        chatbot[-1] = (chatbot[-1][0], "[Local Message] Incorrect API key. OpenAI以提供了不正确的API_KEY为由, 拒绝服务. " + openai_website)
+        chatbot[-1] = (chatbot[-1][0], "[Local Message] Incorrect API key. OpenAI以提供了不正确的API_KEY为由, 拒绝服务. " + openai_website); report_invalid_key(api_key)
     elif "exceeded your current quota" in error_msg:
-        chatbot[-1] = (chatbot[-1][0], "[Local Message] You exceeded your current quota. OpenAI以账户额度不足为由, 拒绝服务." + openai_website)
+        chatbot[-1] = (chatbot[-1][0], "[Local Message] You exceeded your current quota. OpenAI以账户额度不足为由, 拒绝服务." + openai_website); report_invalid_key(api_key)
     elif "account is not active" in error_msg:
-        chatbot[-1] = (chatbot[-1][0], "[Local Message] Your account is not active. OpenAI以账户失效为由, 拒绝服务." + openai_website)
+        chatbot[-1] = (chatbot[-1][0], "[Local Message] Your account is not active. OpenAI以账户失效为由, 拒绝服务." + openai_website); report_invalid_key(api_key)
     elif "associated with a deactivated account" in error_msg:
-        chatbot[-1] = (chatbot[-1][0], "[Local Message] You are associated with a deactivated account. OpenAI以账户失效为由, 拒绝服务." + openai_website)
+        chatbot[-1] = (chatbot[-1][0], "[Local Message] You are associated with a deactivated account. OpenAI以账户失效为由, 拒绝服务." + openai_website); report_invalid_key(api_key)
     elif "API key has been deactivated" in error_msg:
-        chatbot[-1] = (chatbot[-1][0], "[Local Message] API key has been deactivated. OpenAI以账户失效为由, 拒绝服务." + openai_website)
+        chatbot[-1] = (chatbot[-1][0], "[Local Message] API key has been deactivated. OpenAI以账户失效为由, 拒绝服务." + openai_website); report_invalid_key(api_key)
     elif "bad forward key" in error_msg:
         chatbot[-1] = (chatbot[-1][0], "[Local Message] Bad forward key. API2D账户额度不足.")
     elif "Not enough point" in error_msg:
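Every key-related error branch on the head side now also calls `report_invalid_key(api_key)`. Its body is not part of this diff; the sketch below is a purely hypothetical minimal implementation (the names `_invalid_keys` and `is_key_known_bad` are invented) that only shows the idea of remembering rejected keys so a key selector can skip them later:

```python
import threading

_invalid_keys = set()                 # hypothetical module-level store
_invalid_keys_lock = threading.Lock()

def report_invalid_key(api_key: str) -> None:
    # Hypothetical sketch: remember keys the endpoint rejected so the key
    # rotation logic can avoid them. The real helper may log or persist instead.
    if not api_key:
        return
    with _invalid_keys_lock:
        _invalid_keys.add(api_key)

def is_key_known_bad(api_key: str) -> bool:
    with _invalid_keys_lock:
        return api_key in _invalid_keys
```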
@@ -380,6 +379,6 @@ def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
         print(f" {llm_kwargs['llm_model']} : {conversation_cnt} : {inputs[:100]} ..........")
     except:
         print('输入中可能存在乱码。')
-    return headers,payload
+    return headers, payload, api_key

@@ -1,114 +0,0 @@
-# encoding: utf-8
-# @Time : 2023/12/21
-# @Author : Spike
-# @Descr :
-import json
-import re
-import os
-import time
-from request_llms.com_google import GoogleChatInit
-from toolbox import get_conf, update_ui, update_ui_lastest_msg, have_any_recent_upload_image_files, trimmed_format_exc
-
-proxies, TIMEOUT_SECONDS, MAX_RETRY = get_conf('proxies', 'TIMEOUT_SECONDS', 'MAX_RETRY')
-timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check proxy settings in config.py.' + \
-                  '网络错误,检查代理服务器是否可用,以及代理设置的格式是否正确,格式须是[协议]://[地址]:[端口],缺一不可。'
-
-
-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None,
-                                  console_slience=False):
-    # check the API key
-    if get_conf("GEMINI_API_KEY") == "":
-        raise ValueError(f"请配置 GEMINI_API_KEY。")
-
-    genai = GoogleChatInit()
-    watch_dog_patience = 5 # watchdog patience; 5 seconds is enough
-    gpt_replying_buffer = ''
-    stream_response = genai.generate_chat(inputs, llm_kwargs, history, sys_prompt)
-    for response in stream_response:
-        results = response.decode()
-        match = re.search(r'"text":\s*"((?:[^"\\]|\\.)*)"', results, flags=re.DOTALL)
-        error_match = re.search(r'\"message\":\s*\"(.*?)\"', results, flags=re.DOTALL)
-        if match:
-            try:
-                paraphrase = json.loads('{"text": "%s"}' % match.group(1))
-            except:
-                raise ValueError(f"解析GEMINI消息出错。")
-            buffer = paraphrase['text']
-            gpt_replying_buffer += buffer
-            if len(observe_window) >= 1:
-                observe_window[0] = gpt_replying_buffer
-            if len(observe_window) >= 2:
-                if (time.time() - observe_window[1]) > watch_dog_patience: raise RuntimeError("程序终止。")
-        if error_match:
-            raise RuntimeError(f'{gpt_replying_buffer} 对话错误')
-    return gpt_replying_buffer
-
-
-def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream=True, additional_fn=None):
-    # check the API key
-    if get_conf("GEMINI_API_KEY") == "":
-        yield from update_ui_lastest_msg(f"请配置 GEMINI_API_KEY。", chatbot=chatbot, history=history, delay=0)
-        return
-
-    # adapt the polishing-function area
-    if additional_fn is not None:
-        from core_functional import handle_core_functionality
-        inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
-
-    if "vision" in llm_kwargs["llm_model"]:
-        have_recent_file, image_paths = have_any_recent_upload_image_files(chatbot)
-        def make_media_input(inputs, image_paths):
-            for image_path in image_paths:
-                inputs = inputs + f'<br/><br/><div align="center"><img src="file={os.path.abspath(image_path)}"></div>'
-            return inputs
-        if have_recent_file:
-            inputs = make_media_input(inputs, image_paths)
-
-    chatbot.append((inputs, ""))
-    yield from update_ui(chatbot=chatbot, history=history)
-    genai = GoogleChatInit()
-    retry = 0
-    while True:
-        try:
-            stream_response = genai.generate_chat(inputs, llm_kwargs, history, system_prompt)
-            break
-        except Exception as e:
-            retry += 1
-            chatbot[-1] = ((chatbot[-1][0], trimmed_format_exc()))
-            yield from update_ui(chatbot=chatbot, history=history, msg="请求失败") # refresh the UI
-            return
-    gpt_replying_buffer = ""
-    gpt_security_policy = ""
-    history.extend([inputs, ''])
-    for response in stream_response:
-        results = response.decode("utf-8") # got tripped up by this decode step..
-        gpt_security_policy += results
-        match = re.search(r'"text":\s*"((?:[^"\\]|\\.)*)"', results, flags=re.DOTALL)
-        error_match = re.search(r'\"message\":\s*\"(.*)\"', results, flags=re.DOTALL)
-        if match:
-            try:
-                paraphrase = json.loads('{"text": "%s"}' % match.group(1))
-            except:
-                raise ValueError(f"解析GEMINI消息出错。")
-            gpt_replying_buffer += paraphrase['text'] # processed via the json parsing library
-            chatbot[-1] = (inputs, gpt_replying_buffer)
-            history[-1] = gpt_replying_buffer
-            yield from update_ui(chatbot=chatbot, history=history)
-        if error_match:
-            history = history[-2] # do not include the erroneous turn in the conversation
-            chatbot[-1] = (inputs, gpt_replying_buffer + f"对话错误,请查看message\n\n```\n{error_match.group(1)}\n```")
-            yield from update_ui(chatbot=chatbot, history=history)
-            raise RuntimeError('对话错误')
-    if not gpt_replying_buffer:
-        history = history[-2] # do not include the erroneous turn in the conversation
-        chatbot[-1] = (inputs, gpt_replying_buffer + f"触发了Google的安全访问策略,没有回答\n\n```\n{gpt_security_policy}\n```")
-        yield from update_ui(chatbot=chatbot, history=history)
-
-
-
-if __name__ == '__main__':
-    import sys
-    llm_kwargs = {'llm_model': 'gemini-pro'}
-    result = predict('Write long a story about a magic backpack.', llm_kwargs, llm_kwargs, [])
-    for i in result:
-        print(i)
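The deleted Gemini bridge never parsed the streamed body as a whole; it pulled `"text"` fields out of the raw fragments with a regex and round-tripped each captured group through `json.loads` to resolve escapes. A self-contained illustration of that trick (the sample payload is invented for the demo):

```python
import json
import re

def extract_text_fragments(raw: str):
    # Pull every "text" value out of a streamed response chunk. Escapes are
    # handled by re-wrapping the captured group in a tiny JSON object and
    # parsing it, the same approach as the removed bridge.
    out = []
    for m in re.finditer(r'"text":\s*"((?:[^"\\]|\\.)*)"', raw, flags=re.DOTALL):
        out.append(json.loads('{"text": "%s"}' % m.group(1))['text'])
    return out

sample = '{"candidates": [{"content": {"parts": [{"text": "Hello\\nworld"}]}}]}'
print(extract_text_fragments(sample))  # ['Hello\nworld']
```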
@@ -1,17 +1,16 @@
 """
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+========================================================================
 Part 1: from EdgeGPT.py
 https://github.com/acheong08/EdgeGPT
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+========================================================================
 """
 from .edge_gpt_free import Chatbot as NewbingChatbot
-
 load_message = "等待NewBing响应。"

 """
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+========================================================================
 Part 2: subprocess worker (the calling body)
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+========================================================================
 """
 import time
 import json
@@ -23,30 +22,19 @@ import threading
 from toolbox import update_ui, get_conf, trimmed_format_exc
 from multiprocessing import Process, Pipe

-
 def preprocess_newbing_out(s):
-    pattern = r"\^(\d+)\^"  # match ^number^
-    sub = lambda m: "(" + m.group(1) + ")"  # use the matched number as the replacement
+    pattern = r'\^(\d+)\^' # match ^number^
+    sub = lambda m: '('+m.group(1)+')' # use the matched number as the replacement
     result = re.sub(pattern, sub, s) # perform the substitution
-    if "[1]" in result:
-        result += (
-            "\n\n```reference\n"
-            + "\n".join([r for r in result.split("\n") if r.startswith("[")])
-            + "\n```\n"
-        )
+    if '[1]' in result:
+        result += '\n\n```reference\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n'
     return result

-
 def preprocess_newbing_out_simple(result):
-    if "[1]" in result:
-        result += (
-            "\n\n```reference\n"
-            + "\n".join([r for r in result.split("\n") if r.startswith("[")])
-            + "\n```\n"
-        )
+    if '[1]' in result:
+        result += '\n\n```reference\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n'
     return result

-
 class NewBingHandle(Process):
     def __init__(self):
         super().__init__(daemon=True)
@@ -63,7 +51,6 @@ class NewBingHandle(Process):
         try:
             self.success = False
             import certifi, httpx, rich
-
             self.info = "依赖检测通过,等待NewBing响应。注意目前不能多人同时调用NewBing接口(有线程锁),否则将导致每个人的NewBing问询历史互相渗透。调用NewBing时,会自动使用已配置的代理。"
             self.success = True
         except:
@@ -75,19 +62,18 @@ class NewBingHandle(Process):

     async def async_run(self):
         # read configuration
-        NEWBING_STYLE = get_conf("NEWBING_STYLE")
+        NEWBING_STYLE = get_conf('NEWBING_STYLE')
         from request_llms.bridge_all import model_info
-        endpoint = model_info["newbing"]["endpoint"]
-
+        endpoint = model_info['newbing']['endpoint']
         while True:
             # wait
             kwargs = self.child.recv()
-            question = kwargs["query"]
-            history = kwargs["history"]
-            system_prompt = kwargs["system_prompt"]
+            question=kwargs['query']
+            history=kwargs['history']
+            system_prompt=kwargs['system_prompt']

             # reset?
-            if len(self.local_history) > 0 and len(history) == 0:
+            if len(self.local_history) > 0 and len(history)==0:
                 await self.newbing_model.reset()
                 self.local_history = []

@@ -95,19 +81,19 @@ class NewBingHandle(Process):
             prompt = ""
             if system_prompt not in self.local_history:
                 self.local_history.append(system_prompt)
-                prompt += system_prompt + "\n"
+                prompt += system_prompt + '\n'

             # append history
             for ab in history:
                 a, b = ab
                 if a not in self.local_history:
                     self.local_history.append(a)
-                    prompt += a + "\n"
+                    prompt += a + '\n'

             # the question
             prompt += question
             self.local_history.append(question)
-            print("question:", prompt)
+            print('question:', prompt)
             # submit
             async for final, response in self.newbing_model.ask_stream(
                 prompt=question,
@@ -118,10 +104,11 @@ class NewBingHandle(Process):
                     print(response)
                     self.child.send(str(response))
                 else:
-                    print("-------- receive final ---------")
-                    self.child.send("[Finish]")
+                    print('-------- receive final ---------')
+                    self.child.send('[Finish]')
                     # self.local_history.append(response)
+

     def run(self):
         """
         This function runs in the subprocess
@@ -131,37 +118,32 @@ class NewBingHandle(Process):
         self.local_history = []
         if (self.newbing_model is None) or (not self.success):
             # proxy settings
-            proxies, NEWBING_COOKIES = get_conf("proxies", "NEWBING_COOKIES")
+            proxies, NEWBING_COOKIES = get_conf('proxies', 'NEWBING_COOKIES')
             if proxies is None:
                 self.proxies_https = None
             else:
-                self.proxies_https = proxies["https"]
+                self.proxies_https = proxies['https']

             if (NEWBING_COOKIES is not None) and len(NEWBING_COOKIES) > 100:
                 try:
                     cookies = json.loads(NEWBING_COOKIES)
                 except:
                     self.success = False
-                    tb_str = "\n```\n" + trimmed_format_exc() + "\n```\n"
-                    self.child.send(f"[Local Message] NEWBING_COOKIES未填写或有格式错误。")
-                    self.child.send("[Fail]")
-                    self.child.send("[Finish]")
+                    tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
+                    self.child.send(f'[Local Message] NEWBING_COOKIES未填写或有格式错误。')
+                    self.child.send('[Fail]'); self.child.send('[Finish]')
                     raise RuntimeError(f"NEWBING_COOKIES未填写或有格式错误。")
             else:
                 cookies = None

             try:
-                self.newbing_model = NewbingChatbot(
-                    proxy=self.proxies_https, cookies=cookies
-                )
+                self.newbing_model = NewbingChatbot(proxy=self.proxies_https, cookies=cookies)
             except:
                 self.success = False
-                tb_str = "\n```\n" + trimmed_format_exc() + "\n```\n"
-                self.child.send(
-                    f"[Local Message] 不能加载Newbing组件,请注意Newbing组件已不再维护。{tb_str}"
-                )
-                self.child.send("[Fail]")
-                self.child.send("[Finish]")
+                tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
+                self.child.send(f'[Local Message] 不能加载Newbing组件,请注意Newbing组件已不再维护。{tb_str}')
+                self.child.send('[Fail]')
+                self.child.send('[Finish]')
                 raise RuntimeError(f"不能加载Newbing组件,请注意Newbing组件已不再维护。")

             self.success = True
@@ -169,12 +151,10 @@ class NewBingHandle(Process):
             # enter the task-waiting state
             asyncio.run(self.async_run())
         except Exception:
-            tb_str = "\n```\n" + trimmed_format_exc() + "\n```\n"
-            self.child.send(
-                f"[Local Message] Newbing 请求失败,报错信息如下. 如果是与网络相关的问题,建议更换代理协议(推荐http)或代理节点 {tb_str}."
-            )
-            self.child.send("[Fail]")
-            self.child.send("[Finish]")
+            tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
+            self.child.send(f'[Local Message] Newbing 请求失败,报错信息如下. 如果是与网络相关的问题,建议更换代理协议(推荐http)或代理节点 {tb_str}.')
+            self.child.send('[Fail]')
+            self.child.send('[Finish]')

     def stream_chat(self, **kwargs):
         """
@@ -184,33 +164,21 @@ class NewBingHandle(Process):
         self.parent.send(kwargs) # send the request to the subprocess
         while True:
             res = self.parent.recv() # wait for a NewBing reply fragment
-            if res == "[Finish]":
-                break  # done
-            elif res == "[Fail]":
-                self.success = False
-                break  # failed
-            else:
-                yield res  # a NewBing reply fragment
+            if res == '[Finish]': break # done
+            elif res == '[Fail]': self.success = False; break # failed
+            else: yield res # a NewBing reply fragment
         self.threadLock.release() # release the thread lock


 """
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+========================================================================
 Part 3: unified calling interface for the main process
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+========================================================================
 """
 global newbingfree_handle
 newbingfree_handle = None

-
-def predict_no_ui_long_connection(
-    inputs,
-    llm_kwargs,
-    history=[],
-    sys_prompt="",
-    observe_window=[],
-    console_slience=False,
-):
+def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
     """
     Multithreading method
     See request_llms/bridge_all.py for this function's documentation
@@ -218,8 +186,7 @@ def predict_no_ui_long_connection(
     global newbingfree_handle
     if (newbingfree_handle is None) or (not newbingfree_handle.success):
         newbingfree_handle = NewBingHandle()
-        if len(observe_window) >= 1:
-            observe_window[0] = load_message + "\n\n" + newbingfree_handle.info
+        if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + newbingfree_handle.info
         if not newbingfree_handle.success:
             error = newbingfree_handle.info
             newbingfree_handle = None
@@ -227,39 +194,20 @@ def predict_no_ui_long_connection(

     # there is no sys_prompt interface, so the prompt is appended to history
     history_feedin = []
-    for i in range(len(history) // 2):
-        history_feedin.append([history[2 * i], history[2 * i + 1]])
+    for i in range(len(history)//2):
+        history_feedin.append([history[2*i], history[2*i+1]] )

     watch_dog_patience = 5 # watchdog patience; 5 seconds is enough
     response = ""
-    if len(observe_window) >= 1:
-        observe_window[0] = "[Local Message] 等待NewBing响应中 ..."
-    for response in newbingfree_handle.stream_chat(
-        query=inputs,
-        history=history_feedin,
-        system_prompt=sys_prompt,
-        max_length=llm_kwargs["max_length"],
-        top_p=llm_kwargs["top_p"],
-        temperature=llm_kwargs["temperature"],
-    ):
-        if len(observe_window) >= 1:
-            observe_window[0] = preprocess_newbing_out_simple(response)
+    if len(observe_window) >= 1: observe_window[0] = "[Local Message] 等待NewBing响应中 ..."
+    for response in newbingfree_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
+        if len(observe_window) >= 1: observe_window[0] = preprocess_newbing_out_simple(response)
         if len(observe_window) >= 2:
-            if (time.time() - observe_window[1]) > watch_dog_patience:
+            if (time.time()-observe_window[1]) > watch_dog_patience:
                 raise RuntimeError("程序终止。")
     return preprocess_newbing_out_simple(response)

-
-def predict(
-    inputs,
-    llm_kwargs,
-    plugin_kwargs,
-    chatbot,
-    history=[],
-    system_prompt="",
-    stream=True,
-    additional_fn=None,
-):
+def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
     """
     Single-threaded method
     See request_llms/bridge_all.py for this function's documentation
@@ -277,35 +225,21 @@ def predict(

     if additional_fn is not None:
         from core_functional import handle_core_functionality
-        inputs, history = handle_core_functionality(
-            additional_fn, inputs, history, chatbot
-        )
+        inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)

     history_feedin = []
-    for i in range(len(history) // 2):
-        history_feedin.append([history[2 * i], history[2 * i + 1]])
+    for i in range(len(history)//2):
+        history_feedin.append([history[2*i], history[2*i+1]] )

     chatbot[-1] = (inputs, "[Local Message] 等待NewBing响应中 ...")
     response = "[Local Message] 等待NewBing响应中 ..."
-    yield from update_ui(
-        chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。"
-    )
-    for response in newbingfree_handle.stream_chat(
-        query=inputs,
-        history=history_feedin,
-        system_prompt=system_prompt,
-        max_length=llm_kwargs["max_length"],
-        top_p=llm_kwargs["top_p"],
-        temperature=llm_kwargs["temperature"],
-    ):
+    yield from update_ui(chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。")
+    for response in newbingfree_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
         chatbot[-1] = (inputs, preprocess_newbing_out(response))
-        yield from update_ui(
-            chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。"
-        )
-    if response == "[Local Message] 等待NewBing响应中 ...":
-        response = "[Local Message] NewBing响应异常,请刷新界面重试 ..."
+        yield from update_ui(chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。")
+    if response == "[Local Message] 等待NewBing响应中 ...": response = "[Local Message] NewBing响应异常,请刷新界面重试 ..."
     history.extend([inputs, response])
-    logging.info(f"[raw_input] {inputs}")
-    logging.info(f"[response] {response}")
+    logging.info(f'[raw_input] {inputs}')
+    logging.info(f'[response] {response}')
     yield from update_ui(chatbot=chatbot, history=history, msg="完成全部响应,请提交新问题。")

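Both versions of this file keep the same worker protocol: a `multiprocessing.Pipe` streams reply fragments from the subprocess, and the plain strings `[Fail]` and `[Finish]` act as end-of-stream sentinels. A toy, runnable reduction of that pattern, with generic names rather than the repository's classes:

```python
from multiprocessing import Pipe, Process

def worker(child, prompt):
    # Stream fragments, then a sentinel, mirroring the [Fail]/[Finish] protocol.
    try:
        for token in prompt.split():
            child.send(token)
        child.send('[Finish]')
    except Exception:
        child.send('[Fail]')
        child.send('[Finish]')

if __name__ == '__main__':
    parent, child = Pipe()
    p = Process(target=worker, args=(child, 'streamed reply from the worker'), daemon=True)
    p.start()
    while True:
        res = parent.recv()
        if res == '[Finish]': break   # normal end of stream
        elif res == '[Fail]': break   # worker-side failure
        else: print(res)              # a reply fragment
    p.join()
```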
@@ -1,62 +1,59 @@
-import time
-import os
-from toolbox import update_ui, get_conf, update_ui_lastest_msg
-from toolbox import check_packages, report_exception
-
-model_name = 'Qwen'
-
-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
-    """
-    ⭐ Multithreading method
-    See request_llms/bridge_all.py for this function's documentation
-    """
-    watch_dog_patience = 5
-    response = ""
-
-    from .com_qwenapi import QwenRequestInstance
-    sri = QwenRequestInstance()
-    for response in sri.generate(inputs, llm_kwargs, history, sys_prompt):
-        if len(observe_window) >= 1:
-            observe_window[0] = response
-        if len(observe_window) >= 2:
-            if (time.time()-observe_window[1]) > watch_dog_patience: raise RuntimeError("程序终止。")
-    return response
-
-def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
-    """
-    ⭐ Single-threaded method
-    See request_llms/bridge_all.py for this function's documentation
-    """
-    chatbot.append((inputs, ""))
-    yield from update_ui(chatbot=chatbot, history=history)
-
-    # try to import dependencies; if any are missing, suggest how to install them
-    try:
-        check_packages(["dashscope"])
-    except:
-        yield from update_ui_lastest_msg(f"导入软件依赖失败。使用该模型需要额外依赖,安装方法```pip install --upgrade dashscope```。",
-                                         chatbot=chatbot, history=history, delay=0)
-        return
-
-    # check DASHSCOPE_API_KEY
-    if get_conf("DASHSCOPE_API_KEY") == "":
-        yield from update_ui_lastest_msg(f"请配置 DASHSCOPE_API_KEY。",
-                                         chatbot=chatbot, history=history, delay=0)
-        return
-
-    if additional_fn is not None:
-        from core_functional import handle_core_functionality
-        inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
-
-    # start receiving the reply
-    from .com_qwenapi import QwenRequestInstance
-    sri = QwenRequestInstance()
-    for response in sri.generate(inputs, llm_kwargs, history, system_prompt):
-        chatbot[-1] = (inputs, response)
-        yield from update_ui(chatbot=chatbot, history=history)
-
-    # finalize the output
-    if response == f"[Local Message] 等待{model_name}响应中 ...":
-        response = f"[Local Message] {model_name}响应异常 ..."
-    history.extend([inputs, response])
-    yield from update_ui(chatbot=chatbot, history=history)
+model_name = "Qwen"
+cmd_to_install = "`pip install -r request_llms/requirements_qwen.txt`"
+
+from toolbox import ProxyNetworkActivate, get_conf
+from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns
+
+
+# ------------------------------------------------------------------------------------------------------------------------
+# 🔌💻 Local Model
+# ------------------------------------------------------------------------------------------------------------------------
+class GetQwenLMHandle(LocalLLMHandle):
+
+    def load_model_info(self):
+        # 🏃♂️🏃♂️🏃♂️ runs in the subprocess
+        self.model_name = model_name
+        self.cmd_to_install = cmd_to_install
+
+    def load_model_and_tokenizer(self):
+        # 🏃♂️🏃♂️🏃♂️ runs in the subprocess
+        # from modelscope import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
+        from transformers import AutoModelForCausalLM, AutoTokenizer
+        from transformers.generation import GenerationConfig
+        with ProxyNetworkActivate('Download_LLM'):
+            model_id = get_conf('QWEN_MODEL_SELECTION')
+            self._tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True, resume_download=True)
+            # use fp16
+            model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True).eval()
+            model.generation_config = GenerationConfig.from_pretrained(model_id, trust_remote_code=True) # generation length, top_p and other hyper-parameters can be customized
+            self._model = model
+
+        return self._model, self._tokenizer
+
+    def llm_stream_generator(self, **kwargs):
+        # 🏃♂️🏃♂️🏃♂️ runs in the subprocess
+        def adaptor(kwargs):
+            query = kwargs['query']
+            max_length = kwargs['max_length']
+            top_p = kwargs['top_p']
+            temperature = kwargs['temperature']
+            history = kwargs['history']
+            return query, max_length, top_p, temperature, history
+
+        query, max_length, top_p, temperature, history = adaptor(kwargs)
+
+        for response in self._model.chat_stream(self._tokenizer, query, history=history):
+            yield response
+
+    def try_to_import_special_deps(self, **kwargs):
+        # import something that will raise error if the user does not install requirement_*.txt
+        # 🏃♂️🏃♂️🏃♂️ runs in the main process
+        import importlib
+        importlib.import_module('modelscope')
+
+
+# ------------------------------------------------------------------------------------------------------------------------
+# 🔌💻 GPT-Academic Interface
+# ------------------------------------------------------------------------------------------------------------------------
+predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetQwenLMHandle, model_name)
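After the rewrite, all subprocess plumbing lives in `get_local_llm_predict_fns`, and the subclass only adapts kwargs and yields strings. The sketch below imitates `llm_stream_generator`'s adaptor pattern with a toy stand-in for Qwen's `chat_stream` so it runs anywhere; it assumes, as the new bridge appears to, that the stream yields cumulative snapshots of the growing reply:

```python
def llm_stream_generator(**kwargs):
    # Same adaptor pattern as GetQwenLMHandle.llm_stream_generator.
    def adaptor(kwargs):
        return (kwargs['query'], kwargs['max_length'], kwargs['top_p'],
                kwargs['temperature'], kwargs['history'])

    query, max_length, top_p, temperature, history = adaptor(kwargs)

    def toy_chat_stream(query):
        # Stand-in for self._model.chat_stream(self._tokenizer, query, history=history).
        partial = ""
        for word in ["You", "asked:", query]:
            partial += word + " "
            yield partial.strip()  # cumulative snapshot of the reply so far

    yield from toy_chat_stream(query)

for response in llm_stream_generator(query="hi", max_length=2048, top_p=0.9,
                                     temperature=1.0, history=[]):
    print(response)
```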
@@ -1,59 +0,0 @@
-model_name = "Qwen_Local"
-cmd_to_install = "`pip install -r request_llms/requirements_qwen_local.txt`"
-
-from toolbox import ProxyNetworkActivate, get_conf
-from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns
-
-
-
-# ------------------------------------------------------------------------------------------------------------------------
-# 🔌💻 Local Model
-# ------------------------------------------------------------------------------------------------------------------------
-class GetQwenLMHandle(LocalLLMHandle):
-
-    def load_model_info(self):
-        # 🏃♂️🏃♂️🏃♂️ runs in the subprocess
-        self.model_name = model_name
-        self.cmd_to_install = cmd_to_install
-
-    def load_model_and_tokenizer(self):
-        # 🏃♂️🏃♂️🏃♂️ runs in the subprocess
-        # from modelscope import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
-        from transformers import AutoModelForCausalLM, AutoTokenizer
-        from transformers.generation import GenerationConfig
-        with ProxyNetworkActivate('Download_LLM'):
-            model_id = get_conf('QWEN_LOCAL_MODEL_SELECTION')
-            self._tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True, resume_download=True)
-            # use fp16
-            model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True).eval()
-            model.generation_config = GenerationConfig.from_pretrained(model_id, trust_remote_code=True) # generation length, top_p and other hyper-parameters can be customized
-            self._model = model
-
-        return self._model, self._tokenizer
-
-    def llm_stream_generator(self, **kwargs):
-        # 🏃♂️🏃♂️🏃♂️ runs in the subprocess
-        def adaptor(kwargs):
-            query = kwargs['query']
-            max_length = kwargs['max_length']
-            top_p = kwargs['top_p']
-            temperature = kwargs['temperature']
-            history = kwargs['history']
-            return query, max_length, top_p, temperature, history
-
-        query, max_length, top_p, temperature, history = adaptor(kwargs)
-
-        for response in self._model.chat_stream(self._tokenizer, query, history=history):
-            yield response
-
-    def try_to_import_special_deps(self, **kwargs):
-        # import something that will raise error if the user does not install requirement_*.txt
-        # 🏃♂️🏃♂️🏃♂️ runs in the main process
-        import importlib
-        importlib.import_module('modelscope')
-
-
-# ------------------------------------------------------------------------------------------------------------------------
-# 🔌💻 GPT-Academic Interface
-# ------------------------------------------------------------------------------------------------------------------------
-predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetQwenLMHandle, model_name)
@@ -1,67 +0,0 @@
-import time
-from toolbox import update_ui, get_conf, update_ui_lastest_msg
-from toolbox import check_packages, report_exception
-
-model_name = '云雀大模型'
-
-def validate_key():
-    YUNQUE_SECRET_KEY = get_conf("YUNQUE_SECRET_KEY")
-    if YUNQUE_SECRET_KEY == '': return False
-    return True
-
-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
-    """
-    ⭐ Multithreading method
-    See request_llms/bridge_all.py for this function's documentation
-    """
-    watch_dog_patience = 5
-    response = ""
-
-    if validate_key() is False:
-        raise RuntimeError('请配置YUNQUE_SECRET_KEY')
-
-    from .com_skylark2api import YUNQUERequestInstance
-    sri = YUNQUERequestInstance()
-    for response in sri.generate(inputs, llm_kwargs, history, sys_prompt):
-        if len(observe_window) >= 1:
-            observe_window[0] = response
-        if len(observe_window) >= 2:
-            if (time.time()-observe_window[1]) > watch_dog_patience: raise RuntimeError("程序终止。")
-    return response
-
-def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
-    """
-    ⭐ Single-threaded method
-    See request_llms/bridge_all.py for this function's documentation
-    """
-    chatbot.append((inputs, ""))
-    yield from update_ui(chatbot=chatbot, history=history)
-
-    # try to import dependencies; if any are missing, suggest how to install them
-    try:
-        check_packages(["zhipuai"])
-    except:
-        yield from update_ui_lastest_msg(f"导入软件依赖失败。使用该模型需要额外依赖,安装方法```pip install --upgrade zhipuai```。",
-                                         chatbot=chatbot, history=history, delay=0)
-        return
-
-    if validate_key() is False:
-        yield from update_ui_lastest_msg(lastmsg="[Local Message] 请配置HUOSHAN_API_KEY", chatbot=chatbot, history=history, delay=0)
-        return
-
-    if additional_fn is not None:
-        from core_functional import handle_core_functionality
-        inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
-
-    # start receiving the reply
-    from .com_skylark2api import YUNQUERequestInstance
-    sri = YUNQUERequestInstance()
-    for response in sri.generate(inputs, llm_kwargs, history, system_prompt):
-        chatbot[-1] = (inputs, response)
-        yield from update_ui(chatbot=chatbot, history=history)
-
-    # finalize the output
-    if response == f"[Local Message] 等待{model_name}响应中 ...":
-        response = f"[Local Message] {model_name}响应异常 ..."
-    history.extend([inputs, response])
-    yield from update_ui(chatbot=chatbot, history=history)
@@ -26,7 +26,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",

     from .com_sparkapi import SparkRequestInstance
     sri = SparkRequestInstance()
-    for response in sri.generate(inputs, llm_kwargs, history, sys_prompt, use_image_api=False):
+    for response in sri.generate(inputs, llm_kwargs, history, sys_prompt):
         if len(observe_window) >= 1:
             observe_window[0] = response
         if len(observe_window) >= 2:
@@ -52,7 +52,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
     # start receiving the reply
     from .com_sparkapi import SparkRequestInstance
     sri = SparkRequestInstance()
-    for response in sri.generate(inputs, llm_kwargs, history, system_prompt, use_image_api=True):
+    for response in sri.generate(inputs, llm_kwargs, history, system_prompt):
         chatbot[-1] = (inputs, response)
         yield from update_ui(chatbot=chatbot, history=history)

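Both Spark hunks leave the surrounding `observe_window` watchdog convention untouched: slot 0 exposes the partial reply to the calling thread, and slot 1 is a heartbeat timestamp the caller must keep fresh or the worker raises. A compact, runnable illustration of that convention (the fragment list stands in for `sri.generate`):

```python
import time

watch_dog_patience = 5  # seconds, the same constant used throughout these bridges

def stream_with_watchdog(fragments, observe_window=[]):
    # fragments: any iterable of partial replies (stands in for sri.generate)
    response = ""
    for response in fragments:
        if len(observe_window) >= 1:
            observe_window[0] = response           # expose partial output to the caller
        if len(observe_window) >= 2:
            if (time.time() - observe_window[1]) > watch_dog_patience:
                raise RuntimeError("程序终止。")    # caller stopped feeding the heartbeat
    return response

window = ["", time.time()]  # [observed text, last heartbeat]
print(stream_with_watchdog(["part", "partial", "partial reply"], window))
```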
@@ -7,15 +7,14 @@ import logging
 import time
 from toolbox import get_conf
 import asyncio

 load_message = "正在加载Claude组件,请稍候..."

 try:
     """
-    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+    ========================================================================
     Part 1: Slack API Client
     https://github.com/yokonsan/claude-in-slack-api
-    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+    ========================================================================
     """
-
     from slack_sdk.errors import SlackApiError
@@ -34,13 +33,10 @@ try:
     - get_reply(): async method. Keep polling the opened channel's messages; a message ending in "Typing…_" means Claude is still producing output, otherwise end the loop.

     """
-
         CHANNEL_ID = None

         async def open_channel(self):
-            response = await self.conversations_open(
-                users=get_conf("SLACK_CLAUDE_BOT_ID")
-            )
+            response = await self.conversations_open(users=get_conf('SLACK_CLAUDE_BOT_ID'))
             self.CHANNEL_ID = response["channel"]["id"]

         async def chat(self, text):
@@ -53,14 +49,9 @@ try:
         async def get_slack_messages(self):
             try:
                 # TODO: history is not supported for now, because multiple users sharing one channel would let their histories leak into each other
-                resp = await self.conversations_history(
-                    channel=self.CHANNEL_ID, oldest=self.LAST_TS, limit=1
-                )
-                msg = [
-                    msg
-                    for msg in resp["messages"]
-                    if msg.get("user") == get_conf("SLACK_CLAUDE_BOT_ID")
-                ]
+                resp = await self.conversations_history(channel=self.CHANNEL_ID, oldest=self.LAST_TS, limit=1)
+                msg = [msg for msg in resp["messages"]
+                       if msg.get("user") == get_conf('SLACK_CLAUDE_BOT_ID')]
                 return msg
             except (SlackApiError, KeyError) as e:
                 raise RuntimeError(f"获取Slack消息失败。")
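The compacted call keeps the same polling idea: fetch at most one message newer than `LAST_TS` from the channel and keep only messages authored by the Claude bot. A small offline mock of that filter, with invented IDs and no `slack_sdk` dependency:

```python
def filter_bot_messages(resp, bot_id):
    # The same list comprehension as the one-liner above, applied to a canned response.
    return [msg for msg in resp["messages"] if msg.get("user") == bot_id]

resp = {"messages": [
    {"user": "U_HUMAN", "text": "hello?"},
    {"user": "U_CLAUDE_BOT", "text": "Hi! _Typing…_"},
]}
print(filter_bot_messages(resp, "U_CLAUDE_BOT"))  # only the bot's message survives
```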
@@ -78,14 +69,13 @@ try:
                 else:
                     yield True, msg["text"]
                     break

 except:
     pass

 """
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+========================================================================
 Part 2: subprocess worker (the calling body)
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+========================================================================
 """

-
@@ -106,7 +96,6 @@ class ClaudeHandle(Process):
         try:
             self.success = False
             import slack_sdk
-
             self.info = "依赖检测通过,等待Claude响应。注意目前不能多人同时调用Claude接口(有线程锁),否则将导致每个人的Claude问询历史互相渗透。调用Claude时,会自动使用已配置的代理。"
             self.success = True
         except:
@@ -121,15 +110,15 @@ class ClaudeHandle(Process):
         while True:
             # wait
             kwargs = self.child.recv()
-            question = kwargs["query"]
-            history = kwargs["history"]
+            question = kwargs['query']
+            history = kwargs['history']

             # start asking the question
             prompt = ""

             # the question
             prompt += question
-            print("question:", prompt)
+            print('question:', prompt)

             # submit
             await self.claude_model.chat(prompt)
@@ -142,15 +131,11 @@ class ClaudeHandle(Process):
             else:
                 # avoid losing the last message
                 slack_msgs = await self.claude_model.get_slack_messages()
-                last_msg = (
-                    slack_msgs[-1]["text"]
-                    if slack_msgs and len(slack_msgs) > 0
-                    else ""
-                )
+                last_msg = slack_msgs[-1]["text"] if slack_msgs and len(slack_msgs) > 0 else ""
                 if last_msg:
                     self.child.send(last_msg)
-                print("-------- receive final ---------")
-                self.child.send("[Finish]")
+                print('-------- receive final ---------')
+                self.child.send('[Finish]')

     def run(self):
         """
@@ -161,24 +146,22 @@ class ClaudeHandle(Process):
         self.local_history = []
         if (self.claude_model is None) or (not self.success):
             # proxy settings
-            proxies = get_conf("proxies")
+            proxies = get_conf('proxies')
             if proxies is None:
                 self.proxies_https = None
             else:
-                self.proxies_https = proxies["https"]
+                self.proxies_https = proxies['https']

             try:
-                SLACK_CLAUDE_USER_TOKEN = get_conf("SLACK_CLAUDE_USER_TOKEN")
-                self.claude_model = SlackClient(
-                    token=SLACK_CLAUDE_USER_TOKEN, proxy=self.proxies_https
-                )
-                print("Claude组件初始化成功。")
+                SLACK_CLAUDE_USER_TOKEN = get_conf('SLACK_CLAUDE_USER_TOKEN')
+                self.claude_model = SlackClient(token=SLACK_CLAUDE_USER_TOKEN, proxy=self.proxies_https)
+                print('Claude组件初始化成功。')
             except:
                 self.success = False
-                tb_str = "\n```\n" + trimmed_format_exc() + "\n```\n"
-                self.child.send(f"[Local Message] 不能加载Claude组件。{tb_str}")
-                self.child.send("[Fail]")
-                self.child.send("[Finish]")
+                tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
+                self.child.send(f'[Local Message] 不能加载Claude组件。{tb_str}')
+                self.child.send('[Fail]')
+                self.child.send('[Finish]')
                 raise RuntimeError(f"不能加载Claude组件。")

         self.success = True
@@ -186,10 +169,10 @@ class ClaudeHandle(Process):
             # enter the task-waiting state
             asyncio.run(self.async_run())
         except Exception:
-            tb_str = "\n```\n" + trimmed_format_exc() + "\n```\n"
-            self.child.send(f"[Local Message] Claude失败 {tb_str}.")
-            self.child.send("[Fail]")
-            self.child.send("[Finish]")
+            tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
+            self.child.send(f'[Local Message] Claude失败 {tb_str}.')
+            self.child.send('[Fail]')
+            self.child.send('[Finish]')

     def stream_chat(self, **kwargs):
         """
@@ -199,9 +182,9 @@ class ClaudeHandle(Process):
         self.parent.send(kwargs) # send the request to the subprocess
         while True:
             res = self.parent.recv() # wait for a Claude reply fragment
-            if res == "[Finish]":
+            if res == '[Finish]':
                 break # done
-            elif res == "[Fail]":
+            elif res == '[Fail]':
                 self.success = False
                 break
             else:
@@ -210,22 +193,15 @@ class ClaudeHandle(Process):


 """
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+========================================================================
 Part 3: unified calling interface for the main process
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+========================================================================
 """
 global claude_handle
 claude_handle = None


-def predict_no_ui_long_connection(
-    inputs,
-    llm_kwargs,
-    history=[],
-    sys_prompt="",
-    observe_window=None,
-    console_slience=False,
-):
+def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False):
     """
     Multithreading method
     See request_llms/bridge_all.py for this function's documentation
@@ -241,37 +217,21 @@ def predict_no_ui_long_connection(

     # there is no sys_prompt interface, so the prompt is appended to history
     history_feedin = []
-    for i in range(len(history) // 2):
-        history_feedin.append([history[2 * i], history[2 * i + 1]])
+    for i in range(len(history)//2):
+        history_feedin.append([history[2*i], history[2*i+1]])

     watch_dog_patience = 5 # watchdog patience; 5 seconds is enough
     response = ""
     observe_window[0] = "[Local Message] 等待Claude响应中 ..."
-    for response in claude_handle.stream_chat(
-        query=inputs,
-        history=history_feedin,
-        system_prompt=sys_prompt,
-        max_length=llm_kwargs["max_length"],
-        top_p=llm_kwargs["top_p"],
-        temperature=llm_kwargs["temperature"],
-    ):
+    for response in claude_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
         observe_window[0] = preprocess_newbing_out_simple(response)
         if len(observe_window) >= 2:
-            if (time.time() - observe_window[1]) > watch_dog_patience:
+            if (time.time()-observe_window[1]) > watch_dog_patience:
                 raise RuntimeError("程序终止。")
     return preprocess_newbing_out_simple(response)


-def predict(
-    inputs,
-    llm_kwargs,
-    plugin_kwargs,
-    chatbot,
-    history=[],
-    system_prompt="",
-    stream=True,
-    additional_fn=None,
-):
+def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream=True, additional_fn=None):
     """
     Single-threaded method
     See request_llms/bridge_all.py for this function's documentation
@@ -289,30 +249,21 @@ def predict(

     if additional_fn is not None:
         from core_functional import handle_core_functionality
-        inputs, history = handle_core_functionality(
-            additional_fn, inputs, history, chatbot
-        )
+        inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)

     history_feedin = []
-    for i in range(len(history) // 2):
-        history_feedin.append([history[2 * i], history[2 * i + 1]])
+    for i in range(len(history)//2):
+        history_feedin.append([history[2*i], history[2*i+1]])

     chatbot[-1] = (inputs, "[Local Message] 等待Claude响应中 ...")
     response = "[Local Message] 等待Claude响应中 ..."
-    yield from update_ui(
-        chatbot=chatbot, history=history, msg="Claude响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。"
-    )
-    for response in claude_handle.stream_chat(
-        query=inputs, history=history_feedin, system_prompt=system_prompt
-    ):
+    yield from update_ui(chatbot=chatbot, history=history, msg="Claude响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。")
+    for response in claude_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt):
         chatbot[-1] = (inputs, preprocess_newbing_out(response))
-        yield from update_ui(
-            chatbot=chatbot, history=history, msg="Claude响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。"
-        )
+        yield from update_ui(chatbot=chatbot, history=history, msg="Claude响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。")
     if response == "[Local Message] 等待Claude响应中 ...":
         response = "[Local Message] Claude响应异常,请刷新界面重试 ..."
     history.extend([inputs, response])
-    logging.info(f"[raw_input] {inputs}")
-    logging.info(f"[response] {response}")
+    logging.info(f'[raw_input] {inputs}')
+    logging.info(f'[response] {response}')
     yield from update_ui(chatbot=chatbot, history=history, msg="完成全部响应,请提交新问题。")

@@ -42,7 +42,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
     try:
         check_packages(["zhipuai"])
     except:
-        yield from update_ui_lastest_msg(f"导入软件依赖失败。使用该模型需要额外依赖,安装方法```pip install zhipuai==1.0.7```。",
+        yield from update_ui_lastest_msg(f"导入软件依赖失败。使用该模型需要额外依赖,安装方法```pip install --upgrade zhipuai```。",
                                          chatbot=chatbot, history=history, delay=0)
         return

@@ -1,229 +0,0 @@
-# encoding: utf-8
-# @Time   : 2023/12/25
-# @Author : Spike
-# @Descr  :
-import json
-import os
-import re
-import requests
-from typing import List, Dict, Tuple
-from toolbox import get_conf, encode_image, get_pictures_list
-
-proxies, TIMEOUT_SECONDS = get_conf("proxies", "TIMEOUT_SECONDS")
-
-"""
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
-第五部分 一些文件处理方法
-files_filter_handler 根据type过滤文件
-input_encode_handler 提取input中的文件,并解析
-file_manifest_filter_html 根据type过滤文件, 并解析为html or md 文本
-link_mtime_to_md 文件增加本地时间参数,避免下载到缓存文件
-html_view_blank 超链接
-html_local_file 本地文件取相对路径
-to_markdown_tabs 文件list 转换为 md tab
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
-"""
-
-
-def files_filter_handler(file_list):
-    new_list = []
-    filter_ = [
-        "png",
-        "jpg",
-        "jpeg",
-        "bmp",
-        "svg",
-        "webp",
-        "ico",
-        "tif",
-        "tiff",
-        "raw",
-        "eps",
-    ]
-    for file in file_list:
-        file = str(file).replace("file=", "")
-        if os.path.exists(file):
-            if str(os.path.basename(file)).split(".")[-1] in filter_:
-                new_list.append(file)
-    return new_list
-
-
-def input_encode_handler(inputs, llm_kwargs):
-    if llm_kwargs["most_recent_uploaded"].get("path"):
-        image_paths = get_pictures_list(llm_kwargs["most_recent_uploaded"]["path"])
-    md_encode = []
-    for md_path in image_paths:
-        type_ = os.path.splitext(md_path)[1].replace(".", "")
-        type_ = "jpeg" if type_ == "jpg" else type_
-        md_encode.append({"data": encode_image(md_path), "type": type_})
-    return inputs, md_encode
-
-
-def file_manifest_filter_html(file_list, filter_: list = None, md_type=False):
-    new_list = []
-    if not filter_:
-        filter_ = [
-            "png",
-            "jpg",
-            "jpeg",
-            "bmp",
-            "svg",
-            "webp",
-            "ico",
-            "tif",
-            "tiff",
-            "raw",
-            "eps",
-        ]
-    for file in file_list:
-        if str(os.path.basename(file)).split(".")[-1] in filter_:
-            new_list.append(html_local_img(file, md=md_type))
-        elif os.path.exists(file):
-            new_list.append(link_mtime_to_md(file))
-        else:
-            new_list.append(file)
-    return new_list
-
-
-def link_mtime_to_md(file):
-    link_local = html_local_file(file)
-    link_name = os.path.basename(file)
-    a = f"[{link_name}]({link_local}?{os.path.getmtime(file)})"
-    return a
-
-
-def html_local_file(file):
-    base_path = os.path.dirname(__file__)  # 项目目录
-    if os.path.exists(str(file)):
-        file = f'file={file.replace(base_path, ".")}'
-    return file
-
-
-def html_local_img(__file, layout="left", max_width=None, max_height=None, md=True):
-    style = ""
-    if max_width is not None:
-        style += f"max-width: {max_width};"
-    if max_height is not None:
-        style += f"max-height: {max_height};"
-    __file = html_local_file(__file)
-    a = f'<div align="{layout}"><img src="{__file}" style="{style}"></div>'
-    if md:
-        a = f"![]({__file})"
-    return a
-
-
-def to_markdown_tabs(head: list, tabs: list, alignment=":---:", column=False):
-    """
-    Args:
-        head: 表头:[]
-        tabs: 表值:[[列1], [列2], [列3], [列4]]
-        alignment: :--- 左对齐, :---: 居中对齐, ---: 右对齐
-        column: True to keep data in columns, False to keep data in rows (default).
-    Returns:
-        A string representation of the markdown table.
-    """
-    if column:
-        transposed_tabs = list(map(list, zip(*tabs)))
-    else:
-        transposed_tabs = tabs
-    # Find the maximum length among the columns
-    max_len = max(len(column) for column in transposed_tabs)
-
-    tab_format = "| %s "
-    tabs_list = "".join([tab_format % i for i in head]) + "|\n"
-    tabs_list += "".join([tab_format % alignment for i in head]) + "|\n"
-
-    for i in range(max_len):
-        row_data = [tab[i] if i < len(tab) else "" for tab in transposed_tabs]
-        row_data = file_manifest_filter_html(row_data, filter_=None)
-        tabs_list += "".join([tab_format % i for i in row_data]) + "|\n"
-
-    return tabs_list
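For orientation, a minimal sketch of what to_markdown_tabs produces, assuming hypothetical plain-string cells with non-image suffixes (so file_manifest_filter_html passes them through unchanged):

    head = ["File", "Size"]
    tabs = [["report.pdf", "notes.txt"], ["10KB", "2KB"]]   # one inner list per column (default column=False)
    print(to_markdown_tabs(head, tabs))
    # | File | Size |
    # | :---: | :---: |
    # | report.pdf | 10KB |
    # | notes.txt | 2KB |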
-
-
-class GoogleChatInit:
-    def __init__(self):
-        self.url_gemini = "https://generativelanguage.googleapis.com/v1beta/models/%m:streamGenerateContent?key=%k"
-
-    def generate_chat(self, inputs, llm_kwargs, history, system_prompt):
-        headers, payload = self.generate_message_payload(
-            inputs, llm_kwargs, history, system_prompt
-        )
-        response = requests.post(
-            url=self.url_gemini,
-            headers=headers,
-            data=json.dumps(payload),
-            stream=True,
-            proxies=proxies,
-            timeout=TIMEOUT_SECONDS,
-        )
-        return response.iter_lines()
-
-    def __conversation_user(self, user_input, llm_kwargs):
-        what_i_have_asked = {"role": "user", "parts": []}
-        if "vision" not in self.url_gemini:
-            input_ = user_input
-            encode_img = []
-        else:
-            input_, encode_img = input_encode_handler(user_input, llm_kwargs=llm_kwargs)
-        what_i_have_asked["parts"].append({"text": input_})
-        if encode_img:
-            for data in encode_img:
-                what_i_have_asked["parts"].append(
-                    {
-                        "inline_data": {
-                            "mime_type": f"image/{data['type']}",
-                            "data": data["data"],
-                        }
-                    }
-                )
-        return what_i_have_asked
-
-    def __conversation_history(self, history, llm_kwargs):
-        messages = []
-        conversation_cnt = len(history) // 2
-        if conversation_cnt:
-            for index in range(0, 2 * conversation_cnt, 2):
-                what_i_have_asked = self.__conversation_user(history[index], llm_kwargs)
-                what_gpt_answer = {
-                    "role": "model",
-                    "parts": [{"text": history[index + 1]}],
-                }
-                messages.append(what_i_have_asked)
-                messages.append(what_gpt_answer)
-        return messages
-
-    def generate_message_payload(
-        self, inputs, llm_kwargs, history, system_prompt
-    ) -> Tuple[Dict, Dict]:
-        messages = [
-            # {"role": "system", "parts": [{"text": system_prompt}]},  # gemini 不允许对话轮次为偶数,所以这个没有用,看后续支持吧。。。
-            # {"role": "user", "parts": [{"text": ""}]},
-            # {"role": "model", "parts": [{"text": ""}]}
-        ]
-        self.url_gemini = self.url_gemini.replace(
-            "%m", llm_kwargs["llm_model"]
-        ).replace("%k", get_conf("GEMINI_API_KEY"))
-        header = {"Content-Type": "application/json"}
-        if "vision" not in self.url_gemini:  # 不是vision 才处理history
-            messages.extend(
-                self.__conversation_history(history, llm_kwargs)
-            )  # 处理 history
-        messages.append(self.__conversation_user(inputs, llm_kwargs))  # 处理用户对话
-        payload = {
-            "contents": messages,
-            "generationConfig": {
-                # "maxOutputTokens": 800,
-                "stopSequences": str(llm_kwargs.get("stop", "")).split(" "),
-                "temperature": llm_kwargs.get("temperature", 1),
-                "topP": llm_kwargs.get("top_p", 0.8),
-                "topK": 10,
-            },
-        }
-        return header, payload
-
-
-if __name__ == "__main__":
-    google = GoogleChatInit()
-    # print(gootle.generate_message_payload('你好呀', {}, ['123123', '3123123'], ''))
-    # gootle.input_encode_handle('123123[123123](./123123), ')
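For context, a minimal standalone sketch of the request the deleted class assembles: Gemini's REST streamGenerateContent endpoint with a contents/parts payload. The model name, API key, and message strings below are placeholders:

    import json
    import requests

    API_KEY = "..."   # placeholder; the class reads this via get_conf("GEMINI_API_KEY")
    url = f"https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:streamGenerateContent?key={API_KEY}"
    payload = {
        "contents": [
            {"role": "user", "parts": [{"text": "hi"}]},
            {"role": "model", "parts": [{"text": "Hello!"}]},
            {"role": "user", "parts": [{"text": "summarize this repo"}]},
        ],
        "generationConfig": {"temperature": 1, "topP": 0.8, "topK": 10},
    }
    resp = requests.post(url, headers={"Content-Type": "application/json"},
                         data=json.dumps(payload), stream=True)
    for line in resp.iter_lines():   # the caller parses the streamed JSON fragments
        print(line)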
@@ -1,94 +0,0 @@
-from http import HTTPStatus
-from toolbox import get_conf
-import threading
-import logging
-
-timeout_bot_msg = '[Local Message] Request timeout. Network error.'
-
-class QwenRequestInstance():
-    def __init__(self):
-        import dashscope
-        self.time_to_yield_event = threading.Event()
-        self.time_to_exit_event = threading.Event()
-        self.result_buf = ""
-
-        def validate_key():
-            DASHSCOPE_API_KEY = get_conf("DASHSCOPE_API_KEY")
-            if DASHSCOPE_API_KEY == '': return False
-            return True
-
-        if not validate_key():
-            raise RuntimeError('请配置 DASHSCOPE_API_KEY')
-        dashscope.api_key = get_conf("DASHSCOPE_API_KEY")
-
-    def generate(self, inputs, llm_kwargs, history, system_prompt):
-        # import _thread as thread
-        from dashscope import Generation
-        QWEN_MODEL = {
-            'qwen-turbo': Generation.Models.qwen_turbo,
-            'qwen-plus': Generation.Models.qwen_plus,
-            'qwen-max': Generation.Models.qwen_max,
-        }[llm_kwargs['llm_model']]
-        top_p = llm_kwargs.get('top_p', 0.8)
-        if top_p == 0: top_p += 1e-5
-        if top_p == 1: top_p -= 1e-5
-
-        self.result_buf = ""
-        responses = Generation.call(
-            model=QWEN_MODEL,
-            messages=generate_message_payload(inputs, llm_kwargs, history, system_prompt),
-            top_p=top_p,
-            temperature=llm_kwargs.get('temperature', 1.0),
-            result_format='message',
-            stream=True,
-            incremental_output=True
-        )
-
-        for response in responses:
-            if response.status_code == HTTPStatus.OK:
-                if response.output.choices[0].finish_reason == 'stop':
-                    yield self.result_buf
-                    break
-                elif response.output.choices[0].finish_reason == 'length':
-                    self.result_buf += "[Local Message] 生成长度过长,后续输出被截断"
-                    yield self.result_buf
-                    break
-                else:
-                    self.result_buf += response.output.choices[0].message.content
-                    yield self.result_buf
-            else:
-                self.result_buf += f"[Local Message] 请求错误:状态码:{response.status_code},错误码:{response.code},消息:{response.message}"
-                yield self.result_buf
-                break
-        logging.info(f'[raw_input] {inputs}')
-        logging.info(f'[response] {self.result_buf}')
-        return self.result_buf
-
-
-def generate_message_payload(inputs, llm_kwargs, history, system_prompt):
-    conversation_cnt = len(history) // 2
-    if system_prompt == '': system_prompt = 'Hello!'
-    messages = [{"role": "user", "content": system_prompt}, {"role": "assistant", "content": "Certainly!"}]
-    if conversation_cnt:
-        for index in range(0, 2*conversation_cnt, 2):
-            what_i_have_asked = {}
-            what_i_have_asked["role"] = "user"
-            what_i_have_asked["content"] = history[index]
-            what_gpt_answer = {}
-            what_gpt_answer["role"] = "assistant"
-            what_gpt_answer["content"] = history[index+1]
-            if what_i_have_asked["content"] != "":
-                if what_gpt_answer["content"] == "":
-                    continue
-                if what_gpt_answer["content"] == timeout_bot_msg:
-                    continue
-                messages.append(what_i_have_asked)
-                messages.append(what_gpt_answer)
-            else:
-                messages[-1]['content'] = what_gpt_answer['content']
-    what_i_ask_now = {}
-    what_i_ask_now["role"] = "user"
-    what_i_ask_now["content"] = inputs
-    messages.append(what_i_ask_now)
-    return messages
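A quick illustration of the payload builder above: the system prompt is smuggled in as a leading user/assistant exchange, and the trailing user turn is always the fresh input (the sample strings are hypothetical):

    msgs = generate_message_payload(
        inputs="and in French?",
        llm_kwargs={},
        history=["say hi", "hi!"],
        system_prompt="You are terse.",
    )
    # msgs == [
    #     {"role": "user", "content": "You are terse."},
    #     {"role": "assistant", "content": "Certainly!"},
    #     {"role": "user", "content": "say hi"},
    #     {"role": "assistant", "content": "hi!"},
    #     {"role": "user", "content": "and in French?"},
    # ]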
@@ -1,95 +0,0 @@
-from toolbox import get_conf
-import threading
-import logging
-import os
-
-timeout_bot_msg = '[Local Message] Request timeout. Network error.'
-#os.environ['VOLC_ACCESSKEY'] = ''
-#os.environ['VOLC_SECRETKEY'] = ''
-
-class YUNQUERequestInstance():
-    def __init__(self):
-
-        self.time_to_yield_event = threading.Event()
-        self.time_to_exit_event = threading.Event()
-
-        self.result_buf = ""
-
-    def generate(self, inputs, llm_kwargs, history, system_prompt):
-        # import _thread as thread
-        from volcengine.maas import MaasService, MaasException
-
-        maas = MaasService('maas-api.ml-platform-cn-beijing.volces.com', 'cn-beijing')
-
-        YUNQUE_SECRET_KEY, YUNQUE_ACCESS_KEY, YUNQUE_MODEL = get_conf("YUNQUE_SECRET_KEY", "YUNQUE_ACCESS_KEY", "YUNQUE_MODEL")
-        maas.set_ak(YUNQUE_ACCESS_KEY)  # 填写 VOLC_ACCESSKEY
-        maas.set_sk(YUNQUE_SECRET_KEY)  # 填写 'VOLC_SECRETKEY'
-
-        self.result_buf = ""
-
-        req = {
-            "model": {
-                "name": YUNQUE_MODEL,
-                "version": "1.0",  # use default version if not specified.
-            },
-            "parameters": {
-                "max_new_tokens": 4000,  # 输出文本的最大tokens限制
-                "min_new_tokens": 1,  # 输出文本的最小tokens限制
-                "temperature": llm_kwargs['temperature'],  # 用于控制生成文本的随机性和创造性,Temperature值越大随机性越大,取值范围0~1
-                "top_p": llm_kwargs['top_p'],  # 用于控制输出tokens的多样性,TopP值越大输出的tokens类型越丰富,取值范围0~1
-                "top_k": 0,  # 选择预测值最大的k个token进行采样,取值范围0-1000,0表示不生效
-                "max_prompt_tokens": 4000,  # 最大输入 token 数,如果给出的 prompt 的 token 长度超过此限制,取最后 max_prompt_tokens 个 token 输入模型。
-            },
-            "messages": self.generate_message_payload(inputs, llm_kwargs, history, system_prompt)
-        }
-
-        response = maas.stream_chat(req)
-
-        for resp in response:
-            self.result_buf += resp.choice.message.content
-            yield self.result_buf
-        '''
-        for event in response.events():
-            if event.event == "add":
-                self.result_buf += event.data
-                yield self.result_buf
-            elif event.event == "error" or event.event == "interrupted":
-                raise RuntimeError("Unknown error:" + event.data)
-            elif event.event == "finish":
-                yield self.result_buf
-                break
-            else:
-                raise RuntimeError("Unknown error:" + str(event))

-        logging.info(f'[raw_input] {inputs}')
-        logging.info(f'[response] {self.result_buf}')
-        '''
-        return self.result_buf
-
-    def generate_message_payload(inputs, llm_kwargs, history, system_prompt):
-        from volcengine.maas import ChatRole
-        conversation_cnt = len(history) // 2
-        messages = [{"role": ChatRole.USER, "content": system_prompt},
-                    {"role": ChatRole.ASSISTANT, "content": "Certainly!"}]
-        if conversation_cnt:
-            for index in range(0, 2 * conversation_cnt, 2):
-                what_i_have_asked = {}
-                what_i_have_asked["role"] = ChatRole.USER
-                what_i_have_asked["content"] = history[index]
-                what_gpt_answer = {}
-                what_gpt_answer["role"] = ChatRole.ASSISTANT
-                what_gpt_answer["content"] = history[index + 1]
-                if what_i_have_asked["content"] != "":
-                    if what_gpt_answer["content"] == "":
-                        continue
-                    if what_gpt_answer["content"] == timeout_bot_msg:
-                        continue
-                    messages.append(what_i_have_asked)
-                    messages.append(what_gpt_answer)
-                else:
-                    messages[-1]['content'] = what_gpt_answer['content']
-        what_i_ask_now = {}
-        what_i_ask_now["role"] = ChatRole.USER
-        what_i_ask_now["content"] = inputs
-        messages.append(what_i_ask_now)
-        return messages
@@ -72,12 +72,12 @@ class SparkRequestInstance():

         self.result_buf = ""

-    def generate(self, inputs, llm_kwargs, history, system_prompt, use_image_api=False):
+    def generate(self, inputs, llm_kwargs, history, system_prompt):
         llm_kwargs = llm_kwargs
         history = history
         system_prompt = system_prompt
         import _thread as thread
-        thread.start_new_thread(self.create_blocking_request, (inputs, llm_kwargs, history, system_prompt, use_image_api))
+        thread.start_new_thread(self.create_blocking_request, (inputs, llm_kwargs, history, system_prompt))
         while True:
             self.time_to_yield_event.wait(timeout=1)
             if self.time_to_yield_event.is_set():
@@ -86,7 +86,7 @@ class SparkRequestInstance():
                 return self.result_buf


-    def create_blocking_request(self, inputs, llm_kwargs, history, system_prompt, use_image_api):
+    def create_blocking_request(self, inputs, llm_kwargs, history, system_prompt):
         if llm_kwargs['llm_model'] == 'sparkv2':
             gpt_url = self.gpt_url_v2
         elif llm_kwargs['llm_model'] == 'sparkv3':
@@ -94,11 +94,9 @@ class SparkRequestInstance():
         else:
             gpt_url = self.gpt_url
         file_manifest = []
-        if use_image_api and llm_kwargs.get('most_recent_uploaded'):
+        if llm_kwargs.get('most_recent_uploaded'):
             if llm_kwargs['most_recent_uploaded'].get('path'):
                 file_manifest = get_pictures_list(llm_kwargs['most_recent_uploaded']['path'])
-                if len(file_manifest) > 0:
-                    print('正在使用讯飞图片理解API')
                 gpt_url = self.gpt_url_img

         wsParam = Ws_Param(self.appid, self.api_key, self.api_secret, gpt_url)
         websocket.enableTrace(False)
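The net effect of this hunk, shown as a hypothetical caller: the removed side makes image understanding opt-in, while the added side switches to the image endpoint whenever a picture was recently uploaded.

    rsi = SparkRequestInstance()
    # removed side: image API is used only when the caller opts in
    response = rsi.generate(inputs, llm_kwargs, history, system_prompt, use_image_api=True)
    # added side: the flag is gone; an uploaded image alone triggers self.gpt_url_img
    response = rsi.generate(inputs, llm_kwargs, history, system_prompt)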
@@ -21,13 +21,11 @@ class ZhipuRequestInstance():
         response = zhipuai.model_api.sse_invoke(
             model=ZHIPUAI_MODEL,
             prompt=generate_message_payload(inputs, llm_kwargs, history, system_prompt),
-            top_p=llm_kwargs['top_p']*0.7,  # 智谱的API抽风,手动*0.7给做个线性变换
-            temperature=llm_kwargs['temperature']*0.95,  # 智谱的API抽风,手动*0.7给做个线性变换
+            top_p=llm_kwargs['top_p'],
+            temperature=llm_kwargs['temperature'],
         )
         for event in response.events():
             if event.event == "add":
-                # if self.result_buf == "" and event.data.startswith(" "):
-                #     event.data = event.data.lstrip(" ")  # 每次智谱为啥都要带个空格开头呢?
                 self.result_buf += event.data
                 yield self.result_buf
             elif event.event == "error" or event.event == "interrupted":
@@ -37,8 +35,7 @@ class ZhipuRequestInstance():
                 break
             else:
                 raise RuntimeError("Unknown error:" + str(event))
-        if self.result_buf == "":
-            yield "智谱没有返回任何数据, 请检查ZHIPUAI_API_KEY和ZHIPUAI_MODEL是否填写正确."
         logging.info(f'[raw_input] {inputs}')
         logging.info(f'[response] {self.result_buf}')
         return self.result_buf
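The dropped lines were hard-coded rescalings compensating for backend parameter quirks. If such a guard is ever needed again, a generic clamp distorts mid-range values less; a sketch, assuming the API rejects the exact endpoints of the (0, 1) range:

    def clamp_unit_interval(x, eps=1e-5):
        # keep x strictly inside (0, 1) without rescaling mid-range values
        return min(max(x, eps), 1 - eps)

    top_p = clamp_unit_interval(llm_kwargs['top_p'])
    temperature = clamp_unit_interval(llm_kwargs['temperature'])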
@@ -1,8 +1,8 @@
 """
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+========================================================================
 第一部分:来自EdgeGPT.py
 https://github.com/acheong08/EdgeGPT
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+========================================================================
 """
 """
 Main.py
@@ -196,9 +196,9 @@ class _ChatHubRequest:
         self,
         prompt: str,
         conversation_style: CONVERSATION_STYLE_TYPE,
-        options=None,
-        webpage_context=None,
-        search_result=False,
+        options = None,
+        webpage_context = None,
+        search_result = False,
     ) -> None:
         """
         Updates request object
@@ -294,9 +294,9 @@ class _Conversation:

     def __init__(
         self,
-        proxy=None,
-        async_mode=False,
-        cookies=None,
+        proxy = None,
+        async_mode = False,
+        cookies = None,
     ) -> None:
         if async_mode:
             return
@@ -350,8 +350,8 @@ class _Conversation:

     @staticmethod
     async def create(
-        proxy=None,
-        cookies=None,
+        proxy = None,
+        cookies = None,
     ):
         self = _Conversation(async_mode=True)
         self.struct = {
@@ -418,8 +418,8 @@ class _ChatHub:
     def __init__(
         self,
         conversation: _Conversation,
-        proxy=None,
-        cookies=None,
+        proxy = None,
+        cookies = None,
     ) -> None:
         self.session = None
         self.wss = None
@@ -441,7 +441,7 @@ class _ChatHub:
         conversation_style: CONVERSATION_STYLE_TYPE = None,
         raw: bool = False,
         options: dict = None,
-        webpage_context=None,
+        webpage_context = None,
         search_result: bool = False,
     ) -> Generator[str, None, None]:
         """
@@ -452,11 +452,9 @@ class _ChatHub:
         ws_cookies = []
         for cookie in self.cookies:
             ws_cookies.append(f"{cookie['name']}={cookie['value']}")
-        req_header.update(
-            {
-                "Cookie": ";".join(ws_cookies),
-            }
-        )
+        req_header.update({
+            'Cookie': ';'.join(ws_cookies),
+        })

         timeout = aiohttp.ClientTimeout(total=30)
         self.session = aiohttp.ClientSession(timeout=timeout)
@@ -523,7 +521,7 @@ class _ChatHub:
             msg = await self.wss.receive()
             try:
                 objects = msg.data.split(DELIMITER)
-            except:
+            except :
                 continue

             for obj in objects:
@@ -626,8 +624,8 @@ class Chatbot:

     def __init__(
         self,
-        proxy=None,
-        cookies=None,
+        proxy = None,
+        cookies = None,
     ) -> None:
         self.proxy = proxy
         self.chat_hub: _ChatHub = _ChatHub(
@@ -638,8 +636,8 @@ class Chatbot:

     @staticmethod
     async def create(
-        proxy=None,
-        cookies=None,
+        proxy = None,
+        cookies = None,
     ):
         self = Chatbot.__new__(Chatbot)
         self.proxy = proxy
@@ -656,7 +654,7 @@ class Chatbot:
         wss_link: str = "wss://sydney.bing.com/sydney/ChatHub",
         conversation_style: CONVERSATION_STYLE_TYPE = None,
         options: dict = None,
-        webpage_context=None,
+        webpage_context = None,
         search_result: bool = False,
     ) -> dict:
         """
@@ -682,7 +680,7 @@ class Chatbot:
         conversation_style: CONVERSATION_STYLE_TYPE = None,
         raw: bool = False,
         options: dict = None,
-        webpage_context=None,
+        webpage_context = None,
         search_result: bool = False,
     ) -> Generator[str, None, None]:
         """
@@ -1,4 +1,6 @@
 import random
+import os
+from toolbox import get_log_folder

 def Singleton(cls):
     _instance = {}
@@ -12,18 +14,41 @@ def Singleton(cls):


 @Singleton
-class OpenAI_ApiKeyManager():
+class ApiKeyManager():
+    """
+    只把失效的key保存在内存中
+    """
     def __init__(self, mode='blacklist') -> None:
         # self.key_avail_list = []
         self.key_black_list = []
+        self.debug = False
+        self.log = True
+        self.remain_keys = []

     def add_key_to_blacklist(self, key):
         self.key_black_list.append(key)
+        if self.debug: print('black list key added', key)
+        if self.log:
+            with open(
+                os.path.join(get_log_folder(user='admin', plugin_name='api_key_manager'), 'invalid_key.log'), 'a+', encoding='utf8') as f:
+                summary = 'num blacklist keys:' + str(len(self.key_black_list)) + '\tnum valid keys:' + str(len(self.remain_keys))
+                f.write('\n\n' + summary + '\n')
+                f.write('---- <add blacklist key> ----\n')
+                f.write(key)
+                f.write('\n')
+                f.write('---- <all blacklist keys> ----\n')
+                f.write(str(self.key_black_list))
+                f.write('\n')
+                f.write('---- <remain keys> ----\n')
+                f.write(str(self.remain_keys))
+                f.write('\n')

     def select_avail_key(self, key_list):
         # select key from key_list, but avoid keys also in self.key_black_list, raise error if no key can be found
         available_keys = [key for key in key_list if key not in self.key_black_list]
         if not available_keys:
-            raise KeyError("No available key found.")
+            raise KeyError("所有API KEY都被OPENAI拒绝了")
         selected_key = random.choice(available_keys)
+        if self.debug: print('total keys', len(key_list), 'valid keys', len(available_keys))
+        if self.log: self.remain_keys = available_keys
         return selected_key
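A minimal round-trip with the extended manager, using two hypothetical keys:

    manager = ApiKeyManager()                  # @Singleton: every call site shares one instance
    manager.add_key_to_blacklist("sk-dead")    # also appended to the invalid_key.log file
    key = manager.select_avail_key(["sk-dead", "sk-alive"])   # -> "sk-alive"
    manager.select_avail_key(["sk-dead"])      # raises KeyError: every key is blacklisted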
@@ -183,11 +183,11 @@ class LocalLLMHandle(Process):
     def stream_chat(self, **kwargs):
         # ⭐run in main process
         if self.get_state() == "`准备就绪`":
-            yield "`正在等待线程锁,排队中请稍候 ...`"
+            yield "`正在等待线程锁,排队中请稍后 ...`"

         with self.threadLock:
             if self.parent.poll():
-                yield "`排队中请稍候 ...`"
+                yield "`排队中请稍后 ...`"
                 self.clear_pending_messages()
             self.parent.send(kwargs)
             std_out = ""
@@ -6,3 +6,5 @@ sentencepiece
 numpy
 onnxruntime
 sentencepiece
+streamlit
+streamlit-chat
@@ -5,3 +5,5 @@ accelerate
 matplotlib
 huggingface_hub
 triton
+streamlit
+
@@ -1 +1,4 @@
-dashscope
+modelscope
+transformers_stream_generator
+auto-gptq
+optimum
@@ -1,5 +0,0 @@
-modelscope
-transformers_stream_generator
-auto-gptq
-optimum
-urllib3<2
@@ -1,14 +1,11 @@
-https://fastly.jsdelivr.net/gh/binary-husky/gradio-fix@gpt-academic/release/gradio-3.32.7-py3-none-any.whl
+./docs/gradio-3.32.6-py3-none-any.whl
 pypdf2==2.12.1
-zhipuai<2
 tiktoken>=0.3.3
 requests[socks]
 pydantic==1.10.11
-protobuf==3.18
 transformers>=4.27.1
 scipdf_parser>=0.52
 python-markdown-math
-pymdown-extensions
 websocket-client
 beautifulsoup4
 prompt_toolkit
@@ -1,361 +0,0 @@
-import markdown
-import re
-import os
-import math
-from textwrap import dedent
-from functools import lru_cache
-from pymdownx.superfences import fence_code_format
-from latex2mathml.converter import convert as tex2mathml
-from shared_utils.config_loader import get_conf as get_conf
-from shared_utils.text_mask import apply_gpt_academic_string_mask
-
-markdown_extension_configs = {
-    "mdx_math": {
-        "enable_dollar_delimiter": True,
-        "use_gitlab_delimiters": False,
-    },
-}
-
-code_highlight_configs = {
-    "pymdownx.superfences": {
-        "css_class": "codehilite",
-        "custom_fences": [
-            {"name": "mermaid", "class": "mermaid", "format": fence_code_format}
-        ],
-    },
-    "pymdownx.highlight": {
-        "css_class": "codehilite",
-        "guess_lang": True,
-        # 'auto_title': True,
-        # 'linenums': True
-    },
-}
-
-code_highlight_configs_block_mermaid = {
-    "pymdownx.superfences": {
-        "css_class": "codehilite",
-        # "custom_fences": [
-        #     {"name": "mermaid", "class": "mermaid", "format": fence_code_format}
-        # ],
-    },
-    "pymdownx.highlight": {
-        "css_class": "codehilite",
-        "guess_lang": True,
-        # 'auto_title': True,
-        # 'linenums': True
-    },
-}
-
-
-def tex2mathml_catch_exception(content, *args, **kwargs):
-    try:
-        content = tex2mathml(content, *args, **kwargs)
-    except:
-        content = content
-    return content
-
-
-def replace_math_no_render(match):
-    content = match.group(1)
-    if "mode=display" in match.group(0):
-        content = content.replace("\n", "</br>")
-        return f'<font color="#00FF00">$$</font><font color="#FF00FF">{content}</font><font color="#00FF00">$$</font>'
-    else:
-        return f'<font color="#00FF00">$</font><font color="#FF00FF">{content}</font><font color="#00FF00">$</font>'
-
-
-def replace_math_render(match):
-    content = match.group(1)
-    if "mode=display" in match.group(0):
-        if "\\begin{aligned}" in content:
-            content = content.replace("\\begin{aligned}", "\\begin{array}")
-            content = content.replace("\\end{aligned}", "\\end{array}")
-            content = content.replace("&", " ")
-        content = tex2mathml_catch_exception(content, display="block")
-        return content
-    else:
-        return tex2mathml_catch_exception(content)
-
-
-def markdown_bug_hunt(content):
-    """
-    解决一个mdx_math的bug(单$包裹begin命令时多余<script>)
-    """
-    content = content.replace(
-        '<script type="math/tex">\n<script type="math/tex; mode=display">',
-        '<script type="math/tex; mode=display">',
-    )
-    content = content.replace("</script>\n</script>", "</script>")
-    return content
-
-
-def is_equation(txt):
-    """
-    判定是否为公式 | 测试1 写出洛伦兹定律,使用tex格式公式 测试2 给出柯西不等式,使用latex格式 测试3 写出麦克斯韦方程组
-    """
-    if "```" in txt and "```reference" not in txt:
-        return False
-    if "$" not in txt and "\\[" not in txt:
-        return False
-    mathpatterns = {
-        r"(?<!\\|\$)(\$)([^\$]+)(\$)": {"allow_multi_lines": False},  # $...$
-        r"(?<!\\)(\$\$)([^\$]+)(\$\$)": {"allow_multi_lines": True},  # $$...$$
-        r"(?<!\\)(\\\[)(.+?)(\\\])": {"allow_multi_lines": False},  # \[...\]
-        # r'(?<!\\)(\\\()(.+?)(\\\))': {'allow_multi_lines': False},  # \(...\)
-        # r'(?<!\\)(\\begin{([a-z]+?\*?)})(.+?)(\\end{\2})': {'allow_multi_lines': True},  # \begin...\end
-        # r'(?<!\\)(\$`)([^`]+)(`\$)': {'allow_multi_lines': False},  # $`...`$
-    }
-    matches = []
-    for pattern, property in mathpatterns.items():
-        flags = re.ASCII | re.DOTALL if property["allow_multi_lines"] else re.ASCII
-        matches.extend(re.findall(pattern, txt, flags))
-    if len(matches) == 0:
-        return False
-    contain_any_eq = False
-    illegal_pattern = re.compile(r"[^\x00-\x7F]|echo")
-    for match in matches:
-        if len(match) != 3:
-            return False
-        eq_canidate = match[1]
-        if illegal_pattern.search(eq_canidate):
-            return False
-        else:
-            contain_any_eq = True
-    return contain_any_eq
-
-
-def fix_markdown_indent(txt):
-    # fix markdown indent
-    if (" - " not in txt) or (". " not in txt):
-        # do not need to fix, fast escape
-        return txt
-    # walk through the lines and fix non-standard indentation
-    lines = txt.split("\n")
-    pattern = re.compile(r"^\s+-")
-    activated = False
-    for i, line in enumerate(lines):
-        if line.startswith("- ") or line.startswith("1. "):
-            activated = True
-        if activated and pattern.match(line):
-            stripped_string = line.lstrip()
-            num_spaces = len(line) - len(stripped_string)
-            if (num_spaces % 4) == 3:
-                num_spaces_should_be = math.ceil(num_spaces / 4) * 4
-                lines[i] = " " * num_spaces_should_be + stripped_string
-    return "\n".join(lines)
-
-
-FENCED_BLOCK_RE = re.compile(
-    dedent(
-        r"""
-        (?P<fence>^[ \t]*(?:~{3,}|`{3,}))[ ]* # opening fence
-        ((\{(?P<attrs>[^\}\n]*)\})| # (optional {attrs} or
-        (\.?(?P<lang>[\w#.+-]*)[ ]*)? # optional (.)lang
-        (hl_lines=(?P<quot>"|')(?P<hl_lines>.*?)(?P=quot)[ ]*)?) # optional hl_lines)
-        \n # newline (end of opening fence)
-        (?P<code>.*?)(?<=\n) # the code block
-        (?P=fence)[ ]*$ # closing fence
-        """
-    ),
-    re.MULTILINE | re.DOTALL | re.VERBOSE,
-)
-
-
-def get_line_range(re_match_obj, txt):
-    start_pos, end_pos = re_match_obj.regs[0]
-    num_newlines_before = txt[: start_pos + 1].count("\n")
-    line_start = num_newlines_before
-    line_end = num_newlines_before + txt[start_pos:end_pos].count("\n") + 1
-    return line_start, line_end
-
-
-def fix_code_segment_indent(txt):
-    lines = []
-    change_any = False
-    txt_tmp = txt
-    while True:
-        re_match_obj = FENCED_BLOCK_RE.search(txt_tmp)
-        if not re_match_obj:
-            break
-        if len(lines) == 0:
-            lines = txt.split("\n")
-
-        # 清空 txt_tmp 对应的位置方便下次搜索
-        start_pos, end_pos = re_match_obj.regs[0]
-        txt_tmp = txt_tmp[:start_pos] + " " * (end_pos - start_pos) + txt_tmp[end_pos:]
-        line_start, line_end = get_line_range(re_match_obj, txt)
-
-        # 获取公共缩进
-        shared_indent_cnt = 1e5
-        for i in range(line_start, line_end):
-            stripped_string = lines[i].lstrip()
-            num_spaces = len(lines[i]) - len(stripped_string)
-            if num_spaces < shared_indent_cnt:
-                shared_indent_cnt = num_spaces
-
-        # 修复缩进
-        if (shared_indent_cnt < 1e5) and (shared_indent_cnt % 4) == 3:
-            num_spaces_should_be = math.ceil(shared_indent_cnt / 4) * 4
-            for i in range(line_start, line_end):
-                add_n = num_spaces_should_be - shared_indent_cnt
-                lines[i] = " " * add_n + lines[i]
-            if not change_any:  # 遇到第一个
-                change_any = True
-
-    if change_any:
-        return "\n".join(lines)
-    else:
-        return txt
-
-
-@lru_cache(maxsize=128)  # 使用 lru缓存 加快转换速度
-def markdown_convertion(txt):
-    """
-    将Markdown格式的文本转换为HTML格式。如果包含数学公式,则先将公式转换为HTML格式。
-    """
-    pre = '<div class="markdown-body">'
-    suf = "</div>"
-    if txt.startswith(pre) and txt.endswith(suf):
-        # print('警告,输入了已经经过转化的字符串,二次转化可能出问题')
-        return txt  # 已经被转化过,不需要再次转化
-
-    find_equation_pattern = r'<script type="math/tex(?:.*?)>(.*?)</script>'
-
-    txt = fix_markdown_indent(txt)
-    # txt = fix_code_segment_indent(txt)
-    if is_equation(txt):  # 有$标识的公式符号,且没有代码段```的标识
-        # convert everything to html format
-        split = markdown.markdown(text="---")
-        convert_stage_1 = markdown.markdown(
-            text=txt,
-            extensions=[
-                "sane_lists",
-                "tables",
-                "mdx_math",
-                "pymdownx.superfences",
-                "pymdownx.highlight",
-            ],
-            extension_configs={**markdown_extension_configs, **code_highlight_configs},
-        )
-        convert_stage_1 = markdown_bug_hunt(convert_stage_1)
-        # 1. convert to easy-to-copy tex (do not render math)
-        convert_stage_2_1, n = re.subn(
-            find_equation_pattern,
-            replace_math_no_render,
-            convert_stage_1,
-            flags=re.DOTALL,
-        )
-        # 2. convert to rendered equation
-        convert_stage_2_2, n = re.subn(
-            find_equation_pattern, replace_math_render, convert_stage_1, flags=re.DOTALL
-        )
-        # cat them together
-        return pre + convert_stage_2_1 + f"{split}" + convert_stage_2_2 + suf
-    else:
-        return (
-            pre
-            + markdown.markdown(
-                txt,
-                extensions=[
-                    "sane_lists",
-                    "tables",
-                    "pymdownx.superfences",
-                    "pymdownx.highlight",
-                ],
-                extension_configs=code_highlight_configs,
-            )
-            + suf
-        )
-
-
-def close_up_code_segment_during_stream(gpt_reply):
-    """
-    在gpt输出代码的中途(输出了前面的```,但还没输出完后面的```),补上后面的```
-
-    Args:
-        gpt_reply (str): GPT模型返回的回复字符串。
-
-    Returns:
-        str: 返回一个新的字符串,将输出代码片段的“后面的```”补上。
-    """
-    if "```" not in gpt_reply:
-        return gpt_reply
-    if gpt_reply.endswith("```"):
-        return gpt_reply
-
-    # 排除了以上两个情况,我们
-    segments = gpt_reply.split("```")
-    n_mark = len(segments) - 1
-    if n_mark % 2 == 1:
-        return gpt_reply + "\n```"  # 输出代码片段中!
-    else:
-        return gpt_reply
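The fence-closing helper above is worth a quick sanity check; its behavior on a hypothetical truncated stream:

    partial = "here is the fix:\n```python\nprint('hi')"
    print(close_up_code_segment_during_stream(partial))
    # here is the fix:
    # ```python
    # print('hi')
    # ```
    complete = "done:\n```python\nprint('hi')\n```\nenjoy"
    assert close_up_code_segment_during_stream(complete) == complete   # even number of ``` marks: untouched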
-
-
-def special_render_issues_for_mermaid(text):
-    # 用不太优雅的方式处理一个core_functional.py中出现的mermaid渲染特例:
-    # 我不希望"总结绘制脑图"prompt中的mermaid渲染出来
-    @lru_cache(maxsize=1)
-    def get_special_case():
-        from core_functional import get_core_functions
-        special_case = get_core_functions()["总结绘制脑图"]["Suffix"]
-        return special_case
-    if text.endswith(get_special_case()): text = text.replace("```mermaid", "```")
-    return text
-
-
-def compat_non_markdown_input(text):
-    """
-    改善非markdown输入的显示效果,例如将空格转换为&nbsp;,将换行符转换为</br>等。
-    """
-    if "```" in text:
-        # careful input:markdown输入
-        text = special_render_issues_for_mermaid(text)  # 处理特殊的渲染问题
-        return text
-    elif "</div>" in text:
-        # careful input:html输入
-        return text
-    else:
-        # whatever input:非markdown输入
-        lines = text.split("\n")
-        for i, line in enumerate(lines):
-            lines[i] = lines[i].replace(" ", "&nbsp;")  # 空格转换为&nbsp;
-        text = "</br>".join(lines)  # 换行符转换为</br>
-        return text
-
-
-@lru_cache(maxsize=128)  # 使用lru缓存
-def simple_markdown_convertion(text):
-    pre = '<div class="markdown-body">'
-    suf = "</div>"
-    if text.startswith(pre) and text.endswith(suf):
-        return text  # 已经被转化过,不需要再次转化
-    text = compat_non_markdown_input(text)  # 兼容非markdown输入
-    text = markdown.markdown(
-        text,
-        extensions=["pymdownx.superfences", "tables", "pymdownx.highlight"],
-        extension_configs=code_highlight_configs,
-    )
-    return pre + text + suf
-
-
-def format_io(self, y):
-    """
-    将输入和输出解析为HTML格式。将y中最后一项的输入部分段落化,并将输出部分的Markdown和数学公式转换为HTML格式。
-    """
-    if y is None or y == []:
-        return []
-    i_ask, gpt_reply = y[-1]
-    i_ask = apply_gpt_academic_string_mask(i_ask, mode="show_render")
-    gpt_reply = apply_gpt_academic_string_mask(gpt_reply, mode="show_render")
-    # 当代码输出半截的时候,试着补上后个```
-    if gpt_reply is not None:
-        gpt_reply = close_up_code_segment_during_stream(gpt_reply)
-    # 处理提问与输出
-    y[-1] = (
-        # 输入部分
-        None if i_ask is None else simple_markdown_convertion(i_ask),
-        # 输出部分
-        None if gpt_reply is None else markdown_convertion(gpt_reply),
-    )
-    return y
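For reference, a minimal sketch of the deleted module's main entry point (the sample string is hypothetical; the output shape follows the code above):

    html = markdown_convertion("Euler's identity: $e^{i\\pi} + 1 = 0$")
    # is_equation() detects the $...$ span, so the result carries two renderings
    # (copyable TeX plus rendered math) inside the standard container:
    assert html.startswith('<div class="markdown-body">')
    assert html.endswith("</div>")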
@@ -1,131 +0,0 @@
-import importlib
-import time
-import os
-from functools import lru_cache
-from colorful import print亮红, print亮绿, print亮蓝
-
-pj = os.path.join
-default_user_name = 'default_user'
-
-def read_env_variable(arg, default_value):
-    """
-    环境变量可以是 `GPT_ACADEMIC_CONFIG`(优先),也可以直接是`CONFIG`
-    例如在windows cmd中,既可以写:
-        set USE_PROXY=True
-        set API_KEY=sk-j7caBpkRoxxxxxxxxxxxxxxxxxxxxxxxxxxxx
-        set proxies={"http":"http://127.0.0.1:10085", "https":"http://127.0.0.1:10085",}
-        set AVAIL_LLM_MODELS=["gpt-3.5-turbo", "chatglm"]
-        set AUTHENTICATION=[("username", "password"), ("username2", "password2")]
-    也可以写:
-        set GPT_ACADEMIC_USE_PROXY=True
-        set GPT_ACADEMIC_API_KEY=sk-j7caBpkRoxxxxxxxxxxxxxxxxxxxxxxxxxxxx
-        set GPT_ACADEMIC_proxies={"http":"http://127.0.0.1:10085", "https":"http://127.0.0.1:10085",}
-        set GPT_ACADEMIC_AVAIL_LLM_MODELS=["gpt-3.5-turbo", "chatglm"]
-        set GPT_ACADEMIC_AUTHENTICATION=[("username", "password"), ("username2", "password2")]
-    """
-    arg_with_prefix = "GPT_ACADEMIC_" + arg
-    if arg_with_prefix in os.environ:
-        env_arg = os.environ[arg_with_prefix]
-    elif arg in os.environ:
-        env_arg = os.environ[arg]
-    else:
-        raise KeyError
-    print(f"[ENV_VAR] 尝试加载{arg},默认值:{default_value} --> 修正值:{env_arg}")
-    try:
-        if isinstance(default_value, bool):
-            env_arg = env_arg.strip()
-            if env_arg == 'True': r = True
-            elif env_arg == 'False': r = False
-            else: print('Enter True or False, but have:', env_arg); r = default_value
-        elif isinstance(default_value, int):
-            r = int(env_arg)
-        elif isinstance(default_value, float):
-            r = float(env_arg)
-        elif isinstance(default_value, str):
-            r = env_arg.strip()
-        elif isinstance(default_value, dict):
-            r = eval(env_arg)
-        elif isinstance(default_value, list):
-            r = eval(env_arg)
-        elif default_value is None:
-            assert arg == "proxies"
-            r = eval(env_arg)
-        else:
-            print亮红(f"[ENV_VAR] 环境变量{arg}不支持通过环境变量设置! ")
-            raise KeyError
-    except:
-        print亮红(f"[ENV_VAR] 环境变量{arg}加载失败! ")
-        raise KeyError(f"[ENV_VAR] 环境变量{arg}加载失败! ")
-
-    print亮绿(f"[ENV_VAR] 成功读取环境变量{arg}")
-    return r
-
-
-@lru_cache(maxsize=128)
-def read_single_conf_with_lru_cache(arg):
-    from shared_utils.key_pattern_manager import is_any_api_key
-    try:
-        # 优先级1. 获取环境变量作为配置
-        default_ref = getattr(importlib.import_module('config'), arg)  # 读取默认值作为数据类型转换的参考
-        r = read_env_variable(arg, default_ref)
-    except:
-        try:
-            # 优先级2. 获取config_private中的配置
-            r = getattr(importlib.import_module('config_private'), arg)
-        except:
-            # 优先级3. 获取config中的配置
-            r = getattr(importlib.import_module('config'), arg)
-
-    # 在读取API_KEY时,检查一下是不是忘了改config
-    if arg == 'API_URL_REDIRECT':
-        oai_rd = r.get("https://api.openai.com/v1/chat/completions", None)  # API_URL_REDIRECT填写格式是错误的,请阅读`https://github.com/binary-husky/gpt_academic/wiki/项目配置说明`
-        if oai_rd and not oai_rd.endswith('/completions'):
-            print亮红("\n\n[API_URL_REDIRECT] API_URL_REDIRECT填错了。请阅读`https://github.com/binary-husky/gpt_academic/wiki/项目配置说明`。如果您确信自己没填错,无视此消息即可。")
-            time.sleep(5)
-    if arg == 'API_KEY':
-        print亮蓝(f"[API_KEY] 本项目现已支持OpenAI和Azure的api-key。也支持同时填写多个api-key,如API_KEY=\"openai-key1,openai-key2,azure-key3\"")
-        print亮蓝(f"[API_KEY] 您既可以在config.py中修改api-key(s),也可以在问题输入区输入临时的api-key(s),然后回车键提交后即可生效。")
-        if is_any_api_key(r):
-            print亮绿(f"[API_KEY] 您的 API_KEY 是: {r[:15]}*** API_KEY 导入成功")
-        else:
-            print亮红("[API_KEY] 您的 API_KEY 不满足任何一种已知的密钥格式,请在config文件中修改API密钥之后再运行。")
-    if arg == 'proxies':
-        if not read_single_conf_with_lru_cache('USE_PROXY'): r = None  # 检查USE_PROXY,防止proxies单独起作用
-        if r is None:
-            print亮红('[PROXY] 网络代理状态:未配置。无代理状态下很可能无法访问OpenAI家族的模型。建议:检查USE_PROXY选项是否修改。')
-        else:
-            print亮绿('[PROXY] 网络代理状态:已配置。配置信息如下:', r)
-            assert isinstance(r, dict), 'proxies格式错误,请注意proxies选项的格式,不要遗漏括号。'
-    return r
-
-
-@lru_cache(maxsize=128)
-def get_conf(*args):
-    """
-    本项目的所有配置都集中在config.py中。 修改配置有三种方法,您只需要选择其中一种即可:
-        - 直接修改config.py
-        - 创建并修改config_private.py
-        - 修改环境变量(修改docker-compose.yml等价于修改容器内部的环境变量)
-
-    注意:如果您使用docker-compose部署,请修改docker-compose(等价于修改容器内部的环境变量)
-    """
-    res = []
-    for arg in args:
-        r = read_single_conf_with_lru_cache(arg)
-        res.append(r)
-    if len(res) == 1: return res[0]
-    return res
-
-
-def set_conf(key, value):
-    from toolbox import read_single_conf_with_lru_cache
-    read_single_conf_with_lru_cache.cache_clear()
-    get_conf.cache_clear()
-    os.environ[key] = str(value)
-    altered = get_conf(key)
-    return altered
-
-
-def set_multi_conf(dic):
-    for k, v in dic.items(): set_conf(k, v)
-    return
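A compact sketch of the precedence the deleted loader implemented: environment variable (with or without the GPT_ACADEMIC_ prefix) beats config_private.py, which beats config.py. The names below are hypothetical and rely on a config.py providing defaults:

    import os

    os.environ["GPT_ACADEMIC_USE_PROXY"] = "True"   # priority 1: env var; the prefixed form wins
    use_proxy = get_conf("USE_PROXY")               # -> True, parsed against config.py's default type
    use_proxy, api_key = get_conf("USE_PROXY", "API_KEY")   # several args return a list; one arg returns the bare value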
Some files were not shown because too many files have changed in this diff