Compare commits

...

130 Commits

Author SHA1 Message Date
binary-husky
5e0f327237 Merge branch 'master' into frontier 2025-02-04 16:12:42 +08:00
binary-husky
7f4b87a633 update readme 2025-02-04 16:08:18 +08:00
binary-husky
2ddd1bb634 Merge branch 'memset0-master' 2025-02-04 16:03:53 +08:00
binary-husky
c68285aeac update config and version 2025-02-04 16:03:01 +08:00
Memento mori.
caaebe4296 add support for Deepseek R1 model and display CoT (#2118)
* feat: add support for R1 model and display CoT

* fix unpacking

* feat: customized font & font size

* auto hide tooltip when scroll down

* tooltip glass transparent css

* fix: Enhance API key validation in is_any_api_key function (#2113)

* support qwen2.5-max!

* update minor adjustment

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
Co-authored-by: Steven Moder <java20131114@gmail.com>
2025-02-04 16:02:02 +08:00
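
For context, displaying CoT for the R1 model relies on the fact that DeepSeek's reasoner API returns the chain of thought in a dedicated field alongside the answer. A minimal sketch, assuming an OpenAI-style streamed delta (illustrative only, not the code added by this PR):

```python
# Minimal sketch (not the repository's code): DeepSeek R1 streams the chain of
# thought in a separate "reasoning_content" field, so a client reads both fields
# from each streamed delta and renders them differently.
def split_reasoning(delta: dict) -> tuple[str, str]:
    """Return (chain_of_thought, answer) text from one streamed delta; either may be empty."""
    cot = delta.get("reasoning_content") or ""
    answer = delta.get("content") or ""
    return cot, answer
```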
binary-husky
39d50c1c95 update minor adjustment 2025-02-04 15:57:35 +08:00
binary-husky
25dc7bf912 Merge branch 'master' of https://github.com/memset0/gpt_academic into memset0-master 2025-01-30 22:03:31 +08:00
binary-husky
0458590a77 support qwen2.5-max! 2025-01-29 23:29:38 +08:00
Steven Moder
44fe78fff5 fix: Enhance API key validation in is_any_api_key function (#2113) 2025-01-29 21:40:30 +08:00
binary-husky
6a6eba5f16 support qwen2.5-max! 2025-01-29 21:30:54 +08:00
binary-husky
722a055879 Merge branch 'master' into frontier 2025-01-29 00:00:08 +08:00
binary-husky
5ddd657ebc tooltip glass transparent css 2025-01-28 23:50:21 +08:00
binary-husky
9b0b2cf260 auto hide tooltip when scroll down 2025-01-28 23:32:40 +08:00
binary-husky
9f39a6571a feat: customized font & font size 2025-01-28 02:52:56 +08:00
memset0
d07e736214 fix unpacking 2025-01-25 00:00:13 +08:00
memset0
a1f7ae5b55 feat: add support for R1 model and display CoT 2025-01-24 14:43:49 +08:00
binary-husky
1213ef19e5 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2025-01-22 01:50:08 +08:00
binary-husky
aaafe2a797 fix xelatex font problem in all-cap image 2025-01-22 01:49:53 +08:00
binary-husky
2716606f0c Update README.md 2025-01-16 23:40:24 +08:00
binary-husky
286f7303be fix image display bug 2025-01-12 21:54:43 +08:00
binary-husky
7eeab9e376 fix code block display bug 2025-01-09 22:31:59 +08:00
binary-husky
4ca331fb28 prevent html rendering for input 2025-01-05 21:20:12 +08:00
binary-husky
9487829930 change max_chat_preserve = 10 2025-01-03 00:34:36 +08:00
binary-husky
8254930495 Merge branch 'master' into frontier 2025-01-03 00:31:30 +08:00
binary-husky
a73074b89e upgrade chat checkpoint 2025-01-03 00:31:03 +08:00
binary-husky
ca1ab57f5d Merge branch 'master' into frontier 2024-12-29 00:08:59 +08:00
Yuki
e20177cb7d Support new azure ai key pattern (#2098)
* fix cookie overflow bug

* fix temp issue of o1

* compat bug fix

* support new azure ai key pattern

* support new azure ai key pattern

* allow disable openai proxy in `WHEN_TO_USE_PROXY`

* change padding

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-12-29 00:04:16 +08:00
Southlandi
fd93622840 Fix Gemini conversation error (when the number of stop words is 0) (#2092) 2024-12-28 23:22:10 +08:00
whyXVI
09a82a572d Fix RuntimeError in predict_no_ui_long_connection() (#2095)
Bug fix: Fix RuntimeError in predict_no_ui_long_connection()

In the original code, calling predict_no_ui_long_connection() would trigger a RuntimeError("OpenAI拒绝了请求:" + error_msg) even when the server responded normally. The issue occurred due to incorrect handling of SSE protocol comment lines (lines starting with ":"). 

Modified the parsing logic in both `predict` and `predict_no_ui_long_connection` to handle these lines correctly, making the logic more intuitive and robust.
2024-12-28 23:21:14 +08:00
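
A minimal sketch of the parsing rule this fix describes, assuming an OpenAI-style SSE stream (the function name and structure are illustrative, not the repository's actual implementation):

```python
# Minimal sketch (not the repository's code) of the guard described above:
# per the SSE protocol, lines starting with ":" are comments (commonly used as
# keep-alive pings) and must be skipped instead of being treated as error text.
def iter_sse_payloads(raw_lines):
    """Yield the data payload of each SSE event, ignoring comment and blank lines."""
    for line in raw_lines:
        line = line.strip()
        if not line:
            continue                      # blank line separates events
        if line.startswith(":"):
            continue                      # SSE comment / keep-alive, not a server error
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            if payload == "[DONE]":
                return                    # end-of-stream sentinel used by OpenAI-style APIs
            yield payload
```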
G.RQ
c53ddf65aa Fix the bug where the "Reset" button raised an error (#2102)
* fix Reset button bug

* fix version control bug

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-12-28 23:19:25 +08:00
binary-husky
6bd410582b Merge branch 'master' into frontier 2024-12-28 07:15:37 +08:00
binary-husky
ac64a77c2d allow disable openai proxy in WHEN_TO_USE_PROXY 2024-12-28 07:14:54 +08:00
binary-husky
dae8a0affc compat bug fix 2024-12-25 01:21:58 +08:00
binary-husky
97a81e9388 fix temp issue of o1 2024-12-25 00:54:03 +08:00
binary-husky
1dd1d0ed6c fix cookie overflow bug 2024-12-25 00:33:20 +08:00
Aibot
4fe638ffa8 Dev/aibot/bug fix (#2086)
* Add Windows environment packaging and a one-click startup script (#2068)

* Add automatic packaging of Windows environment dependencies

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>

* update requirements

* update readme

* idor-vuln-bug-fix

* vuln-bug-fix: validate file size, default 500M

* add tts test

* remove welcome card when layout overflows

---------

Co-authored-by: Menghuan <menghuan2003@outlook.com>
Co-authored-by: binary-husky <qingxu.fu@outlook.com>
Co-authored-by: aibot <hangyuntang@qq.com>
2024-12-23 10:17:43 +08:00
binary-husky
060af0d2e6 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2024-12-22 23:33:44 +08:00
binary-husky
a848f714b6 fix welcome card bugs 2024-12-22 23:33:22 +08:00
binary-husky
924f8e30c7 Update issue stale.yml 2024-12-22 14:16:18 +08:00
binary-husky
f40347665b github action change 2024-12-22 14:15:16 +08:00
binary-husky
734c40bbde fix non-localhost javascript error 2024-12-22 14:01:22 +08:00
binary-husky
4ec87fbb54 history ng patch 1 2024-12-21 11:27:53 +08:00
binary-husky
17b5c22e61 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2024-12-19 22:46:14 +08:00
binary-husky
c6cd04a407 promote the rank of DASHSCOPE_API_KEY 2024-12-19 22:39:14 +08:00
YIQI JIANG
f60a12f8b4 Add o1 and o1-2024-12-17 model support (#2090)
* Add o1 and o1-2024-12-17 model support

* patch api key selection

---------

Co-authored-by: 蒋翌琪 <jiangyiqi99@jiangyiqideMacBook-Pro.local>
Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-12-19 22:32:57 +08:00
binary-husky
8413fb15ba optimize welcome page 2024-12-18 23:35:25 +08:00
binary-husky
72b2ce9b62 ollama patch 2024-12-18 23:05:55 +08:00
binary-husky
f43ef909e2 roll version to 3.91 2024-12-18 22:56:41 +08:00
binary-husky
9651ad488f Merge branch 'master' into frontier 2024-12-18 22:27:12 +08:00
binary-husky
81da7bb1a5 remove welcome card when layout overflows 2024-12-18 17:48:02 +08:00
binary-husky
4127162ee7 add tts test 2024-12-18 17:47:23 +08:00
binary-husky
98e5cb7b77 update readme 2024-12-09 23:57:10 +08:00
binary-husky
c88d8047dd cookie storage to local storage 2024-12-09 23:52:02 +08:00
binary-husky
e4bebea28d update requirements 2024-12-09 23:40:23 +08:00
YE Ke 叶柯
294df6c2d5 Add ChatGLM4 local deployment support and refactor ChatGLM bridge's path configuration (#2062)
* feat(request_llms and config.py): ChatGLM4 Deployment

Add support for local deployment of ChatGLM4 model

* 🦄 refactor(bridge_chatglm3.py): ChatGLM3 model path

Added ChatGLM3 path customization (in config.py).
Removed obsolete quantization model options that had already been commented out

---------

Co-authored-by: MarkDeia <17290550+MarkDeia@users.noreply.github.com>
2024-12-07 23:43:51 +08:00
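
As a rough guide, the path customization described here surfaces as a config.py entry mirroring the CHATGLM_LOCAL_MODEL_PATH line visible in the config.py diff further down; a minimal sketch with illustrative values:

```python
# Sketch of the configuration this PR introduces (values are illustrative):
# select the local ChatGLM model and point the new path option at a local
# checkpoint directory or a HuggingFace model id.
LLM_MODEL = "chatglm4"                               # or "chatglm3"
CHATGLM_LOCAL_MODEL_PATH = "THUDM/glm-4-9b-chat"     # e.g. "/home/user/ChatGLM3-6B/"
```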
Zhenhong Du
239894544e Add support for grok-beta model from x.ai (#2060)
* Update config.py

add support for `grok-beta` model

* Update bridge_all.py

add support for `grok-beta` model
2024-12-07 23:41:53 +08:00
Menghuan
ed5fc84d4e Add Windows environment packaging and a one-click startup script (#2068)
* Add automatic packaging of Windows environment dependencies

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-12-07 23:41:02 +08:00
Menghuan
e3f84069ee Improve Doc2X requests and add support for xelatex compilation (#2058)
* clean up the Doc2X request function formatting

* update the middle section

* add a Doc2X timeout setting and support for xelatex compilation

* bug fixes and add detection of the xelatex installation

* improve stability under poor network conditions

* fix the issue where underscores in model names were not displayed

* add xelatex logs

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-12-07 23:23:59 +08:00
binary-husky
7bf094b6b6 remove 2024-12-07 22:43:03 +08:00
binary-husky
57d7dc33d3 sync common.js 2024-12-07 17:10:01 +08:00
binary-husky
94ccd77480 remove gen restore btn 2024-12-07 16:22:29 +08:00
binary-husky
48e53cba05 update gradio 2024-12-07 16:18:05 +08:00
binary-husky
e9a7f9439f upgrade gradio 2024-12-07 15:59:30 +08:00
binary-husky
a88b119bf0 change urls 2024-12-05 22:13:59 +08:00
binary-husky
eee8115434 add a config note 2024-12-04 23:55:22 +08:00
binary-husky
4f6a272113 remove keyword extraction 2024-12-04 01:33:31 +08:00
binary-husky
cf3dd5ddb6 add fail fallback option for media plugin 2024-12-04 01:06:12 +08:00
binary-husky
f6f10b7230 media plugin update 2024-12-04 00:36:34 +08:00
binary-husky
bd7b219e8f update web search functionality 2024-12-02 01:55:01 +08:00
binary-husky
e62decac21 change some open fn encoding to utf-8 2024-11-19 15:53:50 +00:00
binary-husky
588b22e039 comment remove 2024-11-19 15:05:48 +00:00
binary-husky
ef18aeda81 adjust rag 2024-11-19 14:59:50 +00:00
binary-husky
3520131ca2 public media gpt 2024-11-18 18:38:49 +00:00
binary-husky
ff5901d8c0 Merge branch 'master' into frontier 2024-11-17 18:16:19 +00:00
binary-husky
2305576410 unify mutex button manifest 2024-11-17 18:14:45 +00:00
binary-husky
52f23c505c media-gpt update 2024-11-17 17:45:53 +00:00
binary-husky
34cc484635 chatgpt-4o-latest 2024-11-11 15:58:57 +00:00
binary-husky
d152f62894 renamed plugins 2024-11-11 14:55:05 +00:00
binary-husky
ca35f56f9b fix: media gpt upgrade 2024-11-11 14:48:29 +00:00
binary-husky
d616fd121a update experimental media agent 2024-11-10 16:42:31 +00:00
binary-husky
f3fda0d3fc Merge branch 'master' into frontier 2024-11-10 13:41:44 +00:00
binary-husky
197287fc30 Enhance archive extraction with error handling for tar and gzip formats 2024-11-09 10:10:46 +00:00
Bingchen Jiang
c37fcc9299 Adding support to new openai apikey format (#2030) 2024-11-09 13:41:19 +08:00
binary-husky
91f5e6b8f7 resolve pickle security issue 2024-11-04 13:49:49 +00:00
hcy2206
4f0851f703 Add support for glm-4-plus (#2014)
* add support for the iFlytek Spark 4.0 large model

* Create github action sync.yml

* add support for Zhipu glm-4-plus

* feat: change arxiv io param

* catch comment source code exception

* upgrade auto comment

* add security patch

---------

Co-authored-by: GH Action - Upstream Sync <action@github.com>
Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-11-03 22:41:16 +08:00
binary-husky
2821f27756 add security patch 2024-11-03 14:34:17 +00:00
binary-husky
8f91a048a8 dfa algo imp 2024-11-03 09:39:14 +00:00
binary-husky
58eac38b4d Merge branch 'master' into frontier 2024-10-30 13:42:17 +00:00
binary-husky
d7f0cbe68e Merge branch 'master' into frontier 2024-10-21 14:31:25 +00:00
binary-husky
04c9077265 Merge branch 'papershare_beta' into frontier 2024-10-21 14:06:52 +00:00
binary-husky
6afd7db1e3 Merge branch 'master' into frontier 2024-10-21 14:06:23 +00:00
binary-husky
42d10a9481 update doc2x functions 2024-10-21 14:05:05 +00:00
binary-husky
50a1ea83ef control whether to allow sharing translation results with GPTAC academic cloud. 2024-10-18 18:05:50 +00:00
binary-husky
a9c86a7fb8 pre 2024-10-18 14:16:24 +00:00
binary-husky
2b299cf579 Merge branch 'master' into frontier 2024-10-16 15:22:27 +00:00
binary-husky
0121cacc84 Merge branch 'master' into frontier 2024-10-15 09:10:36 +00:00
binary-husky
f2dcd6ad55 compat: arxiv translation src shift 2024-10-15 09:06:57 +00:00
binary-husky
42d9712f20 Merge branch 'frontier' of github.com:binary-husky/chatgpt_academic into frontier 2024-10-15 08:24:01 +00:00
Boyin Liu
7f0ffa58f0 Boyin rag (#1983)
* first_version

* rag document support

* RAG interactive prompts added, issues resolved

* Resolve conflicts

* Resolve conflicts

* Resolve conflicts

* more file format support

* move import

* Resolve LlamaIndexRagWorker bug

* new resolve

* Address import  LlamaIndexRagWorker problem

* change import order

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-10-14 22:48:24 +08:00
binary-husky
5888d038aa move import 2024-10-13 16:17:10 +00:00
binary-husky
ee8213e936 Merge branch 'boyin_rag' into frontier 2024-10-13 16:12:51 +00:00
binary-husky
a57dcbcaeb Merge branch 'frontier' of github.com:binary-husky/chatgpt_academic into frontier 2024-10-13 08:26:06 +00:00
binary-husky
b812392a9d Merge branch 'master' into frontier 2024-10-13 08:25:47 +00:00
lbykkkk
fce4fa1ec7 more file format support 2024-10-12 18:25:33 +00:00
Boyin Liu
d13f1e270c Merge branch 'master' into boyin_rag 2024-10-11 22:31:07 +08:00
lbykkkk
85cf3d08eb Resolve conflicts 2024-10-11 22:29:56 +08:00
lbykkkk
584e747565 Resolve conflicts 2024-10-11 22:27:57 +08:00
lbykkkk
02ba653c19 Resolve conflicts 2024-10-11 22:21:53 +08:00
lbykkkk
2d12b5b27d RAG interactive prompts added, issues resolved 2024-10-11 01:06:17 +08:00
binary-husky
a4bcd262f9 Merge branch 'master' into frontier 2024-10-07 05:20:49 +00:00
Boyin Liu
748e31102f Merge branch 'master' into boyin_rag 2024-10-05 23:58:43 +08:00
binary-husky
97eef45ab7 Merge branch 'frontier' of github.com:binary-husky/chatgpt_academic into frontier 2024-10-01 11:59:14 +00:00
binary-husky
0c0e2acb9b remove logging extra 2024-10-01 11:57:47 +00:00
Ren Lifei
9fba8e0142 Added some modules to support openrouter (#1975)
* Added some modules for supporting openrouter model

Added some modules for supporting openrouter model

* Update config.py

* Update .gitignore

* Update bridge_openrouter.py

* Not changed actually

* Refactor logging in bridge_openrouter.py

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2024-09-28 18:05:34 +08:00
binary-husky
7d7867fb64 remove comment 2024-09-23 15:16:13 +00:00
lbykkkk
7ea791d83a rag document support 2024-09-22 21:37:57 +08:00
binary-husky
f9dbaa39fb Merge branch 'frontier' of github.com:binary-husky/chatgpt_academic into frontier 2024-09-21 15:40:24 +00:00
binary-husky
bbc2288c5b relax llama index version 2024-09-21 15:40:10 +00:00
Steven Moder
64ab916838 fix: loguru argument error with proxy enabled (#1977) 2024-09-21 23:32:00 +08:00
binary-husky
8fe559da9f update translation matrix 2024-09-21 14:56:10 +00:00
binary-husky
09fd22091a fix: console output 2024-09-21 14:41:36 +00:00
lbykkkk
df717f8bba first_version 2024-09-20 00:06:59 +08:00
binary-husky
e296719b23 Merge branch 'purge_print' into frontier 2024-09-16 09:56:25 +00:00
binary-husky
2f343179a2 logging -> loguru: final stage 2024-09-15 15:51:51 +00:00
binary-husky
4d9604f2e9 update social helper 2024-09-15 15:16:36 +00:00
binary-husky
bbf9e9f868 logging -> loguru stage 4 2024-09-14 16:00:09 +00:00
binary-husky
aa1f967dd7 support o1-preview and o1-mini 2024-09-13 03:11:53 +00:00
binary-husky
0d082327c8 logging -> loguru: stage 3 2024-09-11 08:49:55 +00:00
binary-husky
80acd9c875 import loguru: stage 2 2024-09-11 08:18:01 +00:00
binary-husky
17cd4f8210 logging sys to loguru: stage 1 complete 2024-09-11 03:30:30 +00:00
78 changed files with 5966 additions and 1071 deletions

View File

@@ -0,0 +1,56 @@
name: Create Conda Environment Package
on:
workflow_dispatch:
jobs:
build:
runs-on: windows-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Miniconda
uses: conda-incubator/setup-miniconda@v3
with:
auto-activate-base: true
activate-environment: ""
- name: Create new Conda environment
shell: bash -l {0}
run: |
conda create -n gpt python=3.11 -y
conda activate gpt
- name: Install requirements
shell: bash -l {0}
run: |
conda activate gpt
pip install -r requirements.txt
- name: Install conda-pack
shell: bash -l {0}
run: |
conda activate gpt
conda install conda-pack -y
- name: Pack conda environment
shell: bash -l {0}
run: |
conda activate gpt
conda pack -n gpt -o gpt.tar.gz
- name: Create workspace zip
shell: pwsh
run: |
mkdir workspace
Get-ChildItem -Exclude "workspace" | Copy-Item -Destination workspace -Recurse
Remove-Item -Path workspace/.git* -Recurse -Force -ErrorAction SilentlyContinue
Copy-Item gpt.tar.gz workspace/ -Force
- name: Upload packed files
uses: actions/upload-artifact@v4
with:
name: gpt-academic-package
path: workspace

View File

@@ -7,7 +7,7 @@
name: 'Close stale issues and PRs'
on:
schedule:
- cron: '*/5 * * * *'
- cron: '*/30 * * * *'
jobs:
stale:
@@ -19,7 +19,6 @@ jobs:
steps:
- uses: actions/stale@v8
with:
stale-issue-message: 'This issue is stale because it has been open 100 days with no activity. Remove stale label or comment or this will be closed in 1 days.'
stale-issue-message: 'This issue is stale because it has been open 100 days with no activity. Remove stale label or comment or this will be closed in 7 days.'
days-before-stale: 100
days-before-close: 1
debug-only: true
days-before-close: 7

.gitignore (vendored): 2 changes
View File

@@ -161,3 +161,5 @@ temp.*
objdump*
*.min.*.js
TODO
experimental_mods
search_results

View File

@@ -15,6 +15,7 @@ RUN echo '[global]' > /etc/pip.conf && \
# 语音输出功能以下两行第一行更换阿里源第二行安装ffmpeg都可以删除
RUN UBUNTU_VERSION=$(awk -F= '/^VERSION_CODENAME=/{print $2}' /etc/os-release); echo "deb https://mirrors.aliyun.com/debian/ $UBUNTU_VERSION main non-free contrib" > /etc/apt/sources.list; apt-get update
RUN apt-get install ffmpeg -y
RUN apt-get clean
# 进入工作路径(必要)
@@ -33,6 +34,7 @@ RUN pip3 install -r requirements.txt
# 非必要步骤,用于预热模块(可以删除)
RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
RUN python3 -m pip cache purge
# 启动(必要)

View File

@@ -1,5 +1,10 @@
> [!IMPORTANT]
> 2024.10.10: 突发停电,紧急恢复了提供[whl包](https://drive.google.com/file/d/19U_hsLoMrjOlQSzYS3pzWX9fTzyusArP/view?usp=sharing)的文件服务器
> `master主分支`最新动态(2025.2.4): 增加deepseek-r1支持增加字体自定义功能
> `master主分支`最新动态(2025.2.2): 三分钟快速接入最强qwen2.5-max[视频](https://www.bilibili.com/video/BV1LeFuerEG4)
> `frontier开发分支`最新动态(2024.12.9): 更新对话时间线功能优化xelatex论文翻译
> `wiki文档`最新动态(2024.12.5): 更新ollama接入指南
>
> 2024.10.10: 突发停电,紧急恢复了提供[whl包](https://drive.google.com/drive/folders/14kR-3V-lIbvGxri4AHc8TpiA1fqsw7SK?usp=sharing)的文件服务器
> 2024.10.8: 版本3.90加入对llama-index的初步支持版本3.80加入插件二级菜单功能详见wiki
> 2024.5.1: 加入Doc2x翻译PDF论文的功能[查看详情](https://github.com/binary-husky/gpt_academic/wiki/Doc2x)
> 2024.3.11: 全力支持Qwen、GLM、DeepseekCoder等中文大语言模型 SoVits语音克隆模块[查看详情](https://www.bilibili.com/video/BV1Rp421S7tF/)
@@ -170,26 +175,32 @@ flowchart TD
```
<details><summary>如果需要支持清华ChatGLM2/复旦MOSS/RWKV作为后端请点击展开此处</summary>
<details><summary>如果需要支持清华ChatGLM系列/复旦MOSS/RWKV作为后端请点击展开此处</summary>
<p>
【可选步骤】如果需要支持清华ChatGLM3/复旦MOSS作为后端需要额外安装更多依赖前提条件熟悉Python + 用过Pytorch + 电脑配置够强):
【可选步骤】如果需要支持清华ChatGLM系列/复旦MOSS作为后端需要额外安装更多依赖前提条件熟悉Python + 用过Pytorch + 电脑配置够强):
```sh
# 【可选步骤I】支持清华ChatGLM3。清华ChatGLM备注如果遇到"Call ChatGLM fail 不能正常加载ChatGLM的参数" 错误,参考如下: 1以上默认安装的为torch+cpu版使用cuda需要卸载torch重新安装torch+cuda 2如因本机配置不够无法加载模型可以修改request_llm/bridge_chatglm.py中的模型精度, 将 AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) 都修改为 AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
python -m pip install -r request_llms/requirements_chatglm.txt
# 【可选步骤II】支持复旦MOSS
# 【可选步骤II】支持清华ChatGLM4 注意此模型至少需要24G显存
python -m pip install -r request_llms/requirements_chatglm4.txt
# 可使用modelscope下载ChatGLM4模型
# pip install modelscope
# modelscope download --model ZhipuAI/glm-4-9b-chat --local_dir ./THUDM/glm-4-9b-chat
# 【可选步骤III】支持复旦MOSS
python -m pip install -r request_llms/requirements_moss.txt
git clone --depth=1 https://github.com/OpenLMLab/MOSS.git request_llms/moss # 注意执行此行代码时,必须处于项目根路径
# 【可选步骤III】支持RWKV Runner
# 【可选步骤IV】支持RWKV Runner
参考wikihttps://github.com/binary-husky/gpt_academic/wiki/%E9%80%82%E9%85%8DRWKV-Runner
# 【可选步骤IV】确保config.py配置文件的AVAIL_LLM_MODELS包含了期望的模型目前支持的全部模型如下(jittorllms系列目前仅支持docker方案)
# 【可选步骤V】确保config.py配置文件的AVAIL_LLM_MODELS包含了期望的模型目前支持的全部模型如下(jittorllms系列目前仅支持docker方案)
AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
# 【可选步骤V】支持本地模型INT8,INT4量化这里所指的模型本身不是量化版本目前deepseek-coder支持后面测试后会加入更多模型量化选择
# 【可选步骤VI】支持本地模型INT8,INT4量化这里所指的模型本身不是量化版本目前deepseek-coder支持后面测试后会加入更多模型量化选择
pip install bitsandbyte
# windows用户安装bitsandbytes需要使用下面bitsandbytes-windows-webui
python -m pip install bitsandbytes --prefer-binary --extra-index-url=https://jllllll.github.io/bitsandbytes-windows-webui

View File

@@ -7,11 +7,16 @@
Configuration reading priority: environment variable > config_private.py > config.py
"""
# [step 1]>> API_KEY = "sk-123456789xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx123456789"。极少数情况下还需要填写组织格式如org-123456789abcdefghijklmno的请向下翻找 API_ORG 设置项
API_KEY = "此处填API密钥" # 可同时填写多个API-KEY用英文逗号分割例如API_KEY = "sk-openaikey1,sk-openaikey2,fkxxxx-api2dkey3,azure-apikey4"
# [step 1-1]>> ( 接入GPT等模型 ) API_KEY = "sk-123456789xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx123456789"。极少数情况下还需要填写组织格式如org-123456789abcdefghijklmno的请向下翻找 API_ORG 设置项
API_KEY = "此处填APIKEY" # 可同时填写多个API-KEY用英文逗号分割例如API_KEY = "sk-openaikey1,sk-openaikey2,fkxxxx-api2dkey3,azure-apikey4"
# [step 1-2]>> ( 接入通义 qwen-max ) 接入通义千问在线大模型api-key获取地址 https://dashscope.console.aliyun.com/
DASHSCOPE_API_KEY = "" # 阿里灵积云API_KEY
# [step 2]>> 改为True应用代理如果直接在海外服务器部署此处不修改如果使用本地或无地域限制的大模型时此处也不需要修改
# [step 1-3]>> ( 接入 deepseek-reasoner, 即 deepseek-r1 ) 深度求索(DeepSeek) API KEY默认请求地址为"https://api.deepseek.com/v1/chat/completions"
DEEPSEEK_API_KEY = ""
# [step 2]>> 改为True应用代理。如果使用本地或无地域限制的大模型时此处不修改如果直接在海外服务器部署此处不修改
USE_PROXY = False
if USE_PROXY:
"""
@@ -32,11 +37,13 @@ else:
# [step 3]>> 模型选择是 (注意: LLM_MODEL是默认选中的模型, 它*必须*被包含在AVAIL_LLM_MODELS列表中 )
LLM_MODEL = "gpt-3.5-turbo-16k" # 可选 ↓↓↓
AVAIL_LLM_MODELS = ["gpt-4-1106-preview", "gpt-4-turbo-preview", "gpt-4-vision-preview",
AVAIL_LLM_MODELS = ["qwen-max", "o1-mini", "o1-mini-2024-09-12", "o1", "o1-2024-12-17", "o1-preview", "o1-preview-2024-09-12",
"gpt-4-1106-preview", "gpt-4-turbo-preview", "gpt-4-vision-preview",
"gpt-4o", "gpt-4o-mini", "gpt-4-turbo", "gpt-4-turbo-2024-04-09",
"gpt-3.5-turbo-1106", "gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5",
"gpt-4", "gpt-4-32k", "azure-gpt-4", "glm-4", "glm-4v", "glm-3-turbo",
"gemini-1.5-pro", "chatglm3"
"gemini-1.5-pro", "chatglm3", "chatglm4",
"deepseek-chat", "deepseek-coder", "deepseek-reasoner"
]
EMBEDDING_MODEL = "text-embedding-3-small"
@@ -47,7 +54,7 @@ EMBEDDING_MODEL = "text-embedding-3-small"
# "glm-4-0520", "glm-4-air", "glm-4-airx", "glm-4-flash",
# "qianfan", "deepseekcoder",
# "spark", "sparkv2", "sparkv3", "sparkv3.5", "sparkv4",
# "qwen-turbo", "qwen-plus", "qwen-max", "qwen-local",
# "qwen-turbo", "qwen-plus", "qwen-local",
# "moonshot-v1-128k", "moonshot-v1-32k", "moonshot-v1-8k",
# "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-turbo-0125", "gpt-4o-2024-05-13"
# "claude-3-haiku-20240307","claude-3-sonnet-20240229","claude-3-opus-20240229", "claude-2.1", "claude-instant-1.2",
@@ -55,6 +62,7 @@ EMBEDDING_MODEL = "text-embedding-3-small"
# "deepseek-chat" ,"deepseek-coder",
# "gemini-1.5-flash",
# "yi-34b-chat-0205","yi-34b-chat-200k","yi-large","yi-medium","yi-spark","yi-large-turbo","yi-large-preview",
# "grok-beta",
# ]
# --- --- --- ---
# 此外您还可以在接入one-api/vllm/ollama/Openroute时
@@ -81,6 +89,30 @@ DEFAULT_WORKER_NUM = 3
THEME = "Default"
AVAIL_THEMES = ["Default", "Chuanhu-Small-and-Beautiful", "High-Contrast", "Gstaff/Xkcd", "NoCrypt/Miku"]
FONT = "Theme-Default-Font"
AVAIL_FONTS = [
"默认值(Theme-Default-Font)",
"宋体(SimSun)",
"黑体(SimHei)",
"楷体(KaiTi)",
"仿宋(FangSong)",
"华文细黑(STHeiti Light)",
"华文楷体(STKaiti)",
"华文仿宋(STFangsong)",
"华文宋体(STSong)",
"华文中宋(STZhongsong)",
"华文新魏(STXinwei)",
"华文隶书(STLiti)",
"思源宋体(Source Han Serif CN VF@https://chinese-fonts-cdn.deno.dev/packages/syst/dist/SourceHanSerifCN/result.css)",
"月星楷(Moon Stars Kai HW@https://chinese-fonts-cdn.deno.dev/packages/moon-stars-kai/dist/MoonStarsKaiHW-Regular/result.css)",
"珠圆体(MaokenZhuyuanTi@https://chinese-fonts-cdn.deno.dev/packages/mkzyt/dist/猫啃珠圆体/result.css)",
"平方萌萌哒(PING FANG MENG MNEG DA@https://chinese-fonts-cdn.deno.dev/packages/pfmmd/dist/平方萌萌哒/result.css)",
"Helvetica",
"ui-sans-serif",
"sans-serif",
"system-ui"
]
# 默认的系统提示词system prompt
INIT_SYS_PROMPT = "Serve me as a writing and programming assistant."
@@ -132,16 +164,15 @@ MULTI_QUERY_LLM_MODELS = "gpt-3.5-turbo&chatglm3"
QWEN_LOCAL_MODEL_SELECTION = "Qwen/Qwen-1_8B-Chat-Int8"
# 接入通义千问在线大模型 https://dashscope.console.aliyun.com/
DASHSCOPE_API_KEY = "" # 阿里灵积云API_KEY
# 百度千帆LLM_MODEL="qianfan"
BAIDU_CLOUD_API_KEY = ''
BAIDU_CLOUD_SECRET_KEY = ''
BAIDU_CLOUD_QIANFAN_MODEL = 'ERNIE-Bot' # 可选 "ERNIE-Bot-4"(文心大模型4.0), "ERNIE-Bot"(文心一言), "ERNIE-Bot-turbo", "BLOOMZ-7B", "Llama-2-70B-Chat", "Llama-2-13B-Chat", "Llama-2-7B-Chat", "ERNIE-Speed-128K", "ERNIE-Speed-8K", "ERNIE-Lite-8K"
# 如果使用ChatGLM3或ChatGLM4本地模型请把 LLM_MODEL="chatglm3" 或LLM_MODEL="chatglm4",并在此处指定模型路径
CHATGLM_LOCAL_MODEL_PATH = "THUDM/glm-4-9b-chat" # 例如"/home/hmp/ChatGLM3-6B/"
# 如果使用ChatGLM2微调模型请把 LLM_MODEL="chatglmft",并在此处指定模型路径
CHATGLM_PTUNING_CHECKPOINT = "" # 例如"/home/hmp/ChatGLM2-6B/ptuning/output/6b-pt-128-1e-2/checkpoint-100"
@@ -235,13 +266,11 @@ MOONSHOT_API_KEY = ""
YIMODEL_API_KEY = ""
# 深度求索(DeepSeek) API KEY默认请求地址为"https://api.deepseek.com/v1/chat/completions"
DEEPSEEK_API_KEY = ""
# 紫东太初大模型 https://ai-maas.wair.ac.cn
TAICHU_API_KEY = ""
# Grok API KEY
GROK_API_KEY = ""
# Mathpix 拥有执行PDF的OCR功能但是需要注册账号
MATHPIX_APPID = ""
@@ -273,8 +302,8 @@ GROBID_URLS = [
]
# Searxng互联网检索服务
SEARXNG_URL = "https://cloud-1.agent-matrix.com/"
# Searxng互联网检索服务这是一个huggingface空间请前往huggingface复制该空间然后把自己新的空间地址填在这里
SEARXNG_URLS = [ f"https://kaletianlre-beardvs{i}dd.hf.space/" for i in range(1,5) ]
# 是否允许通过自然语言描述修改本页的配置,该功能具有一定的危险性,默认关闭
@@ -298,7 +327,7 @@ ARXIV_CACHE_DIR = "gpt_log/arxiv_cache"
# 除了连接OpenAI之外还有哪些场合允许使用代理请尽量不要修改
WHEN_TO_USE_PROXY = ["Download_LLM", "Download_Gradio_Theme", "Connect_Grobid",
WHEN_TO_USE_PROXY = ["Connect_OpenAI", "Download_LLM", "Download_Gradio_Theme", "Connect_Grobid",
"Warmup_Modules", "Nougat_Download", "AutoGen", "Connect_OpenAI_Embedding"]
@@ -310,6 +339,10 @@ PLUGIN_HOT_RELOAD = False
NUM_CUSTOM_BASIC_BTN = 4
# 媒体智能体的服务地址这是一个huggingface空间请前往huggingface复制该空间然后把自己新的空间地址填在这里
DAAS_SERVER_URLS = [ f"https://niuziniu-biligpt{i}.hf.space/stream" for i in range(1,5) ]
"""
--------------- 配置关联关系说明 ---------------
@@ -369,6 +402,7 @@ NUM_CUSTOM_BASIC_BTN = 4
本地大模型示意图
├── "chatglm4"
├── "chatglm3"
├── "chatglm"
├── "chatglm_onnx"
@@ -399,7 +433,7 @@ NUM_CUSTOM_BASIC_BTN = 4
插件在线服务配置依赖关系示意图
├── 互联网检索
│ └── SEARXNG_URL
│ └── SEARXNG_URLS
├── 语音功能
│ ├── ENABLE_AUDIO

View File

@@ -2,7 +2,6 @@ from toolbox import HotReload # HotReload 的意思是热更新,修改函数
from toolbox import trimmed_format_exc
from loguru import logger
def get_crazy_functions():
from crazy_functions.读文章写摘要 import 读文章写摘要
from crazy_functions.生成函数注释 import 批量生成函数注释
@@ -17,7 +16,7 @@ def get_crazy_functions():
from crazy_functions.SourceCode_Analyse import 解析一个前端项目
from crazy_functions.高级功能函数模板 import 高阶功能模板函数
from crazy_functions.高级功能函数模板 import Demo_Wrap
from crazy_functions.Latex全文润色 import Latex英文润色
from crazy_functions.Latex_Project_Polish import Latex英文润色
from crazy_functions.询问多个大语言模型 import 同时问询
from crazy_functions.SourceCode_Analyse import 解析一个Lua项目
from crazy_functions.SourceCode_Analyse import 解析一个CSharp项目
@@ -33,8 +32,8 @@ def get_crazy_functions():
from crazy_functions.PDF_Translate import 批量翻译PDF文档
from crazy_functions.谷歌检索小助手 import 谷歌检索小助手
from crazy_functions.理解PDF文档内容 import 理解PDF文档内容标准文件输入
from crazy_functions.Latex全文润色 import Latex中文润色
from crazy_functions.Latex全文润色 import Latex英文纠错
from crazy_functions.Latex_Project_Polish import Latex中文润色
from crazy_functions.Latex_Project_Polish import Latex英文纠错
from crazy_functions.Markdown_Translate import Markdown中译英
from crazy_functions.虚空终端 import 虚空终端
from crazy_functions.生成多种Mermaid图表 import Mermaid_Gen
@@ -50,8 +49,16 @@ def get_crazy_functions():
from crazy_functions.Image_Generate_Wrap import ImageGen_Wrap
from crazy_functions.SourceCode_Comment import 注释Python项目
from crazy_functions.SourceCode_Comment_Wrap import SourceCodeComment_Wrap
from crazy_functions.VideoResource_GPT import 多媒体任务
function_plugins = {
"多媒体智能体": {
"Group": "智能体",
"Color": "stop",
"AsButton": False,
"Info": "【仅测试】多媒体任务",
"Function": HotReload(多媒体任务),
},
"虚空终端": {
"Group": "对话|编程|学术|智能体",
"Color": "stop",
@@ -720,12 +727,6 @@ def get_crazy_functions():
logger.error("Load function plugin failed")
# try:
# from crazy_functions.高级功能函数模板 import 测试图表渲染
# function_plugins.update({
@@ -740,19 +741,6 @@ def get_crazy_functions():
# logger.error(trimmed_format_exc())
# print('Load function plugin failed')
# try:
# from crazy_functions.chatglm微调工具 import 微调数据集生成
# function_plugins.update({
# "黑盒模型学习: 微调数据集生成 (先上传数据集)": {
# "Color": "stop",
# "AsButton": False,
# "AdvancedArgs": True,
# "ArgsReminder": "针对数据集输入(如 绿帽子*深蓝色衬衫*黑色运动裤)给出指令,例如您可以将以下命令复制到下方: --llm_to_learn=azure-gpt-3.5 --prompt_prefix='根据下面的服装类型提示想象一个穿着者对这个人外貌、身处的环境、内心世界、过去经历进行描写。要求100字以内用第二人称。' --system_prompt=''",
# "Function": HotReload(微调数据集生成)
# }
# })
# except:
# print('Load function plugin failed')
"""
设置默认值:
@@ -772,3 +760,23 @@ def get_crazy_functions():
function_plugins[name]["Color"] = "secondary"
return function_plugins
def get_multiplex_button_functions():
"""多路复用主提交按钮的功能映射
"""
return {
"常规对话":
"",
"多模型对话":
"询问多个GPT模型", # 映射到上面的 `询问多个GPT模型` 插件
"智能召回 RAG":
"Rag智能召回", # 映射到上面的 `Rag智能召回` 插件
"多媒体查询":
"多媒体智能体", # 映射到上面的 `多媒体智能体` 插件
}

View File

@@ -172,7 +172,7 @@ def 载入对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
user_request 当前用户的请求信息IP地址等
"""
from crazy_functions.crazy_utils import get_files_from_everything
success, file_manifest, _ = get_files_from_everything(txt, type='.html')
success, file_manifest, _ = get_files_from_everything(txt, type='.html',chatbot=chatbot)
if not success:
if txt == "": txt = '空空如也的输入栏'

View File

@@ -7,7 +7,7 @@ from bs4 import BeautifulSoup
from functools import lru_cache
from itertools import zip_longest
from check_proxy import check_proxy
from toolbox import CatchException, update_ui, get_conf
from toolbox import CatchException, update_ui, get_conf, update_ui_lastest_msg
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive, input_clipping
from request_llms.bridge_all import model_info
from request_llms.bridge_all import predict_no_ui_long_connection
@@ -115,7 +115,8 @@ def get_auth_ip():
def searxng_request(query, proxies, categories='general', searxng_url=None, engines=None):
if searxng_url is None:
url = get_conf("SEARXNG_URL")
urls = get_conf("SEARXNG_URLS")
url = random.choice(urls)
else:
url = searxng_url
@@ -192,6 +193,38 @@ def scrape_text(url, proxies) -> str:
text = "\n".join(chunk for chunk in chunks if chunk)
return text
def internet_search_with_analysis_prompt(prompt, analysis_prompt, llm_kwargs, chatbot):
from toolbox import get_conf
proxies = get_conf('proxies')
categories = 'general'
searxng_url = None # 使用默认的searxng_url
engines = None # 使用默认的搜索引擎
yield from update_ui_lastest_msg(lastmsg=f"检索中: {prompt} ...", chatbot=chatbot, history=[], delay=1)
urls = searxng_request(prompt, proxies, categories, searxng_url, engines=engines)
yield from update_ui_lastest_msg(lastmsg=f"依次访问搜索到的网站 ...", chatbot=chatbot, history=[], delay=1)
if len(urls) == 0:
return None
max_search_result = 5 # 最多收纳多少个网页的结果
history = []
for index, url in enumerate(urls[:max_search_result]):
yield from update_ui_lastest_msg(lastmsg=f"依次访问搜索到的网站: {url['link']} ...", chatbot=chatbot, history=[], delay=1)
res = scrape_text(url['link'], proxies)
prefix = f"{index}份搜索结果 [源自{url['source'][0]}搜索] {url['title'][:25]}"
history.extend([prefix, res])
i_say = f"从以上搜索结果中抽取信息,然后回答问题:{prompt} {analysis_prompt}"
i_say, history = input_clipping( # 裁剪输入从最长的条目开始裁剪防止爆token
inputs=i_say,
history=history,
max_token_limit=8192
)
gpt_say = predict_no_ui_long_connection(
inputs=i_say,
llm_kwargs=llm_kwargs,
history=history,
sys_prompt="请从搜索结果中抽取信息,对最相关的两个搜索结果进行总结,然后回答问题。",
console_slience=False,
)
return gpt_say
@CatchException
def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):

View File

@@ -1,4 +1,4 @@
import random
from toolbox import get_conf
from crazy_functions.Internet_GPT import 连接网络回答问题
from crazy_functions.plugin_template.plugin_class_template import GptAcademicPluginTemplate, ArgProperty
@@ -20,6 +20,9 @@ class NetworkGPT_Wrap(GptAcademicPluginTemplate):
第三个参数,名称`allow_cache`,参数`type`声明这是一个下拉菜单,下拉菜单上方显示`title`+`description`,下拉菜单的选项为`options``default_value`为下拉菜单默认值;
"""
urls = get_conf("SEARXNG_URLS")
url = random.choice(urls)
gui_definition = {
"main_input":
ArgProperty(title="输入问题", description="待通过互联网检索的问题,会自动读取输入框内容", default_value="", type="string").model_dump_json(), # 主输入,自动从输入框同步
@@ -30,7 +33,7 @@ class NetworkGPT_Wrap(GptAcademicPluginTemplate):
"optimizer":
ArgProperty(title="搜索优化", options=["关闭", "开启", "开启(增强)"], default_value="关闭", description="是否使用搜索增强。注意这可能会消耗较多token", type="dropdown").model_dump_json(),
"searxng_url":
ArgProperty(title="Searxng服务地址", description="输入Searxng的地址", default_value=get_conf("SEARXNG_URL"), type="string").model_dump_json(), # 主输入,自动从输入框同步
ArgProperty(title="Searxng服务地址", description="输入Searxng的地址", default_value=url, type="string").model_dump_json(), # 主输入,自动从输入框同步
}
return gui_definition

View File

@@ -559,7 +559,7 @@ def PDF翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, h
project_folder = move_project(project_folder)
# <-------------- set a hash tag for repeat-checking ------------->
with open(pj(project_folder, hash_tag + '.tag'), 'w') as f:
with open(pj(project_folder, hash_tag + '.tag'), 'w', encoding='utf8') as f:
f.write(hash_tag)
f.close()

View File

@@ -1,3 +1,4 @@
from shared_utils.fastapi_server import validate_path_safety
from toolbox import update_ui, trimmed_format_exc, promote_file_to_downloadzone, get_log_folder
from toolbox import CatchException, report_exception, write_history_to_file, zip_folder
from loguru import logger
@@ -155,6 +156,7 @@ def Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
import glob, os
if os.path.exists(txt):
project_folder = txt
validate_path_safety(project_folder, chatbot.get_user())
else:
if txt == "": txt = '空空如也的输入栏'
report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
@@ -193,6 +195,7 @@ def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
import glob, os
if os.path.exists(txt):
project_folder = txt
validate_path_safety(project_folder, chatbot.get_user())
else:
if txt == "": txt = '空空如也的输入栏'
report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
@@ -229,6 +232,7 @@ def Latex英文纠错(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
import glob, os
if os.path.exists(txt):
project_folder = txt
validate_path_safety(project_folder, chatbot.get_user())
else:
if txt == "": txt = '空空如也的输入栏'
report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")

View File

@@ -1,5 +1,6 @@
import glob, shutil, os, re
from loguru import logger
from shared_utils.fastapi_server import validate_path_safety
from toolbox import update_ui, trimmed_format_exc, gen_time_str
from toolbox import CatchException, report_exception, get_log_folder
from toolbox import write_history_to_file, promote_file_to_downloadzone
@@ -118,7 +119,7 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
def get_files_from_everything(txt, preference=''):
def get_files_from_everything(txt, preference='', chatbox=None):
if txt == "": return False, None, None
success = True
if txt.startswith('http'):
@@ -146,9 +147,11 @@ def get_files_from_everything(txt, preference=''):
# 直接给定文件
file_manifest = [txt]
project_folder = os.path.dirname(txt)
validate_path_safety(project_folder, chatbot.get_user())
elif os.path.exists(txt):
# 本地路径,递归搜索
project_folder = txt
validate_path_safety(project_folder, chatbot.get_user())
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.md', recursive=True)]
else:
project_folder = None
@@ -177,7 +180,7 @@ def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
return
history = [] # 清空历史,以免输入溢出
success, file_manifest, project_folder = get_files_from_everything(txt, preference="Github")
success, file_manifest, project_folder = get_files_from_everything(txt, preference="Github", chatbox=chatbot)
if not success:
# 什么都没有

View File

@@ -26,7 +26,7 @@ def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
# 清空历史,以免输入溢出
history = []
success, file_manifest, project_folder = get_files_from_everything(txt, type='.pdf')
success, file_manifest, project_folder = get_files_from_everything(txt, type='.pdf', chatbot=chatbot)
# 检测输入参数,如没有给定输入参数,直接退出
if (not success) and txt == "": txt = '空空如也的输入栏。提示请先上传文件把PDF文件拖入对话'
@@ -47,7 +47,7 @@ def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
yield from 解析PDF_基于DOC2X(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, DOC2X_API_KEY, user_request)
return
except:
chatbot.append([None, f"DOC2X服务不可用现在将执行效果稍差的旧版代码{trimmed_format_exc_markdown()}"])
chatbot.append([None, f"DOC2X服务不可用请检查报错详细{trimmed_format_exc_markdown()}"])
yield from update_ui(chatbot=chatbot, history=history)
if method == "GROBID":

View File

@@ -1,4 +1,11 @@
import os,glob
from typing import List
from shared_utils.fastapi_server import validate_path_safety
from toolbox import report_exception
from toolbox import CatchException, update_ui, get_conf, get_log_folder, update_ui_lastest_msg
from shared_utils.fastapi_server import validate_path_safety
from crazy_functions.crazy_utils import input_clipping
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
@@ -7,6 +14,37 @@ MAX_HISTORY_ROUND = 5
MAX_CONTEXT_TOKEN_LIMIT = 4096
REMEMBER_PREVIEW = 1000
@CatchException
def handle_document_upload(files: List[str], llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request, rag_worker):
"""
Handles document uploads by extracting text and adding it to the vector store.
"""
from llama_index.core import Document
from crazy_functions.rag_fns.rag_file_support import extract_text, supports_format
user_name = chatbot.get_user()
checkpoint_dir = get_log_folder(user_name, plugin_name='experimental_rag')
for file_path in files:
try:
validate_path_safety(file_path, user_name)
text = extract_text(file_path)
if text is None:
chatbot.append(
[f"上传文件: {os.path.basename(file_path)}", f"文件解析失败无法提取文本内容请更换文件。失败原因可能为1.文档格式过于复杂2. 不支持的文件格式,支持的文件格式后缀有:" + ", ".join(supports_format)])
else:
chatbot.append(
[f"上传文件: {os.path.basename(file_path)}", f"上传文件前50个字符为:{text[:50]}"])
document = Document(text=text, metadata={"source": file_path})
rag_worker.add_documents_to_vector_store([document])
chatbot.append([f"上传文件: {os.path.basename(file_path)}", "文件已成功添加到知识库。"])
except Exception as e:
report_exception(chatbot, history, a=f"处理文件: {file_path}", b=str(e))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# Main Q&A function with document upload support
@CatchException
def Rag问答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
@@ -23,6 +61,7 @@ def Rag问答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, u
# 1. we retrieve rag worker from global context
user_name = chatbot.get_user()
checkpoint_dir = get_log_folder(user_name, plugin_name='experimental_rag')
if user_name in RAG_WORKER_REGISTER:
rag_worker = RAG_WORKER_REGISTER[user_name]
else:
@@ -30,21 +69,37 @@ def Rag问答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, u
user_name,
llm_kwargs,
checkpoint_dir=checkpoint_dir,
auto_load_checkpoint=True)
auto_load_checkpoint=True
)
current_context = f"{VECTOR_STORE_TYPE} @ {checkpoint_dir}"
tip = "提示输入“清空向量数据库”可以清空RAG向量数据库"
if txt == "清空向量数据库":
chatbot.append([txt, f'正在清空 ({current_context}) ...'])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
rag_worker.purge()
yield from update_ui_lastest_msg('已清空', chatbot, history, delay=0) # 刷新界面
# 2. Handle special commands
if os.path.exists(txt) and os.path.isdir(txt):
project_folder = txt
validate_path_safety(project_folder, chatbot.get_user())
# Extract file paths from the user input
# Assuming the user inputs file paths separated by commas after the command
file_paths = [f for f in glob.glob(f'{project_folder}/**/*', recursive=True)]
chatbot.append([txt, f'正在处理上传的文档 ({current_context}) ...'])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
yield from handle_document_upload(file_paths, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request, rag_worker)
return
chatbot.append([txt, f'正在召回知识 ({current_context}) ...'])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
elif txt == "清空向量数据库":
chatbot.append([txt, f'正在清空 ({current_context}) ...'])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
rag_worker.purge_vector_store()
yield from update_ui_lastest_msg('已清空', chatbot, history, delay=0) # 刷新界面
return
# 2. clip history to reduce token consumption
# 2-1. reduce chat round
# 3. Normal Q&A processing
chatbot.append([txt, f'正在召回知识 ({current_context}) ...'])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# 4. Clip history to reduce token consumption
txt_origin = txt
if len(history) > MAX_HISTORY_ROUND * 2:
@@ -52,41 +107,47 @@ def Rag问答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, u
txt_clip, history, flags = input_clipping(txt, history, max_token_limit=MAX_CONTEXT_TOKEN_LIMIT, return_clip_flags=True)
input_is_clipped_flag = (flags["original_input_len"] != flags["clipped_input_len"])
# 2-2. if input is clipped, add input to vector store before retrieve
# 5. If input is clipped, add input to vector store before retrieve
if input_is_clipped_flag:
yield from update_ui_lastest_msg('检测到长输入, 正在向量化 ...', chatbot, history, delay=0) # 刷新界面
# save input to vector store
yield from update_ui_lastest_msg('检测到长输入, 正在向量化 ...', chatbot, history, delay=0) # 刷新界面
# Save input to vector store
rag_worker.add_text_to_vector_store(txt_origin)
yield from update_ui_lastest_msg('向量化完成 ...', chatbot, history, delay=0) # 刷新界面
yield from update_ui_lastest_msg('向量化完成 ...', chatbot, history, delay=0) # 刷新界面
if len(txt_origin) > REMEMBER_PREVIEW:
HALF = REMEMBER_PREVIEW//2
HALF = REMEMBER_PREVIEW // 2
i_say_to_remember = txt[:HALF] + f" ...\n...(省略{len(txt_origin)-REMEMBER_PREVIEW}字)...\n... " + txt[-HALF:]
if (flags["original_input_len"] - flags["clipped_input_len"]) > HALF:
txt_clip = txt_clip + f" ...\n...(省略{len(txt_origin)-len(txt_clip)-HALF}字)...\n... " + txt[-HALF:]
else:
pass
i_say = txt_clip
txt_clip = txt_clip + f" ...\n...(省略{len(txt_origin)-len(txt_clip)-HALF}字)...\n... " + txt[-HALF:]
else:
i_say_to_remember = i_say = txt_clip
else:
i_say_to_remember = i_say = txt_clip
# 3. we search vector store and build prompts
# 6. Search vector store and build prompts
nodes = rag_worker.retrieve_from_store_with_query(i_say)
prompt = rag_worker.build_prompt(query=i_say, nodes=nodes)
# 7. Query language model
if len(chatbot) != 0:
chatbot.pop(-1) # Pop temp chat, because we are going to add them again inside `request_gpt_model_in_new_thread_with_ui_alive`
# 4. it is time to query llms
if len(chatbot) != 0: chatbot.pop(-1) # pop temp chat, because we are going to add them again inside `request_gpt_model_in_new_thread_with_ui_alive`
model_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=prompt, inputs_show_user=i_say,
llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
inputs=prompt,
inputs_show_user=i_say,
llm_kwargs=llm_kwargs,
chatbot=chatbot,
history=history,
sys_prompt=system_prompt,
retry_times_at_unknown_error=0
)
# 5. remember what has been asked / answered
yield from update_ui_lastest_msg(model_say + '</br></br>' + f'对话记忆中, 请稍等 ({current_context}) ...', chatbot, history, delay=0.5) # 刷新界面
# 8. Remember Q&A
yield from update_ui_lastest_msg(
model_say + '</br></br>' + f'对话记忆中, 请稍等 ({current_context}) ...',
chatbot, history, delay=0.5
)
rag_worker.remember_qa(i_say_to_remember, model_say)
history.extend([i_say, model_say])
yield from update_ui_lastest_msg(model_say, chatbot, history, delay=0, msg=tip) # 刷新界面
# 9. Final UI Update
yield from update_ui_lastest_msg(model_say, chatbot, history, delay=0, msg=tip)

View File

@@ -0,0 +1,204 @@
import requests
import random
import time
import re
import json
from bs4 import BeautifulSoup
from functools import lru_cache
from itertools import zip_longest
from check_proxy import check_proxy
from toolbox import CatchException, update_ui, get_conf, promote_file_to_downloadzone, update_ui_lastest_msg, generate_file_link
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive, input_clipping
from request_llms.bridge_all import model_info
from request_llms.bridge_all import predict_no_ui_long_connection
from crazy_functions.prompts.internet import SearchOptimizerPrompt, SearchAcademicOptimizerPrompt
from crazy_functions.json_fns.pydantic_io import GptJsonIO, JsonStringError
from textwrap import dedent
from loguru import logger
from pydantic import BaseModel, Field
class Query(BaseModel):
search_keyword: str = Field(description="search query for video resource")
class VideoResource(BaseModel):
thought: str = Field(description="analysis of the search results based on the user's query")
title: str = Field(description="title of the video")
author: str = Field(description="author/uploader of the video")
bvid: str = Field(description="unique ID of the video")
another_failsafe_bvid: str = Field(description="provide another bvid, the other one is not working")
def get_video_resource(search_keyword):
from crazy_functions.media_fns.get_media import search_videos
# Search for videos and return the first result
videos = search_videos(
search_keyword
)
# Return the first video if results exist, otherwise return None
return videos
def download_video(bvid, user_name, chatbot, history):
# from experimental_mods.get_bilibili_resource import download_bilibili
from crazy_functions.media_fns.get_media import download_video
# pause a while
tic_time = 8
for i in range(tic_time):
yield from update_ui_lastest_msg(
lastmsg=f"即将下载音频。等待{tic_time-i}秒后自动继续, 点击“停止”键取消此操作。",
chatbot=chatbot, history=[], delay=1)
# download audio
chatbot.append((None, "下载音频, 请稍等...")); yield from update_ui(chatbot=chatbot, history=history)
downloaded_files = yield from download_video(bvid, only_audio=True, user_name=user_name, chatbot=chatbot, history=history)
if len(downloaded_files) == 0:
# failed to download audio
return []
# preview
preview_list = [promote_file_to_downloadzone(fp) for fp in downloaded_files]
file_links = generate_file_link(preview_list)
yield from update_ui_lastest_msg(f"已完成的文件: <br/>" + file_links, chatbot=chatbot, history=history, delay=0)
chatbot.append((None, f"即将下载视频。"))
# pause a while
tic_time = 16
for i in range(tic_time):
yield from update_ui_lastest_msg(
lastmsg=f"即将下载视频。等待{tic_time-i}秒后自动继续, 点击“停止”键取消此操作。",
chatbot=chatbot, history=[], delay=1)
# download video
chatbot.append((None, "下载视频, 请稍等...")); yield from update_ui(chatbot=chatbot, history=history)
downloaded_files_part2 = yield from download_video(bvid, only_audio=False, user_name=user_name, chatbot=chatbot, history=history)
# preview
preview_list = [promote_file_to_downloadzone(fp) for fp in downloaded_files_part2]
file_links = generate_file_link(preview_list)
yield from update_ui_lastest_msg(f"已完成的文件: <br/>" + file_links, chatbot=chatbot, history=history, delay=0)
# return
return downloaded_files + downloaded_files_part2
class Strategy(BaseModel):
thought: str = Field(description="analysis of the user's wish, for example, can you recall the name of the resource?")
which_methods: str = Field(description="Which method to use to find the necessary information? choose from 'method_1' and 'method_2'.")
method_1_search_keywords: str = Field(description="Generate keywords to search the internet if you choose method 1, otherwise empty.")
method_2_generate_keywords: str = Field(description="Generate keywords for video download engine if you choose method 2, otherwise empty.")
@CatchException
def 多媒体任务(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
user_wish: str = txt
# query demos:
# - "我想找一首歌里面有句歌词是“turn your face towards the sun”"
# - "一首歌,第一句是红豆生南国"
# - "一首音乐,中国航天任务专用的那首"
# - "戴森球计划在熔岩星球的音乐"
# - "hanser的百变什么精"
# - "打大圣残躯时的bgm"
# - "渊下宫战斗音乐"
# 搜索
chatbot.append((txt, "检索中, 请稍等..."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
if "跳过联网搜索" not in user_wish:
# 结构化生成
internet_search_keyword = user_wish
yield from update_ui_lastest_msg(lastmsg=f"发起互联网检索: {internet_search_keyword} ...", chatbot=chatbot, history=[], delay=1)
from crazy_functions.Internet_GPT import internet_search_with_analysis_prompt
result = yield from internet_search_with_analysis_prompt(
prompt=internet_search_keyword,
analysis_prompt="请根据搜索结果分析,获取用户需要找的资源的名称、作者、出处等信息。",
llm_kwargs=llm_kwargs,
chatbot=chatbot
)
yield from update_ui_lastest_msg(lastmsg=f"互联网检索结论: {result} \n\n 正在生成进一步检索方案 ...", chatbot=chatbot, history=[], delay=1)
rf_req = dedent(f"""
The user wish to get the following resource:
{user_wish}
Meanwhile, you can access another expert's opinion on the user's wish:
{result}
Generate search keywords (less than 5 keywords) for video download engine accordingly.
""")
else:
user_wish = user_wish.replace("跳过联网搜索", "").strip()
rf_req = dedent(f"""
The user wish to get the following resource:
{user_wish}
Generate reseach keywords (less than 5 keywords) accordingly.
""")
gpt_json_io = GptJsonIO(Query)
inputs = rf_req + gpt_json_io.format_instructions
run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
analyze_res = run_gpt_fn(inputs, "")
logger.info(analyze_res)
query: Query = gpt_json_io.generate_output_auto_repair(analyze_res, run_gpt_fn)
video_engine_keywords = query.search_keyword
# 关键词展示
chatbot.append((None, f"检索关键词已确认: {video_engine_keywords}。筛选中, 请稍等..."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# 获取候选资源
candadate_dictionary: dict = get_video_resource(video_engine_keywords)
candadate_dictionary_as_str = json.dumps(candadate_dictionary, ensure_ascii=False, indent=4)
# 展示候选资源
candadate_display = "\n".join([f"{i+1}. {it['title']}" for i, it in enumerate(candadate_dictionary)])
chatbot.append((None, f"候选:\n\n{candadate_display}"))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# 结构化生成
rf_req_2 = dedent(f"""
The user wish to get the following resource:
{user_wish}
Select the most relevant and suitable video resource from the following search results:
{candadate_dictionary_as_str}
Note:
1. The first several search video results are more likely to satisfy the user's wish.
2. The time duration of the video should be less than 10 minutes.
3. You should analyze the search results first, before giving your answer.
4. Use Chinese if possible.
5. Beside the primary video selection, give a backup video resource `bvid`.
""")
gpt_json_io = GptJsonIO(VideoResource)
inputs = rf_req_2 + gpt_json_io.format_instructions
run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
analyze_res = run_gpt_fn(inputs, "")
logger.info(analyze_res)
video_resource: VideoResource = gpt_json_io.generate_output_auto_repair(analyze_res, run_gpt_fn)
# Display
chatbot.append(
(None,
f"分析:{video_resource.thought}" "<br/>"
f"选择: `{video_resource.title}`。" "<br/>"
f"作者:{video_resource.author}"
)
)
chatbot.append((None, f"下载中, 请稍等..."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
if video_resource and video_resource.bvid:
logger.info(video_resource)
downloaded = yield from download_video(video_resource.bvid, chatbot.get_user(), chatbot, history)
if not downloaded:
chatbot.append((None, f"下载失败, 尝试备选 ..."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
downloaded = yield from download_video(video_resource.another_failsafe_bvid, chatbot.get_user(), chatbot, history)
@CatchException
def debug(bvid, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
yield from download_video(bvid, chatbot.get_user(), chatbot, history)

View File

@@ -1,141 +0,0 @@
from toolbox import CatchException, update_ui, promote_file_to_downloadzone
from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
import datetime, json
def fetch_items(list_of_items, batch_size):
for i in range(0, len(list_of_items), batch_size):
yield list_of_items[i:i + batch_size]
def string_to_options(arguments):
import argparse
import shlex
# Create an argparse.ArgumentParser instance
parser = argparse.ArgumentParser()
# Add command-line arguments
parser.add_argument("--llm_to_learn", type=str, help="LLM model to learn", default="gpt-3.5-turbo")
parser.add_argument("--prompt_prefix", type=str, help="Prompt prefix", default='')
parser.add_argument("--system_prompt", type=str, help="System prompt", default='')
parser.add_argument("--batch", type=int, help="System prompt", default=50)
parser.add_argument("--pre_seq_len", type=int, help="pre_seq_len", default=50)
parser.add_argument("--learning_rate", type=float, help="learning_rate", default=2e-2)
parser.add_argument("--num_gpus", type=int, help="num_gpus", default=1)
parser.add_argument("--json_dataset", type=str, help="json_dataset", default="")
parser.add_argument("--ptuning_directory", type=str, help="ptuning_directory", default="")
# Parse the arguments
args = parser.parse_args(shlex.split(arguments))
return args
@CatchException
def 微调数据集生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
"""
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
llm_kwargs gpt模型参数如温度和top_p等一般原样传递下去就行
plugin_kwargs 插件模型的参数
chatbot 聊天显示框的句柄,用于显示给用户
history 聊天历史,前情提要
system_prompt 给gpt的静默提醒
user_request 当前用户的请求信息IP地址等
"""
history = [] # 清空历史,以免输入溢出
chatbot.append(("这是什么功能?", "[Local Message] 微调数据集生成"))
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
args = plugin_kwargs.get("advanced_arg", None)
if args is None:
chatbot.append(("没给定指令", "退出"))
yield from update_ui(chatbot=chatbot, history=history); return
else:
arguments = string_to_options(arguments=args)
dat = []
with open(txt, 'r', encoding='utf8') as f:
for line in f.readlines():
json_dat = json.loads(line)
dat.append(json_dat["content"])
llm_kwargs['llm_model'] = arguments.llm_to_learn
for batch in fetch_items(dat, arguments.batch):
res = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
inputs_array=[f"{arguments.prompt_prefix}\n\n{b}" for b in (batch)],
inputs_show_user_array=[f"Show Nothing" for _ in (batch)],
llm_kwargs=llm_kwargs,
chatbot=chatbot,
history_array=[[] for _ in (batch)],
sys_prompt_array=[arguments.system_prompt for _ in (batch)],
max_workers=10 # OpenAI所允许的最大并行过载
)
with open(txt+'.generated.json', 'a+', encoding='utf8') as f:
for b, r in zip(batch, res[1::2]):
f.write(json.dumps({"content":b, "summary":r}, ensure_ascii=False)+'\n')
promote_file_to_downloadzone(txt+'.generated.json', rename_file='generated.json', chatbot=chatbot)
return
@CatchException
def 启动微调(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
"""
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
llm_kwargs gpt模型参数如温度和top_p等一般原样传递下去就行
plugin_kwargs 插件模型的参数
chatbot 聊天显示框的句柄,用于显示给用户
history 聊天历史,前情提要
system_prompt 给gpt的静默提醒
user_request 当前用户的请求信息IP地址等
"""
import subprocess
history = [] # 清空历史,以免输入溢出
chatbot.append(("这是什么功能?", "[Local Message] 微调数据集生成"))
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
args = plugin_kwargs.get("advanced_arg", None)
if args is None:
chatbot.append(("没给定指令", "退出"))
yield from update_ui(chatbot=chatbot, history=history); return
else:
arguments = string_to_options(arguments=args)
pre_seq_len = arguments.pre_seq_len # 128
learning_rate = arguments.learning_rate # 2e-2
num_gpus = arguments.num_gpus # 1
json_dataset = arguments.json_dataset # 't_code.json'
ptuning_directory = arguments.ptuning_directory # '/home/hmp/ChatGLM2-6B/ptuning'
command = f"torchrun --standalone --nnodes=1 --nproc-per-node={num_gpus} main.py \
--do_train \
--train_file AdvertiseGen/{json_dataset} \
--validation_file AdvertiseGen/{json_dataset} \
--preprocessing_num_workers 20 \
--prompt_column content \
--response_column summary \
--overwrite_cache \
--model_name_or_path THUDM/chatglm2-6b \
--output_dir output/clothgen-chatglm2-6b-pt-{pre_seq_len}-{learning_rate} \
--overwrite_output_dir \
--max_source_length 256 \
--max_target_length 256 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 16 \
--predict_with_generate \
--max_steps 100 \
--logging_steps 10 \
--save_steps 20 \
--learning_rate {learning_rate} \
--pre_seq_len {pre_seq_len} \
--quantization_bit 4"
process = subprocess.Popen(command, shell=True, cwd=ptuning_directory)
try:
process.communicate(timeout=3600*24)
except subprocess.TimeoutExpired:
process.kill()
return

View File

@@ -2,6 +2,7 @@ import os
import threading
from loguru import logger
from shared_utils.char_visual_effect import scolling_visual_effect
from shared_utils.fastapi_server import validate_path_safety
from toolbox import update_ui, get_conf, trimmed_format_exc, get_max_token, Singleton
def input_clipping(inputs, history, max_token_limit, return_clip_flags=False):
@@ -169,6 +170,7 @@ def can_multi_process(llm) -> bool:
def default_condition(llm) -> bool:
# legacy condition
if llm.startswith('gpt-'): return True
if llm.startswith('chatgpt-'): return True
if llm.startswith('api2d-'): return True
if llm.startswith('azure-'): return True
if llm.startswith('spark'): return True
@@ -538,7 +540,7 @@ def read_and_clean_pdf_text(fp):
return meta_txt, page_one_meta
def get_files_from_everything(txt, type): # type='.md'
def get_files_from_everything(txt, type, chatbot=None): # type='.md'
"""
This function collects every file of the given type (e.g. .md) under the specified directory; it can also fetch a file from the internet.
The parameters and return values are described below:
@@ -550,6 +552,7 @@ def get_files_from_everything(txt, type): # type='.md'
- file_manifest: list of file paths; the absolute path of every file whose suffix matches the given type.
- project_folder: string, the folder containing the files; for a file fetched from the internet, this is a temporary folder.
- chatbot: Chatbot object carrying cookies, used as the basis for more powerful features.
"""
import glob, os
@@ -572,9 +575,13 @@ def get_files_from_everything(txt, type): # type='.md'
# 直接给定文件
file_manifest = [txt]
project_folder = os.path.dirname(txt)
if chatbot is not None:
validate_path_safety(project_folder, chatbot.get_user())
elif os.path.exists(txt):
# 本地路径,递归搜索
project_folder = txt
if chatbot is not None:
validate_path_safety(project_folder, chatbot.get_user())
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*'+type, recursive=True)]
if len(file_manifest) == 0:
success = False
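Usage sketch for the new keyword argument: call sites that pass the chatbot get the per-user path-safety check, while older call sites that omit it keep working because chatbot defaults to None.

from crazy_functions.crazy_utils import get_files_from_everything

def collect_markdown_files(txt, chatbot=None):
    success, file_manifest, project_folder = get_files_from_everything(txt, type='.md', chatbot=chatbot)
    return file_manifest if success else []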

View File

@@ -300,7 +300,8 @@ def Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin
write_html(pfg.sp_file_contents, pfg.sp_file_result, chatbot=chatbot, project_folder=project_folder)
# <-------- 写出文件 ---------->
msg = f"当前大语言模型: {llm_kwargs['llm_model']},当前语言模型温度设定: {llm_kwargs['temperature']}"
model_name = llm_kwargs['llm_model'].replace('_', '\\_') # 替换LLM模型名称中的下划线为转义字符
msg = f"当前大语言模型: {model_name},当前语言模型温度设定: {llm_kwargs['temperature']}"
final_tex = lps.merge_result(pfg.file_result, mode, msg)
objdump((lps, pfg.file_result, mode, msg), file=pj(project_folder,'merge_result.pkl'))
@@ -351,6 +352,41 @@ def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_f
chatbot.append([f"正在编译PDF文档", f'编译已经开始。当前工作路径为{work_folder}如果程序停顿5分钟以上请直接去该路径下取回翻译结果或者重启之后再度尝试 ...']); yield from update_ui(chatbot=chatbot, history=history)
chatbot.append([f"正在编译PDF文档", '...']); yield from update_ui(chatbot=chatbot, history=history); time.sleep(1); chatbot[-1] = list(chatbot[-1]) # 刷新界面
yield from update_ui_lastest_msg('编译已经开始...', chatbot, history) # 刷新Gradio前端界面
# 检查是否需要使用xelatex
def check_if_need_xelatex(tex_path):
try:
with open(tex_path, 'r', encoding='utf-8', errors='replace') as f:
content = f.read(5000)
# 检查是否有使用xelatex的宏包
need_xelatex = any(
pkg in content
for pkg in ['fontspec', 'xeCJK', 'xetex', 'unicode-math', 'xltxtra', 'xunicode']
)
if need_xelatex:
logger.info(f"检测到宏包需要xelatex编译, 切换至xelatex编译")
else:
logger.info(f"未检测到宏包需要xelatex编译, 使用pdflatex编译")
return need_xelatex
except Exception:
return False
# 根据编译器类型返回编译命令
def get_compile_command(compiler, filename):
compile_command = f'{compiler} -interaction=batchmode -file-line-error {filename}.tex'
logger.info('Latex 编译指令: ' + compile_command)
return compile_command
# 确定使用的编译器
compiler = 'pdflatex'
if check_if_need_xelatex(pj(work_folder_modified, f'{main_file_modified}.tex')):
logger.info("检测到宏包需要xelatex编译切换至xelatex编译")
# Check if xelatex is installed
try:
import subprocess
subprocess.run(['xelatex', '--version'], capture_output=True, check=True)
compiler = 'xelatex'
except (subprocess.CalledProcessError, FileNotFoundError):
raise RuntimeError("检测到需要使用xelatex编译但系统中未安装xelatex。请先安装texlive或其他提供xelatex的LaTeX发行版。")
while True:
import os
@@ -361,10 +397,10 @@ def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_f
# https://stackoverflow.com/questions/738755/dont-make-me-manually-abort-a-latex-compile-when-theres-an-error
yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译原始PDF ...', chatbot, history) # 刷新Gradio前端界面
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_original}.tex', work_folder_original)
ok = compile_latex_with_timeout(get_compile_command(compiler, main_file_original), work_folder_original)
yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译转化后的PDF ...', chatbot, history) # 刷新Gradio前端界面
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_modified}.tex', work_folder_modified)
ok = compile_latex_with_timeout(get_compile_command(compiler, main_file_modified), work_folder_modified)
if ok and os.path.exists(pj(work_folder_modified, f'{main_file_modified}.pdf')):
# 只有第二步成功,才能继续下面的步骤
@@ -375,10 +411,10 @@ def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_f
ok = compile_latex_with_timeout(f'bibtex {main_file_modified}.aux', work_folder_modified)
yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译文献交叉引用 ...', chatbot, history) # 刷新Gradio前端界面
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_original}.tex', work_folder_original)
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_modified}.tex', work_folder_modified)
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_original}.tex', work_folder_original)
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_modified}.tex', work_folder_modified)
ok = compile_latex_with_timeout(get_compile_command(compiler, main_file_original), work_folder_original)
ok = compile_latex_with_timeout(get_compile_command(compiler, main_file_modified), work_folder_modified)
ok = compile_latex_with_timeout(get_compile_command(compiler, main_file_original), work_folder_original)
ok = compile_latex_with_timeout(get_compile_command(compiler, main_file_modified), work_folder_modified)
if mode!='translate_zh':
yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 使用latexdiff生成论文转化前后对比 ...', chatbot, history) # 刷新Gradio前端界面
@@ -386,10 +422,10 @@ def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_f
ok = compile_latex_with_timeout(f'latexdiff --encoding=utf8 --append-safecmd=subfile {work_folder_original}/{main_file_original}.tex {work_folder_modified}/{main_file_modified}.tex --flatten > {work_folder}/merge_diff.tex', os.getcwd())
yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 正在编译对比PDF ...', chatbot, history) # 刷新Gradio前端界面
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error merge_diff.tex', work_folder)
ok = compile_latex_with_timeout(get_compile_command(compiler, 'merge_diff'), work_folder)
ok = compile_latex_with_timeout(f'bibtex merge_diff.aux', work_folder)
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error merge_diff.tex', work_folder)
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error merge_diff.tex', work_folder)
ok = compile_latex_with_timeout(get_compile_command(compiler, 'merge_diff'), work_folder)
ok = compile_latex_with_timeout(get_compile_command(compiler, 'merge_diff'), work_folder)
# <---------- 检查结果 ----------->
results_ = ""

View File

@@ -6,12 +6,16 @@ class SafeUnpickler(pickle.Unpickler):
def get_safe_classes(self):
from crazy_functions.latex_fns.latex_actions import LatexPaperFileGroup, LatexPaperSplit
from crazy_functions.latex_fns.latex_toolbox import LinkedListNode
from numpy.core.multiarray import scalar
from numpy import dtype
# 定义允许的安全类
safe_classes = {
# 在这里添加其他安全的类
'LatexPaperFileGroup': LatexPaperFileGroup,
'LatexPaperSplit': LatexPaperSplit,
'LinkedListNode': LinkedListNode,
'scalar': scalar,
'dtype': dtype,
}
return safe_classes
@@ -22,8 +26,6 @@ class SafeUnpickler(pickle.Unpickler):
for class_name in self.safe_classes.keys():
if (class_name in f'{module}.{name}'):
match_class_name = class_name
if module == 'numpy' or module.startswith('numpy.'):
return super().find_class(module, name)
if match_class_name is not None:
return self.safe_classes[match_class_name]
# 如果尝试加载未授权的类,则抛出异常
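For context, the allowlist pattern used by SafeUnpickler boils down to overriding find_class; a self-contained sketch follows (the allowed class here is illustrative, and the numpy pass-through mirrors the change above):

import io, pickle
from collections import OrderedDict

class TinySafeUnpickler(pickle.Unpickler):
    ALLOWED = {("collections", "OrderedDict")}   # illustrative allowlist

    def find_class(self, module, name):
        if module == "numpy" or module.startswith("numpy."):
            return super().find_class(module, name)   # numpy internals pass through, as above
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked class: {module}.{name}")

payload = pickle.dumps(OrderedDict(a=1))
print(TinySafeUnpickler(io.BytesIO(payload)).load())   # OrderedDict([('a', 1)])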

View File

@@ -0,0 +1,43 @@
from toolbox import update_ui, get_conf, promote_file_to_downloadzone, update_ui_lastest_msg, generate_file_link
from shared_utils.docker_as_service_api import stream_daas
from shared_utils.docker_as_service_api import DockerServiceApiComModel
import random
def download_video(video_id, only_audio, user_name, chatbot, history):
from toolbox import get_log_folder
chatbot.append([None, "Processing..."])
yield from update_ui(chatbot, history)
client_command = f'{video_id} --audio-only' if only_audio else video_id
server_urls = get_conf('DAAS_SERVER_URLS')
server_url = random.choice(server_urls)
docker_service_api_com_model = DockerServiceApiComModel(client_command=client_command)
save_file_dir = get_log_folder(user_name, plugin_name='media_downloader')
for output_manifest in stream_daas(docker_service_api_com_model, server_url, save_file_dir):
status_buf = ""
status_buf += "DaaS message: \n\n"
status_buf += output_manifest['server_message'].replace('\n', '<br/>')
status_buf += "\n\n"
status_buf += "DaaS standard error: \n\n"
status_buf += output_manifest['server_std_err'].replace('\n', '<br/>')
status_buf += "\n\n"
status_buf += "DaaS standard output: \n\n"
status_buf += output_manifest['server_std_out'].replace('\n', '<br/>')
status_buf += "\n\n"
status_buf += "DaaS file attach: \n\n"
status_buf += str(output_manifest['server_file_attach'])
yield from update_ui_lastest_msg(status_buf, chatbot, history)
return output_manifest['server_file_attach']
def search_videos(keywords):
from toolbox import get_log_folder
client_command = keywords
server_urls = get_conf('DAAS_SERVER_URLS')
server_url = random.choice(server_urls)
server_url = server_url.replace('stream', 'search')
docker_service_api_com_model = DockerServiceApiComModel(client_command=client_command)
save_file_dir = get_log_folder("default_user", plugin_name='media_downloader')
for output_manifest in stream_daas(docker_service_api_com_model, server_url, save_file_dir):
return output_manifest['server_message']
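A sketch of how a plugin could drive download_video: it is a generator that yields UI refreshes and returns the server's file attachment, so it must be consumed with yield from. The video id and user name below are placeholders, and DAAS_SERVER_URLS must be configured.

def media_download_demo(chatbot, history):
    file_attach = yield from download_video(
        video_id="dQw4w9WgXcQ",        # placeholder id
        only_audio=False,
        user_name="default_user",
        chatbot=chatbot,
        history=history,
    )
    chatbot.append([None, f"Download finished: {file_attach}"])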

View File

@@ -6,75 +6,128 @@ from crazy_functions.crazy_utils import get_files_from_everything
from shared_utils.colorful import *
from loguru import logger
import os
import requests
import time
def refresh_key(doc2x_api_key):
import requests, json
url = "https://api.doc2x.noedgeai.com/api/token/refresh"
res = requests.post(
url,
headers={"Authorization": "Bearer " + doc2x_api_key}
)
res_json = []
if res.status_code == 200:
decoded = res.content.decode("utf-8")
res_json = json.loads(decoded)
doc2x_api_key = res_json['data']['token']
else:
raise RuntimeError(format("[ERROR] status code: %d, body: %s" % (res.status_code, res.text)))
return doc2x_api_key
def retry_request(max_retries=3, delay=3):
"""
Decorator for retrying HTTP requests
Args:
max_retries: Maximum number of retry attempts
delay: Delay between retries in seconds
"""
def decorator(func):
def wrapper(*args, **kwargs):
for attempt in range(max_retries):
try:
return func(*args, **kwargs)
except Exception as e:
if attempt < max_retries - 1:
logger.error(
f"Request failed, retrying... ({attempt + 1}/{max_retries}) Error: {e}"
)
time.sleep(delay)
continue
raise e
return None
return wrapper
return decorator
@retry_request()
def make_request(method, url, **kwargs):
"""
Make HTTP request with retry mechanism
"""
return requests.request(method, url, **kwargs)
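The decorator above is generic, so other flaky calls in this module can reuse it; a minimal usage sketch:

@retry_request(max_retries=3, delay=1)
def fetch_with_retries(url: str):
    # any exception raised here is retried up to 3 times before being re-raised
    return requests.get(url, timeout=15)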
def doc2x_api_response_status(response, uid=""):
"""
Check the status of Doc2x API response
Args:
response: Response object returned by the Doc2x API
uid: optional identifier included in error messages for tracing
"""
response_json = response.json()
response_data = response_json.get("data", {})
code = response_json.get("code", "Unknown")
meg = response_data.get("message", response_json)
trace_id = response.headers.get("trace-id", "Failed to get trace-id")
if response.status_code != 200:
raise RuntimeError(
f"Doc2x return an error:\nTrace ID: {trace_id} {uid}\n{response.status_code} - {response_json}"
)
if code in ["parse_page_limit_exceeded", "parse_concurrency_limit"]:
raise RuntimeError(
f"Reached the limit of Doc2x:\nTrace ID: {trace_id} {uid}\n{code} - {meg}"
)
if code not in ["ok", "success"]:
raise RuntimeError(
f"Doc2x return an error:\nTrace ID: {trace_id} {uid}\n{code} - {meg}"
)
return response_data
def 解析PDF_DOC2X_转Latex(pdf_file_path):
zip_file_path, unzipped_folder = 解析PDF_DOC2X(pdf_file_path, format='tex')
zip_file_path, unzipped_folder = 解析PDF_DOC2X(pdf_file_path, format="tex")
return unzipped_folder
def 解析PDF_DOC2X(pdf_file_path, format='tex'):
def 解析PDF_DOC2X(pdf_file_path, format="tex"):
"""
format: 'tex', 'md', 'docx'
format: 'tex', 'md', 'docx'
"""
import requests, json, os
DOC2X_API_KEY = get_conf('DOC2X_API_KEY')
DOC2X_API_KEY = get_conf("DOC2X_API_KEY")
latex_dir = get_log_folder(plugin_name="pdf_ocr_latex")
markdown_dir = get_log_folder(plugin_name="pdf_ocr")
doc2x_api_key = DOC2X_API_KEY
# < ------ 第1步预上传获取URL然后上传文件 ------ >
logger.info("Doc2x 上传文件预上传获取URL")
res = make_request(
"POST",
"https://v2.doc2x.noedgeai.com/api/v2/parse/preupload",
headers={"Authorization": "Bearer " + doc2x_api_key},
timeout=15,
)
res_data = doc2x_api_response_status(res)
upload_url = res_data["url"]
uuid = res_data["uid"]
# < ------ 第1步上传 ------ >
logger.info("Doc2x 第1步上传")
with open(pdf_file_path, 'rb') as file:
res = requests.post(
"https://v2.doc2x.noedgeai.com/api/v2/parse/pdf",
headers={"Authorization": "Bearer " + doc2x_api_key},
data=file
)
# res_json = []
if res.status_code == 200:
res_json = res.json()
else:
raise RuntimeError(f"Doc2x return an error: {res.json()}")
uuid = res_json['data']['uid']
logger.info("Doc2x 上传文件:上传文件")
with open(pdf_file_path, "rb") as file:
res = make_request("PUT", upload_url, data=file, timeout=60)
res.raise_for_status()
# < ------ 第2步轮询等待 ------ >
logger.info("Doc2x 第2步:轮询等待")
params = {'uid': uuid}
while True:
res = requests.get(
'https://v2.doc2x.noedgeai.com/api/v2/parse/status',
logger.info("Doc2x 处理文件中:轮询等待")
params = {"uid": uuid}
max_attempts = 60
attempt = 0
while attempt < max_attempts:
res = make_request(
"GET",
"https://v2.doc2x.noedgeai.com/api/v2/parse/status",
headers={"Authorization": "Bearer " + doc2x_api_key},
params=params
params=params,
timeout=15,
)
res_json = res.json()
if res_json['data']['status'] == "success":
res_data = doc2x_api_response_status(res)
if res_data["status"] == "success":
break
elif res_json['data']['status'] == "processing":
time.sleep(3)
logger.info(f"Doc2x is processing at {res_json['data']['progress']}%")
elif res_json['data']['status'] == "failed":
raise RuntimeError(f"Doc2x return an error: {res_json}")
elif res_data["status"] == "processing":
time.sleep(5)
logger.info(f"Doc2x is processing at {res_data['progress']}%")
attempt += 1
else:
raise RuntimeError(f"Doc2x return an error: {res_data}")
if attempt >= max_attempts:
raise RuntimeError("Doc2x processing timeout after maximum attempts")
# < ------ 第3步提交转化 ------ >
logger.info("Doc2x 第3步提交转化")
@@ -84,42 +137,44 @@ def 解析PDF_DOC2X(pdf_file_path, format='tex'):
"formula_mode": "dollar",
"filename": "output"
}
res = requests.post(
'https://v2.doc2x.noedgeai.com/api/v2/convert/parse',
res = make_request(
"POST",
"https://v2.doc2x.noedgeai.com/api/v2/convert/parse",
headers={"Authorization": "Bearer " + doc2x_api_key},
json=data
json=data,
timeout=15,
)
if res.status_code == 200:
res_json = res.json()
else:
raise RuntimeError(f"Doc2x return an error: {res.json()}")
doc2x_api_response_status(res, uid=f"uid: {uuid}")
# < ------ 第4步等待结果 ------ >
logger.info("Doc2x 第4步等待结果")
params = {'uid': uuid}
while True:
res = requests.get(
'https://v2.doc2x.noedgeai.com/api/v2/convert/parse/result',
params = {"uid": uuid}
max_attempts = 36
attempt = 0
while attempt < max_attempts:
res = make_request(
"GET",
"https://v2.doc2x.noedgeai.com/api/v2/convert/parse/result",
headers={"Authorization": "Bearer " + doc2x_api_key},
params=params
params=params,
timeout=15,
)
res_json = res.json()
if res_json['data']['status'] == "success":
res_data = doc2x_api_response_status(res, uid=f"uid: {uuid}")
if res_data["status"] == "success":
break
elif res_json['data']['status'] == "processing":
elif res_data["status"] == "processing":
time.sleep(3)
logger.info(f"Doc2x still processing")
elif res_json['data']['status'] == "failed":
raise RuntimeError(f"Doc2x return an error: {res_json}")
logger.info("Doc2x still processing to convert file")
attempt += 1
if attempt >= max_attempts:
raise RuntimeError("Doc2x conversion timeout after maximum attempts")
# < ------ 第5步最后的处理 ------ >
logger.info("Doc2x 第5步最后的处理")
logger.info("Doc2x 第5步下载转换后的文件")
if format=='tex':
if format == "tex":
target_path = latex_dir
if format=='md':
if format == "md":
target_path = markdown_dir
os.makedirs(target_path, exist_ok=True)
@@ -127,17 +182,18 @@ def 解析PDF_DOC2X(pdf_file_path, format='tex'):
# < ------ 下载 ------ >
for attempt in range(max_attempt):
try:
result_url = res_json['data']['url']
res = requests.get(result_url)
zip_path = os.path.join(target_path, gen_time_str() + '.zip')
result_url = res_data["url"]
res = make_request("GET", result_url, timeout=60)
zip_path = os.path.join(target_path, gen_time_str() + ".zip")
unzip_path = os.path.join(target_path, gen_time_str())
if res.status_code == 200:
with open(zip_path, "wb") as f: f.write(res.content)
with open(zip_path, "wb") as f:
f.write(res.content)
else:
raise RuntimeError(f"Doc2x return an error: {res.json()}")
except Exception as e:
if attempt < max_attempt - 1:
logger.error(f"Failed to download latex file, retrying... {e}")
logger.error(f"Failed to download uid = {uuid} file, retrying... {e}")
time.sleep(3)
continue
else:
@@ -145,22 +201,31 @@ def 解析PDF_DOC2X(pdf_file_path, format='tex'):
# < ------ 解压 ------ >
import zipfile
with zipfile.ZipFile(zip_path, 'r') as zip_ref:
with zipfile.ZipFile(zip_path, "r") as zip_ref:
zip_ref.extractall(unzip_path)
return zip_path, unzip_path
def 解析PDF_DOC2X_单文件(fp, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, DOC2X_API_KEY, user_request):
def 解析PDF_DOC2X_单文件(
fp,
project_folder,
llm_kwargs,
plugin_kwargs,
chatbot,
history,
system_prompt,
DOC2X_API_KEY,
user_request,
):
def pdf2markdown(filepath):
chatbot.append((None, f"Doc2x 解析中"))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
md_zip_path, unzipped_folder = 解析PDF_DOC2X(filepath, format='md')
md_zip_path, unzipped_folder = 解析PDF_DOC2X(filepath, format="md")
promote_file_to_downloadzone(md_zip_path, chatbot=chatbot)
chatbot.append((None, f"完成解析 {md_zip_path} ..."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return md_zip_path
def deliver_to_markdown_plugin(md_zip_path, user_request):
@@ -174,77 +239,95 @@ def 解析PDF_DOC2X_单文件(fp, project_folder, llm_kwargs, plugin_kwargs, cha
os.makedirs(target_path_base, exist_ok=True)
shutil.copyfile(md_zip_path, this_file_path)
ex_folder = this_file_path + ".extract"
extract_archive(
file_path=this_file_path, dest_dir=ex_folder
)
extract_archive(file_path=this_file_path, dest_dir=ex_folder)
# edit markdown files
success, file_manifest, project_folder = get_files_from_everything(ex_folder, type='.md')
success, file_manifest, project_folder = get_files_from_everything(ex_folder, type='.md', chatbot=chatbot)
for generated_fp in file_manifest:
# 修正一些公式问题
with open(generated_fp, 'r', encoding='utf8') as f:
with open(generated_fp, "r", encoding="utf8") as f:
content = f.read()
# 将公式中的\[ \]替换成$$
content = content.replace(r'\[', r'$$').replace(r'\]', r'$$')
content = content.replace(r"\[", r"$$").replace(r"\]", r"$$")
# 将公式中的\( \)替换成$
content = content.replace(r'\(', r'$').replace(r'\)', r'$')
content = content.replace('```markdown', '\n').replace('```', '\n')
with open(generated_fp, 'w', encoding='utf8') as f:
content = content.replace(r"\(", r"$").replace(r"\)", r"$")
content = content.replace("```markdown", "\n").replace("```", "\n")
with open(generated_fp, "w", encoding="utf8") as f:
f.write(content)
promote_file_to_downloadzone(generated_fp, chatbot=chatbot)
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# 生成在线预览html
file_name = '在线预览翻译(原文)' + gen_time_str() + '.html'
file_name = "在线预览翻译(原文)" + gen_time_str() + ".html"
preview_fp = os.path.join(ex_folder, file_name)
from shared_utils.advanced_markdown_format import markdown_convertion_for_file
from shared_utils.advanced_markdown_format import (
markdown_convertion_for_file,
)
with open(generated_fp, "r", encoding="utf-8") as f:
md = f.read()
# # Markdown中使用不标准的表格需要在表格前加上一个emoji以便公式渲染
# md = re.sub(r'^<table>', r'.<table>', md, flags=re.MULTILINE)
html = markdown_convertion_for_file(md)
with open(preview_fp, "w", encoding="utf-8") as f: f.write(html)
with open(preview_fp, "w", encoding="utf-8") as f:
f.write(html)
chatbot.append([None, f"生成在线预览:{generate_file_link([preview_fp])}"])
promote_file_to_downloadzone(preview_fp, chatbot=chatbot)
chatbot.append((None, f"调用Markdown插件 {ex_folder} ..."))
plugin_kwargs['markdown_expected_output_dir'] = ex_folder
plugin_kwargs["markdown_expected_output_dir"] = ex_folder
translated_f_name = 'translated_markdown.md'
generated_fp = plugin_kwargs['markdown_expected_output_path'] = os.path.join(ex_folder, translated_f_name)
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
yield from Markdown英译中(ex_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
translated_f_name = "translated_markdown.md"
generated_fp = plugin_kwargs["markdown_expected_output_path"] = os.path.join(
ex_folder, translated_f_name
)
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
yield from Markdown英译中(
ex_folder,
llm_kwargs,
plugin_kwargs,
chatbot,
history,
system_prompt,
user_request,
)
if os.path.exists(generated_fp):
# 修正一些公式问题
with open(generated_fp, 'r', encoding='utf8') as f: content = f.read()
content = content.replace('```markdown', '\n').replace('```', '\n')
with open(generated_fp, "r", encoding="utf8") as f:
content = f.read()
content = content.replace("```markdown", "\n").replace("```", "\n")
# Markdown中使用不标准的表格需要在表格前加上一个emoji以便公式渲染
# content = re.sub(r'^<table>', r'.<table>', content, flags=re.MULTILINE)
with open(generated_fp, 'w', encoding='utf8') as f: f.write(content)
with open(generated_fp, "w", encoding="utf8") as f:
f.write(content)
# 生成在线预览html
file_name = '在线预览翻译' + gen_time_str() + '.html'
file_name = "在线预览翻译" + gen_time_str() + ".html"
preview_fp = os.path.join(ex_folder, file_name)
from shared_utils.advanced_markdown_format import markdown_convertion_for_file
from shared_utils.advanced_markdown_format import (
markdown_convertion_for_file,
)
with open(generated_fp, "r", encoding="utf-8") as f:
md = f.read()
html = markdown_convertion_for_file(md)
with open(preview_fp, "w", encoding="utf-8") as f: f.write(html)
with open(preview_fp, "w", encoding="utf-8") as f:
f.write(html)
promote_file_to_downloadzone(preview_fp, chatbot=chatbot)
# 生成包含图片的压缩包
dest_folder = get_log_folder(chatbot.get_user())
zip_name = '翻译后的带图文档.zip'
zip_folder(source_folder=ex_folder, dest_folder=dest_folder, zip_name=zip_name)
zip_name = "翻译后的带图文档.zip"
zip_folder(
source_folder=ex_folder, dest_folder=dest_folder, zip_name=zip_name
)
zip_fp = os.path.join(dest_folder, zip_name)
promote_file_to_downloadzone(zip_fp, chatbot=chatbot)
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
md_zip_path = yield from pdf2markdown(fp)
yield from deliver_to_markdown_plugin(md_zip_path, user_request)
def 解析PDF_基于DOC2X(file_manifest, *args):
for index, fp in enumerate(file_manifest):
yield from 解析PDF_DOC2X_单文件(fp, *args)
return

View File

@@ -27,10 +27,10 @@ def extract_text_from_files(txt, chatbot, history):
return False, final_result, page_one, file_manifest, excption #如输入区内容不是文件则直接返回输入区内容
#查找输入区内容中的文件
file_pdf,pdf_manifest,folder_pdf = get_files_from_everything(txt, '.pdf')
file_md,md_manifest,folder_md = get_files_from_everything(txt, '.md')
file_word,word_manifest,folder_word = get_files_from_everything(txt, '.docx')
file_doc,doc_manifest,folder_doc = get_files_from_everything(txt, '.doc')
file_pdf,pdf_manifest,folder_pdf = get_files_from_everything(txt, '.pdf', chatbot=chatbot)
file_md,md_manifest,folder_md = get_files_from_everything(txt, '.md', chatbot=chatbot)
file_word,word_manifest,folder_word = get_files_from_everything(txt, '.docx', chatbot=chatbot)
file_doc,doc_manifest,folder_doc = get_files_from_everything(txt, '.doc', chatbot=chatbot)
if file_doc:
excption = "word"

View File

@@ -1,17 +1,13 @@
import llama_index
import os
import atexit
from loguru import logger
from typing import List
from llama_index.core import Document
from llama_index.core.schema import TextNode
from request_llms.embed_models.openai_embed import OpenAiEmbeddingModel
from shared_utils.connect_void_terminal import get_chat_default_kwargs
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from crazy_functions.rag_fns.vector_store_index import GptacVectorStoreIndex
from llama_index.core.ingestion import run_transformations
from llama_index.core import PromptTemplate
from llama_index.core.response_synthesizers import TreeSummarize
from llama_index.core.schema import TextNode
from crazy_functions.rag_fns.vector_store_index import GptacVectorStoreIndex
from request_llms.embed_models.openai_embed import OpenAiEmbeddingModel
DEFAULT_QUERY_GENERATION_PROMPT = """\
Now, you have context information as below:
@@ -63,7 +59,7 @@ class SaveLoad():
def purge(self):
import shutil
shutil.rmtree(self.checkpoint_dir, ignore_errors=True)
self.vs_index = self.create_new_vs()
self.vs_index = self.create_new_vs(self.checkpoint_dir)
class LlamaIndexRagWorker(SaveLoad):
@@ -75,7 +71,7 @@ class LlamaIndexRagWorker(SaveLoad):
if auto_load_checkpoint:
self.vs_index = self.load_from_checkpoint(checkpoint_dir)
else:
self.vs_index = self.create_new_vs(checkpoint_dir)
self.vs_index = self.create_new_vs()
atexit.register(lambda: self.save_to_checkpoint(checkpoint_dir))
def assign_embedding_model(self):
@@ -91,32 +87,38 @@ class LlamaIndexRagWorker(SaveLoad):
logger.info('oo --------inspect_vector_store end--------')
return vector_store_preview
def add_documents_to_vector_store(self, document_list):
documents = [Document(text=t) for t in document_list]
def add_documents_to_vector_store(self, document_list: List[Document]):
"""
Adds a list of Document objects to the vector store after processing.
"""
documents = document_list
documents_nodes = run_transformations(
documents, # type: ignore
self.vs_index._transformations,
show_progress=True
)
documents, # type: ignore
self.vs_index._transformations,
show_progress=True
)
self.vs_index.insert_nodes(documents_nodes)
if self.debug_mode: self.inspect_vector_store()
if self.debug_mode:
self.inspect_vector_store()
def add_text_to_vector_store(self, text):
def add_text_to_vector_store(self, text: str):
node = TextNode(text=text)
documents_nodes = run_transformations(
[node],
self.vs_index._transformations,
show_progress=True
)
[node],
self.vs_index._transformations,
show_progress=True
)
self.vs_index.insert_nodes(documents_nodes)
if self.debug_mode: self.inspect_vector_store()
if self.debug_mode:
self.inspect_vector_store()
def remember_qa(self, question, answer):
formatted_str = QUESTION_ANSWER_RECORD.format(question=question, answer=answer)
self.add_text_to_vector_store(formatted_str)
def retrieve_from_store_with_query(self, query):
if self.debug_mode: self.inspect_vector_store()
if self.debug_mode:
self.inspect_vector_store()
retriever = self.vs_index.as_retriever()
return retriever.retrieve(query)
@@ -128,3 +130,9 @@ class LlamaIndexRagWorker(SaveLoad):
buf = "\n".join(([f"(No.{i+1} | score {n.score:.3f}): {n.text}" for i, n in enumerate(nodes)]))
if self.debug_mode: logger.info(buf)
return buf
def purge_vector_store(self):
"""
Purges the current vector store and creates a new one.
"""
self.purge()
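Usage sketch of the worker's round trip, assuming a LlamaIndexRagWorker instance named worker already exists (its constructor signature is not fully shown in this diff):

worker.add_text_to_vector_store("GPT Academic supports a retrieval-augmented workflow.")
worker.remember_qa(question="What does RAG stand for?",
                   answer="Retrieval-augmented generation.")
for n in worker.retrieve_from_store_with_query("retrieval-augmented workflow"):
    print(f"score={n.score:.3f}  text={n.text[:60]}")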

View File

@@ -0,0 +1,22 @@
import os
from llama_index.core import SimpleDirectoryReader
supports_format = ['.csv', '.docx', '.epub', '.ipynb', '.mbox', '.md', '.pdf', '.txt', '.ppt',
'.pptm', '.pptx']
# 修改后的 extract_text 函数,结合 SimpleDirectoryReader 和自定义解析逻辑
def extract_text(file_path):
_, ext = os.path.splitext(file_path.lower())
# 使用 SimpleDirectoryReader 处理它支持的文件格式
if ext in supports_format:
try:
reader = SimpleDirectoryReader(input_files=[file_path])
documents = reader.load_data()
if len(documents) > 0:
return documents[0].text
except Exception as e:
pass
return None

File diff suppressed because it is too large.

View File

@@ -104,6 +104,8 @@ def 总结word文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pr
# 检测输入参数,如没有给定输入参数,直接退出
if os.path.exists(txt):
project_folder = txt
from shared_utils.fastapi_server import validate_path_safety
validate_path_safety(project_folder, chatbot.get_user())
else:
if txt == "": txt = '空空如也的输入栏'
report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")

View File

@@ -61,7 +61,7 @@ def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
history = []
from crazy_functions.crazy_utils import get_files_from_everything
success, file_manifest, project_folder = get_files_from_everything(txt, type='.pdf')
success, file_manifest, project_folder = get_files_from_everything(txt, type='.pdf', chatbot=chatbot)
if len(file_manifest) > 0:
# 尝试导入依赖,如果缺少依赖,则给出安装建议
try:
@@ -73,7 +73,7 @@ def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade nougat-ocr tiktoken```。")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
success_mmd, file_manifest_mmd, _ = get_files_from_everything(txt, type='.mmd')
success_mmd, file_manifest_mmd, _ = get_files_from_everything(txt, type='.mmd', chatbot=chatbot)
success = success or success_mmd
file_manifest += file_manifest_mmd
chatbot.append(["文件列表:", ", ".join([e.split('/')[-1] for e in file_manifest])]);

View File

@@ -87,6 +87,8 @@ def 理解PDF文档内容标准文件输入(txt, llm_kwargs, plugin_kwargs, chat
# 检测输入参数,如没有给定输入参数,直接退出
if os.path.exists(txt):
project_folder = txt
from shared_utils.fastapi_server import validate_path_safety
validate_path_safety(project_folder, chatbot.get_user())
else:
if txt == "":
txt = '空空如也的输入栏'

View File

@@ -39,6 +39,8 @@ def 批量生成函数注释(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
import glob, os
if os.path.exists(txt):
project_folder = txt
from shared_utils.fastapi_server import validate_path_safety
validate_path_safety(project_folder, chatbot.get_user())
else:
if txt == "": txt = '空空如也的输入栏'
report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")

View File

@@ -49,7 +49,7 @@ def 知识库文件注入(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
file_manifest = []
spl = ["txt", "doc", "docx", "email", "epub", "html", "json", "md", "msg", "pdf", "ppt", "pptx", "rtf"]
for sp in spl:
_, file_manifest_tmp, _ = get_files_from_everything(txt, type=f'.{sp}')
_, file_manifest_tmp, _ = get_files_from_everything(txt, type=f'.{sp}', chatbot=chatbot)
file_manifest += file_manifest_tmp
if len(file_manifest) == 0:

View File

@@ -126,6 +126,8 @@ def 解析ipynb文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
import os
if os.path.exists(txt):
project_folder = txt
from shared_utils.fastapi_server import validate_path_safety
validate_path_safety(project_folder, chatbot.get_user())
else:
if txt == "":
txt = '空空如也的输入栏'

View File

@@ -48,6 +48,8 @@ def 读文章写摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
import glob, os
if os.path.exists(txt):
project_folder = txt
from shared_utils.fastapi_server import validate_path_safety
validate_path_safety(project_folder, chatbot.get_user())
else:
if txt == "": txt = '空空如也的输入栏'
report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")

View File

@@ -5,6 +5,10 @@ FROM fuqingxu/11.3.1-runtime-ubuntu20.04-with-texlive:latest
# edge-tts需要的依赖某些pip包所需的依赖
RUN apt update && apt install ffmpeg build-essential -y
RUN apt-get install -y fontconfig
RUN ln -s /usr/local/texlive/2023/texmf-dist/fonts/truetype /usr/share/fonts/truetype/texlive
RUN fc-cache -fv
RUN apt-get clean
# use python3 as the system default python
WORKDIR /gpt
@@ -30,7 +34,7 @@ RUN python3 -m pip install -r request_llms/requirements_qwen.txt
RUN python3 -m pip install -r request_llms/requirements_chatglm.txt
RUN python3 -m pip install -r request_llms/requirements_newbing.txt
RUN python3 -m pip install nougat-ocr
RUN python3 -m pip cache purge
# 预热Tiktoken模块
RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'

View File

@@ -7,6 +7,7 @@ RUN apt-get install -y git python python3 python-dev python3-dev --fix-missing
# edge-tts需要的依赖某些pip包所需的依赖
RUN apt update && apt install ffmpeg build-essential -y
RUN apt-get clean
# use python3 as the system default python
RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.8
@@ -22,6 +23,7 @@ RUN python3 -m pip install -r request_llms/requirements_moss.txt
RUN python3 -m pip install -r request_llms/requirements_qwen.txt
RUN python3 -m pip install -r request_llms/requirements_chatglm.txt
RUN python3 -m pip install -r request_llms/requirements_newbing.txt
RUN python3 -m pip cache purge
# 预热Tiktoken模块

View File

@@ -18,5 +18,7 @@ RUN apt update && apt install ffmpeg -y
# 可选步骤,用于预热模块
RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
RUN python3 -m pip cache purge && apt-get clean
# 启动
CMD ["python3", "-u", "main.py"]

View File

@@ -30,5 +30,7 @@ COPY --chown=gptuser:gptuser . .
# 可选步骤,用于预热模块
RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
RUN python3 -m pip cache purge
# 启动
CMD ["python3", "-u", "main.py"]

View File

@@ -24,6 +24,8 @@ RUN apt update && apt install ffmpeg -y
# 可选步骤,用于预热模块
RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
RUN python3 -m pip cache purge && apt-get clean
# 启动
CMD ["python3", "-u", "main.py"]

docs/WindowsRun.bat (new file, 26 lines)
View File

@@ -0,0 +1,26 @@
@echo off
setlocal
:: 设置环境变量
set ENV_NAME=gpt
set ENV_PATH=%~dp0%ENV_NAME%
set SCRIPT_PATH=%~dp0main.py
:: 判断环境是否已解压
if not exist "%ENV_PATH%" (
echo Extracting environment...
mkdir "%ENV_PATH%"
tar -xzf gpt.tar.gz -C "%ENV_PATH%"
:: 运行conda环境激活脚本
call "%ENV_PATH%\Scripts\activate.bat"
) else (
:: 如果环境已存在,直接激活
call "%ENV_PATH%\Scripts\activate.bat"
)
echo Start to run program:
:: 运行Python脚本
python "%SCRIPT_PATH%"
endlocal
pause

main.py (36 lines changed)
View File

@@ -1,4 +1,4 @@
import os, json; os.environ['no_proxy'] = '*' # 避免代理网络产生意外污染
import os; os.environ['no_proxy'] = '*' # 避免代理网络产生意外污染
help_menu_description = \
"""Github源代码开源和更新[地址🚀](https://github.com/binary-husky/gpt_academic),
@@ -34,7 +34,7 @@ def encode_plugin_info(k, plugin)->str:
def main():
import gradio as gr
if gr.__version__ not in ['3.32.9', '3.32.10', '3.32.11']:
if gr.__version__ not in ['3.32.12']:
raise ModuleNotFoundError("使用项目内置Gradio获取最优体验! 请运行 `pip install -r requirements.txt` 指令安装内置Gradio及其他依赖, 详情信息见requirements.txt.")
# 一些基础工具
@@ -49,7 +49,7 @@ def main():
# 读取配置
proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION = get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION')
CHATBOT_HEIGHT, LAYOUT, AVAIL_LLM_MODELS, AUTO_CLEAR_TXT = get_conf('CHATBOT_HEIGHT', 'LAYOUT', 'AVAIL_LLM_MODELS', 'AUTO_CLEAR_TXT')
ENABLE_AUDIO, AUTO_CLEAR_TXT, PATH_LOGGING, AVAIL_THEMES, THEME, ADD_WAIFU = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT', 'PATH_LOGGING', 'AVAIL_THEMES', 'THEME', 'ADD_WAIFU')
ENABLE_AUDIO, AUTO_CLEAR_TXT, AVAIL_FONTS, AVAIL_THEMES, THEME, ADD_WAIFU = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT', 'AVAIL_FONTS', 'AVAIL_THEMES', 'THEME', 'ADD_WAIFU')
NUM_CUSTOM_BASIC_BTN, SSL_KEYFILE, SSL_CERTFILE = get_conf('NUM_CUSTOM_BASIC_BTN', 'SSL_KEYFILE', 'SSL_CERTFILE')
DARK_MODE, INIT_SYS_PROMPT, ADD_WAIFU, TTS_TYPE = get_conf('DARK_MODE', 'INIT_SYS_PROMPT', 'ADD_WAIFU', 'TTS_TYPE')
if LLM_MODEL not in AVAIL_LLM_MODELS: AVAIL_LLM_MODELS += [LLM_MODEL]
@@ -57,8 +57,8 @@ def main():
# 如果WEB_PORT是-1, 则随机选取WEB端口
PORT = find_free_port() if WEB_PORT <= 0 else WEB_PORT
from check_proxy import get_current_version
from themes.theme import adjust_theme, advanced_css, theme_declaration, js_code_clear, js_code_reset, js_code_show_or_hide, js_code_show_or_hide_group2
from themes.theme import js_code_for_toggle_darkmode, js_code_for_persistent_cookie_init
from themes.theme import adjust_theme, advanced_css, theme_declaration, js_code_clear, js_code_show_or_hide, js_code_show_or_hide_group2
from themes.theme import js_code_for_toggle_darkmode
from themes.theme import load_dynamic_theme, to_cookie_str, from_cookie_str, assign_user_uuid
title_html = f"<h1 align=\"center\">GPT 学术优化 {get_current_version()}</h1>{theme_declaration}"
@@ -68,7 +68,7 @@ def main():
functional = get_core_functions()
# 高级函数插件
from crazy_functional import get_crazy_functions
from crazy_functional import get_crazy_functions, get_multiplex_button_functions
DEFAULT_FN_GROUPS = get_conf('DEFAULT_FN_GROUPS')
plugins = get_crazy_functions()
all_plugin_groups = list(set([g for _, plugin in plugins.items() for g in plugin['Group'].split('|')]))
@@ -106,7 +106,7 @@ def main():
with gr_L2(scale=2, elem_id="gpt-chat"):
chatbot = gr.Chatbot(label=f"当前模型:{LLM_MODEL}", elem_id="gpt-chatbot")
if LAYOUT == "TOP-DOWN": chatbot.style(height=CHATBOT_HEIGHT)
history, history_cache, history_cache_update = make_history_cache() # 定义 后端statehistory、前端history_cache、后端setterhistory_cache_update三兄弟
history, _, _ = make_history_cache() # 定义 后端statehistory、前端history_cache、后端setterhistory_cache_update三兄弟
with gr_L2(scale=1, elem_id="gpt-panel"):
with gr.Accordion("输入区", open=True, elem_id="input-panel") as area_input_primary:
with gr.Row():
@@ -114,12 +114,7 @@ def main():
with gr.Row(elem_id="gpt-submit-row"):
multiplex_submit_btn = gr.Button("提交", elem_id="elem_submit_visible", variant="primary")
multiplex_sel = gr.Dropdown(
choices=[
"常规对话",
"多模型对话",
"智能召回 RAG",
# "智能上下文",
], value="常规对话",
choices=get_multiplex_button_functions().keys(), value="常规对话",
interactive=True, label='', show_label=False,
elem_classes='normal_mut_select', elem_id="gpt-submit-dropdown").style(container=False)
submit_btn = gr.Button("提交", elem_id="elem_submit", variant="primary", visible=False)
@@ -179,16 +174,20 @@ def main():
with gr.Accordion("点击展开“文件下载区”。", open=False) as area_file_up:
file_upload = gr.Files(label="任何文件, 推荐上传压缩文件(zip, tar)", file_count="multiple", elem_id="elem_upload")
# 左上角工具栏定义
from themes.gui_toolbar import define_gui_toolbar
checkboxes, checkboxes_2, max_length_sl, theme_dropdown, system_prompt, file_upload_2, md_dropdown, top_p, temperature = \
define_gui_toolbar(AVAIL_LLM_MODELS, LLM_MODEL, INIT_SYS_PROMPT, THEME, AVAIL_THEMES, ADD_WAIFU, help_menu_description, js_code_for_toggle_darkmode)
define_gui_toolbar(AVAIL_LLM_MODELS, LLM_MODEL, INIT_SYS_PROMPT, THEME, AVAIL_THEMES, AVAIL_FONTS, ADD_WAIFU, help_menu_description, js_code_for_toggle_darkmode)
# 浮动菜单定义
from themes.gui_floating_menu import define_gui_floating_menu
area_input_secondary, txt2, area_customize, _, resetBtn2, clearBtn2, stopBtn2 = \
define_gui_floating_menu(customize_btns, functional, predefined_btns, cookies, web_cookie_cache)
# 浮动时间线定义
gr.Spark()
# 插件二级菜单的实现
from themes.gui_advanced_plugin_class import define_gui_advanced_plugin_class
new_plugin_callback, route_switchy_bt_with_arg, usr_confirmed_arg = \
@@ -227,11 +226,8 @@ def main():
multiplex_sel.select(
None, [multiplex_sel], None, _js=f"""(multiplex_sel)=>run_multiplex_shift(multiplex_sel)""")
cancel_handles.append(submit_btn.click(**predict_args))
resetBtn.click(None, None, [chatbot, history, status], _js=js_code_reset) # 先在前端快速清除chatbot&status
resetBtn2.click(None, None, [chatbot, history, status], _js=js_code_reset) # 先在前端快速清除chatbot&status
reset_server_side_args = (lambda history: ([], [], "已重置", json.dumps(history)), [history], [chatbot, history, status, history_cache])
resetBtn.click(*reset_server_side_args) # 再在后端清除history把history转存history_cache备用
resetBtn2.click(*reset_server_side_args) # 再在后端清除history把history转存history_cache备用
resetBtn.click(None, None, [chatbot, history, status], _js= """clear_conversation""") # 先在前端快速清除chatbot&status
resetBtn2.click(None, None, [chatbot, history, status], _js="""clear_conversation""") # 先在前端快速清除chatbot&status
clearBtn.click(None, None, [txt, txt2], _js=js_code_clear)
clearBtn2.click(None, None, [txt, txt2], _js=js_code_clear)
if AUTO_CLEAR_TXT:
@@ -331,7 +327,7 @@ def main():
from shared_utils.cookie_manager import load_web_cookie_cache__fn_builder
load_web_cookie_cache = load_web_cookie_cache__fn_builder(customize_btns, cookies, predefined_btns)
app_block.load(load_web_cookie_cache, inputs = [web_cookie_cache, cookies],
outputs = [web_cookie_cache, cookies, *customize_btns.values(), *predefined_btns.values()], _js=js_code_for_persistent_cookie_init)
outputs = [web_cookie_cache, cookies, *customize_btns.values(), *predefined_btns.values()], _js="""persistent_cookie_init""")
app_block.load(None, inputs=[], outputs=None, _js=f"""()=>GptAcademicJavaScriptInit("{DARK_MODE}","{INIT_SYS_PROMPT}","{ADD_WAIFU}","{LAYOUT}","{TTS_TYPE}")""") # 配置暗色主题或亮色主题
app_block.load(None, inputs=[], outputs=None, _js="""()=>{REP}""".replace("REP", register_advanced_plugin_init_arr))

View File

@@ -26,6 +26,9 @@ from .bridge_chatglm import predict as chatglm_ui
from .bridge_chatglm3 import predict_no_ui_long_connection as chatglm3_noui
from .bridge_chatglm3 import predict as chatglm3_ui
from .bridge_chatglm4 import predict_no_ui_long_connection as chatglm4_noui
from .bridge_chatglm4 import predict as chatglm4_ui
from .bridge_qianfan import predict_no_ui_long_connection as qianfan_noui
from .bridge_qianfan import predict as qianfan_ui
@@ -76,6 +79,7 @@ cohere_endpoint = "https://api.cohere.ai/v1/chat"
ollama_endpoint = "http://localhost:11434/api/chat"
yimodel_endpoint = "https://api.lingyiwanwu.com/v1/chat/completions"
deepseekapi_endpoint = "https://api.deepseek.com/v1/chat/completions"
grok_model_endpoint = "https://api.x.ai/v1/chat/completions"
if not AZURE_ENDPOINT.endswith('/'): AZURE_ENDPOINT += '/'
azure_endpoint = AZURE_ENDPOINT + f'openai/deployments/{AZURE_ENGINE}/chat/completions?api-version=2023-05-15'
@@ -97,6 +101,7 @@ if cohere_endpoint in API_URL_REDIRECT: cohere_endpoint = API_URL_REDIRECT[coher
if ollama_endpoint in API_URL_REDIRECT: ollama_endpoint = API_URL_REDIRECT[ollama_endpoint]
if yimodel_endpoint in API_URL_REDIRECT: yimodel_endpoint = API_URL_REDIRECT[yimodel_endpoint]
if deepseekapi_endpoint in API_URL_REDIRECT: deepseekapi_endpoint = API_URL_REDIRECT[deepseekapi_endpoint]
if grok_model_endpoint in API_URL_REDIRECT: grok_model_endpoint = API_URL_REDIRECT[grok_model_endpoint]
# 获取tokenizer
tokenizer_gpt35 = LazyloadTiktoken("gpt-3.5-turbo")
@@ -212,6 +217,16 @@ model_info = {
"token_cnt": get_token_num_gpt4,
},
"chatgpt-4o-latest": {
"fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui,
"endpoint": openai_endpoint,
"has_multimodal_capacity": True,
"max_token": 128000,
"tokenizer": tokenizer_gpt4,
"token_cnt": get_token_num_gpt4,
},
"gpt-4o-2024-05-13": {
"fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui,
@@ -258,7 +273,9 @@ model_info = {
"token_cnt": get_token_num_gpt4,
"openai_disable_system_prompt": True,
"openai_disable_stream": True,
"openai_force_temperature_one": True,
},
"o1-mini": {
"fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui,
@@ -268,6 +285,31 @@ model_info = {
"token_cnt": get_token_num_gpt4,
"openai_disable_system_prompt": True,
"openai_disable_stream": True,
"openai_force_temperature_one": True,
},
"o1-2024-12-17": {
"fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui,
"endpoint": openai_endpoint,
"max_token": 200000,
"tokenizer": tokenizer_gpt4,
"token_cnt": get_token_num_gpt4,
"openai_disable_system_prompt": True,
"openai_disable_stream": True,
"openai_force_temperature_one": True,
},
"o1": {
"fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui,
"endpoint": openai_endpoint,
"max_token": 200000,
"tokenizer": tokenizer_gpt4,
"token_cnt": get_token_num_gpt4,
"openai_disable_system_prompt": True,
"openai_disable_stream": True,
"openai_force_temperature_one": True,
},
"gpt-4-turbo": {
@@ -385,6 +427,14 @@ model_info = {
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"glm-4-plus":{
"fn_with_ui": zhipu_ui,
"fn_without_ui": zhipu_noui,
"endpoint": None,
"max_token": 10124 * 8,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
# api_2d (此后不需要在此处添加api2d的接口了因为下面的代码会自动添加)
"api2d-gpt-4": {
@@ -396,6 +446,7 @@ model_info = {
"token_cnt": get_token_num_gpt4,
},
# ChatGLM本地模型
# 将 chatglm 直接对齐到 chatglm2
"chatglm": {
"fn_with_ui": chatglm_ui,
@@ -421,6 +472,14 @@ model_info = {
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"chatglm4": {
"fn_with_ui": chatglm4_ui,
"fn_without_ui": chatglm4_noui,
"endpoint": None,
"max_token": 8192,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"qianfan": {
"fn_with_ui": qianfan_ui,
"fn_without_ui": qianfan_noui,
@@ -753,7 +812,8 @@ if "qwen-local" in AVAIL_LLM_MODELS:
except:
logger.error(trimmed_format_exc())
# -=-=-=-=-=-=- 通义-在线模型 -=-=-=-=-=-=-
if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-max" in AVAIL_LLM_MODELS: # zhipuai
qwen_models = ["qwen-max-latest", "qwen-max-2025-01-25","qwen-max","qwen-turbo","qwen-plus"]
if any(item in qwen_models for item in AVAIL_LLM_MODELS):
try:
from .bridge_qwen import predict_no_ui_long_connection as qwen_noui
from .bridge_qwen import predict as qwen_ui
@@ -763,7 +823,7 @@ if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-
"fn_without_ui": qwen_noui,
"can_multi_thread": True,
"endpoint": None,
"max_token": 6144,
"max_token": 100000,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
@@ -772,7 +832,7 @@ if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-
"fn_without_ui": qwen_noui,
"can_multi_thread": True,
"endpoint": None,
"max_token": 30720,
"max_token": 129024,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
@@ -781,7 +841,25 @@ if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-
"fn_without_ui": qwen_noui,
"can_multi_thread": True,
"endpoint": None,
"max_token": 28672,
"max_token": 30720,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"qwen-max-latest": {
"fn_with_ui": qwen_ui,
"fn_without_ui": qwen_noui,
"can_multi_thread": True,
"endpoint": None,
"max_token": 30720,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"qwen-max-2025-01-25": {
"fn_with_ui": qwen_ui,
"fn_without_ui": qwen_noui,
"can_multi_thread": True,
"endpoint": None,
"max_token": 30720,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
}
@@ -868,6 +946,31 @@ if any(item in yi_models for item in AVAIL_LLM_MODELS):
})
except:
logger.error(trimmed_format_exc())
# -=-=-=-=-=-=- Grok model from x.ai -=-=-=-=-=-=-
grok_models = ["grok-beta"]
if any(item in grok_models for item in AVAIL_LLM_MODELS):
try:
grok_beta_128k_noui, grok_beta_128k_ui = get_predict_function(
api_key_conf_name="GROK_API_KEY", max_output_token=8192, disable_proxy=False
)
model_info.update({
"grok-beta": {
"fn_with_ui": grok_beta_128k_ui,
"fn_without_ui": grok_beta_128k_noui,
"can_multi_thread": True,
"endpoint": grok_model_endpoint,
"max_token": 128000,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
})
except:
logger.error(trimmed_format_exc())
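To actually route requests to the new model, config.py only needs the model name and the key; a sketch (the key value is a placeholder):

AVAIL_LLM_MODELS = ["chatgpt-4o-latest", "grok-beta"]
GROK_API_KEY = "xai-................"   # placeholder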
# -=-=-=-=-=-=- 讯飞星火认知大模型 -=-=-=-=-=-=-
if "spark" in AVAIL_LLM_MODELS:
try:
@@ -987,18 +1090,18 @@ if "deepseekcoder" in AVAIL_LLM_MODELS: # deepseekcoder
except:
logger.error(trimmed_format_exc())
# -=-=-=-=-=-=- 幻方-深度求索大模型在线API -=-=-=-=-=-=-
if "deepseek-chat" in AVAIL_LLM_MODELS or "deepseek-coder" in AVAIL_LLM_MODELS:
if "deepseek-chat" in AVAIL_LLM_MODELS or "deepseek-coder" in AVAIL_LLM_MODELS or "deepseek-reasoner" in AVAIL_LLM_MODELS:
try:
deepseekapi_noui, deepseekapi_ui = get_predict_function(
api_key_conf_name="DEEPSEEK_API_KEY", max_output_token=4096, disable_proxy=False
)
)
model_info.update({
"deepseek-chat":{
"fn_with_ui": deepseekapi_ui,
"fn_without_ui": deepseekapi_noui,
"endpoint": deepseekapi_endpoint,
"can_multi_thread": True,
"max_token": 32000,
"max_token": 64000,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
@@ -1011,6 +1114,16 @@ if "deepseek-chat" in AVAIL_LLM_MODELS or "deepseek-coder" in AVAIL_LLM_MODELS:
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"deepseek-reasoner":{
"fn_with_ui": deepseekapi_ui,
"fn_without_ui": deepseekapi_noui,
"endpoint": deepseekapi_endpoint,
"can_multi_thread": True,
"max_token": 64000,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
"enable_reasoning": True
},
})
except:
logger.error(trimmed_format_exc())
@@ -1278,6 +1391,11 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot,
inputs = apply_gpt_academic_string_mask(inputs, mode="show_llm")
if llm_kwargs['llm_model'] not in model_info:
from toolbox import update_ui
chatbot.append([inputs, f"很抱歉,模型 '{llm_kwargs['llm_model']}' 暂不支持<br/>(1) 检查config中的AVAIL_LLM_MODELS选项<br/>(2) 检查request_llms/bridge_all.py中的模型路由"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
method = model_info[llm_kwargs['llm_model']]["fn_with_ui"] # 如果这里报错检查config中的AVAIL_LLM_MODELS选项
if additional_fn: # 根据基础功能区 ModelOverride 参数调整模型类型

View File

@@ -23,39 +23,33 @@ class GetGLM3Handle(LocalLLMHandle):
import os
import platform
LOCAL_MODEL_QUANT, device = get_conf("LOCAL_MODEL_QUANT", "LOCAL_MODEL_DEVICE")
_model_name_ = "THUDM/chatglm3-6b"
# if LOCAL_MODEL_QUANT == "INT4": # INT4
# _model_name_ = "THUDM/chatglm3-6b-int4"
# elif LOCAL_MODEL_QUANT == "INT8": # INT8
# _model_name_ = "THUDM/chatglm3-6b-int8"
# else:
# _model_name_ = "THUDM/chatglm3-6b" # FP16
LOCAL_MODEL_PATH, LOCAL_MODEL_QUANT, device = get_conf("CHATGLM_LOCAL_MODEL_PATH", "LOCAL_MODEL_QUANT", "LOCAL_MODEL_DEVICE")
model_path = LOCAL_MODEL_PATH
with ProxyNetworkActivate("Download_LLM"):
chatglm_tokenizer = AutoTokenizer.from_pretrained(
_model_name_, trust_remote_code=True
model_path, trust_remote_code=True
)
if device == "cpu":
chatglm_model = AutoModel.from_pretrained(
_model_name_,
model_path,
trust_remote_code=True,
device="cpu",
).float()
elif LOCAL_MODEL_QUANT == "INT4": # INT4
chatglm_model = AutoModel.from_pretrained(
pretrained_model_name_or_path=_model_name_,
pretrained_model_name_or_path=model_path,
trust_remote_code=True,
quantization_config=BitsAndBytesConfig(load_in_4bit=True),
)
elif LOCAL_MODEL_QUANT == "INT8": # INT8
chatglm_model = AutoModel.from_pretrained(
pretrained_model_name_or_path=_model_name_,
pretrained_model_name_or_path=model_path,
trust_remote_code=True,
quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)
else:
chatglm_model = AutoModel.from_pretrained(
pretrained_model_name_or_path=_model_name_,
pretrained_model_name_or_path=model_path,
trust_remote_code=True,
device="cuda",
)

View File

@@ -0,0 +1,81 @@
model_name = "ChatGLM4"
cmd_to_install = """
`pip install -r request_llms/requirements_chatglm4.txt`
`pip install modelscope`
`modelscope download --model ZhipuAI/glm-4-9b-chat --local_dir ./THUDM/glm-4-9b-chat`
"""
from toolbox import get_conf, ProxyNetworkActivate
from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns
# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 Local Model
# ------------------------------------------------------------------------------------------------------------------------
class GetGLM4Handle(LocalLLMHandle):
def load_model_info(self):
# 🏃‍♂️🏃‍♂️🏃‍♂️ 子进程执行
self.model_name = model_name
self.cmd_to_install = cmd_to_install
def load_model_and_tokenizer(self):
# 🏃‍♂️🏃‍♂️🏃‍♂️ 子进程执行
import torch
from transformers import AutoModel, AutoModelForCausalLM, AutoTokenizer
import os
LOCAL_MODEL_PATH, device = get_conf("CHATGLM_LOCAL_MODEL_PATH", "LOCAL_MODEL_DEVICE")
model_path = LOCAL_MODEL_PATH
chatglm_tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
chatglm_model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
trust_remote_code=True,
device=device
).eval().to(device)
self._model = chatglm_model
self._tokenizer = chatglm_tokenizer
return self._model, self._tokenizer
def llm_stream_generator(self, **kwargs):
# 🏃‍♂️🏃‍♂️🏃‍♂️ 子进程执行
def adaptor(kwargs):
query = kwargs["query"]
max_length = kwargs["max_length"]
top_p = kwargs["top_p"]
temperature = kwargs["temperature"]
history = kwargs["history"]
return query, max_length, top_p, temperature, history
query, max_length, top_p, temperature, history = adaptor(kwargs)
inputs = self._tokenizer.apply_chat_template([{"role": "user", "content": query}],
add_generation_prompt=True,
tokenize=True,
return_tensors="pt",
return_dict=True
).to(self._model.device)
gen_kwargs = {"max_length": max_length, "do_sample": True, "top_p": top_p}  # nucleus-sampling value from the UI
outputs = self._model.generate(**inputs, **gen_kwargs)
outputs = outputs[:, inputs['input_ids'].shape[1]:]
response = self._tokenizer.decode(outputs[0], skip_special_tokens=True)
yield response
def try_to_import_special_deps(self, **kwargs):
# import something that will raise error if the user does not install requirement_*.txt
# 🏃‍♂️🏃‍♂️🏃‍♂️ 主进程执行
import importlib
# importlib.import_module('modelscope')
# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 GPT-Academic Interface
# ------------------------------------------------------------------------------------------------------------------------
predict_no_ui_long_connection, predict = get_local_llm_predict_fns(
GetGLM4Handle, model_name, history_format="chatglm3"
)
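Configuration sketch for the local GLM-4 bridge above; the model path follows the modelscope download hint in cmd_to_install, the other values are placeholders:

AVAIL_LLM_MODELS = ["chatglm4"]
CHATGLM_LOCAL_MODEL_PATH = "./THUDM/glm-4-9b-chat"
LOCAL_MODEL_DEVICE = "cuda"    # or "cpu"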

View File

@@ -23,8 +23,13 @@ from loguru import logger
from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history
from toolbox import trimmed_format_exc, is_the_upload_folder, read_one_api_model_name, log_chat
from toolbox import ChatBotWithCookies, have_any_recent_upload_image_files, encode_image
proxies, TIMEOUT_SECONDS, MAX_RETRY, API_ORG, AZURE_CFG_ARRAY = \
get_conf('proxies', 'TIMEOUT_SECONDS', 'MAX_RETRY', 'API_ORG', 'AZURE_CFG_ARRAY')
proxies, WHEN_TO_USE_PROXY, TIMEOUT_SECONDS, MAX_RETRY, API_ORG, AZURE_CFG_ARRAY = \
get_conf('proxies', 'WHEN_TO_USE_PROXY', 'TIMEOUT_SECONDS', 'MAX_RETRY', 'API_ORG', 'AZURE_CFG_ARRAY')
if "Connect_OpenAI" not in WHEN_TO_USE_PROXY:
if proxies is not None:
logger.error("虽然您配置了代理设置但不会在连接OpenAI的过程中起作用请检查WHEN_TO_USE_PROXY配置。")
proxies = None
timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check proxy settings in config.py.' + \
'网络错误,检查代理服务器是否可用,以及代理设置的格式是否正确,格式须是[协议]://[地址]:[端口],缺一不可。'
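A config.py sketch matching the WHEN_TO_USE_PROXY check above: the proxy is only applied to OpenAI traffic when the Connect_OpenAI tag is listed (Download_LLM is another tag that appears elsewhere in this changeset):

WHEN_TO_USE_PROXY = ["Connect_OpenAI", "Download_LLM"]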
@@ -180,14 +185,20 @@ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[],
raise ConnectionAbortedError("正常结束但显示Token不足导致输出不完整请削减单次输入的文本量。")
else:
raise RuntimeError("OpenAI拒绝了请求" + error_msg)
if ('data: [DONE]' in chunk_decoded): break # api2d 正常完成
if ('data: [DONE]' in chunk_decoded): break # api2d & one-api 正常完成
# 提前读取一些信息 (用于判断异常)
if has_choices and not choice_valid:
# 一些垃圾第三方接口的出现这样的错误
continue
json_data = chunkjson['choices'][0]
delta = json_data["delta"]
if len(delta) == 0: break
if len(delta) == 0:
is_termination_certain = False
if (has_choices) and (chunkjson['choices'][0].get('finish_reason', 'null') == 'stop'): is_termination_certain = True
if is_termination_certain: break
else: continue # 对于不符合规范的狗屎接口,这里需要继续
if (not has_content) and has_role: continue
if (not has_content) and (not has_role): continue # raise RuntimeError("发现不标准的第三方接口:"+delta)
if has_content: # has_role = True/False
@@ -285,6 +296,8 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
history.extend([inputs, ""])
retry = 0
previous_ui_reflesh_time = 0
ui_reflesh_min_interval = 0.0
while True:
try:
# make a POST request to the API endpoint, stream=True
@@ -297,13 +310,13 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
yield from update_ui(chatbot=chatbot, history=history, msg="请求超时"+retry_msg) # 刷新界面
if retry > MAX_RETRY: raise TimeoutError
if not stream:
# 该分支仅适用于不支持stream的o1模型其他情形一律不适用
yield from handle_o1_model_special(response, inputs, llm_kwargs, chatbot, history)
return
if stream:
reach_termination = False # 处理一些 new-api 的奇葩异常
gpt_replying_buffer = ""
is_head_of_the_stream = True
stream_response = response.iter_lines()
@@ -316,11 +329,14 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
error_msg = chunk_decoded
# first rule out a third-party one-api bug where no [DONE] packet is ever sent
if len(gpt_replying_buffer.strip()) > 0 and len(error_msg) == 0:
yield from update_ui(chatbot=chatbot, history=history, msg="检测到有缺陷的非OpenAI官方接口,建议选择更稳定的接口。")
yield from update_ui(chatbot=chatbot, history=history, msg="检测到有缺陷的接口,建议选择更稳定的接口。")
if not reach_termination:
reach_termination = True
log_chat(llm_model=llm_kwargs["llm_model"], input_str=inputs, output_str=gpt_replying_buffer)
break
# otherwise, return the error directly
chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
yield from update_ui(chatbot=chatbot, history=history, msg="非OpenAI官方接口返回了错误:" + chunk.decode()) # 刷新界面
yield from update_ui(chatbot=chatbot, history=history, msg="接口返回了错误:" + chunk.decode()) # 刷新界面
return
# read a little ahead (used to detect anomalies)
@@ -330,6 +346,8 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
# the first frame of the data stream carries no content
is_head_of_the_stream = False; continue
if "error" in chunk_decoded: logger.error(f"接口返回了未知错误: {chunk_decoded}")
if chunk:
try:
if has_choices and not choice_valid:
@@ -338,14 +356,25 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
if ('data: [DONE]' not in chunk_decoded) and len(chunk_decoded) > 0 and (chunkjson is None):
# something strange was passed in
raise ValueError(f'无法读取以下数据,请检查配置。\n\n{chunk_decoded}')
# the former is API2D's termination condition, the latter is OpenAI's termination condition
if ('data: [DONE]' in chunk_decoded) or (len(chunkjson['choices'][0]["delta"]) == 0):
# judged to be the end of the data stream; gpt_replying_buffer is complete as well
log_chat(llm_model=llm_kwargs["llm_model"], input_str=inputs, output_str=gpt_replying_buffer)
break
# the former is the API2D & One-API termination condition, the latter is OpenAI's termination condition
one_api_terminate = ('data: [DONE]' in chunk_decoded)
openai_terminate = (has_choices) and (len(chunkjson['choices'][0]["delta"]) == 0)
if one_api_terminate or openai_terminate:
is_termination_certain = False
if one_api_terminate: is_termination_certain = True # catch the spec-compliant termination condition
elif (has_choices) and (chunkjson['choices'][0].get('finish_reason', 'null') == 'stop'): is_termination_certain = True # catch the spec-compliant termination condition
if is_termination_certain:
reach_termination = True
log_chat(llm_model=llm_kwargs["llm_model"], input_str=inputs, output_str=gpt_replying_buffer)
break # spec-compliant endpoints can break here
else:
continue # non-compliant endpoints require us to keep going
# from here on we can assume a choice must be present
try:
status_text = f"finish_reason: {chunkjson['choices'][0].get('finish_reason', 'null')}"
except:
logger.error(f"一些垃圾第三方接口出现这样的错误,兼容一下吧: {chunk_decoded}")
# process the main body of the data stream
status_text = f"finish_reason: {chunkjson['choices'][0].get('finish_reason', 'null')}"
# if an exception is raised here, it is usually because the text is too long; see the output of get_full_error for details
if has_content:
# normal case
gpt_replying_buffer = gpt_replying_buffer + chunkjson['choices'][0]["delta"]["content"]
@@ -354,21 +383,26 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
continue
else:
# at this point we are beyond what a normal endpoint should ever reach; some broken third-party endpoints produce this error
if chunkjson['choices'][0]["delta"]["content"] is None: continue # some broken third-party endpoints produce this error; tolerate it
if chunkjson['choices'][0]["delta"].get("content", None) is None:
logger.error(f"一些垃圾第三方接口出现这样的错误,兼容一下吧: {chunk_decoded}")
continue
gpt_replying_buffer = gpt_replying_buffer + chunkjson['choices'][0]["delta"]["content"]
history[-1] = gpt_replying_buffer
chatbot[-1] = (history[-2], history[-1])
yield from update_ui(chatbot=chatbot, history=history, msg=status_text) # 刷新界面
if time.time() - previous_ui_reflesh_time > ui_reflesh_min_interval:
yield from update_ui(chatbot=chatbot, history=history, msg=status_text) # 刷新界面
previous_ui_reflesh_time = time.time()
except Exception as e:
yield from update_ui(chatbot=chatbot, history=history, msg="Json解析不合常规") # 刷新界面
chunk = get_full_error(chunk, stream_response)
chunk_decoded = chunk.decode()
error_msg = chunk_decoded
chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
yield from update_ui(chatbot=chatbot, history=history, msg="Json解析异常" + error_msg) # 刷新界面
logger.error(error_msg)
yield from update_ui(chatbot=chatbot, history=history, msg="Json解析异常" + error_msg) # 刷新界面
return
yield from update_ui(chatbot=chatbot, history=history, msg="完成") # 刷新界面
return # return from stream-branch
def handle_o1_model_special(response, inputs, llm_kwargs, chatbot, history):
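The previous_ui_reflesh_time / ui_reflesh_min_interval pair above implements a simple refresh throttle (the interval is 0.0 in this diff, i.e. effectively disabled). A generic sketch of the same idea, with an illustrative make_throttled helper:

```python
import time

def make_throttled(refresh_ui, min_interval: float = 0.1):
    """Wrap refresh_ui so it runs at most once per min_interval seconds."""
    last_refresh = 0.0
    def maybe_refresh(*args, **kwargs):
        nonlocal last_refresh
        if time.time() - last_refresh > min_interval:
            refresh_ui(*args, **kwargs)   # refresh only when enough time has passed
            last_refresh = time.time()
    return maybe_refresh

throttled = make_throttled(lambda msg: print("refresh:", msg), min_interval=0.5)
for i in range(5):
    throttled(f"token {i}")               # most calls are skipped
    time.sleep(0.2)
```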
@@ -536,6 +570,8 @@ def generate_payload(inputs:str, llm_kwargs:dict, history:list, system_prompt:st
"n": 1,
"stream": stream,
}
openai_force_temperature_one = model_info[llm_kwargs['llm_model']].get('openai_force_temperature_one', False)
if openai_force_temperature_one:
payload.pop('temperature')
return headers,payload
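A short sketch of the payload tweak above: models flagged with openai_force_temperature_one only accept the default temperature, so the key is removed rather than overridden. The model_info table below is a toy stand-in, not the project's real table.

```python
model_info = {"o1": {"openai_force_temperature_one": True}, "gpt-4o": {}}

def finalize_payload(model: str, payload: dict) -> dict:
    if model_info.get(model, {}).get("openai_force_temperature_one", False):
        payload.pop("temperature", None)   # let the API fall back to its default
    return payload

print(finalize_payload("o1", {"model": "o1", "temperature": 0.3, "stream": False}))
print(finalize_payload("gpt-4o", {"model": "gpt-4o", "temperature": 0.3}))
```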

View File

@@ -26,7 +26,7 @@ class GetLlamaHandle(LocalLLMHandle):
import platform
huggingface_token, device = get_conf('HUGGINGFACE_ACCESS_TOKEN', 'LOCAL_MODEL_DEVICE')
assert len(huggingface_token) != 0, "没有填写 HUGGINGFACE_ACCESS_TOKEN"
with open(os.path.expanduser('~/.cache/huggingface/token'), 'w') as f:
with open(os.path.expanduser('~/.cache/huggingface/token'), 'w', encoding='utf8') as f:
f.write(huggingface_token)
model_id = 'meta-llama/Llama-2-7b-chat-hf'
with ProxyNetworkActivate('Download_LLM'):

View File

@@ -31,7 +31,7 @@ class MoonShotInit:
files.append(f)
for file in files:
if file.split('.')[-1] in ['pdf']:
with open(file, 'r') as fp:
with open(file, 'r', encoding='utf8') as fp:
from crazy_functions.crazy_utils import read_and_clean_pdf_text
file_content, _ = read_and_clean_pdf_text(fp)
what_ask.append({"role": "system", "content": file_content})

View File

@@ -75,7 +75,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
# make a POST request to the API endpoint, stream=False
from .bridge_all import model_info
endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
response = requests.post(endpoint, headers=headers, proxies=proxies,
response = requests.post(endpoint, headers=headers, proxies=None,
json=payload, stream=True, timeout=TIMEOUT_SECONDS); break
except requests.exceptions.ReadTimeout as e:
retry += 1
@@ -152,10 +152,12 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
history.append(inputs); history.append("")
retry = 0
if proxies is not None:
logger.error("Ollama不会使用代理服务器, 忽略了proxies的设置。")
while True:
try:
# make a POST request to the API endpoint, stream=True
response = requests.post(endpoint, headers=headers, proxies=proxies,
response = requests.post(endpoint, headers=headers, proxies=None,
json=payload, stream=True, timeout=TIMEOUT_SECONDS);break
except:
retry += 1

View File

@@ -170,7 +170,7 @@ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[],
except requests.exceptions.ConnectionError:
chunk = next(stream_response) # 失败了,重试一次?再失败就没办法了。
chunk_decoded, chunkjson, has_choices, choice_valid, has_content, has_role = decode_chunk(chunk)
if len(chunk_decoded)==0: continue
if len(chunk_decoded)==0 or chunk_decoded.startswith(':'): continue
if not chunk_decoded.startswith('data:'):
error_msg = get_full_error(chunk, stream_response).decode()
if "reduce the length" in error_msg:
@@ -181,9 +181,6 @@ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[],
raise RuntimeError("OpenAI拒绝了请求" + error_msg)
if ('data: [DONE]' in chunk_decoded): break # api2d 正常完成
# 提前读取一些信息 (用于判断异常)
if (has_choices and not choice_valid) or ('OPENROUTER PROCESSING' in chunk_decoded):
# 一些垃圾第三方接口的出现这样的错误openrouter的特殊处理
continue
json_data = chunkjson['choices'][0]
delta = json_data["delta"]
if len(delta) == 0: break
@@ -328,8 +325,7 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
if chunk:
try:
if (has_choices and not choice_valid) or ('OPENROUTER PROCESSING' in chunk_decoded):
# 一些垃圾第三方接口的出现这样的错误, 或者OPENROUTER的特殊处理,因为OPENROUTER的数据流未连接到模型时会出现OPENROUTER PROCESSING
if (has_choices and not choice_valid) or chunk_decoded.startswith(':'):
continue
if ('data: [DONE]' not in chunk_decoded) and len(chunk_decoded) > 0 and (chunkjson is None):
# something strange was passed in
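For context, a tiny sketch of the SSE comment-line rule applied in this hunk: per the SSE spec, lines beginning with ":" are keep-alive comments (OpenRouter emits ": OPENROUTER PROCESSING"), so they are skipped rather than parsed. The helper name below is illustrative.

```python
def should_skip_sse_line(line: str) -> bool:
    # empty keep-alive lines and ":"-prefixed comment lines carry no payload
    return len(line) == 0 or line.startswith(":")

for line in ["", ": OPENROUTER PROCESSING", 'data: {"choices": []}']:
    print(repr(line), "->", "skip" if should_skip_sse_line(line) else "parse")
```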

View File

@@ -202,16 +202,29 @@ class GoogleChatInit:
) # handle history
messages.append(self.__conversation_user(inputs, llm_kwargs, enable_multimodal_capacity)) # handle the user turn
payload = {
"contents": messages,
"generationConfig": {
# "maxOutputTokens": llm_kwargs.get("max_token", 1024),
"stopSequences": str(llm_kwargs.get("stop", "")).split(" "),
"temperature": llm_kwargs.get("temperature", 1),
"topP": llm_kwargs.get("top_p", 0.8),
"topK": 10,
},
}
stop_sequences = str(llm_kwargs.get("stop", "")).split(" ")
# filter out empty strings and make sure at least one stop sequence remains
stop_sequences = [s for s in stop_sequences if s]
if not stop_sequences:
payload = {
"contents": messages,
"generationConfig": {
"temperature": llm_kwargs.get("temperature", 1),
"topP": llm_kwargs.get("top_p", 0.8),
"topK": 10,
},
}
else:
payload = {
"contents": messages,
"generationConfig": {
# "maxOutputTokens": llm_kwargs.get("max_token", 1024),
"stopSequences": stop_sequences,
"temperature": llm_kwargs.get("temperature", 1),
"topP": llm_kwargs.get("top_p", 0.8),
"topK": 10,
},
}
return header, payload
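An equivalent, more compact sketch of the stopSequences fix: Gemini rejects an empty stopSequences list, so the key is only added when at least one non-empty stop string survives filtering (llm_kwargs is a toy dict here).

```python
def build_generation_config(llm_kwargs: dict) -> dict:
    config = {
        "temperature": llm_kwargs.get("temperature", 1),
        "topP": llm_kwargs.get("top_p", 0.8),
        "topK": 10,
    }
    stop_sequences = [s for s in str(llm_kwargs.get("stop", "")).split(" ") if s]
    if stop_sequences:                    # omit the key entirely when nothing remains
        config["stopSequences"] = stop_sequences
    return config

print(build_generation_config({"stop": ""}))          # no stopSequences key
print(build_generation_config({"stop": "END STOP"}))  # ['END', 'STOP']
```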

View File

@@ -24,18 +24,13 @@ class QwenRequestInstance():
def generate(self, inputs, llm_kwargs, history, system_prompt):
# import _thread as thread
from dashscope import Generation
QWEN_MODEL = {
'qwen-turbo': Generation.Models.qwen_turbo,
'qwen-plus': Generation.Models.qwen_plus,
'qwen-max': Generation.Models.qwen_max,
}[llm_kwargs['llm_model']]
top_p = llm_kwargs.get('top_p', 0.8)
if top_p == 0: top_p += 1e-5
if top_p == 1: top_p -= 1e-5
self.result_buf = ""
responses = Generation.call(
model=QWEN_MODEL,
model=llm_kwargs['llm_model'],
messages=generate_message_payload(inputs, llm_kwargs, history, system_prompt),
top_p=top_p,
temperature=llm_kwargs.get('temperature', 1.0),

View File

@@ -36,10 +36,11 @@ def get_full_error(chunk, stream_response):
def decode_chunk(chunk):
"""
用于解读"content""finish_reason"的内容
用于解读"content""finish_reason"的内容(如果支持思维链也会返回"reasoning_content"内容)
"""
chunk = chunk.decode()
respose = ""
reasoning_content = ""
finish_reason = "False"
try:
chunk = json.loads(chunk[6:])
@@ -57,14 +58,20 @@ def decode_chunk(chunk):
return respose, finish_reason
try:
respose = chunk["choices"][0]["delta"]["content"]
if chunk["choices"][0]["delta"]["content"] is not None:
respose = chunk["choices"][0]["delta"]["content"]
except:
pass
try:
if chunk["choices"][0]["delta"]["reasoning_content"] is not None:
reasoning_content = chunk["choices"][0]["delta"]["reasoning_content"]
except:
pass
try:
finish_reason = chunk["choices"][0]["finish_reason"]
except:
pass
return respose, finish_reason
return respose, reasoning_content, finish_reason
def generate_message(input, model, key, history, max_output_token, system_prompt, temperature):
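A hedged, self-contained sketch of the new decode_chunk behaviour: besides "content" and "finish_reason", reasoning-capable models (e.g. DeepSeek R1) stream a "reasoning_content" field, which is returned as a third value. Everything beyond those field names is illustrative.

```python
import json

def decode_chunk_demo(chunk: bytes):
    text, reasoning, finish_reason = "", "", "False"
    decoded = chunk.decode()
    if not decoded.startswith("data:"):
        return text, reasoning, finish_reason
    try:
        choice = json.loads(decoded[len("data:"):])["choices"][0]
    except (json.JSONDecodeError, KeyError, IndexError):
        return text, reasoning, finish_reason
    delta = choice.get("delta", {})
    if delta.get("content") is not None:
        text = delta["content"]
    if delta.get("reasoning_content") is not None:   # chain-of-thought tokens
        reasoning = delta["reasoning_content"]
    if choice.get("finish_reason") is not None:
        finish_reason = choice["finish_reason"]
    return text, reasoning, finish_reason

sample = b'data: {"choices":[{"delta":{"reasoning_content":"thinking..."},"finish_reason":null}]}'
print(decode_chunk_demo(sample))   # ('', 'thinking...', 'False')
```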
@@ -149,6 +156,7 @@ def get_predict_function(
observe_window = None
用于负责跨越线程传递已经输出的部分大部分时候仅仅为了fancy的视觉效果留空即可。observe_window[0]观测窗。observe_window[1]:看门狗
"""
from .bridge_all import model_info
watch_dog_patience = 5 # 看门狗的耐心设置5秒不准咬人(咬的也不是人
if len(APIKEY) == 0:
raise RuntimeError(f"APIKEY为空,请检查配置文件的{APIKEY}")
@@ -163,29 +171,21 @@ def get_predict_function(
system_prompt=sys_prompt,
temperature=llm_kwargs["temperature"],
)
reasoning = model_info[llm_kwargs['llm_model']].get('enable_reasoning', False)
retry = 0
while True:
try:
from .bridge_all import model_info
endpoint = model_info[llm_kwargs["llm_model"]]["endpoint"]
if not disable_proxy:
response = requests.post(
endpoint,
headers=headers,
proxies=proxies,
json=playload,
stream=True,
timeout=TIMEOUT_SECONDS,
)
else:
response = requests.post(
endpoint,
headers=headers,
json=playload,
stream=True,
timeout=TIMEOUT_SECONDS,
)
response = requests.post(
endpoint,
headers=headers,
proxies=None if disable_proxy else proxies,
json=playload,
stream=True,
timeout=TIMEOUT_SECONDS,
)
break
except:
retry += 1
@@ -195,9 +195,12 @@ def get_predict_function(
if MAX_RETRY != 0:
logger.error(f"请求超时,正在重试 ({retry}/{MAX_RETRY}) ……")
stream_response = response.iter_lines()
result = ""
finish_reason = ""
if reasoning:
resoning_buffer = ""
stream_response = response.iter_lines()
while True:
try:
chunk = next(stream_response)
@@ -207,9 +210,9 @@ def get_predict_function(
break
except requests.exceptions.ConnectionError:
chunk = next(stream_response) # 失败了,重试一次?再失败就没办法了。
response_text, finish_reason = decode_chunk(chunk)
response_text, reasoning_content, finish_reason = decode_chunk(chunk)
# the returned data stream is empty at first; keep waiting
if response_text == "" and finish_reason != "False":
if response_text == "" and (reasoning == False or reasoning_content == "") and finish_reason != "False":
continue
if response_text == "API_ERROR" and (
finish_reason != "False" or finish_reason != "stop"
@@ -227,6 +230,8 @@ def get_predict_function(
print(f"[response] {result}")
break
result += response_text
if reasoning:
resoning_buffer += reasoning_content
if observe_window is not None:
# 观测窗,把已经获取的数据显示出去
if len(observe_window) >= 1:
@@ -241,6 +246,10 @@ def get_predict_function(
error_msg = chunk_decoded
logger.error(error_msg)
raise RuntimeError("Json解析不合常规")
if reasoning:
# wrap the reasoning part in a blockquote (>)
return '\n'.join(map(lambda x: '> ' + x, resoning_buffer.split('\n'))) + \
'\n\n' + result
return result
def predict(
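The reasoning buffer is rendered as a Markdown blockquote in front of the final answer; a minimal sketch of that formatting step (names are illustrative, not project code):

```python
def render_with_reasoning(reasoning: str, answer: str) -> str:
    # prefix every reasoning line with "> " so it renders as a quoted block
    quoted = "\n".join("> " + line for line in reasoning.split("\n"))
    return quoted + "\n\n" + answer

print(render_with_reasoning("step 1\nstep 2", "Final answer."))
```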
@@ -262,6 +271,7 @@ def get_predict_function(
chatbot 为WebUI中显示的对话列表修改它然后yeild出去可以直接修改对话界面内容
additional_fn代表点击的哪个按钮按钮见functional.py
"""
from .bridge_all import model_info
if len(APIKEY) == 0:
raise RuntimeError(f"APIKEY为空,请检查配置文件的{APIKEY}")
if inputs == "":
@@ -299,31 +309,22 @@ def get_predict_function(
temperature=llm_kwargs["temperature"],
)
reasoning = model_info[llm_kwargs['llm_model']].get('enable_reasoning', False)
history.append(inputs)
history.append("")
retry = 0
while True:
try:
from .bridge_all import model_info
endpoint = model_info[llm_kwargs["llm_model"]]["endpoint"]
if not disable_proxy:
response = requests.post(
endpoint,
headers=headers,
proxies=proxies,
json=playload,
stream=True,
timeout=TIMEOUT_SECONDS,
)
else:
response = requests.post(
endpoint,
headers=headers,
json=playload,
stream=True,
timeout=TIMEOUT_SECONDS,
)
response = requests.post(
endpoint,
headers=headers,
proxies=None if disable_proxy else proxies,
json=playload,
stream=True,
timeout=TIMEOUT_SECONDS,
)
break
except:
retry += 1
@@ -338,6 +339,8 @@ def get_predict_function(
raise TimeoutError
gpt_replying_buffer = ""
if reasoning:
gpt_reasoning_buffer = ""
stream_response = response.iter_lines()
while True:
@@ -347,9 +350,9 @@ def get_predict_function(
break
except requests.exceptions.ConnectionError:
chunk = next(stream_response) # 失败了,重试一次?再失败就没办法了。
response_text, finish_reason = decode_chunk(chunk)
response_text, reasoning_content, finish_reason = decode_chunk(chunk)
# the returned data stream is empty at first; keep waiting
if response_text == "" and finish_reason != "False":
if response_text == "" and (reasoning == False or reasoning_content == "") and finish_reason != "False":
status_text = f"finish_reason: {finish_reason}"
yield from update_ui(
chatbot=chatbot, history=history, msg=status_text
@@ -379,9 +382,14 @@ def get_predict_function(
logger.info(f"[response] {gpt_replying_buffer}")
break
status_text = f"finish_reason: {finish_reason}"
gpt_replying_buffer += response_text
# 如果这里抛出异常一般是文本过长详情见get_full_error的输出
history[-1] = gpt_replying_buffer
if reasoning:
gpt_replying_buffer += response_text
gpt_reasoning_buffer += reasoning_content
history[-1] = '\n'.join(map(lambda x: '> ' + x, gpt_reasoning_buffer.split('\n'))) + '\n\n' + gpt_replying_buffer
else:
gpt_replying_buffer += response_text
# 如果这里抛出异常一般是文本过长详情见get_full_error的输出
history[-1] = gpt_replying_buffer
chatbot[-1] = (history[-2], history[-1])
yield from update_ui(
chatbot=chatbot, history=history, msg=status_text

View File

@@ -0,0 +1,7 @@
protobuf
cpm_kernels
torch>=1.10
transformers>=4.44
mdtex2html
sentencepiece
accelerate

View File

@@ -1,4 +1,4 @@
https://public.agent-matrix.com/publish/gradio-3.32.10-py3-none-any.whl
https://public.agent-matrix.com/publish/gradio-3.32.12-py3-none-any.whl
fastapi==0.110
gradio-client==0.8
pypdf2==2.12.1
@@ -13,7 +13,7 @@ scipdf_parser>=0.52
spacy==3.7.4
anthropic>=0.18.1
python-markdown-math
pymdown-extensions
pymdown-extensions>=10.14
websocket-client
beautifulsoup4
prompt_toolkit
@@ -25,7 +25,7 @@ pyautogen
colorama
Markdown
pygments
edge-tts
edge-tts>=7.0.0
pymupdf
openai
rjsmin

View File

@@ -2,6 +2,7 @@ import markdown
import re
import os
import math
import html
from loguru import logger
from textwrap import dedent
@@ -384,6 +385,24 @@ def markdown_convertion(txt):
)
def code_block_title_replace_format(match):
lang = match.group(1)
filename = match.group(2)
return f"```{lang} {{title=\"{filename}\"}}\n"
def get_last_backticks_indent(text):
# search backwards for the last ```
lines = text.splitlines()
for line in reversed(lines):
if '```' in line:
# count the leading spaces
indent = len(line) - len(line.lstrip())
return indent
return 0 # return 0 if nothing was found
@lru_cache(maxsize=16) # use an lru cache
def close_up_code_segment_during_stream(gpt_reply):
"""
在gpt输出代码的中途输出了前面的```,但还没输出完后面的```),补上后面的```
@@ -397,6 +416,12 @@ def close_up_code_segment_during_stream(gpt_reply):
"""
if "```" not in gpt_reply:
return gpt_reply
# replace [```python:warp.py] to [```python {title="warp.py"}]
pattern = re.compile(r"```([a-z]{1,12}):([^:\n]{1,35}\.([a-zA-Z^:\n]{1,3}))\n")
if pattern.search(gpt_reply):
gpt_reply = pattern.sub(code_block_title_replace_format, gpt_reply)
if gpt_reply.endswith("```"):
return gpt_reply
@@ -404,7 +429,11 @@ def close_up_code_segment_during_stream(gpt_reply):
segments = gpt_reply.split("```")
n_mark = len(segments) - 1
if n_mark % 2 == 1:
return gpt_reply + "\n```" # 输出代码片段中!
try:
num_padding = get_last_backticks_indent(gpt_reply)
except:
num_padding = 0
return gpt_reply + "\n" + " "*num_padding + "```" # 输出代码片段中!
else:
return gpt_reply
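A small stand-alone sketch of the fence-closing behaviour above: while the model is still inside an open ``` block, a closing fence is appended with the same indentation as the last opening fence, so indented or nested code blocks render correctly during streaming.

```python
FENCE = "`" * 3   # the literal triple-backtick marker, built indirectly for this example

def get_last_backticks_indent(text: str) -> int:
    # search backwards for the last line containing a fence and return its indent
    for line in reversed(text.splitlines()):
        if FENCE in line:
            return len(line) - len(line.lstrip())
    return 0

def close_open_fence(reply: str) -> str:
    if FENCE not in reply or reply.endswith(FENCE):
        return reply
    if reply.count(FENCE) % 2 == 1:       # odd number of fences -> block still open
        return reply + "\n" + " " * get_last_backticks_indent(reply) + FENCE
    return reply

print(close_open_fence("Here is code:\n  " + FENCE + "python\n  print('hi')"))
```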
@@ -421,6 +450,19 @@ def special_render_issues_for_mermaid(text):
return text
def contain_html_tag(text):
"""
判断文本中是否包含HTML标签。
"""
pattern = r'</?([a-zA-Z0-9_]{3,16})>|<script\s+[^>]*src=["\']([^"\']+)["\'][^>]*>'
return re.search(pattern, text) is not None
def contain_image(text):
pattern = r'<br/><br/><div align="center"><img src="file=(.*?)" base64="(.*?)"></div>'
return re.search(pattern, text) is not None
def compat_non_markdown_input(text):
"""
改善非markdown输入的显示效果例如将空格转换为&nbsp;,将换行符转换为</br>等。
@@ -429,9 +471,13 @@ def compat_non_markdown_input(text):
# careful input: markdown input
text = special_render_issues_for_mermaid(text) # 处理特殊的渲染问题
return text
elif "</div>" in text:
elif ("<" in text) and (">" in text) and contain_html_tag(text):
# careful input: html input
return text
if contain_image(text):
return text
else:
escaped_text = html.escape(text)
return escaped_text
else:
# whatever input: non-markdown input
lines = text.split("\n")

View File

@@ -77,16 +77,28 @@ def make_history_cache():
# define the trio: the backend state (history), its frontend twin (history_cache), and the backend setter (history_cache_update)
import gradio as gr
# define the backend state for history
history = gr.State([])
# define a hidden frontend twin storage area for history
history_cache = gr.Textbox(visible=False, elem_id="history_cache")
# define the (hidden) history_cache -> history update path; when this button is triggered, the js code updates history_cache first, then the python code updates history
def process_history_cache(history_cache):
return json.loads(history_cache)
# a simpler setter approach
history_cache_update = gr.Button("", elem_id="elem_update_history", visible=False).click(
process_history_cache, inputs=[history_cache], outputs=[history])
return history, history_cache, history_cache_update
# history = gr.State([])
history = gr.Textbox(visible=False, elem_id="history-ng")
# # 定义history的一个孪生的前端存储区隐藏
# history_cache = gr.Textbox(visible=False, elem_id="history_cache")
# # 定义history_cache->history的更新方法隐藏。在触发这个按钮时会先执行js代码更新history_cache然后再执行python代码更新history
# def process_history_cache(history_cache):
# return json.loads(history_cache)
# # 另一种更简单的setter方法
# history_cache_update = gr.Button("", elem_id="elem_update_history", visible=False).click(
# process_history_cache, inputs=[history_cache], outputs=[history])
# # save history to history_cache
# def process_history_cache(history_cache):
# return json.dumps(history_cache)
# # 定义history->history_cache的更新方法隐藏
# def sync_history_cache(history):
# print("sync_history_cache", history)
# return json.dumps(history)
# # history.change(sync_history_cache, inputs=[history], outputs=[history_cache])
# # history_cache_sync = gr.Button("", elem_id="elem_sync_history", visible=False).click(
# # lambda history: (json.dumps(history)), inputs=[history_cache], outputs=[history])
return history, None, None

View File

@@ -0,0 +1,83 @@
import requests
import pickle
import io
import os
from pydantic import BaseModel, Field
from typing import Optional, Dict, Any
from loguru import logger
class DockerServiceApiComModel(BaseModel):
client_command: Optional[str] = Field(default=None, title="Client command", description="The command to be executed on the client side")
client_file_attach: Optional[dict] = Field(default=None, title="Client file attach", description="The file to be attached to the client side")
server_message: Optional[Any] = Field(default=None, title="Server standard error", description="The standard error from the server side")
server_std_err: Optional[str] = Field(default=None, title="Server standard error", description="The standard error from the server side")
server_std_out: Optional[str] = Field(default=None, title="Server standard output", description="The standard output from the server side")
server_file_attach: Optional[dict] = Field(default=None, title="Server file attach", description="The file to be attached to the server side")
def process_received(received: DockerServiceApiComModel, save_file_dir="./daas_output", output_manifest=None):
# Process the received data
if received.server_message:
try:
output_manifest['server_message'] += received.server_message
except:
output_manifest['server_message'] = received.server_message
if received.server_std_err:
output_manifest['server_std_err'] += received.server_std_err
if received.server_std_out:
output_manifest['server_std_out'] += received.server_std_out
if received.server_file_attach:
# print(f"Recv file attach: {received.server_file_attach}")
for file_name, file_content in received.server_file_attach.items():
new_fp = os.path.join(save_file_dir, file_name)
new_fp_dir = os.path.dirname(new_fp)
if not os.path.exists(new_fp_dir):
os.makedirs(new_fp_dir, exist_ok=True)
with open(new_fp, 'wb') as f:
f.write(file_content)
output_manifest['server_file_attach'].append(new_fp)
return output_manifest
def stream_daas(docker_service_api_com_model, server_url, save_file_dir):
# Prepare the file
# Pickle the object
pickled_data = pickle.dumps(docker_service_api_com_model)
# Create a file-like object from the pickled data
file_obj = io.BytesIO(pickled_data)
# Prepare the file for sending
files = {'file': ('docker_service_api_com_model.pkl', file_obj, 'application/octet-stream')}
# Send the POST request
response = requests.post(server_url, files=files, stream=True)
max_full_package_size = 1024 * 1024 * 1024 * 1 # 1 GB
received_output_manifest = {}
received_output_manifest['server_message'] = ""
received_output_manifest['server_std_err'] = ""
received_output_manifest['server_std_out'] = ""
received_output_manifest['server_file_attach'] = []
# Check if the request was successful
if response.status_code == 200:
# Process the streaming response
chunk_buf = None
for chunk in response.iter_content(max_full_package_size):
if chunk:
if chunk_buf is None: chunk_buf = chunk
else: chunk_buf += chunk
try:
received = pickle.loads(chunk_buf)
chunk_buf = None
received_output_manifest = process_received(received, save_file_dir, output_manifest = received_output_manifest)
yield received_output_manifest
except Exception as e:
# logger.error(f"pickle data was truncated, but don't worry, we will continue to receive the rest of the data.")
continue
else:
logger.error(f"Error: Received status code {response.status_code}, response.text: {response.text}")
return received_output_manifest
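The chunk_buf logic in stream_daas reassembles pickled messages that arrive split across HTTP chunks: bytes accumulate until pickle.loads succeeds, then the buffer resets. A stand-alone sketch of just that buffering trick (pickle should only ever be used with a trusted server):

```python
import pickle

def iter_pickled_messages(chunks):
    buf = b""
    for chunk in chunks:
        buf += chunk
        try:
            obj = pickle.loads(buf)   # raises while the message is still truncated
        except Exception:
            continue                  # wait for more bytes
        buf = b""
        yield obj

payload = pickle.dumps({"server_std_out": "hello"})
half = len(payload) // 2
chunks = [payload[:half], payload[half:], pickle.dumps({"server_std_out": "world"})]
for msg in iter_pickled_messages(chunks):
    print(msg)
```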

View File

@@ -51,7 +51,7 @@ def validate_path_safety(path_or_url, user):
from toolbox import get_conf, default_user_name
from toolbox import FriendlyException
PATH_PRIVATE_UPLOAD, PATH_LOGGING = get_conf('PATH_PRIVATE_UPLOAD', 'PATH_LOGGING')
sensitive_path = None
sensitive_path = None # must not contain '/', i.e. must not be a multi-level path
path_or_url = os.path.relpath(path_or_url)
if path_or_url.startswith(PATH_LOGGING): # 日志文件(按用户划分)
sensitive_path = PATH_LOGGING
@@ -138,7 +138,9 @@ def start_app(app_block, CONCURRENT_COUNT, AUTHENTICATION, PORT, SSL_KEYFILE, SS
app_block.is_sagemaker = False
gradio_app = App.create_app(app_block)
for route in list(gradio_app.router.routes):
if route.path == "/proxy={url_path:path}":
gradio_app.router.routes.remove(route)
# --- --- replace gradio endpoint to forbid access to sensitive files --- ---
if len(AUTHENTICATION) > 0:
dependencies = []
@@ -154,9 +156,13 @@ def start_app(app_block, CONCURRENT_COUNT, AUTHENTICATION, PORT, SSL_KEYFILE, SS
@gradio_app.head("/file={path_or_url:path}", dependencies=dependencies)
@gradio_app.get("/file={path_or_url:path}", dependencies=dependencies)
async def file(path_or_url: str, request: fastapi.Request):
if len(AUTHENTICATION) > 0:
if not _authorize_user(path_or_url, request, gradio_app):
return "越权访问!"
if not _authorize_user(path_or_url, request, gradio_app):
return "越权访问!"
stripped = path_or_url.lstrip().lower()
if stripped.startswith("https://") or stripped.startswith("http://"):
return "账户密码授权模式下, 禁止链接!"
if '../' in stripped:
return "非法路径!"
return await endpoint(path_or_url, request)
from fastapi import Request, status
@@ -167,6 +173,26 @@ def start_app(app_block, CONCURRENT_COUNT, AUTHENTICATION, PORT, SSL_KEYFILE, SS
response.delete_cookie('access-token')
response.delete_cookie('access-token-unsecure')
return response
else:
dependencies = []
endpoint = None
for route in list(gradio_app.router.routes):
if route.path == "/file/{path:path}":
gradio_app.router.routes.remove(route)
if route.path == "/file={path_or_url:path}":
dependencies = route.dependencies
endpoint = route.endpoint
gradio_app.router.routes.remove(route)
@gradio_app.get("/file/{path:path}", dependencies=dependencies)
@gradio_app.head("/file={path_or_url:path}", dependencies=dependencies)
@gradio_app.get("/file={path_or_url:path}", dependencies=dependencies)
async def file(path_or_url: str, request: fastapi.Request):
stripped = path_or_url.lstrip().lower()
if stripped.startswith("https://") or stripped.startswith("http://"):
return "账户密码授权模式下, 禁止链接!"
if '../' in stripped:
return "非法路径!"
return await endpoint(path_or_url, request)
# --- --- enable TTS (text-to-speech) functionality --- ---
TTS_TYPE = get_conf("TTS_TYPE")
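A minimal sketch of the request filtering added to the /file endpoint above: URL-like paths and parent-directory traversal are rejected before the request is handed to the original gradio endpoint. is_path_allowed is an illustrative helper, not project code.

```python
def is_path_allowed(path_or_url: str) -> bool:
    stripped = path_or_url.lstrip().lower()
    if stripped.startswith("http://") or stripped.startswith("https://"):
        return False      # no remote links through the file endpoint
    if "../" in stripped:
        return False      # no path traversal
    return True

for p in ["gpt_log/a.png", "../etc/passwd", "https://evil.example/x"]:
    print(p, "->", "serve" if is_path_allowed(p) else "reject")
```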

View File

@@ -104,17 +104,27 @@ def extract_archive(file_path, dest_dir):
logger.info("Successfully extracted zip archive to {}".format(dest_dir))
elif file_extension in [".tar", ".gz", ".bz2"]:
with tarfile.open(file_path, "r:*") as tarobj:
# sanitize the extraction path, removing any unsafe elements
for member in tarobj.getmembers():
member_path = os.path.normpath(member.name)
full_path = os.path.join(dest_dir, member_path)
full_path = os.path.abspath(full_path)
if not full_path.startswith(os.path.abspath(dest_dir) + os.sep):
raise Exception(f"Attempted Path Traversal in {member.name}")
try:
with tarfile.open(file_path, "r:*") as tarobj:
# sanitize the extraction path, removing any unsafe elements
for member in tarobj.getmembers():
member_path = os.path.normpath(member.name)
full_path = os.path.join(dest_dir, member_path)
full_path = os.path.abspath(full_path)
if not full_path.startswith(os.path.abspath(dest_dir) + os.sep):
raise Exception(f"Attempted Path Traversal in {member.name}")
tarobj.extractall(path=dest_dir)
logger.info("Successfully extracted tar archive to {}".format(dest_dir))
tarobj.extractall(path=dest_dir)
logger.info("Successfully extracted tar archive to {}".format(dest_dir))
except tarfile.ReadError as e:
if file_extension == ".gz":
# some particularly odd projects ship a gz file that is not a tar at all, containing only a single tex file
import gzip
with gzip.open(file_path, 'rb') as f_in:
with open(os.path.join(dest_dir, 'main.tex'), 'wb') as f_out:
f_out.write(f_in.read())
else:
raise e
# the third-party library requires a prior pip install rarfile
# in addition, on Windows you must also install WinRAR and add it to the Path environment variable, e.g. "C:\Program Files\WinRAR", for this to work
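A hedged, self-contained sketch of the extraction flow above: every tar member is checked against a path-traversal guard before extractall, and a .gz file that turns out not to be a tar archive falls back to plain gzip decompression (written to main.tex, mirroring the diff).

```python
import gzip
import os
import tarfile

def extract_tar_or_gz(file_path: str, dest_dir: str):
    os.makedirs(dest_dir, exist_ok=True)
    try:
        with tarfile.open(file_path, "r:*") as tarobj:
            for member in tarobj.getmembers():
                # reject members that would escape dest_dir
                full_path = os.path.abspath(os.path.join(dest_dir, os.path.normpath(member.name)))
                if not full_path.startswith(os.path.abspath(dest_dir) + os.sep):
                    raise Exception(f"Attempted Path Traversal in {member.name}")
            tarobj.extractall(path=dest_dir)
    except tarfile.ReadError:
        if file_path.endswith(".gz"):
            # bare gzip holding a single .tex file (seen in some arXiv sources)
            with gzip.open(file_path, "rb") as f_in, \
                 open(os.path.join(dest_dir, "main.tex"), "wb") as f_out:
                f_out.write(f_in.read())
        else:
            raise
```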

View File

@@ -4,7 +4,6 @@ from functools import wraps, lru_cache
from shared_utils.advanced_markdown_format import format_io
from shared_utils.config_loader import get_conf as get_conf
pj = os.path.join
default_user_name = 'default_user'
@@ -12,10 +11,13 @@ default_user_name = 'default_user'
openai_regex = re.compile(
r"sk-[a-zA-Z0-9_-]{48}$|" +
r"sk-[a-zA-Z0-9_-]{92}$|" +
r"sk-proj-[a-zA-Z0-9_-]{48}$|"+
r"sk-proj-[a-zA-Z0-9_-]{124}$|"+
r"sk-proj-[a-zA-Z0-9_-]{48}$|" +
r"sk-proj-[a-zA-Z0-9_-]{124}$|" +
r"sk-proj-[a-zA-Z0-9_-]{156}$|" + #新版apikey位数不匹配故修改此正则表达式
r"sess-[a-zA-Z0-9]{40}$"
)
def is_openai_api_key(key):
CUSTOM_API_KEY_PATTERN = get_conf('CUSTOM_API_KEY_PATTERN')
if len(CUSTOM_API_KEY_PATTERN) != 0:
@@ -26,7 +28,7 @@ def is_openai_api_key(key):
def is_azure_api_key(key):
API_MATCH_AZURE = re.match(r"[a-zA-Z0-9]{32}$", key)
API_MATCH_AZURE = re.match(r"^[a-zA-Z0-9]{32}$|^[a-zA-Z0-9]{84}", key)
return bool(API_MATCH_AZURE)
@@ -34,16 +36,25 @@ def is_api2d_key(key):
API_MATCH_API2D = re.match(r"fk[a-zA-Z0-9]{6}-[a-zA-Z0-9]{32}$", key)
return bool(API_MATCH_API2D)
def is_openroute_api_key(key):
API_MATCH_OPENROUTE = re.match(r"sk-or-v1-[a-zA-Z0-9]{64}$", key)
return bool(API_MATCH_OPENROUTE)
def is_cohere_api_key(key):
API_MATCH_AZURE = re.match(r"[a-zA-Z0-9]{40}$", key)
return bool(API_MATCH_AZURE)
def is_any_api_key(key):
# key 一般只包含字母、数字、下划线、逗号、中划线
if not re.match(r"^[a-zA-Z0-9_\-,]+$", key):
# 如果配置了 CUSTOM_API_KEY_PATTERN再检查以下以免误杀
if CUSTOM_API_KEY_PATTERN := get_conf('CUSTOM_API_KEY_PATTERN'):
return bool(re.match(CUSTOM_API_KEY_PATTERN, key))
return False
if ',' in key:
keys = key.split(',')
for k in keys:
@@ -77,7 +88,8 @@ def select_api_key(keys, llm_model):
avail_key_list = []
key_list = keys.split(',')
if llm_model.startswith('gpt-') or llm_model.startswith('one-api-') or llm_model.startswith('o1-'):
if llm_model.startswith('gpt-') or llm_model.startswith('chatgpt-') or \
llm_model.startswith('one-api-') or llm_model == 'o1' or llm_model.startswith('o1-'):
for k in key_list:
if is_openai_api_key(k): avail_key_list.append(k)
@@ -100,7 +112,7 @@ def select_api_key(keys, llm_model):
if len(avail_key_list) == 0:
raise RuntimeError(f"您提供的api-key不满足要求不包含任何可用于{llm_model}的api-key。您可能选择了错误的模型或请求源左上角更换模型菜单中可切换openai,azure,claude,cohere等请求源")
api_key = random.choice(avail_key_list) # 随机负载均衡
api_key = random.choice(avail_key_list) # 随机负载均衡
return api_key
@@ -116,5 +128,5 @@ def select_api_key_for_embed_models(keys, llm_model):
if len(avail_key_list) == 0:
raise RuntimeError(f"您提供的api-key不满足要求不包含任何可用于{llm_model}的api-key。您可能选择了错误的模型或请求源。")
api_key = random.choice(avail_key_list) # 随机负载均衡
api_key = random.choice(avail_key_list) # 随机负载均衡
return api_key
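For reference, a toy sketch of the key-selection flow touched in this file: the comma-separated API_KEY string is split, each piece is matched against provider-specific patterns, and one usable key is chosen at random as a naive load balancer. The patterns below are simplified placeholders, not the project's full regex set.

```python
import random
import re

PATTERNS = {
    "openai": re.compile(r"^sk-[A-Za-z0-9_\-]{20,}$"),
    "azure": re.compile(r"^[A-Za-z0-9]{32}$"),
}

def pick_key(keys: str, provider: str) -> str:
    candidates = [k for k in keys.split(",") if PATTERNS[provider].match(k)]
    if not candidates:
        raise RuntimeError(f"no usable key for provider {provider}")
    return random.choice(candidates)   # naive random load balancing

print(pick_key("sk-" + "a" * 48 + "," + "b" * 32, "azure"))
```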

View File

@@ -0,0 +1,15 @@
"""
对项目中的各个插件进行测试。运行方法:直接运行 python tests/test_plugins.py
"""
import init_test
import os, sys
if __name__ == "__main__":
from experimental_mods.get_bilibili_resource import download_bilibili
download_bilibili("BV1LSSHYXEtv", only_audio=True, user_name="test")
# if __name__ == "__main__":
# from test_utils import plugin_test
# plugin_test(plugin='crazy_functions.VideoResource_GPT->视频任务', main_input="帮我找到《天文馆的猫》,歌手泠鸢")

View File

@@ -19,4 +19,8 @@ if __name__ == "__main__":
plugin_test = importlib.import_module('test_utils').plugin_test
plugin_test(plugin='crazy_functions.Latex_Function->Latex翻译中文并重新编译PDF', main_input="2203.01927")
# plugin_test(plugin='crazy_functions.Latex_Function->Latex翻译中文并重新编译PDF', main_input="2203.01927")
# plugin_test(plugin='crazy_functions.Latex_Function->Latex翻译中文并重新编译PDF', main_input="gpt_log/arxiv_cache/2203.01927/workfolder")
# plugin_test(plugin='crazy_functions.Latex_Function->Latex翻译中文并重新编译PDF', main_input="2410.05779")
plugin_test(plugin='crazy_functions.Latex_Function->Latex翻译中文并重新编译PDF', main_input="gpt_log/default_user/workfolder")

View File

@@ -20,7 +20,7 @@ Replace 'Tex/' with the actual directory path where your files are located befor
md = """
Following code including wrapper
```mermaid
```python:wrapper.py
graph TD
A[Enter Chart Definition] --> B(Preview)
B --> C{decide}
@@ -29,8 +29,45 @@ graph TD
E --> B
D --> F[Save Image and Code]
F --> B
```
<details>
<summary><b>My section header in bold</b></summary>
Any folded content here. It requires an empty line just above it.
</details>
"""
md ="""
在这种场景中,您希望机器 B 能够通过轮询机制来间接地“请求”机器 A而实际上机器 A 只能主动向机器 B 发出请求。这是一种典型的客户端-服务器轮询模式。下面是如何实现这种机制的详细步骤:
### 机器 B 的实现
1. **安装 FastAPI 和必要的依赖库**
```bash
pip install fastapi uvicorn
```
2. **创建 FastAPI 服务**
```python
from fastapi import FastAPI
from fastapi.responses import JSONResponse
from uuid import uuid4
from threading import Lock
import time
app = FastAPI()
# 字典用于存储请求和状态
requests = {}
process_lock = Lock()
"""
def validate_path():
import os, sys
@@ -43,10 +80,12 @@ def validate_path():
validate_path() # validate path so you can run from base directory
from toolbox import markdown_convertion
from shared_utils.advanced_markdown_format import markdown_convertion_for_file
with open("gpt_log/default_user/shared/2024-04-22-01-27-43.zip.extract/translated_markdown.md", "r", encoding="utf-8") as f:
md = f.read()
html = markdown_convertion_for_file(md)
# from shared_utils.advanced_markdown_format import markdown_convertion_for_file
from shared_utils.advanced_markdown_format import close_up_code_segment_during_stream
# with open("gpt_log/default_user/shared/2024-04-22-01-27-43.zip.extract/translated_markdown.md", "r", encoding="utf-8") as f:
# md = f.read()
md = close_up_code_segment_during_stream(md)
html = markdown_convertion(md)
# print(html)
with open("test.html", "w", encoding="utf-8") as f:
f.write(html)

tests/test_media.py Normal file
View File

@@ -0,0 +1,67 @@
"""
对项目中的各个插件进行测试。运行方法:直接运行 python tests/test_plugins.py
"""
import init_test
import os, sys
if __name__ == "__main__":
from test_utils import plugin_test
plugin_test(plugin='crazy_functions.VideoResource_GPT->多媒体任务', main_input="我想找一首歌里面有句歌词是“turn your face towards the sun”")
# plugin_test(plugin='crazy_functions.Internet_GPT->连接网络回答问题', main_input="谁是应急食品?")
# plugin_test(plugin='crazy_functions.函数动态生成->函数动态生成', main_input='交换图像的蓝色通道和红色通道', advanced_arg={"file_path_arg": "./build/ants.jpg"})
# plugin_test(plugin='crazy_functions.Latex_Function->Latex翻译中文并重新编译PDF', main_input="2307.07522")
# plugin_test(plugin='crazy_functions.PDF_Translate->批量翻译PDF文档', main_input='build/pdf/t1.pdf')
# plugin_test(
# plugin="crazy_functions.Latex_Function->Latex翻译中文并重新编译PDF",
# main_input="G:/SEAFILE_LOCAL/50503047/我的资料库/学位/paperlatex/aaai/Fu_8368_with_appendix",
# )
# plugin_test(plugin='crazy_functions.虚空终端->虚空终端', main_input='修改api-key为sk-jhoejriotherjep')
# plugin_test(plugin='crazy_functions.批量翻译PDF文档_NOUGAT->批量翻译PDF文档', main_input='crazy_functions/test_project/pdf_and_word/aaai.pdf')
# plugin_test(plugin='crazy_functions.虚空终端->虚空终端', main_input='调用插件对C:/Users/fuqingxu/Desktop/旧文件/gpt/chatgpt_academic/crazy_functions/latex_fns中的python文件进行解析')
# plugin_test(plugin='crazy_functions.命令行助手->命令行助手', main_input='查看当前的docker容器列表')
# plugin_test(plugin='crazy_functions.SourceCode_Analyse->解析一个Python项目', main_input="crazy_functions/test_project/python/dqn")
# plugin_test(plugin='crazy_functions.SourceCode_Analyse->解析一个C项目', main_input="crazy_functions/test_project/cpp/cppipc")
# plugin_test(plugin='crazy_functions.Latex_Project_Polish->Latex英文润色', main_input="crazy_functions/test_project/latex/attention")
# plugin_test(plugin='crazy_functions.Markdown_Translate->Markdown中译英', main_input="README.md")
# plugin_test(plugin='crazy_functions.PDF_Translate->批量翻译PDF文档', main_input='crazy_functions/test_project/pdf_and_word/aaai.pdf')
# plugin_test(plugin='crazy_functions.谷歌检索小助手->谷歌检索小助手', main_input="https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=auto+reinforcement+learning&btnG=")
# plugin_test(plugin='crazy_functions.总结word文档->总结word文档', main_input="crazy_functions/test_project/pdf_and_word")
# plugin_test(plugin='crazy_functions.下载arxiv论文翻译摘要->下载arxiv论文并翻译摘要', main_input="1812.10695")
# plugin_test(plugin='crazy_functions.联网的ChatGPT->连接网络回答问题', main_input="谁是应急食品?")
# plugin_test(plugin='crazy_functions.解析JupyterNotebook->解析ipynb文件', main_input="crazy_functions/test_samples")
# plugin_test(plugin='crazy_functions.数学动画生成manim->动画生成', main_input="A ball split into 2, and then split into 4, and finally split into 8.")
# for lang in ["English", "French", "Japanese", "Korean", "Russian", "Italian", "German", "Portuguese", "Arabic"]:
# plugin_test(plugin='crazy_functions.Markdown_Translate->Markdown翻译指定语言', main_input="README.md", advanced_arg={"advanced_arg": lang})
# plugin_test(plugin='crazy_functions.知识库文件注入->知识库文件注入', main_input="./")
# plugin_test(plugin='crazy_functions.知识库文件注入->读取知识库作答', main_input="What is the installation method")
# plugin_test(plugin='crazy_functions.知识库文件注入->读取知识库作答', main_input="远程云服务器部署?")
# plugin_test(plugin='crazy_functions.Latex_Function->Latex翻译中文并重新编译PDF', main_input="2210.03629")

View File

@@ -36,7 +36,7 @@ if __name__ == "__main__":
# plugin_test(plugin='crazy_functions.SourceCode_Analyse->解析一个C项目', main_input="crazy_functions/test_project/cpp/cppipc")
# plugin_test(plugin='crazy_functions.Latex全文润色->Latex英文润色', main_input="crazy_functions/test_project/latex/attention")
# plugin_test(plugin='crazy_functions.Latex_Project_Polish->Latex英文润色', main_input="crazy_functions/test_project/latex/attention")
# plugin_test(plugin='crazy_functions.Markdown_Translate->Markdown中译英', main_input="README.md")
@@ -65,8 +65,3 @@ if __name__ == "__main__":
# plugin_test(plugin='crazy_functions.Latex_Function->Latex翻译中文并重新编译PDF', main_input="2210.03629")
# advanced_arg = {"advanced_arg":"--llm_to_learn=gpt-3.5-turbo --prompt_prefix='根据下面的服装类型提示想象一个穿着者对这个人外貌、身处的环境、内心世界、人设进行描写。要求100字以内用第二人称。' --system_prompt=''" }
# plugin_test(plugin='crazy_functions.chatglm微调工具->微调数据集生成', main_input='build/dev.json', advanced_arg=advanced_arg)
# advanced_arg = {"advanced_arg":"--pre_seq_len=128 --learning_rate=2e-2 --num_gpus=1 --json_dataset='t_code.json' --ptuning_directory='/home/hmp/ChatGLM2-6B/ptuning' " }
# plugin_test(plugin='crazy_functions.chatglm微调工具->启动微调', main_input='build/dev.json', advanced_arg=advanced_arg)

tests/test_tts.py Normal file
View File

@@ -0,0 +1,33 @@
import edge_tts
import os
import httpx
from toolbox import get_conf
async def test_tts():
async with httpx.AsyncClient() as client:
try:
# Forward the request to the target service
import tempfile
import edge_tts
import wave
import uuid
from pydub import AudioSegment
voice = get_conf("EDGE_TTS_VOICE")
tts = edge_tts.Communicate(text="测试", voice=voice)
temp_folder = tempfile.gettempdir()
temp_file_name = str(uuid.uuid4().hex)
temp_file = os.path.join(temp_folder, f'{temp_file_name}.mp3')
await tts.save(temp_file)
try:
mp3_audio = AudioSegment.from_file(temp_file, format="mp3")
mp3_audio.export(temp_file, format="wav")
with open(temp_file, 'rb') as wav_file: t = wav_file.read()
except:
raise RuntimeError("ffmpeg未安装无法处理EdgeTTS音频。安装方法见`https://github.com/jiaaro/pydub#getting-ffmpeg-set-up`")
except httpx.RequestError as e:
raise RuntimeError(f"请求失败: {e}")
if __name__ == "__main__":
import asyncio
asyncio.run(test_tts())

View File

@@ -1,9 +1,16 @@
:root {
--gpt-academic-message-font-size: 15px;
}
.message {
font-size: var(--gpt-academic-message-font-size) !important;
}
#plugin_arg_menu {
transform: translate(-50%, -50%);
border: dashed;
}
/* hide remove all button */
.remove-all.svelte-aqlk7e.svelte-aqlk7e.svelte-aqlk7e {
visibility: hidden;
@@ -25,7 +32,6 @@
visibility: hidden;
}
/* height of the upload box */
.wrap.svelte-xwlu1w {
min-height: var(--size-32);
@@ -97,13 +103,9 @@
min-width: min(80px, 100%);
}
#cbs {
background-color: var(--block-background-fill) !important;
}
#cbs,
#cbsc {
background-color: var(--block-background-fill) !important;
background-color: rgba(var(--block-background-fill), 0.5) !important;
}
#interact-panel .form {
@@ -155,7 +157,7 @@
transform: translate(-50%, -50%);
flex-wrap: wrap;
justify-content: center;
transition: opacity 1s ease-in-out;
transition: opacity 0.6s ease-in-out;
opacity: 0;
}
.welcome-card-container.show {
@@ -207,6 +209,7 @@
.welcome-content {
text-wrap: balance;
height: 55px;
font-size: 13px;
display: flex;
align-items: center;
}
@@ -271,3 +274,40 @@
#gpt-submit-row #gpt-submit-dropdown > *:hover {
cursor: context-menu;
}
.tooltip.svelte-p2nen8.svelte-p2nen8 {
box-shadow: 10px 10px 15px rgba(0, 0, 0, 0.5);
left: 10px;
}
#tooltip .hidden {
/* display: none; */
opacity: 0;
transition: opacity 0.5s ease;
}
#tooltip .visible {
/* display: block; */
opacity: 1;
transition: opacity 0.5s ease;
}
#elem_fontsize,
#elem_top_p,
#elem_temperature,
#elem_max_length_sl,
#elem_prompt {
/* left and right 0, top 0, bottom 2px */
padding: 0 0 4px 0;
backdrop-filter: blur(10px);
background-color: rgba(var(--block-background-fill), 0.5);
}
#tooltip #cbs,
#tooltip #cbsc,
#tooltip .svelte-b6y5bg,
#tooltip .tabitem {
backdrop-filter: blur(10px);
background-color: rgba(var(--block-background-fill), 0.5);
}

View File

@@ -318,7 +318,7 @@ function addCopyButton(botElement, index, is_last_in_arr) {
}
});
if (enable_tts){
if (enable_tts) {
var audioButton = document.createElement('button');
audioButton.classList.add('audio-toggle-btn');
audioButton.innerHTML = audioIcon;
@@ -346,7 +346,7 @@ function addCopyButton(botElement, index, is_last_in_arr) {
var messageBtnColumn = document.createElement('div');
messageBtnColumn.classList.add('message-btn-row');
messageBtnColumn.appendChild(copyButton);
if (enable_tts){
if (enable_tts) {
messageBtnColumn.appendChild(audioButton);
}
botElement.appendChild(messageBtnColumn);
@@ -391,6 +391,9 @@ function chatbotContentChanged(attempt = 1, force = false) {
// Now pass both the message element and the is_last_in_arr boolean to addCopyButton
addCopyButton(message, index, is_last_in_arr);
// save_conversation_history
save_conversation_history_slow_down();
});
// gradioApp().querySelectorAll('#gpt-chatbot .message-wrap .message.bot').forEach(addCopyButton);
}, i === 0 ? 0 : 200);
@@ -747,10 +750,24 @@ function minor_ui_adjustment() {
var bar_btn_width = [];
// automatically hide toolbar buttons that overflow the available width
function auto_hide_toolbar() {
var qq = document.getElementById('tooltip');
var tab_nav = qq.getElementsByClassName('tab-nav');
// if chatbot hit upper page boarder, hide all
const elem_chatbot = document.getElementById('gpt-chatbot');
const chatbot_top = elem_chatbot.getBoundingClientRect().top;
var tooltip = document.getElementById('tooltip');
var tab_nav = tooltip.getElementsByClassName('tab-nav')[0];
// 20 px is roughly the height of one character
if (chatbot_top < 20) {
// tab_nav.style.display = 'none';
if (tab_nav.classList.contains('visible')) {tab_nav.classList.remove('visible');}
if (!tab_nav.classList.contains('hidden')) {tab_nav.classList.add('hidden');}
return;
}
if (tab_nav.classList.contains('hidden')) {tab_nav.classList.remove('hidden');}
if (!tab_nav.classList.contains('visible')) {tab_nav.classList.add('visible');}
// tab_nav.style.display = '';
if (tab_nav.length == 0) { return; }
var btn_list = tab_nav[0].getElementsByTagName('button')
var btn_list = tab_nav.getElementsByTagName('button')
if (btn_list.length == 0) { return; }
// 获取页面宽度
var page_width = document.documentElement.clientWidth;
@@ -854,8 +871,7 @@ function limit_scroll_position() {
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
function loadLive2D() {
if (document.querySelector(".waifu") )
{
if (document.querySelector(".waifu")) {
$('.waifu').show();
} else {
try {
@@ -922,12 +938,12 @@ function gpt_academic_gradio_saveload(
if (save_or_load === "load") {
let value = getCookie(cookie_key);
if (value) {
console.log('加载cookie', elem_id, value)
// console.log('加载cookie', elem_id, value)
push_data_to_gradio_component(value, elem_id, load_type);
}
else {
if (load_default) {
console.log('加载cookie的默认值', elem_id, load_default_value)
// console.log('加载cookie的默认值', elem_id, load_default_value)
push_data_to_gradio_component(load_default_value, elem_id, load_type);
}
}
@@ -937,113 +953,153 @@ function gpt_academic_gradio_saveload(
}
}
function generateUUID() {
// Generate a random number and convert it to a hexadecimal string
function randomHexDigit() {
return Math.floor((1 + Math.random()) * 0x10000).toString(16).slice(1);
}
// Construct the UUID using the randomHexDigit function
return (
randomHexDigit() + randomHexDigit() + '-' +
randomHexDigit() + '-' +
'4' + randomHexDigit().slice(0, 3) + '-' + // Version 4 UUID
((Math.floor(Math.random() * 4) + 8).toString(16)) + randomHexDigit().slice(0, 3) + '-' +
randomHexDigit() + randomHexDigit() + randomHexDigit()
);
}
function update_conversation_metadata() {
// Create a conversation UUID and timestamp
try {
const conversationId = generateUUID();
console.log('Create conversation ID:', conversationId);
const timestamp = new Date().toISOString();
const conversationMetaData = {
id: conversationId,
timestamp: timestamp
};
localStorage.setItem("conversation_metadata", JSON.stringify(conversationMetaData));
} catch (e) {
console.error('Error in updating conversation metadata:', e);
}
}
// Helper function to generate conversation preview
function generatePreview(conversation, timestamp, maxLength = 100) {
if (!conversation || conversation.length === 0) return "";
// Join all messages with dash separator
let preview = conversation.join("\n");
const readableDate = new Date(timestamp).toLocaleString();
preview = readableDate + "\n" + preview;
if (preview.length <= maxLength) return preview;
return preview.substring(0, maxLength) + "...";
}
async function save_conversation_history() {
let chatbot = await get_data_from_gradio_component('gpt-chatbot');
let history = await get_data_from_gradio_component('history-ng');
let conversation = {};
let conversation_metadata = localStorage.getItem("conversation_metadata");
try {
conversation_metadata = JSON.parse(conversation_metadata);
conversation = {
timestamp: conversation_metadata.timestamp,
id: conversation_metadata.id,
metadata: conversation_metadata,
conversation: chatbot,
history: history,
preview: generatePreview(JSON.parse(history), conversation_metadata.timestamp)
};
} catch (e) {
// console.error('Conversation metadata parse error, recreate conversation metadata');
update_conversation_metadata();
return;
}
// Get existing conversation history from local storage
let conversation_history = [];
try {
const stored = localStorage.getItem('conversation_history');
if (stored) {
conversation_history = JSON.parse(stored);
}
} catch (e) {
// console.error('Error reading conversation history from localStorage:', e);
}
// Find existing conversation with same ID
const existingIndex = conversation_history.findIndex(c => c.id === conversation.id);
if (existingIndex >= 0) {
// Update existing conversation
conversation_history[existingIndex] = conversation;
} else {
// Add new conversation
conversation_history.push(conversation);
}
// Sort conversations by timestamp, newest first
conversation_history.sort((a, b) => {
const timeA = new Date(a.timestamp).getTime();
const timeB = new Date(b.timestamp).getTime();
return timeB - timeA;
});
const max_chat_preserve = 10;
if (conversation_history.length >= max_chat_preserve + 1) {
toast_push('对话时间线记录已满,正在移除最早的对话记录。您也可以点击左侧的记录点进行手动清理。', 3000);
conversation_history = conversation_history.slice(0, max_chat_preserve);
}
// Save back to local storage
try {
localStorage.setItem('conversation_history', JSON.stringify(conversation_history));
const LOCAL_STORAGE_UPDATED = "gptac_conversation_history_updated";
window.dispatchEvent(
new CustomEvent(LOCAL_STORAGE_UPDATED, {
detail: conversation_history
})
);
} catch (e) {
console.error('Error saving conversation history to localStorage:', e);
}
}
save_conversation_history_slow_down = do_something_but_not_too_frequently(300, save_conversation_history);
function restore_chat_from_local_storage(event) {
let conversation = event.detail;
push_data_to_gradio_component(conversation.conversation, "gpt-chatbot", "obj");
push_data_to_gradio_component(conversation.history, "history-ng", "obj");
const conversationId = conversation.id;
const timestamp = conversation.timestamp;
const conversationData = {
id: conversationId,
timestamp: timestamp
};
localStorage.setItem("conversation_metadata", JSON.stringify(conversationData));
}
async function clear_conversation(a, b, c) {
await save_conversation_history();
update_conversation_metadata();
let stopButton = document.getElementById("elem_stop");
stopButton.click();
return reset_conversation(a, b);
}
function reset_conversation(a, b) {
// console.log("js_code_reset");
a = btoa(unescape(encodeURIComponent(JSON.stringify(a))));
setCookie("js_previous_chat_cookie", a, 1);
gen_restore_btn();
return [[], [], "已重置"];
}
// clear -> cache history into history_cache -> click restore -> restore_previous_chat() -> trigger elem_update_history -> read history_cache
function restore_previous_chat() {
console.log("restore_previous_chat");
let chat = getCookie("js_previous_chat_cookie");
chat = JSON.parse(decodeURIComponent(escape(atob(chat))));
push_data_to_gradio_component(chat, "gpt-chatbot", "obj");
document.querySelector("#elem_update_history").click(); // in order to call set_history_gr_state, and send history state to server
}
function gen_restore_btn() {
// 创建按钮元素
const button = document.createElement('div');
// const recvIcon = '<span><svg stroke="currentColor" fill="none" stroke-width="2" viewBox="0 0 24 24" stroke-linecap="round" stroke-linejoin="round" height=".8em" width=".8em" xmlns="http://www.w3.org/2000/svg"><polyline points="20 6 9 17 4 12"></polyline></svg></span>';
const rec_svg = '<svg t="1714361184567" style="transform:translate(1px, 2.5px)" class="icon" viewBox="0 0 1024 1024" version="1.1" xmlns="http://www.w3.org/2000/svg" p-id="4389" width="35" height="35"><path d="M320 512h384v64H320zM320 384h384v64H320zM320 640h192v64H320z" p-id="4390" fill="#ffffff"></path><path d="M863.7 544c-1.9 44-11.4 86.8-28.5 127.2-18.5 43.8-45.1 83.2-78.9 117-33.8 33.8-73.2 60.4-117 78.9C593.9 886.3 545.7 896 496 896s-97.9-9.7-143.2-28.9c-43.8-18.5-83.2-45.1-117-78.9-33.8-33.8-60.4-73.2-78.9-117C137.7 625.9 128 577.7 128 528s9.7-97.9 28.9-143.2c18.5-43.8 45.1-83.2 78.9-117s73.2-60.4 117-78.9C398.1 169.7 446.3 160 496 160s97.9 9.7 143.2 28.9c23.5 9.9 45.8 22.2 66.5 36.7l-119.7 20 9.9 59.4 161.6-27 59.4-9.9-9.9-59.4-27-161.5-59.4 9.9 19 114.2C670.3 123.8 586.4 96 496 96 257.4 96 64 289.4 64 528s193.4 432 432 432c233.2 0 423.3-184.8 431.7-416h-64z" p-id="4391" fill="#ffffff"></path></svg>'
const recvIcon = '<span>' + rec_svg + '</span>';
// 设置按钮的样式和属性
button.id = 'floatingButton';
button.className = 'glow';
button.style.textAlign = 'center';
button.style.position = 'fixed';
button.style.bottom = '10px';
button.style.left = '10px';
button.style.width = '50px';
button.style.height = '50px';
button.style.borderRadius = '50%';
button.style.backgroundColor = '#007bff';
button.style.color = 'white';
button.style.display = 'flex';
button.style.alignItems = 'center';
button.style.justifyContent = 'center';
button.style.cursor = 'pointer';
button.style.transition = 'all 0.3s ease';
button.style.boxShadow = '0 0 10px rgba(0,0,0,0.2)';
button.innerHTML = recvIcon;
// 添加发光动画的关键帧
const styleSheet = document.createElement('style');
styleSheet.id = 'floatingButtonStyle';
styleSheet.innerText = `
@keyframes glow {
from {
box-shadow: 0 0 10px rgba(0,0,0,0.2);
}
to {
box-shadow: 0 0 13px rgba(0,0,0,0.5);
}
}
#floatingButton.glow {
animation: glow 1s infinite alternate;
}
#floatingButton:hover {
transform: scale(1.2);
box-shadow: 0 0 20px rgba(0,0,0,0.4);
}
#floatingButton.disappearing {
animation: shrinkAndDisappear 0.5s forwards;
}
`;
// only add when not exist
if (!document.getElementById('recvButtonStyle'))
{
document.head.appendChild(styleSheet);
}
// 鼠标悬停和移开的事件监听器
button.addEventListener('mouseover', function () {
this.textContent = "还原\n对话";
});
button.addEventListener('mouseout', function () {
this.innerHTML = recvIcon;
});
// 点击事件监听器
button.addEventListener('click', function () {
// 添加一个类来触发缩小和消失的动画
restore_previous_chat();
this.classList.add('disappearing');
// 在动画结束后移除按钮
document.body.removeChild(this);
});
// only add when not exist
if (!document.getElementById('recvButton'))
{
document.body.appendChild(button);
}
// 将按钮添加到页面中
}
async function on_plugin_exe_complete(fn_name) {
console.log(fn_name);
// console.log(fn_name);
if (fn_name === "保存当前的对话") {
// get chat profile path
let chatbot = await get_data_from_gradio_component('gpt-chatbot');
@@ -1062,15 +1118,15 @@ async function on_plugin_exe_complete(fn_name) {
}
let href = get_href(may_have_chat_profile_info);
if (href) {
const cleanedHref = href.replace('file=', ''); // /home/fuqingxu/chatgpt_academic/gpt_log/default_user/chat_history/GPT-Academic对话存档2024-04-12-00-35-06.html
console.log(cleanedHref);
const cleanedHref = href.replace('file=', ''); // gpt_log/default_user/chat_history/GPT-Academic对话存档2024-04-12-00-35-06.html
// console.log(cleanedHref);
}
}
}
async function generate_menu(guiBase64String, btnName){
async function generate_menu(guiBase64String, btnName) {
// assign the button and menu data
push_data_to_gradio_component(guiBase64String, "invisible_current_pop_up_plugin_arg", "string");
push_data_to_gradio_component(btnName, "invisible_callback_btn_for_plugin_exe", "string");
@@ -1104,22 +1160,22 @@ async function generate_menu(guiBase64String, btnName){
///////////////////////////////////////////////////////////////////////////////////////////
//////////////////////////////////// Textbox ////////////////////////////////////
///////////////////////////////////////////////////////////////////////////////////////////
if (gui_args[key].type=='string'){ // PLUGIN_ARG_MENU
if (gui_args[key].type == 'string') { // PLUGIN_ARG_MENU
const component_name = "plugin_arg_txt_" + text_cnt;
push_data_to_gradio_component({
visible: true,
label: gui_args[key].title + "(" + gui_args[key].description + ")",
label: gui_args[key].title + "(" + gui_args[key].description + ")",
// label: gui_args[key].title,
placeholder: gui_args[key].description,
__type__: 'update'
}, component_name, "obj");
if (key === "main_input"){
if (key === "main_input") {
// 为了与旧插件兼容,生成菜单时,自动加载输入栏的值
let current_main_input = await get_data_from_gradio_component('user_input_main');
let current_main_input_2 = await get_data_from_gradio_component('user_input_float');
push_data_to_gradio_component(current_main_input + current_main_input_2, component_name, "obj");
}
else if (key === "advanced_arg"){
else if (key === "advanced_arg") {
// 为了与旧插件兼容,生成菜单时,自动加载旧高级参数输入区的值
let advance_arg_input_legacy = await get_data_from_gradio_component('advance_arg_input_legacy');
push_data_to_gradio_component(advance_arg_input_legacy, component_name, "obj");
@@ -1134,12 +1190,12 @@ async function generate_menu(guiBase64String, btnName){
///////////////////////////////////////////////////////////////////////////////////////////
//////////////////////////////////// Dropdown ////////////////////////////////////
///////////////////////////////////////////////////////////////////////////////////////////
if (gui_args[key].type=='dropdown'){ // PLUGIN_ARG_MENU
if (gui_args[key].type == 'dropdown') { // PLUGIN_ARG_MENU
const component_name = "plugin_arg_drop_" + dropdown_cnt;
push_data_to_gradio_component({
visible: true,
choices: gui_args[key].options,
label: gui_args[key].title + "(" + gui_args[key].description + ")",
label: gui_args[key].title + "(" + gui_args[key].description + ")",
// label: gui_args[key].title,
placeholder: gui_args[key].description,
__type__: 'update'
@@ -1154,7 +1210,7 @@ async function generate_menu(guiBase64String, btnName){
}
}
async function execute_current_pop_up_plugin(){
async function execute_current_pop_up_plugin() {
let guiBase64String = await get_data_from_gradio_component('invisible_current_pop_up_plugin_arg');
const stringData = atob(guiBase64String);
let guiJsonData = JSON.parse(stringData);
@@ -1170,8 +1226,8 @@ async function execute_current_pop_up_plugin(){
let text_cnt = 0;
for (const key in gui_args) {
if (gui_args.hasOwnProperty(key)) {
if (gui_args[key].type=='string'){ // PLUGIN_ARG_MENU
corrisponding_elem_id = "plugin_arg_txt_"+text_cnt
if (gui_args[key].type == 'string') { // PLUGIN_ARG_MENU
corrisponding_elem_id = "plugin_arg_txt_" + text_cnt
gui_args[key].user_confirmed_value = await get_data_from_gradio_component(corrisponding_elem_id);
text_cnt += 1;
}
@@ -1180,8 +1236,8 @@ async function execute_current_pop_up_plugin(){
let dropdown_cnt = 0;
for (const key in gui_args) {
if (gui_args.hasOwnProperty(key)) {
if (gui_args[key].type=='dropdown'){ // PLUGIN_ARG_MENU
corrisponding_elem_id = "plugin_arg_drop_"+dropdown_cnt
if (gui_args[key].type == 'dropdown') { // PLUGIN_ARG_MENU
corrisponding_elem_id = "plugin_arg_drop_" + dropdown_cnt
gui_args[key].user_confirmed_value = await get_data_from_gradio_component(corrisponding_elem_id);
dropdown_cnt += 1;
}
@@ -1200,29 +1256,29 @@ async function execute_current_pop_up_plugin(){
}
function hide_all_elem() {
// PLUGIN_ARG_MENU
for (text_cnt = 0; text_cnt < 8; text_cnt++) {
push_data_to_gradio_component({
visible: false,
label: "",
__type__: 'update'
}, "plugin_arg_txt_"+text_cnt, "obj");
document.getElementById("plugin_arg_txt_"+text_cnt).parentNode.parentNode.style.display = 'none';
}, "plugin_arg_txt_" + text_cnt, "obj");
document.getElementById("plugin_arg_txt_" + text_cnt).parentNode.parentNode.style.display = 'none';
}
for (dropdown_cnt = 0; dropdown_cnt < 8; dropdown_cnt++) {
push_data_to_gradio_component({
visible: false,
choices: [],
label: "",
__type__: 'update'
}, "plugin_arg_drop_"+dropdown_cnt, "obj");
document.getElementById("plugin_arg_drop_"+dropdown_cnt).parentNode.style.display = 'none';
}, "plugin_arg_drop_" + dropdown_cnt, "obj");
document.getElementById("plugin_arg_drop_" + dropdown_cnt).parentNode.style.display = 'none';
}
}
function close_current_pop_up_plugin() {
// PLUGIN_ARG_MENU
push_data_to_gradio_component({
visible: false,
__type__: 'update'
@@ -1233,15 +1289,13 @@ function close_current_pop_up_plugin(){
// 生成高级插件的选择菜单
plugin_init_info_lib = {}
function register_plugin_init(key, base64String) {
// console.log('x')
const stringData = atob(base64String);
let guiJsonData = JSON.parse(stringData);
if (key in plugin_init_info_lib) {
}
else {
plugin_init_info_lib[key] = {};
}
plugin_init_info_lib[key].info = guiJsonData.Info;
@@ -1251,28 +1305,26 @@ function register_plugin_init(key, base64String){
plugin_init_info_lib[key].enable_advanced_arg = guiJsonData.AdvancedArgs;
plugin_init_info_lib[key].arg_reminder = guiJsonData.ArgsReminder;
}
function register_advanced_plugin_init_code(key, code) {
if (key in plugin_init_info_lib) {
}
else {
plugin_init_info_lib[key] = {};
}
plugin_init_info_lib[key].secondary_menu_code = code;
}
function run_advanced_plugin_launch_code(key) {
// convert js code string to function
generate_menu(plugin_init_info_lib[key].secondary_menu_code, key);
}
function on_flex_button_click(key) {
if (plugin_init_info_lib.hasOwnProperty(key) && plugin_init_info_lib[key].hasOwnProperty('secondary_menu_code')) {
run_advanced_plugin_launch_code(key);
} else {
document.getElementById("old_callback_btn_for_plugin_exe").click();
}
}
async function run_dropdown_shift(dropdown) {
let key = dropdown;
push_data_to_gradio_component({
value: key,
@@ -1281,7 +1333,7 @@ async function run_dropdown_shift(dropdown){
__type__: 'update'
}, "elem_switchy_bt", "obj");
if (plugin_init_info_lib[key].enable_advanced_arg) {
push_data_to_gradio_component({
visible: true,
label: plugin_init_info_lib[key].label,
@@ -1303,9 +1355,9 @@ async function duplicate_in_new_window() {
window.open(url, '_blank');
}
async function run_classic_plugin_via_id(plugin_elem_id) {
for (key in plugin_init_info_lib) {
if (plugin_init_info_lib[key].elem_id == plugin_elem_id) {
// 获取按钮名称
let current_btn_name = await get_data_from_gradio_component(plugin_elem_id);
// 执行
@@ -1326,7 +1378,7 @@ async function call_plugin_via_name(current_btn_name) {
hide_all_elem();
// 为了与旧插件兼容,生成菜单时,自动加载旧高级参数输入区的值
let advance_arg_input_legacy = await get_data_from_gradio_component('advance_arg_input_legacy');
if (advance_arg_input_legacy.length != 0) {
gui_args["advanced_arg"] = {};
gui_args["advanced_arg"].user_confirmed_value = advance_arg_input_legacy;
}
@@ -1349,18 +1401,11 @@ async function multiplex_function_begin(multiplex_sel) {
click_real_submit_btn();
return;
}
if (multiplex_sel === "多模型对话") {
let _align_name_in_crazy_function_py = "询问多个GPT模型";
call_plugin_via_name(_align_name_in_crazy_function_py);
return;
}
if (multiplex_sel === "智能召回 RAG") {
let _align_name_in_crazy_function_py = "Rag智能召回";
call_plugin_via_name(_align_name_in_crazy_function_py);
return;
}
// do not delete `REPLACE_EXTENDED_MULTIPLEX_FUNCTIONS_HERE`! It will be read and replaced by Python code.
// REPLACE_EXTENDED_MULTIPLEX_FUNCTIONS_HERE
}
async function run_multiplex_shift(multiplex_sel) {
let key = multiplex_sel;
if (multiplex_sel === "常规对话") {
key = "提交";
@@ -1372,3 +1417,8 @@ async function run_multiplex_shift(multiplex_sel){
__type__: 'update'
}, "elem_submit_visible", "obj");
}
async function persistent_cookie_init(web_cookie_cache, cookie) {
return [localStorage.getItem('web_cookie_cache'), cookie];
}
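The new persistent_cookie_init above reads the customize-menu cache from localStorage instead of a browser cookie; the matching save path is wired through Gradio _js hooks in the floating-menu file further below. A minimal sketch of the round trip, for illustration only (not part of this diff):
// save side, invoked from the Gradio _js hook after the confirm button
localStorage.setItem("web_cookie_cache", web_cookie_cache);
// load side, invoked once on page init; returns null when nothing has been saved yet
const web_cookie_cache_loaded = localStorage.getItem("web_cookie_cache");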

View File

@@ -2,6 +2,25 @@ from functools import lru_cache
from toolbox import get_conf
CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf("CODE_HIGHLIGHT", "ADD_WAIFU", "LAYOUT")
def inject_mutex_button_code(js_content):
from crazy_functional import get_multiplex_button_functions
fns = get_multiplex_button_functions()
template = """
if (multiplex_sel === "{x}") {
let _align_name_in_crazy_function_py = "{y}";
call_plugin_via_name(_align_name_in_crazy_function_py);
return;
}
"""
replacement = ""
for fn in fns.keys():
if fn == "常规对话": continue
replacement += template.replace("{x}", fn).replace("{y}", fns[fn])
js_content = js_content.replace("// REPLACE_EXTENDED_MULTIPLEX_FUNCTIONS_HERE", replacement)
return js_content
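inject_mutex_button_code expands the template once per entry returned by get_multiplex_button_functions() and splices the result into common.js at the REPLACE_EXTENDED_MULTIPLEX_FUNCTIONS_HERE placeholder. For example, the two entries that were previously hardcoded in multiplex_function_begin (多模型对话 and 智能召回 RAG, removed earlier in this diff) would now be emitted as JavaScript of the form:
if (multiplex_sel === "多模型对话") {
    let _align_name_in_crazy_function_py = "询问多个GPT模型";
    call_plugin_via_name(_align_name_in_crazy_function_py);
    return;
}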
def minimize_js(common_js_path):
try:
import rjsmin, hashlib, glob, os
@@ -10,14 +29,16 @@ def minimize_js(common_js_path):
os.remove(old_min_js)
# use rjsmin to minimize `common_js_path`
c_jsmin = rjsmin.jsmin
with open(common_js_path, "r") as f:
with open(common_js_path, "r", encoding='utf-8') as f:
js_content = f.read()
if common_js_path == "themes/common.js":
js_content = inject_mutex_button_code(js_content)
minimized_js_content = c_jsmin(js_content)
# compute sha256 hash of minimized js content
sha_hash = hashlib.sha256(minimized_js_content.encode()).hexdigest()[:8]
minimized_js_path = common_js_path + '.min.' + sha_hash + '.js'
# save to minimized js file
with open(minimized_js_path, "w") as f:
with open(minimized_js_path, "w", encoding='utf-8') as f:
f.write(minimized_js_content)
# return minimized js file path
return minimized_js_path

File diff suppressed because it is too large

View File

@@ -10,6 +10,14 @@ theme_dir = os.path.dirname(__file__)
def adjust_theme():
try:
set_theme = gr.themes.Soft(
font=[
"Helvetica",
"Microsoft YaHei",
"ui-sans-serif",
"sans-serif",
"system-ui",
],
font_mono=["ui-monospace", "Consolas", "monospace"],
primary_hue=gr.themes.Color(
c50="#EBFAF2",
c100="#CFF3E1",

View File

@@ -1,7 +1,7 @@
import gradio as gr
def define_gui_floating_menu(customize_btns, functional, predefined_btns, cookies, web_cookie_cache):
with gr.Floating(init_x="20%", init_y="50%", visible=False, width="40%", drag="top") as area_input_secondary:
with gr.Floating(init_x="20%", init_y="50%", visible=False, width="40%", drag="top", elem_id="f_area_input_secondary") as area_input_secondary:
with gr.Accordion("浮动输入区", open=True, elem_id="input-panel2"):
with gr.Row() as row:
row.style(equal_height=True)
@@ -17,7 +17,7 @@ def define_gui_floating_menu(customize_btns, functional, predefined_btns, cookie
clearBtn2 = gr.Button("清除", elem_id="elem_clear2", variant="secondary", visible=False); clearBtn2.style(size="sm")
with gr.Floating(init_x="20%", init_y="50%", visible=False, width="40%", drag="top") as area_customize:
with gr.Floating(init_x="20%", init_y="50%", visible=False, width="40%", drag="top", elem_id="f_area_customize") as area_customize:
with gr.Accordion("自定义菜单", open=True, elem_id="edit-panel"):
with gr.Row() as row:
with gr.Column(scale=10):
@@ -35,9 +35,9 @@ def define_gui_floating_menu(customize_btns, functional, predefined_btns, cookie
# update btn
h = basic_fn_confirm.click(assign_btn, [web_cookie_cache, cookies, basic_btn_dropdown, basic_fn_title, basic_fn_prefix, basic_fn_suffix],
[web_cookie_cache, cookies, *customize_btns.values(), *predefined_btns.values()])
h.then(None, [web_cookie_cache], None, _js="""(web_cookie_cache)=>{setCookie("web_cookie_cache", web_cookie_cache, 365);}""")
h.then(None, [web_cookie_cache], None, _js="""(web_cookie_cache)=>{localStorage.setItem("web_cookie_cache", web_cookie_cache);}""")
# clean up btn
h2 = basic_fn_clean.click(assign_btn, [web_cookie_cache, cookies, basic_btn_dropdown, basic_fn_title, basic_fn_prefix, basic_fn_suffix, gr.State(True)],
[web_cookie_cache, cookies, *customize_btns.values(), *predefined_btns.values()])
h2.then(None, [web_cookie_cache], None, _js="""(web_cookie_cache)=>{setCookie("web_cookie_cache", web_cookie_cache, 365);}""")
h2.then(None, [web_cookie_cache], None, _js="""(web_cookie_cache)=>{localStorage.setItem("web_cookie_cache", web_cookie_cache);}""")
return area_input_secondary, txt2, area_customize, submitBtn2, resetBtn2, clearBtn2, stopBtn2

View File

@@ -1,6 +1,7 @@
import gradio as gr
from toolbox import get_conf
def define_gui_toolbar(AVAIL_LLM_MODELS, LLM_MODEL, INIT_SYS_PROMPT, THEME, AVAIL_THEMES, ADD_WAIFU, help_menu_description, js_code_for_toggle_darkmode):
def define_gui_toolbar(AVAIL_LLM_MODELS, LLM_MODEL, INIT_SYS_PROMPT, THEME, AVAIL_THEMES, AVAIL_FONTS, ADD_WAIFU, help_menu_description, js_code_for_toggle_darkmode):
with gr.Floating(init_x="0%", init_y="0%", visible=True, width=None, drag="forbidden", elem_id="tooltip"):
with gr.Row():
with gr.Tab("上传文件", elem_id="interact-panel"):
@@ -9,12 +10,12 @@ def define_gui_toolbar(AVAIL_LLM_MODELS, LLM_MODEL, INIT_SYS_PROMPT, THEME, AVAI
with gr.Tab("更换模型", elem_id="interact-panel"):
md_dropdown = gr.Dropdown(AVAIL_LLM_MODELS, value=LLM_MODEL, elem_id="elem_model_sel", label="更换LLM模型/请求源").style(container=False)
top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.01,interactive=True, label="Top-p (nucleus sampling)",)
top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.01,interactive=True, label="Top-p (nucleus sampling)", elem_id="elem_top_p")
temperature = gr.Slider(minimum=-0, maximum=2.0, value=1.0, step=0.01, interactive=True, label="Temperature", elem_id="elem_temperature")
max_length_sl = gr.Slider(minimum=256, maximum=1024*32, value=4096, step=128, interactive=True, label="Local LLM MaxLength",)
max_length_sl = gr.Slider(minimum=256, maximum=1024*32, value=4096, step=128, interactive=True, label="Local LLM MaxLength", elem_id="elem_max_length_sl")
system_prompt = gr.Textbox(show_label=True, lines=2, placeholder=f"System Prompt", label="System prompt", value=INIT_SYS_PROMPT, elem_id="elem_prompt")
temperature.change(None, inputs=[temperature], outputs=None,
_js="""(temperature)=>gpt_academic_gradio_saveload("save", "elem_prompt", "js_temperature_cookie", temperature)""")
_js="""(temperature)=>gpt_academic_gradio_saveload("save", "elem_temperature", "js_temperature_cookie", temperature)""")
system_prompt.change(None, inputs=[system_prompt], outputs=None,
_js="""(system_prompt)=>gpt_academic_gradio_saveload("save", "elem_prompt", "js_system_prompt_cookie", system_prompt)""")
md_dropdown.change(None, inputs=[md_dropdown], outputs=None,
@@ -22,6 +23,8 @@ def define_gui_toolbar(AVAIL_LLM_MODELS, LLM_MODEL, INIT_SYS_PROMPT, THEME, AVAI
with gr.Tab("界面外观", elem_id="interact-panel"):
theme_dropdown = gr.Dropdown(AVAIL_THEMES, value=THEME, label="更换UI主题").style(container=False)
fontfamily_dropdown = gr.Dropdown(AVAIL_FONTS, value=get_conf("FONT"), elem_id="elem_fontfamily", label="更换字体类型").style(container=False)
fontsize_slider = gr.Slider(minimum=5, maximum=25, value=15, step=1, interactive=True, label="字体大小(默认15)", elem_id="elem_fontsize")
checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "浮动输入区", "输入清除键", "插件参数区"], value=["基础功能区", "函数插件区"], label="显示/隐藏功能区", elem_id='cbs').style(container=False)
opt = ["自定义菜单"]
value=[]
@@ -31,7 +34,10 @@ def define_gui_toolbar(AVAIL_LLM_MODELS, LLM_MODEL, INIT_SYS_PROMPT, THEME, AVAI
dark_mode_btn.click(None, None, None, _js=js_code_for_toggle_darkmode)
open_new_tab = gr.Button("打开新对话", variant="secondary").style(size="sm")
open_new_tab.click(None, None, None, _js=f"""()=>duplicate_in_new_window()""")
fontfamily_dropdown.select(None, inputs=[fontfamily_dropdown], outputs=None,
_js="""(fontfamily)=>{gpt_academic_gradio_saveload("save", "elem_fontfamily", "js_fontfamily", fontfamily); gpt_academic_change_chatbot_font(fontfamily, null, null);}""")
fontsize_slider.change(None, inputs=[fontsize_slider], outputs=None,
_js="""(fontsize)=>{gpt_academic_gradio_saveload("save", "elem_fontsize", "js_fontsize", fontsize); gpt_academic_change_chatbot_font(null, fontsize, null);}""")
with gr.Tab("帮助", elem_id="interact-panel"):
gr.Markdown(help_menu_description)

View File

@@ -1,8 +1,141 @@
function remove_legacy_cookie() {
setCookie("web_cookie_cache", "", -1);
setCookie("js_previous_chat_cookie", "", -1);
setCookie("js_previous_history_cookie", "", -1);
}
function processFontFamily(fontfamily) {
// 检查是否包含括号
if (fontfamily.includes('(')) {
// 分割字符串
const parts = fontfamily.split('(');
const fontNamePart = parts[1].split(')')[0].trim(); // 获取括号内的部分
// 检查是否包含 @
if (fontNamePart.includes('@')) {
const [fontName, fontUrl] = fontNamePart.split('@').map(part => part.trim());
return { fontName, fontUrl };
} else {
return { fontName: fontNamePart, fontUrl: null };
}
} else {
return { fontName: fontfamily, fontUrl: null };
}
}
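processFontFamily accepts either a bare font name or a combined "label (FontName@URL)" form; the sketch below shows the values it would return for hypothetical inputs (font names and URL are assumptions, not part of this diff):
// processFontFamily("Microsoft YaHei")
//   => { fontName: "Microsoft YaHei", fontUrl: null }
// processFontFamily("霞鹜文楷 (LXGW WenKai@https://example.com/lxgw-wenkai.css)")
//   => { fontName: "LXGW WenKai", fontUrl: "https://example.com/lxgw-wenkai.css" }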
// 检查字体是否存在
function checkFontAvailability(fontfamily) {
return new Promise((resolve) => {
const canvas = document.createElement('canvas');
const context = canvas.getContext('2d');
// 设置两个不同的字体进行比较
const testText = 'abcdefghijklmnopqrstuvwxyz0123456789';
context.font = `16px ${fontfamily}, sans-serif`;
const widthWithFont = context.measureText(testText).width;
context.font = '16px sans-serif';
const widthWithFallback = context.measureText(testText).width;
// 如果宽度相同,说明字体不存在
resolve(widthWithFont !== widthWithFallback);
});
}
async function checkFontAvailabilityV2(fontfamily) {
fontName = fontfamily;
console.log('Checking font availability:', fontName);
if ('queryLocalFonts' in window) {
try {
const fonts = await window.queryLocalFonts();
const fontExists = fonts.some(font => font.family === fontName);
console.log(`Local Font "${fontName}" exists:`, fontExists);
return fontExists;
} catch (error) {
console.error('Error querying local fonts:', error);
return false;
}
} else {
console.error('queryLocalFonts is not supported in this browser.');
return false;
}
}
// 动态加载字体
function loadFont(fontfamily, fontUrl) {
return new Promise((resolve, reject) => {
// 使用 Google Fonts 或其他字体来源
const link = document.createElement('link');
link.rel = 'stylesheet';
link.href = fontUrl;
link.onload = () => {
toast_push(`字体 "${fontfamily}" 已成功加载`, 3000);
resolve();
};
link.onerror = (error) => {
reject(error);
};
document.head.appendChild(link);
});
}
function gpt_academic_change_chatbot_font(fontfamily, fontsize, fontcolor) {
const chatbot = document.querySelector('#gpt-chatbot');
// 检查元素是否存在
if (chatbot) {
if (fontfamily != null) {
// 更改字体
const result = processFontFamily(fontfamily);
if (result.fontName == "Theme-Default-Font") {
chatbot.style.fontFamily = result.fontName;
return;
}
// 检查字体是否存在
checkFontAvailability(result.fontName).then((isAvailable) => {
if (isAvailable) {
// 如果字体存在,直接应用
chatbot.style.fontFamily = result.fontName;
} else {
if (result.fontUrl == null) {
// toast_push('无法加载字体本地字体不存在且URL未提供', 3000);
// 直接把失效的字体放上去让系统自动fallback
chatbot.style.fontFamily = result.fontName;
return;
} else {
toast_push('正在下载字体', 3000);
// 如果字体不存在,尝试加载字体
loadFont(result.fontName, result.fontUrl).then(() => {
chatbot.style.fontFamily = result.fontName;
}).catch((error) => {
console.error(`无法加载字体 "${result.fontName}":`, error);
});
}
}
});
}
if (fontsize != null) {
// 修改字体大小
document.documentElement.style.setProperty(
'--gpt-academic-message-font-size',
`${fontsize}px`
);
}
if (fontcolor != null) {
// 更改字体颜色
chatbot.style.color = fontcolor;
}
} else {
console.error('#gpt-chatbot is missing');
}
}
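The toolbar (defined in a later file of this diff) wires the font dropdown and the size slider to gpt_academic_change_chatbot_font; a minimal usage sketch, with the literal argument values chosen only for illustration:
// change only the chat font family, leaving size and color untouched
gpt_academic_change_chatbot_font("Microsoft YaHei", null, null);
// change only the message font size (value in px)
gpt_academic_change_chatbot_font(null, 16, null);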
async function GptAcademicJavaScriptInit(dark, prompt, live2d, layout, tts) {
// 第一部分,布局初始化
remove_legacy_cookie();
audio_fn_init();
minor_ui_adjustment();
ButtonWithDropdown_init();
update_conversation_metadata();
window.addEventListener("gptac_restore_chat_from_local_storage", restore_chat_from_local_storage);
// 加载欢迎页面
const welcomeMessage = new WelcomeMessage();
@@ -36,7 +169,7 @@ async function GptAcademicJavaScriptInit(dark, prompt, live2d, layout, tts) {
}
// 自动朗读
if (tts != "DISABLE"){
if (tts != "DISABLE") {
enable_tts = true;
if (getCookie("js_auto_read_cookie")) {
auto_read_tts = getCookie("js_auto_read_cookie")
@@ -46,7 +179,11 @@ async function GptAcademicJavaScriptInit(dark, prompt, live2d, layout, tts) {
}
}
}
// 字体
gpt_academic_gradio_saveload("load", "elem_fontfamily", "js_fontfamily", null, "str");
gpt_academic_change_chatbot_font(getCookie("js_fontfamily"), null, null);
gpt_academic_gradio_saveload("load", "elem_fontsize", "js_fontsize", null, "str");
gpt_academic_change_chatbot_font(null, getCookie("js_fontsize"), null);
// SysPrompt 系统静默提示词
gpt_academic_gradio_saveload("load", "elem_prompt", "js_system_prompt_cookie", null, "str");
// Temperature 大模型温度参数
@@ -56,7 +193,7 @@ async function GptAcademicJavaScriptInit(dark, prompt, live2d, layout, tts) {
const cached_model = getCookie("js_md_dropdown_cookie");
var model_sel = await get_gradio_component("elem_model_sel");
// determine whether the cached model is in the choices
if (model_sel.props.choices.includes(cached_model)) {
// change dropdown
gpt_academic_gradio_saveload("load", "elem_model_sel", "js_md_dropdown_cookie", null, "str");
// 连锁修改chatbot的label

View File

@@ -87,21 +87,6 @@ js_code_for_toggle_darkmode = """() => {
}"""
js_code_for_persistent_cookie_init = """(web_cookie_cache, cookie) => {
return [getCookie("web_cookie_cache"), cookie];
}
"""
# 详见 themes/common.js
js_code_reset = """
(a,b,c)=>{
let stopButton = document.getElementById("elem_stop");
stopButton.click();
return reset_conversation(a,b);
}
"""
js_code_clear = """
(a,b)=>{
return ["", ""];

View File

@@ -84,8 +84,9 @@ class WelcomeMessage {
this.max_welcome_card_num = 6;
this.card_array = [];
this.static_welcome_message_previous = [];
this.reflesh_time_interval = 15 * 1000;
this.update_time_interval = 2 * 1000;
this.major_title = "欢迎使用GPT-Academic";
const reflesh_render_status = () => {
for (let index = 0; index < this.card_array.length; index++) {
@@ -96,16 +97,31 @@ class WelcomeMessage {
};
const pageFocusHandler = new PageFocusHandler();
pageFocusHandler.addFocusCallback(reflesh_render_status);
// call this.update when the page size changes
window.addEventListener('resize', this.update.bind(this));
// add a loop to reflesh cards
this.startRefleshCards();
this.startAutoUpdate();
}
begin_render() {
this.update();
}
async startAutoUpdate() {
// sleep certain time
await new Promise(r => setTimeout(r, this.update_time_interval));
this.update();
}
async startRefleshCards() {
// sleep certain time
await new Promise(r => setTimeout(r, this.reflesh_time_interval));
// checkout visible status
if (this.visible) {
// if visible, then reflesh cards
await this.reflesh_cards();
setTimeout(() => {
this.startRefleshCards.call(this);
}, 1);
@@ -113,7 +129,7 @@ class WelcomeMessage {
}
async reflesh_cards() {
if (!this.visible) {
return;
}
@@ -126,6 +142,7 @@ class WelcomeMessage {
// combine two lists
this.static_welcome_message_previous = not_shown_previously.concat(already_shown_previously);
this.static_welcome_message_previous = this.static_welcome_message_previous.slice(0, this.max_welcome_card_num);
(async () => {
// 使用 for...of 循环来处理异步操作
@@ -142,8 +159,10 @@ class WelcomeMessage {
continue;
}
// 等待动画结束
card.addEventListener('transitionend', () => {
card.classList.add('hide');
const timeout = 100; // 与CSS中transition的时间保持一致(0.1s)
setTimeout(() => {
// 更新卡片信息
const message = this.static_welcome_message_previous[index];
const title = card.getElementsByClassName('welcome-card-title')[0];
@@ -155,16 +174,14 @@ class WelcomeMessage {
text.href = message.url;
content.textContent = message.content;
card.classList.remove('hide');
// 等待动画结束
card.addEventListener('transitionend', () => {
card.classList.remove('show');
}, { once: true });
card.classList.add('show');
const timeout = 100; // 与CSS中transition的时间保持一致(0.1s)
setTimeout(() => {
card.classList.remove('show');
}, timeout);
}, timeout);
}, { once: true });
card.classList.add('hide');
// 等待 200 毫秒
await new Promise(r => setTimeout(r, 200));
@@ -173,43 +190,55 @@ class WelcomeMessage {
}
shuffle(array) {
var currentIndex = array.length, randomIndex;
// While there remain elements to shuffle...
while (currentIndex != 0) {
// Pick a remaining element...
randomIndex = Math.floor(Math.random() * currentIndex);
currentIndex--;
// And swap it with the current element.
[array[currentIndex], array[randomIndex]] = [
array[randomIndex], array[currentIndex]];
}
return array;
}
async update() {
// console.log('update')
async can_display() {
// update the card visibility
const elem_chatbot = document.getElementById('gpt-chatbot');
const chatbot_top = elem_chatbot.getBoundingClientRect().top;
const welcome_card_container = document.getElementsByClassName('welcome-card-container')[0];
// detect if welcome card overflow
let welcome_card_overflow = false;
if (welcome_card_container) {
const welcome_card_top = welcome_card_container.getBoundingClientRect().top;
if (welcome_card_top < chatbot_top) {
welcome_card_overflow = true;
}
}
var page_width = document.documentElement.clientWidth;
const width_to_hide_welcome = 1200;
if (!await this.isChatbotEmpty() || page_width < width_to_hide_welcome) {
if (this.visible) {
this.removeWelcome();
this.visible = false;
this.card_array = [];
this.static_welcome_message_previous = [];
}
if (!await this.isChatbotEmpty() || page_width < width_to_hide_welcome || welcome_card_overflow) {
// cannot display
return false;
}
return true;
}
async update() {
const can_display = await this.can_display();
if (can_display && !this.visible) {
this.showWelcome();
return;
}
if (this.visible){
if (!can_display && this.visible) {
this.removeWelcome();
return;
}
// console.log("welcome");
this.showWelcome();
this.visible = true;
this.startRefleshCards();
}
showCard(message) {
@@ -220,28 +249,28 @@ class WelcomeMessage {
const title = document.createElement('div');
title.classList.add('welcome-card-title');
// 创建图标
const svg = document.createElement('img');
svg.classList.add('welcome-svg');
svg.src = message.svg;
svg.style.height = '30px';
title.appendChild(svg);
// 创建标题
const text = document.createElement('a');
text.textContent = message.title;
text.classList.add('welcome-title-text');
text.href = message.url;
text.target = "_blank";
title.appendChild(text)
// 创建内容
const content = document.createElement('div');
content.classList.add('welcome-content');
const content_c = document.createElement('div');
content_c.classList.add('welcome-content-c');
content_c.textContent = message.content;
content.appendChild(content_c);
// 将标题和内容添加到卡片 div 中
card.appendChild(title);
@@ -250,7 +279,7 @@ class WelcomeMessage {
}
async showWelcome() {
this.visible = true;
// 首先,找到想要添加子元素的父元素
const elem_chatbot = document.getElementById('gpt-chatbot');
@@ -261,7 +290,7 @@ class WelcomeMessage {
// 创建主标题
const major_title = document.createElement('div');
major_title.classList.add('welcome-title');
major_title.textContent = "欢迎使用GPT-Academic";
major_title.textContent = this.major_title;
welcome_card_container.appendChild(major_title)
// 创建卡片
@@ -276,6 +305,16 @@ class WelcomeMessage {
});
elem_chatbot.appendChild(welcome_card_container);
const can_display = await this.can_display();
if (!can_display) {
// undo
this.visible = false;
this.card_array = [];
this.static_welcome_message_previous = [];
elem_chatbot.removeChild(welcome_card_container);
await new Promise(r => setTimeout(r, this.update_time_interval / 2));
return;
}
// 添加显示动画
requestAnimationFrame(() => {
@@ -284,15 +323,24 @@ class WelcomeMessage {
}
async removeWelcome() {
this.visible = false;
// remove welcome-card-container
const elem_chatbot = document.getElementById('gpt-chatbot');
const welcome_card_container = document.getElementsByClassName('welcome-card-container')[0];
// begin hide animation
welcome_card_container.classList.add('hide');
// 等待动画结束后再移除元素
welcome_card_container.addEventListener('transitionend', () => {
elem_chatbot.removeChild(welcome_card_container);
this.card_array = [];
this.static_welcome_message_previous = [];
}, { once: true });
// add a fail safe timeout
const timeout = 600; // 与 CSS 中 transition 的时间保持一致(1s)
setTimeout(() => {
if (welcome_card_container.parentNode) {
elem_chatbot.removeChild(welcome_card_container);
}
}, timeout);
}
async isChatbotEmpty() {
@@ -307,28 +355,28 @@ class WelcomeMessage {
class PageFocusHandler {
constructor() {
this.hasReturned = false;
this.focusCallbacks = [];
// Bind the focus and blur event handlers
window.addEventListener('visibilitychange', this.handleFocus.bind(this));
}
// Method to handle the focus event
handleFocus() {
if (this.hasReturned) {
this.focusCallbacks.forEach(callback => callback());
}
this.hasReturned = true;
}
// Method to add a custom callback function
addFocusCallback(callback) {
if (typeof callback === 'function') {
this.focusCallbacks.push(callback);
} else {
throw new Error('Callback must be a function');
}
}
}
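A minimal usage sketch of PageFocusHandler, mirroring how WelcomeMessage registers its refresh callback above (the callback body here is illustrative):
const pageFocusHandler = new PageFocusHandler();
pageFocusHandler.addFocusCallback(() => {
    // invoked on visibility changes after the first one (see handleFocus above)
    console.log("page regained focus");
});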

View File

@@ -8,6 +8,7 @@ import base64
import gradio
import shutil
import glob
import json
import uuid
from loguru import logger
from functools import wraps
@@ -92,8 +93,9 @@ def ArgsGeneralWrapper(f):
"""
def decorated(request: gradio.Request, cookies:dict, max_length:int, llm_model:str,
txt:str, txt2:str, top_p:float, temperature:float, chatbot:list,
history:list, system_prompt:str, plugin_advanced_arg:dict, *args):
json_history:str, system_prompt:str, plugin_advanced_arg:dict, *args):
txt_passon = txt
history = json.loads(json_history) if json_history else []
if txt == "" and txt2 != "": txt_passon = txt2
# 引入一个有cookie的chatbot
if request.username is not None:
@@ -148,10 +150,11 @@ def ArgsGeneralWrapper(f):
return decorated
def update_ui(chatbot:ChatBotWithCookies, history, msg="正常", **kwargs): # 刷新界面
def update_ui(chatbot:ChatBotWithCookies, history:list, msg:str="正常", **kwargs): # 刷新界面
"""
刷新用户界面
"""
assert isinstance(history, list), "history必须是一个list"
assert isinstance(
chatbot, ChatBotWithCookies
), "在传递chatbot的过程中不要将其丢弃。必要时, 可用clear将其清空, 然后用for+append循环重新赋值。"
@@ -175,10 +178,12 @@ def update_ui(chatbot:ChatBotWithCookies, history, msg="正常", **kwargs): #
else:
chatbot_gr = chatbot
yield cookies, chatbot_gr, history, msg
history = [str(history_item) for history_item in history] # ensure all items are string
json_history = json.dumps(history, ensure_ascii=False)
yield cookies, chatbot_gr, json_history, msg
def update_ui_lastest_msg(lastmsg:str, chatbot:ChatBotWithCookies, history:list, delay=1, msg="正常"): # 刷新界面
def update_ui_lastest_msg(lastmsg:str, chatbot:ChatBotWithCookies, history:list, delay:float=1, msg:str="正常"): # 刷新界面
"""
刷新用户界面
"""
@@ -494,6 +499,22 @@ def to_markdown_tabs(head: list, tabs: list, alignment=":---:", column=False, om
return tabs_list
def validate_file_size(files, max_size_mb=500):
"""
验证文件大小是否在允许范围内。
:param files: 文件的完整路径的列表
:param max_size_mb: 最大文件大小,单位为 MB,默认 500MB
:return: True 如果文件大小有效,否则抛出异常
"""
# 获取文件大小(字节)
total_size = 0
max_size_bytes = max_size_mb * 1024 * 1024
for file in files:
total_size += os.path.getsize(file.name)
if total_size > max_size_bytes:
raise ValueError(f"File size exceeds the allowed limit of {max_size_mb} MB. "
f"Current size: {total_size / (1024 * 1024):.2f} MB")
return True
def on_file_uploaded(
request: gradio.Request, files:List[str], chatbot:ChatBotWithCookies,
@@ -505,6 +526,7 @@ def on_file_uploaded(
if len(files) == 0:
return chatbot, txt
validate_file_size(files, max_size_mb=500)
# 创建工作路径
user_name = default_user_name if not request.username else request.username
time_tag = gen_time_str()

View File

@@ -1,5 +1,5 @@
{
"version": 3.90,
"version": 3.93,
"show_feature": true,
"new_feature": "增加RAG组件 <-> 升级多合一主提交键"
"new_feature": "支持deepseek-reason(r1) <-> 字体和字体大小自定义 <-> 优化前端并修复TTS的BUG <-> 添加时间线回溯功能 <-> 支持chatgpt-4o-latest <-> 增加RAG组件 <-> 升级多合一主提交键"
}