Compare commits

...

59 Commits

Author SHA1 Message Date
binary-husky
59076547c7 Update requirements.txt 2024-01-16 20:08:17 +08:00
binary-husky
5f18d4a1af Update README.md (#1477)
* Update README.md

* Update README.md
2024-01-16 02:14:08 +08:00
hongyi-zhao
2bc65a99ca Update bridge_all.py (#1472)
删除 "chatgpt_website" 函数,从而不再支持域基于逆向工程的方法的接口,该方法对应的实现项目为:https://github.com/acheong08/ChatGPT-to-API/。目前,该项目已被开发者 archived,且该方法由于其实现的原理,而不可能是稳健和完美的,因此不是可持续维护的。
2024-01-13 14:35:04 +08:00
binary-husky
0a2805513e better gui interaction (#1459) 2024-01-07 19:13:12 +08:00
binary-husky
c22867b74c Merge pull request #1458 from binary-husky/frontier
introduce Gemini & Format code
2024-01-07 16:24:33 +08:00
binary-husky
2abe665521 Merge branch 'master' into frontier 2024-01-05 16:12:41 +08:00
binary-husky
b0e6c4d365 change ui prompt 2024-01-05 16:11:30 +08:00
fzcqc
d883c7f34b fix: add backslash escapes to expected_words (#1442) 2024-01-03 19:57:10 +08:00
Menghuan1918
aba871342f Fix a variable error in the text-splitting function (#1443)
* Fix force_breakdown function parameter name

* Add handling for PDFs with lowercase starting paragraphs

* Change first lowercase word in meta_txt to uppercase
2024-01-03 19:49:17 +08:00
qingxu fu
37744a9cb1 jpeg type align for gemini 2023-12-31 20:28:39 +08:00
qingxu fu
480516380d re-format code with pre-commit 2023-12-31 19:30:32 +08:00
qingxu fu
60ba712131 use legacy image io for gemini 2023-12-31 19:02:40 +08:00
XIao
a7c960dcb0 Adapt to Google Gemini; optimize file extraction from user input (#1419)
2023-12-31 18:05:55 +08:00
binary-husky
a96f842b3a minor ui change 2023-12-30 02:57:42 +08:00
binary-husky
417ca91e23 minor css change 2023-12-30 00:55:52 +08:00
binary-husky
ef8fadfa18 fix ui element padding 2023-12-29 15:16:33 +08:00
binary-husky
865c4ca993 Update README.md 2023-12-26 22:51:56 +08:00
binary-husky
31304f481a remove console log 2023-12-25 22:57:09 +08:00
binary-husky
1bd3637d32 modify image gen plugin user interaction 2023-12-25 22:24:12 +08:00
binary-husky
160a683667 smart input panel swap 2023-12-25 22:05:14 +08:00
binary-husky
49ca03ca06 Merge branch 'master' into frontier 2023-12-25 21:43:33 +08:00
binary-husky
c625348ce1 smarter chatbot height adjustment 2023-12-25 21:26:24 +08:00
binary-husky
6d4a74893a Merge pull request #1415 from binary-husky/frontier
Merge Frontier Branch
2023-12-25 20:18:56 +08:00
qingxu fu
5c7499cada compat with some third party api 2023-12-25 17:21:35 +08:00
binary-husky
f522691529 Merge pull request #1398 from leike0813/frontier
Add support for the Tongyi Qianwen (Qwen) online model series & add plugins
2023-12-24 18:21:45 +08:00
binary-husky
ca85573ec1 Update README.md 2023-12-24 18:14:57 +08:00
binary-husky
2c7bba5c63 change dash scope api key check behavior 2023-12-23 21:35:42 +08:00
binary-husky
e22f0226d5 Merge branch 'master' into leike0813-frontier 2023-12-23 21:00:38 +08:00
binary-husky
0f250305b4 add urllib3 version limit 2023-12-23 20:59:32 +08:00
binary-husky
7606f5c130 name fix 2023-12-23 20:55:58 +08:00
binary-husky
4f0dcc431c Merge branch 'frontier' of https://github.com/leike0813/gpt_academic into leike0813-frontier 2023-12-23 20:42:43 +08:00
binary-husky
6ca0dd2f9e Merge pull request #1410 from binary-husky/frontier
fix spark image understanding api
2023-12-23 17:49:35 +08:00
binary-husky
e3e9921f6b correct the misuse of spark image understanding 2023-12-23 17:46:25 +08:00
binary-husky
867ddd355e adjust green theme layout 2023-12-22 21:59:18 +08:00
binary-husky
bb431db7d3 upgrade to version 3.64 2023-12-21 14:44:35 +08:00
binary-husky
43568b83e1 improve file upload notification 2023-12-21 14:39:58 +08:00
Keldos
2b90302851 feat: drag file to chatbot to upload (#1396)
* feat: drag-and-drop file upload

* show a loading spinner while a file is uploading

* fix: upload animation only played on the first upload

---------

Co-authored-by: 505030475 <qingxu.fu@outlook.com>
2023-12-21 10:24:11 +08:00
binary-husky
f7588d4776 avoid adding the same file multiple times
to the chatbot's files_to_promote list
2023-12-20 11:50:54 +08:00
binary-husky
a0bfa7ba1c improve long text breakdown performance 2023-12-20 11:50:54 +08:00
leike0813
c60a7452bf Improve the NOUGAT PDF plugin
Add an API version of the NOUGAT plugin
Add advanced-argument support to the NOUGAT plugin

Adapt to the new text breakdown function

Bug fixes
2023-12-20 08:57:27 +08:00
leike0813
68a49d3758 Add 2 plugins
This effectively splits the "批量总结PDF文档" (batch-summarize PDFs) plugin into two parts, so that a cheap model does the rough work and only the crucial final summary goes to GPT-4, reducing cost.
批量总结PDF文档_初步 (preliminary): produces a first-pass summary of each PDF, emitting one md file per PDF.
批量总结Markdown文档_进阶 (advanced): condenses and consolidates all the md files into a single md document; the output of the preliminary plugin can be fed in directly.
2023-12-20 07:44:53 +08:00
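A minimal sketch of the cost-saving split described above; summarize_with and both function names are hypothetical stand-ins, not identifiers from this repository:

```python
def summarize_with(model: str, text: str) -> str:
    # Placeholder for an LLM call; the real plugins route requests
    # through this project's request_gpt_model_* helpers.
    return f"[{model} summary of {len(text)} chars]"

def batch_summarize_pdfs_preliminary(pdf_texts: list[str]) -> list[str]:
    # Stage 1: rough per-PDF summaries with a cheap model, one md file per PDF.
    return [summarize_with("gpt-3.5-turbo", t) for t in pdf_texts]

def batch_summarize_markdown_advanced(md_summaries: list[str]) -> str:
    # Stage 2: hand only the final, high-value consolidation to GPT-4.
    return summarize_with("gpt-4", "\n\n".join(md_summaries))
```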
leike0813
ac3d4cf073 Add support for Aliyun Qwen online models.
Rename model tag "qwen" to "qwen-local"
Add model tags "qwen-turbo", "qwen-plus", "qwen-max"
Add corresponding model interfaces in request_llms/bridge_all.py
Add configuration variable "DASHSCOPE_API_KEY"
Rename request_llms/bridge_qwen.py to bridge_qwen_local.py to distinguish it from the online model interface
2023-12-20 07:37:26 +08:00
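Sketched from the config.py diff later on this page, this is how the renamed tags and the new key might sit together (the API key value is a placeholder):

```python
AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "qwen-turbo", "qwen-plus", "qwen-max"]  # online Qwen tags
DASHSCOPE_API_KEY = "sk-placeholder"  # Aliyun DashScope key for the online Qwen models
QWEN_LOCAL_MODEL_SELECTION = "Qwen/Qwen-1_8B-Chat-Int8"  # consulted only by the "qwen-local" tag
```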
binary-husky
9479dd984c avoid adding the same file multiple times
to the chatbot's files_to_promote list
2023-12-19 19:43:03 +08:00
binary-husky
3c271302cc improve long text breakdown performance 2023-12-19 19:30:44 +08:00
binary-husky
6e9936531d fix theme shifting bug 2023-12-17 19:45:37 +08:00
binary-husky
439147e4b7 re-arrange main.py 2023-12-17 15:55:15 +08:00
binary-husky
8d13821099 an LM-based story writing game 2023-12-15 23:27:12 +08:00
binary-husky
49fe06ed69 add light edge for audio btn 2023-12-15 21:12:39 +08:00
binary-husky
7882ce7304 Merge branch 'master' into frontier 2023-12-15 16:34:06 +08:00
binary-husky
dc68e601a5 optimize audio plugin 2023-12-15 16:28:42 +08:00
binary-husky
d169fb4b16 fix typo 2023-12-15 13:32:39 +08:00
binary-husky
36e19d5202 compat further with one api 2023-12-15 13:16:06 +08:00
binary-husky
c5f1e4e392 version 3.63 2023-12-15 13:03:52 +08:00
binary-husky
d3f7267a63 Merge branch 'master' into frontier 2023-12-15 12:58:05 +08:00
qingxu fu
f4127a9c9c change clip history policy 2023-12-15 12:52:21 +08:00
binary-husky
c181ad38b4 Update build-with-all-capacity-beta.yml 2023-12-14 12:23:49 +08:00
binary-husky
107944f5b7 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-12-14 11:01:02 +08:00
binary-husky
8c7569b689 Fix protobuf version error 2023-12-14 11:00:55 +08:00
binary-husky
fa374bf1fc try full dockerfile with vector store 2023-12-11 22:50:19 +08:00
93 changed files with 2460 additions and 909 deletions

View File

@@ -34,7 +34,7 @@ body:
- Others | 非最新版
validations:
required: true
- type: dropdown
id: os
attributes:
@@ -47,7 +47,7 @@ body:
- Docker
validations:
required: true
- type: textarea
id: describe
attributes:
@@ -55,7 +55,7 @@ body:
description: Describe the bug | 简述
validations:
required: true
- type: textarea
id: screenshot
attributes:
@@ -63,15 +63,9 @@ body:
description: Screen Shot | 有帮助的截图
validations:
required: true
- type: textarea
id: traceback
attributes:
label: Terminal Traceback & Material to Help Reproduce Bugs | 终端traceback如有 + 帮助我们复现的测试材料样本(如有)
description: Terminal Traceback & Material to Help Reproduce Bugs | 终端traceback如有 + 帮助我们复现的测试材料样本(如有)

View File

@@ -21,8 +21,3 @@ body:
attributes:
label: Feature Request | 功能请求
description: Feature Request | 功能请求

View File

@@ -0,0 +1,44 @@
# https://docs.github.com/en/actions/publishing-packages/publishing-docker-images#publishing-images-to-github-packages
name: build-with-all-capacity-beta
on:
push:
branches:
- 'master'
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}_with_all_capacity_beta
jobs:
build-and-push-image:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Log in to the Container registry
uses: docker/login-action@v2
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v4
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
- name: Build and push Docker image
uses: docker/build-push-action@v4
with:
context: .
push: true
file: docs/GithubAction+AllCapacityBeta
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}

View File

@@ -15,7 +15,7 @@ jobs:
permissions:
issues: write
pull-requests: read
steps:
- uses: actions/stale@v8
with:

.gitignore vendored
View File

@@ -152,3 +152,4 @@ request_llms/moss
media
flagged
request_llms/ChatGLM-6b-onnx-u8s8
.pre-commit-config.yaml

View File

@@ -1,8 +1,8 @@
> **Caution**
>
>
> 2023.11.12: 某些依赖包尚不兼容python 3.12推荐python 3.11。
>
> 2023.11.7: 安装依赖时,请选择`requirements.txt`中**指定的版本**。 安装命令:`pip install -r requirements.txt`。本项目开源免费,近期发现有人蔑视开源协议并利用本项目违规圈钱,请提高警惕,谨防上当受骗
>
> 2023.12.26: 安装依赖时,请选择`requirements.txt`中**指定的版本**。 安装命令:`pip install -r requirements.txt`。本项目完全开源免费,您可通过订阅[在线服务](https://github.com/binary-husky/gpt_academic/wiki/online)的方式鼓励本项目的发展
<br>
@@ -47,7 +47,7 @@ Read this in [English](docs/README.English.md) | [日本語](docs/README.Japanes
>
> 2.本项目中每个文件的功能都在[自译解报告](https://github.com/binary-husky/gpt_academic/wiki/GPTAcademic项目自译解报告)`self_analysis.md`详细说明。随着版本的迭代您也可以随时自行点击相关函数插件调用GPT重新生成项目的自我解析报告。常见问题请查阅wiki。
> [![常规安装方法](https://img.shields.io/static/v1?label=&message=常规安装方法&color=gray)](#installation) [![一键安装脚本](https://img.shields.io/static/v1?label=&message=一键安装脚本&color=gray)](https://github.com/binary-husky/gpt_academic/releases) [![配置说明](https://img.shields.io/static/v1?label=&message=配置说明&color=gray)](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明) [![wiki](https://img.shields.io/static/v1?label=&message=wiki&color=gray)]([https://github.com/binary-husky/gpt_academic/wiki/项目配置说明](https://github.com/binary-husky/gpt_academic/wiki))
>
>
> 3.本项目兼容并鼓励尝试国产大语言模型ChatGLM等。支持多个api-key共存可在配置文件中填写如`API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`。需要临时更换`API_KEY`时,在输入区输入临时的`API_KEY`然后回车键提交即可生效。
<br><br>
@@ -65,7 +65,7 @@ Read this in [English](docs/README.English.md) | [日本語](docs/README.Japanes
Latex全文[翻译](https://www.bilibili.com/video/BV1nk4y1Y7Js/)、[润色](https://www.bilibili.com/video/BV1FT411H7c5/) | [插件] 一键翻译或润色latex论文
批量注释生成 | [插件] 一键批量生成函数注释
Markdown[中英互译](https://www.bilibili.com/video/BV1yo4y157jV/) | [插件] 看到上面5种语言的[README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md)了吗?就是出自他的手笔
chat分析报告生成 | [插件] 运行后自动生成总结汇报
⭐支持mermaid图像渲染 | 支持让GPT生成[流程图](https://www.bilibili.com/video/BV18c41147H9/)、状态转移图、甘特图、饼状图、GitGraph等等3.7版本)
[PDF论文全文翻译功能](https://www.bilibili.com/video/BV1KT411x7Wn) | [插件] PDF论文提取题目&摘要+翻译全文(多线程)
[Arxiv小助手](https://www.bilibili.com/video/BV1LM4y1279X) | [插件] 输入arxiv文章url即可一键翻译摘要+下载PDF
Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼写纠错+输出对照PDF
@@ -111,7 +111,7 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
</div>
- 多种大语言模型混合调用ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4
- 多种大语言模型混合调用ChatGLM + OpenAI-GPT3.5 + GPT4
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
</div>
@@ -119,7 +119,7 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼
<br><br>
# Installation
### 安装方法I直接运行 (Windows, Linux or MacOS)
### 安装方法I直接运行 (Windows, Linux or MacOS)
1. 下载项目
@@ -156,7 +156,7 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼
```sh
# 【可选步骤I】支持清华ChatGLM2。清华ChatGLM备注如果遇到"Call ChatGLM fail 不能正常加载ChatGLM的参数" 错误,参考如下: 1以上默认安装的为torch+cpu版使用cuda需要卸载torch重新安装torch+cuda 2如因本机配置不够无法加载模型可以修改request_llm/bridge_chatglm.py中的模型精度, 将 AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) 都修改为 AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
python -m pip install -r request_llms/requirements_chatglm.txt
python -m pip install -r request_llms/requirements_chatglm.txt
# 【可选步骤II】支持复旦MOSS
python -m pip install -r request_llms/requirements_moss.txt
@@ -243,8 +243,8 @@ P.S. 如果需要依赖Latex的插件功能请见Wiki。另外您也可以
```python
"超级英译中": {
# 前缀,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等
"Prefix": "请翻译把下面一段内容成中文然后用一个markdown表格逐一解释文中出现的专有名词\n\n",
"Prefix": "请翻译把下面一段内容成中文然后用一个markdown表格逐一解释文中出现的专有名词\n\n",
# 后缀,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来。
"Suffix": "",
},
@@ -308,9 +308,9 @@ Tip不指定文件直接点击 `载入对话历史存档` 可以查看历史h
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/bc7ab234-ad90-48a0-8d62-f703d9e74665" width="500" >
</div>
8. OpenAI音频解析与总结
8. 基于mermaid的流图、脑图绘制
<div align="center">
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/709ccf95-3aee-498a-934a-e1c22d3d5d5b" width="500" >
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/c518b82f-bd53-46e2-baf5-ad1b081c1da4" width="500" >
</div>
9. Latex全文校对纠错
@@ -370,8 +370,8 @@ GPT Academic开发者QQ群`610599535`
1. `master` 分支: 主分支,稳定版
2. `frontier` 分支: 开发分支,测试版
3. 如何接入其他大模型:[接入其他大模型](request_llms/README.md)
3. 如何[接入其他大模型](request_llms/README.md)
4. 访问GPT-Academic的[在线服务并支持我们](https://github.com/binary-husky/gpt_academic/wiki/online)
### V参考与学习

View File

@@ -89,11 +89,14 @@ DEFAULT_FN_GROUPS = ['对话', '编程', '学术', '智能体']
LLM_MODEL = "gpt-3.5-turbo" # 可选 ↓↓↓
AVAIL_LLM_MODELS = ["gpt-3.5-turbo-1106","gpt-4-1106-preview","gpt-4-vision-preview",
"gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5",
"api2d-gpt-3.5-turbo", 'api2d-gpt-3.5-turbo-16k',
"gpt-4", "gpt-4-32k", "azure-gpt-4", "api2d-gpt-4",
"chatglm3", "moss", "claude-2"]
# P.S. 其他可用的模型还包括 ["zhipuai", "qianfan", "deepseekcoder", "llama2", "qwen", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-random"
# "spark", "sparkv2", "sparkv3", "chatglm_onnx", "claude-1-100k", "claude-2", "internlm", "jittorllms_pangualpha", "jittorllms_llama"]
"gemini-pro", "chatglm3", "moss", "claude-2"]
# P.S. 其他可用的模型还包括 [
# "qwen-turbo", "qwen-plus", "qwen-max"
# "zhipuai", "qianfan", "deepseekcoder", "llama2", "qwen-local", "gpt-3.5-turbo-0613",
# "gpt-3.5-turbo-16k-0613", "gpt-3.5-random", "api2d-gpt-3.5-turbo", 'api2d-gpt-3.5-turbo-16k',
# "spark", "sparkv2", "sparkv3", "chatglm_onnx", "claude-1-100k", "claude-2", "internlm", "jittorllms_pangualpha", "jittorllms_llama"
# ]
# 定义界面上“询问多个GPT模型”插件应该使用哪些模型请从AVAIL_LLM_MODELS中选择并在不同模型之间用`&`间隔,例如"gpt-3.5-turbo&chatglm3&azure-gpt-4"
@@ -103,7 +106,11 @@ MULTI_QUERY_LLM_MODELS = "gpt-3.5-turbo&chatglm3"
# 选择本地模型变体只有当AVAIL_LLM_MODELS包含了对应本地模型时才会起作用
# 如果你选择Qwen系列的模型那么请在下面的QWEN_MODEL_SELECTION中指定具体的模型
# 也可以是具体的模型路径
QWEN_MODEL_SELECTION = "Qwen/Qwen-1_8B-Chat-Int8"
QWEN_LOCAL_MODEL_SELECTION = "Qwen/Qwen-1_8B-Chat-Int8"
# 接入通义千问在线大模型 https://dashscope.console.aliyun.com/
DASHSCOPE_API_KEY = "" # 阿里灵积云API_KEY
# 百度千帆LLM_MODEL="qianfan"
@@ -199,6 +206,10 @@ ANTHROPIC_API_KEY = ""
CUSTOM_API_KEY_PATTERN = ""
# Google Gemini API-Key
GEMINI_API_KEY = ''
# HUGGINGFACE的TOKEN下载LLAMA时起作用 https://huggingface.co/docs/hub/security-tokens
HUGGINGFACE_ACCESS_TOKEN = "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"
@@ -284,6 +295,12 @@ NUM_CUSTOM_BASIC_BTN = 4
│ ├── ZHIPUAI_API_KEY
│ └── ZHIPUAI_MODEL
├── "qwen-turbo" 等通义千问大模型
│ └── DASHSCOPE_API_KEY
├── "Gemini"
│ └── GEMINI_API_KEY
└── "newbing" Newbing接口不再稳定不推荐使用
├── NEWBING_STYLE
└── NEWBING_COOKIES
@@ -300,7 +317,7 @@ NUM_CUSTOM_BASIC_BTN = 4
├── "jittorllms_pangualpha"
├── "jittorllms_llama"
├── "deepseekcoder"
├── "qwen"
├── "qwen-local"
├── RWKV的支持见Wiki
└── "llama2"

View File

@@ -345,7 +345,7 @@ def get_crazy_functions():
"Color": "stop",
"AsButton": False,
"AdvancedArgs": True, # 调用时唤起高级参数输入区默认False
"ArgsReminder": "支持任意数量的llm接口用&符号分隔。例如chatglm&gpt-3.5-turbo&api2d-gpt-4", # 高级参数输入区的显示提示
"ArgsReminder": "支持任意数量的llm接口用&符号分隔。例如chatglm&gpt-3.5-turbo&gpt-4", # 高级参数输入区的显示提示
"Function": HotReload(同时问询_指定模型)
},
})
@@ -356,7 +356,7 @@ def get_crazy_functions():
try:
from crazy_functions.图片生成 import 图片生成_DALLE2, 图片生成_DALLE3, 图片修改_DALLE2
function_plugins.update({
"图片生成_DALLE2 (先切换模型到openai或api2d": {
"图片生成_DALLE2 (先切换模型到gpt-*": {
"Group": "对话",
"Color": "stop",
"AsButton": False,
@@ -367,7 +367,7 @@ def get_crazy_functions():
},
})
function_plugins.update({
"图片生成_DALLE3 (先切换模型到openai或api2d": {
"图片生成_DALLE3 (先切换模型到gpt-*": {
"Group": "对话",
"Color": "stop",
"AsButton": False,
@@ -378,7 +378,7 @@ def get_crazy_functions():
},
})
function_plugins.update({
"图片修改_DALLE2 (先切换模型到openai或api2d": {
"图片修改_DALLE2 (先切换模型到gpt-*": {
"Group": "对话",
"Color": "stop",
"AsButton": False,
@@ -590,19 +590,19 @@ def get_crazy_functions():
print(trimmed_format_exc())
print('Load function plugin failed')
# try:
# from crazy_functions.互动小游戏 import 随机小游戏
# function_plugins.update({
# "随机小游戏": {
# "Group": "智能体",
# "Color": "stop",
# "AsButton": True,
# "Function": HotReload(随机小游戏)
# }
# })
# except:
# print(trimmed_format_exc())
# print('Load function plugin failed')
try:
from crazy_functions.互动小游戏 import 随机小游戏
function_plugins.update({
"随机互动小游戏(仅供测试)": {
"Group": "智能体",
"Color": "stop",
"AsButton": False,
"Function": HotReload(随机小游戏)
}
})
except:
print(trimmed_format_exc())
print('Load function plugin failed')
# try:
# from crazy_functions.chatglm微调工具 import 微调数据集生成

View File

@@ -26,8 +26,8 @@ class PaperFileGroup():
self.sp_file_index.append(index)
self.sp_file_tag.append(self.file_paths[index])
else:
from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit)
from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
segments = breakdown_text_to_satisfy_token_limit(file_content, max_token_limit)
for j, segment in enumerate(segments):
self.sp_file_contents.append(segment)
self.sp_file_index.append(index)

View File

@@ -26,8 +26,8 @@ class PaperFileGroup():
self.sp_file_index.append(index)
self.sp_file_tag.append(self.file_paths[index])
else:
from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit)
from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
segments = breakdown_text_to_satisfy_token_limit(file_content, max_token_limit)
for j, segment in enumerate(segments):
self.sp_file_contents.append(segment)
self.sp_file_index.append(index)

View File

@@ -139,6 +139,8 @@ def can_multi_process(llm):
if llm.startswith('gpt-'): return True
if llm.startswith('api2d-'): return True
if llm.startswith('azure-'): return True
if llm.startswith('spark'): return True
if llm.startswith('zhipuai'): return True
return False
def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
@@ -312,95 +314,6 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
return gpt_response_collection
def breakdown_txt_to_satisfy_token_limit(txt, get_token_fn, limit):
def cut(txt_tocut, must_break_at_empty_line): # 递归
if get_token_fn(txt_tocut) <= limit:
return [txt_tocut]
else:
lines = txt_tocut.split('\n')
estimated_line_cut = limit / get_token_fn(txt_tocut) * len(lines)
estimated_line_cut = int(estimated_line_cut)
for cnt in reversed(range(estimated_line_cut)):
if must_break_at_empty_line:
if lines[cnt] != "":
continue
print(cnt)
prev = "\n".join(lines[:cnt])
post = "\n".join(lines[cnt:])
if get_token_fn(prev) < limit:
break
if cnt == 0:
raise RuntimeError("存在一行极长的文本!")
# print(len(post))
# 列表递归接龙
result = [prev]
result.extend(cut(post, must_break_at_empty_line))
return result
try:
return cut(txt, must_break_at_empty_line=True)
except RuntimeError:
return cut(txt, must_break_at_empty_line=False)
def force_breakdown(txt, limit, get_token_fn):
"""
当无法用标点、空行分割时,我们用最暴力的方法切割
"""
for i in reversed(range(len(txt))):
if get_token_fn(txt[:i]) < limit:
return txt[:i], txt[i:]
return "Tiktoken未知错误", "Tiktoken未知错误"
def breakdown_txt_to_satisfy_token_limit_for_pdf(txt, get_token_fn, limit):
# 递归
def cut(txt_tocut, must_break_at_empty_line, break_anyway=False):
if get_token_fn(txt_tocut) <= limit:
return [txt_tocut]
else:
lines = txt_tocut.split('\n')
estimated_line_cut = limit / get_token_fn(txt_tocut) * len(lines)
estimated_line_cut = int(estimated_line_cut)
cnt = 0
for cnt in reversed(range(estimated_line_cut)):
if must_break_at_empty_line:
if lines[cnt] != "":
continue
prev = "\n".join(lines[:cnt])
post = "\n".join(lines[cnt:])
if get_token_fn(prev) < limit:
break
if cnt == 0:
if break_anyway:
prev, post = force_breakdown(txt_tocut, limit, get_token_fn)
else:
raise RuntimeError(f"存在一行极长的文本!{txt_tocut}")
# print(len(post))
# 列表递归接龙
result = [prev]
result.extend(cut(post, must_break_at_empty_line, break_anyway=break_anyway))
return result
try:
# 第1次尝试将双空行\n\n作为切分点
return cut(txt, must_break_at_empty_line=True)
except RuntimeError:
try:
# 第2次尝试将单空行\n作为切分点
return cut(txt, must_break_at_empty_line=False)
except RuntimeError:
try:
# 第3次尝试,将英文句号(.)作为切分点
res = cut(txt.replace('.', '。\n'), must_break_at_empty_line=False) # 这个中文的句号是故意的,作为一个标识而存在
return [r.replace('。\n', '.') for r in res]
except RuntimeError as e:
try:
# 第4次尝试,将中文句号(。)作为切分点
res = cut(txt.replace('。', '。。\n'), must_break_at_empty_line=False)
return [r.replace('。。\n', '。') for r in res]
except RuntimeError as e:
# 第5次尝试没办法了随便切一下敷衍吧
return cut(txt, must_break_at_empty_line=False, break_anyway=True)
def read_and_clean_pdf_text(fp):
"""
@@ -553,6 +466,9 @@ def read_and_clean_pdf_text(fp):
return True
else:
return False
# 对于某些PDF会有第一个段落就以小写字母开头,为了避免索引错误将其更改为大写
if starts_with_lowercase_word(meta_txt[0]):
meta_txt[0] = meta_txt[0].capitalize()
for _ in range(100):
for index, block_txt in enumerate(meta_txt):
if starts_with_lowercase_word(block_txt):
@@ -631,7 +547,6 @@ def get_files_from_everything(txt, type): # type='.md'
@Singleton
class nougat_interface():
def __init__(self):

View File

@@ -0,0 +1,42 @@
from toolbox import CatchException, update_ui, update_ui_lastest_msg
from crazy_functions.multi_stage.multi_stage_utils import GptAcademicGameBaseState
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from request_llms.bridge_all import predict_no_ui_long_connection
from crazy_functions.game_fns.game_utils import get_code_block, is_same_thing
import random
class MiniGame_ASCII_Art(GptAcademicGameBaseState):
def step(self, prompt, chatbot, history):
if self.step_cnt == 0:
chatbot.append(["我画你猜(动物)", "请稍等..."])
else:
if prompt.strip() == 'exit':
self.delete_game = True
yield from update_ui_lastest_msg(lastmsg=f"谜底是{self.obj},游戏结束。", chatbot=chatbot, history=history, delay=0.)
return
chatbot.append([prompt, ""])
yield from update_ui(chatbot=chatbot, history=history)
if self.step_cnt == 0:
self.lock_plugin(chatbot)
self.cur_task = 'draw'
if self.cur_task == 'draw':
avail_obj = ["","","","","老鼠",""]
self.obj = random.choice(avail_obj)
inputs = "I want to play a game called Guess the ASCII art. You can draw the ASCII art and I will try to guess it. " + \
f"This time you draw a {self.obj}. Note that you must not indicate what you have draw in the text, and you should only produce the ASCII art wrapped by ```. "
raw_res = predict_no_ui_long_connection(inputs=inputs, llm_kwargs=self.llm_kwargs, history=[], sys_prompt="")
self.cur_task = 'identify user guess'
res = get_code_block(raw_res)
history += ['', f'the answer is {self.obj}', inputs, res]
yield from update_ui_lastest_msg(lastmsg=res, chatbot=chatbot, history=history, delay=0.)
elif self.cur_task == 'identify user guess':
if is_same_thing(self.obj, prompt, self.llm_kwargs):
self.delete_game = True
yield from update_ui_lastest_msg(lastmsg="你猜对了!", chatbot=chatbot, history=history, delay=0.)
else:
self.cur_task = 'identify user guess'
yield from update_ui_lastest_msg(lastmsg="猜错了再试试输入“exit”获取答案。", chatbot=chatbot, history=history, delay=0.)

View File

@@ -0,0 +1,212 @@
prompts_hs = """ 请以“{headstart}”为开头,编写一个小说的第一幕。
- 尽量短,不要包含太多情节,因为你接下来将会与用户互动续写下面的情节,要留出足够的互动空间。
- 出现人物时,给出人物的名字。
- 积极地运用环境描写、人物描写等手法,让读者能够感受到你的故事世界。
- 积极地运用修辞手法,比如比喻、拟人、排比、对偶、夸张等等。
- 字数要求第一幕的字数少于300字且少于2个段落。
"""
prompts_interact = """ 小说的前文回顾:
{previously_on_story}
你是一个作家根据以上的情节给出4种不同的后续剧情发展方向每个发展方向都精明扼要地用一句话说明。稍后我将在这4个选择中挑选一种剧情发展。
输出格式例如:
1. 后续剧情发展1
2. 后续剧情发展2
3. 后续剧情发展3
4. 后续剧情发展4
"""
prompts_resume = """小说的前文回顾:
{previously_on_story}
你是一个作家,我们正在互相讨论,确定后续剧情的发展。
在以下的剧情发展中,
{choice}
我认为更合理的是:{user_choice}
请在前文的基础上(不要重复前文),围绕我选定的剧情情节,编写小说的下一幕。
- 禁止杜撰不符合我选择的剧情。
- 尽量短,不要包含太多情节,因为你接下来将会与用户互动续写下面的情节,要留出足够的互动空间。
- 不要重复前文。
- 出现人物时,给出人物的名字。
- 积极地运用环境描写、人物描写等手法,让读者能够感受到你的故事世界。
- 积极地运用修辞手法,比如比喻、拟人、排比、对偶、夸张等等。
- 小说的下一幕字数少于300字且少于2个段落。
"""
prompts_terminate = """小说的前文回顾:
{previously_on_story}
你是一个作家,我们正在互相讨论,确定后续剧情的发展。
现在,故事该结束了,我认为最合理的故事结局是:{user_choice}
请在前文的基础上(不要重复前文),编写小说的最后一幕。
- 不要重复前文。
- 出现人物时,给出人物的名字。
- 积极地运用环境描写、人物描写等手法,让读者能够感受到你的故事世界。
- 积极地运用修辞手法,比如比喻、拟人、排比、对偶、夸张等等。
- 字数要求最后一幕的字数少于1000字。
"""
from toolbox import CatchException, update_ui, update_ui_lastest_msg
from crazy_functions.multi_stage.multi_stage_utils import GptAcademicGameBaseState
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from request_llms.bridge_all import predict_no_ui_long_connection
from crazy_functions.game_fns.game_utils import get_code_block, is_same_thing
import random
class MiniGame_ResumeStory(GptAcademicGameBaseState):
story_headstart = [
'先行者知道,他现在是全宇宙中唯一的一个人了。',
'深夜一个年轻人穿过天安门广场向纪念堂走去。在二十二世纪编年史中计算机把他的代号定为M102。',
'他知道,这最后一课要提前讲了。又一阵剧痛从肝部袭来,几乎使他晕厥过去。',
'在距地球五万光年的远方,在银河系的中心,一场延续了两万年的星际战争已接近尾声。那里的太空中渐渐隐现出一个方形区域,仿佛灿烂的群星的背景被剪出一个方口。',
'伊依一行三人乘坐一艘游艇在南太平洋上做吟诗航行,他们的目的地是南极,如果几天后能顺利到达那里,他们将钻出地壳去看诗云。',
'很多人生来就会莫名其妙地迷上一样东西,仿佛他的出生就是要和这东西约会似的,正是这样,圆圆迷上了肥皂泡。'
]
def begin_game_step_0(self, prompt, chatbot, history):
# init game at step 0
self.headstart = random.choice(self.story_headstart)
self.story = []
chatbot.append(["互动写故事", f"这次的故事开头是:{self.headstart}"])
self.sys_prompt_ = '你是一个想象力丰富的杰出作家。正在与你的朋友互动一起写故事因此你每次写的故事段落应少于300字结局除外'
def generate_story_image(self, story_paragraph):
try:
from crazy_functions.图片生成 import gen_image
prompt_ = predict_no_ui_long_connection(inputs=story_paragraph, llm_kwargs=self.llm_kwargs, history=[], sys_prompt='你需要根据用户给出的小说段落进行简短的环境描写。要求80字以内。')
image_url, image_path = gen_image(self.llm_kwargs, prompt_, '512x512', model="dall-e-2", quality='standard', style='natural')
return f'<br/><div align="center"><img src="file={image_path}"></div>'
except:
return ''
def step(self, prompt, chatbot, history):
"""
首先,处理游戏初始化等特殊情况
"""
if self.step_cnt == 0:
self.begin_game_step_0(prompt, chatbot, history)
self.lock_plugin(chatbot)
self.cur_task = 'head_start'
else:
if prompt.strip() == 'exit' or prompt.strip() == '结束剧情':
# should we terminate game here?
self.delete_game = True
yield from update_ui_lastest_msg(lastmsg=f"游戏结束。", chatbot=chatbot, history=history, delay=0.)
return
if '剧情收尾' in prompt:
self.cur_task = 'story_terminate'
# # well, game resumes
# chatbot.append([prompt, ""])
# update ui, don't keep the user waiting
yield from update_ui(chatbot=chatbot, history=history)
"""
处理游戏的主体逻辑
"""
if self.cur_task == 'head_start':
"""
这是游戏的第一步
"""
inputs_ = prompts_hs.format(headstart=self.headstart)
history_ = []
story_paragraph = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs_, '故事开头', self.llm_kwargs,
chatbot, history_, self.sys_prompt_
)
self.story.append(story_paragraph)
# # 配图
yield from update_ui_lastest_msg(lastmsg=story_paragraph + '<br/>正在生成插图中 ...', chatbot=chatbot, history=history, delay=0.)
yield from update_ui_lastest_msg(lastmsg=story_paragraph + '<br/>'+ self.generate_story_image(story_paragraph), chatbot=chatbot, history=history, delay=0.)
# # 构建后续剧情引导
previously_on_story = ""
for s in self.story:
previously_on_story += s + '\n'
inputs_ = prompts_interact.format(previously_on_story=previously_on_story)
history_ = []
self.next_choices = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs_, '请在以下几种故事走向中,选择一种(当然,您也可以选择给出其他故事走向):', self.llm_kwargs,
chatbot,
history_,
self.sys_prompt_
)
self.cur_task = 'user_choice'
elif self.cur_task == 'user_choice':
"""
根据用户的提示,确定故事的下一步
"""
if '请在以下几种故事走向中,选择一种' in chatbot[-1][0]: chatbot.pop(-1)
previously_on_story = ""
for s in self.story:
previously_on_story += s + '\n'
inputs_ = prompts_resume.format(previously_on_story=previously_on_story, choice=self.next_choices, user_choice=prompt)
history_ = []
story_paragraph = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs_, f'下一段故事(您的选择是:{prompt})。', self.llm_kwargs,
chatbot, history_, self.sys_prompt_
)
self.story.append(story_paragraph)
# # 配图
yield from update_ui_lastest_msg(lastmsg=story_paragraph + '<br/>正在生成插图中 ...', chatbot=chatbot, history=history, delay=0.)
yield from update_ui_lastest_msg(lastmsg=story_paragraph + '<br/>'+ self.generate_story_image(story_paragraph), chatbot=chatbot, history=history, delay=0.)
# # 构建后续剧情引导
previously_on_story = ""
for s in self.story:
previously_on_story += s + '\n'
inputs_ = prompts_interact.format(previously_on_story=previously_on_story)
history_ = []
self.next_choices = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs_,
'请在以下几种故事走向中,选择一种。当然,您也可以给出您心中的其他故事走向。另外,如果您希望剧情立即收尾,请输入剧情走向,并以“剧情收尾”四个字提示程序。', self.llm_kwargs,
chatbot,
history_,
self.sys_prompt_
)
self.cur_task = 'user_choice'
elif self.cur_task == 'story_terminate':
"""
根据用户的提示,确定故事的结局
"""
previously_on_story = ""
for s in self.story:
previously_on_story += s + '\n'
inputs_ = prompts_terminate.format(previously_on_story=previously_on_story, user_choice=prompt)
history_ = []
story_paragraph = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs_, f'故事收尾(您的选择是:{prompt})。', self.llm_kwargs,
chatbot, history_, self.sys_prompt_
)
# # 配图
yield from update_ui_lastest_msg(lastmsg=story_paragraph + '<br/>正在生成插图中 ...', chatbot=chatbot, history=history, delay=0.)
yield from update_ui_lastest_msg(lastmsg=story_paragraph + '<br/>'+ self.generate_story_image(story_paragraph), chatbot=chatbot, history=history, delay=0.)
# terminate game
self.delete_game = True
return

View File

@@ -0,0 +1,37 @@
import platform
import pickle
import multiprocessing
def run_in_subprocess_wrapper_func(v_args):
func, args, kwargs, return_dict, exception_dict = pickle.loads(v_args)
import sys
try:
result = func(*args, **kwargs)
return_dict['result'] = result
except Exception as e:
exc_info = sys.exc_info()
exception_dict['exception'] = exc_info
def run_in_subprocess_with_timeout(func, timeout=60):
if platform.system() == 'Linux':
def wrapper(*args, **kwargs):
return_dict = multiprocessing.Manager().dict()
exception_dict = multiprocessing.Manager().dict()
v_args = pickle.dumps((func, args, kwargs, return_dict, exception_dict))
process = multiprocessing.Process(target=run_in_subprocess_wrapper_func, args=(v_args,))
process.start()
process.join(timeout)
if process.is_alive():
process.terminate()
raise TimeoutError(f'功能单元{str(func)}未能在规定时间内完成任务')
process.close()
if 'exception' in exception_dict:
# ooops, the subprocess ran into an exception
exc_info = exception_dict['exception']
raise exc_info[1].with_traceback(exc_info[2])
if 'result' in return_dict.keys():
# If the subprocess ran successfully, return the result
return return_dict['result']
return wrapper
else:
return func
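A toy usage sketch of the wrapper above; the tokenize function is an assumed example, not part of the diff, and on non-Linux platforms the wrapper returns the function unchanged:

```python
from crazy_functions.ipc_fns.mp import run_in_subprocess_with_timeout  # path per a later diff on this page

def tokenize(text):
    return text.split()  # stand-in for work that could hang indefinitely

safe_tokenize = run_in_subprocess_with_timeout(tokenize, timeout=5)
print(safe_tokenize("a b c"))  # ['a', 'b', 'c']; raises TimeoutError past 5 seconds
```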

View File

@@ -175,7 +175,6 @@ class LatexPaperFileGroup():
self.sp_file_contents = []
self.sp_file_index = []
self.sp_file_tag = []
# count_token
from request_llms.bridge_all import model_info
enc = model_info["gpt-3.5-turbo"]['tokenizer']
@@ -192,13 +191,12 @@ class LatexPaperFileGroup():
self.sp_file_index.append(index)
self.sp_file_tag.append(self.file_paths[index])
else:
from ..crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit)
from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
segments = breakdown_text_to_satisfy_token_limit(file_content, max_token_limit)
for j, segment in enumerate(segments):
self.sp_file_contents.append(segment)
self.sp_file_index.append(index)
self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.tex")
print('Segmentation: done')
def merge_result(self):
self.file_result = ["" for _ in range(len(self.file_paths))]
@@ -404,7 +402,7 @@ def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_f
result_pdf = pj(work_folder_modified, f'merge_diff.pdf') # get pdf path
promote_file_to_downloadzone(result_pdf, rename_file=None, chatbot=chatbot) # promote file to web UI
if modified_pdf_success:
yield from update_ui_lastest_msg(f'转化PDF编译已经成功, 即将退出 ...', chatbot, history) # 刷新Gradio前端界面
yield from update_ui_lastest_msg(f'转化PDF编译已经成功, 正在尝试生成对比PDF, 请稍候 ...', chatbot, history) # 刷新Gradio前端界面
result_pdf = pj(work_folder_modified, f'{main_file_modified}.pdf') # get pdf path
origin_pdf = pj(work_folder_original, f'{main_file_original}.pdf') # get pdf path
if os.path.exists(pj(work_folder, '..', 'translation')):

View File

@@ -250,8 +250,8 @@ def find_main_tex_file(file_manifest, mode):
else: # if len(canidates) >= 2 通过一些Latex模板中常见但通常不会出现在正文的单词对不同latex源文件扣分取评分最高者返回
canidates_score = []
# 给出一些判定模板文档的词作为扣分项
unexpected_words = ['\LaTeX', 'manuscript', 'Guidelines', 'font', 'citations', 'rejected', 'blind review', 'reviewers']
expected_words = ['\input', '\ref', '\cite']
unexpected_words = ['\\LaTeX', 'manuscript', 'Guidelines', 'font', 'citations', 'rejected', 'blind review', 'reviewers']
expected_words = ['\\input', '\\ref', '\\cite']
for texf in canidates:
canidates_score.append(0)
with open(texf, 'r', encoding='utf8', errors='ignore') as f:

View File

@@ -0,0 +1,125 @@
from crazy_functions.ipc_fns.mp import run_in_subprocess_with_timeout
def force_breakdown(txt, limit, get_token_fn):
""" 当无法用标点、空行分割时,我们用最暴力的方法切割
"""
for i in reversed(range(len(txt))):
if get_token_fn(txt[:i]) < limit:
return txt[:i], txt[i:]
return "Tiktoken未知错误", "Tiktoken未知错误"
def maintain_storage(remain_txt_to_cut, remain_txt_to_cut_storage):
""" 为了加速计算,我们采样一个特殊的手段。当 remain_txt_to_cut > `_max` 时, 我们把 _max 后的文字转存至 remain_txt_to_cut_storage
当 remain_txt_to_cut < `_min` 时,我们再把 remain_txt_to_cut_storage 中的部分文字取出
"""
_min = int(5e4)
_max = int(1e5)
# print(len(remain_txt_to_cut), len(remain_txt_to_cut_storage))
if len(remain_txt_to_cut) < _min and len(remain_txt_to_cut_storage) > 0:
remain_txt_to_cut = remain_txt_to_cut + remain_txt_to_cut_storage
remain_txt_to_cut_storage = ""
if len(remain_txt_to_cut) > _max:
remain_txt_to_cut_storage = remain_txt_to_cut[_max:] + remain_txt_to_cut_storage
remain_txt_to_cut = remain_txt_to_cut[:_max]
return remain_txt_to_cut, remain_txt_to_cut_storage
def cut(limit, get_token_fn, txt_tocut, must_break_at_empty_line, break_anyway=False):
""" 文本切分
"""
res = []
total_len = len(txt_tocut)
fin_len = 0
remain_txt_to_cut = txt_tocut
remain_txt_to_cut_storage = ""
# 为了加速计算,我们采样一个特殊的手段。当 remain_txt_to_cut > `_max` 时, 我们把 _max 后的文字转存至 remain_txt_to_cut_storage
remain_txt_to_cut, remain_txt_to_cut_storage = maintain_storage(remain_txt_to_cut, remain_txt_to_cut_storage)
while True:
if get_token_fn(remain_txt_to_cut) <= limit:
# 如果剩余文本的token数小于限制那么就不用切了
res.append(remain_txt_to_cut); fin_len+=len(remain_txt_to_cut)
break
else:
# 如果剩余文本的token数大于限制那么就切
lines = remain_txt_to_cut.split('\n')
# 估计一个切分点
estimated_line_cut = limit / get_token_fn(remain_txt_to_cut) * len(lines)
estimated_line_cut = int(estimated_line_cut)
# 开始查找合适切分点的偏移cnt
cnt = 0
for cnt in reversed(range(estimated_line_cut)):
if must_break_at_empty_line:
# 首先尝试用双空行(\n\n作为切分点
if lines[cnt] != "":
continue
prev = "\n".join(lines[:cnt])
post = "\n".join(lines[cnt:])
if get_token_fn(prev) < limit:
break
if cnt == 0:
# 如果没有找到合适的切分点
if break_anyway:
# 是否允许暴力切分
prev, post = force_breakdown(remain_txt_to_cut, limit, get_token_fn)
else:
# 不允许直接报错
raise RuntimeError(f"存在一行极长的文本!{remain_txt_to_cut}")
# 追加列表
res.append(prev); fin_len+=len(prev)
# 准备下一次迭代
remain_txt_to_cut = post
remain_txt_to_cut, remain_txt_to_cut_storage = maintain_storage(remain_txt_to_cut, remain_txt_to_cut_storage)
process = fin_len/total_len
print(f'正在文本切分 {int(process*100)}%')
if len(remain_txt_to_cut.strip()) == 0:
break
return res
def breakdown_text_to_satisfy_token_limit_(txt, limit, llm_model="gpt-3.5-turbo"):
""" 使用多种方式尝试切分文本,以满足 token 限制
"""
from request_llms.bridge_all import model_info
enc = model_info[llm_model]['tokenizer']
def get_token_fn(txt): return len(enc.encode(txt, disallowed_special=()))
try:
# 第1次尝试将双空行\n\n作为切分点
return cut(limit, get_token_fn, txt, must_break_at_empty_line=True)
except RuntimeError:
try:
# 第2次尝试将单空行\n作为切分点
return cut(limit, get_token_fn, txt, must_break_at_empty_line=False)
except RuntimeError:
try:
# 第3次尝试,将英文句号(.)作为切分点
res = cut(limit, get_token_fn, txt.replace('.', '。\n'), must_break_at_empty_line=False) # 这个中文的句号是故意的,作为一个标识而存在
return [r.replace('。\n', '.') for r in res]
except RuntimeError as e:
try:
# 第4次尝试,将中文句号(。)作为切分点
res = cut(limit, get_token_fn, txt.replace('。', '。。\n'), must_break_at_empty_line=False)
return [r.replace('。。\n', '。') for r in res]
except RuntimeError as e:
# 第5次尝试没办法了随便切一下吧
return cut(limit, get_token_fn, txt, must_break_at_empty_line=False, break_anyway=True)
breakdown_text_to_satisfy_token_limit = run_in_subprocess_with_timeout(breakdown_text_to_satisfy_token_limit_, timeout=60)
if __name__ == '__main__':
from crazy_functions.crazy_utils import read_and_clean_pdf_text
file_content, page_one = read_and_clean_pdf_text("build/assets/at.pdf")
from request_llms.bridge_all import model_info
for i in range(5):
file_content += file_content
print(len(file_content))
TOKEN_LIMIT_PER_FRAGMENT = 2500
res = breakdown_text_to_satisfy_token_limit(file_content, TOKEN_LIMIT_PER_FRAGMENT)

View File

@@ -74,7 +74,7 @@ def produce_report_markdown(gpt_response_collection, meta, paper_meta_info, chat
def translate_pdf(article_dict, llm_kwargs, chatbot, fp, generated_conclusion_files, TOKEN_LIMIT_PER_FRAGMENT, DST_LANG):
from crazy_functions.pdf_fns.report_gen_html import construct_html
from crazy_functions.crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
@@ -116,7 +116,7 @@ def translate_pdf(article_dict, llm_kwargs, chatbot, fp, generated_conclusion_fi
# find a smooth token limit to achieve even seperation
count = int(math.ceil(raw_token_num / TOKEN_LIMIT_PER_FRAGMENT))
token_limit_smooth = raw_token_num // count + count
return breakdown_txt_to_satisfy_token_limit_for_pdf(txt, get_token_fn=get_token_num, limit=token_limit_smooth)
return breakdown_text_to_satisfy_token_limit(txt, limit=token_limit_smooth, llm_model=llm_kwargs['llm_model'])
for section in article_dict.get('sections'):
if len(section['text']) == 0: continue

View File

@@ -3,47 +3,28 @@ from crazy_functions.multi_stage.multi_stage_utils import GptAcademicGameBaseSta
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from request_llms.bridge_all import predict_no_ui_long_connection
from crazy_functions.game_fns.game_utils import get_code_block, is_same_thing
import random
class MiniGame_ASCII_Art(GptAcademicGameBaseState):
def step(self, prompt, chatbot, history):
if self.step_cnt == 0:
chatbot.append(["我画你猜(动物)", "请稍等..."])
else:
if prompt.strip() == 'exit':
self.delete_game = True
yield from update_ui_lastest_msg(lastmsg=f"谜底是{self.obj},游戏结束。", chatbot=chatbot, history=history, delay=0.)
return
chatbot.append([prompt, ""])
yield from update_ui(chatbot=chatbot, history=history)
if self.step_cnt == 0:
self.lock_plugin(chatbot)
self.cur_task = 'draw'
if self.cur_task == 'draw':
avail_obj = ["","","","","老鼠",""]
self.obj = random.choice(avail_obj)
inputs = "I want to play a game called Guess the ASCII art. You can draw the ASCII art and I will try to guess it. " + f"This time you draw a {self.obj}. Note that you must not indicate what you have draw in the text, and you should only produce the ASCII art wrapped by ```. "
raw_res = predict_no_ui_long_connection(inputs=inputs, llm_kwargs=self.llm_kwargs, history=[], sys_prompt="")
self.cur_task = 'identify user guess'
res = get_code_block(raw_res)
history += ['', f'the answer is {self.obj}', inputs, res]
yield from update_ui_lastest_msg(lastmsg=res, chatbot=chatbot, history=history, delay=0.)
elif self.cur_task == 'identify user guess':
if is_same_thing(self.obj, prompt, self.llm_kwargs):
self.delete_game = True
yield from update_ui_lastest_msg(lastmsg="你猜对了!", chatbot=chatbot, history=history, delay=0.)
else:
self.cur_task = 'identify user guess'
yield from update_ui_lastest_msg(lastmsg="猜错了再试试输入“exit”获取答案。", chatbot=chatbot, history=history, delay=0.)
@CatchException
def 随机小游戏(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
from crazy_functions.game_fns.game_interactive_story import MiniGame_ResumeStory
# 清空历史
history = []
# 选择游戏
cls = MiniGame_ResumeStory
# 如果之前已经初始化了游戏实例,则继续该实例;否则重新初始化
state = cls.sync_state(chatbot,
llm_kwargs,
cls,
plugin_name='MiniGame_ResumeStory',
callback_fn='crazy_functions.互动小游戏->随机小游戏',
lock_plugin=True
)
yield from state.continue_game(prompt, chatbot, history)
@CatchException
def 随机小游戏1(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
from crazy_functions.game_fns.game_ascii_art import MiniGame_ASCII_Art
# 清空历史
history = []
# 选择游戏
@@ -53,7 +34,7 @@ def 随机小游戏(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_
llm_kwargs,
cls,
plugin_name='MiniGame_ASCII_Art',
callback_fn='crazy_functions.互动小游戏->随机小游戏',
callback_fn='crazy_functions.互动小游戏->随机小游戏1',
lock_plugin=True
)
yield from state.continue_game(prompt, chatbot, history)

View File

@@ -104,7 +104,11 @@ def 图片生成_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys
web_port 当前软件运行的端口号
"""
history = [] # 清空历史,以免输入溢出
chatbot.append(("您正在调用“图像生成”插件。", "[Local Message] 生成图像, 请先把模型切换至gpt-*或者api2d-*。如果中文Prompt效果不理想, 请尝试英文Prompt。正在处理中 ....."))
if prompt.strip() == "":
chatbot.append((prompt, "[Local Message] 图像生成提示为空白,请在“输入区”输入图像生成提示。"))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 界面更新
return
chatbot.append(("您正在调用“图像生成”插件。", "[Local Message] 生成图像, 请先把模型切换至gpt-*。如果中文Prompt效果不理想, 请尝试英文Prompt。正在处理中 ....."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 由于请求gpt需要一段时间,我们先及时地做一次界面更新
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
resolution = plugin_kwargs.get("advanced_arg", '1024x1024')
@@ -121,7 +125,11 @@ def 图片生成_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys
@CatchException
def 图片生成_DALLE3(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
history = [] # 清空历史,以免输入溢出
chatbot.append(("您正在调用“图像生成”插件。", "[Local Message] 生成图像, 请先把模型切换至gpt-*或者api2d-*。如果中文Prompt效果不理想, 请尝试英文Prompt。正在处理中 ....."))
if prompt.strip() == "":
chatbot.append((prompt, "[Local Message] 图像生成提示为空白,请在“输入区”输入图像生成提示。"))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 界面更新
return
chatbot.append(("您正在调用“图像生成”插件。", "[Local Message] 生成图像, 请先把模型切换至gpt-*。如果中文Prompt效果不理想, 请尝试英文Prompt。正在处理中 ....."))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 由于请求gpt需要一段时间,我们先及时地做一次界面更新
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
resolution_arg = plugin_kwargs.get("advanced_arg", '1024x1024-standard-vivid').lower()

View File

@@ -29,17 +29,12 @@ def 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot
except:
raise RuntimeError('请先将.doc文档转换为.docx文档。')
print(file_content)
# private_upload里面的文件名在解压zip后容易出现乱码rar和7z格式正常故可以只分析文章内容不输入文件名
from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
from request_llms.bridge_all import model_info
max_token = model_info[llm_kwargs['llm_model']]['max_token']
TOKEN_LIMIT_PER_FRAGMENT = max_token * 3 // 4
paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
txt=file_content,
get_token_fn=model_info[llm_kwargs['llm_model']]['token_cnt'],
limit=TOKEN_LIMIT_PER_FRAGMENT
)
paper_fragments = breakdown_text_to_satisfy_token_limit(txt=file_content, limit=TOKEN_LIMIT_PER_FRAGMENT, llm_model=llm_kwargs['llm_model'])
this_paper_history = []
for i, paper_frag in enumerate(paper_fragments):
i_say = f'请对下面的文章片段用中文做概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{paper_frag}```'

View File

@@ -28,8 +28,8 @@ class PaperFileGroup():
self.sp_file_index.append(index)
self.sp_file_tag.append(self.file_paths[index])
else:
from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit)
from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
segments = breakdown_text_to_satisfy_token_limit(file_content, max_token_limit)
for j, segment in enumerate(segments):
self.sp_file_contents.append(segment)
self.sp_file_index.append(index)

View File

@@ -20,14 +20,9 @@ def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot,
TOKEN_LIMIT_PER_FRAGMENT = 2500
from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
from request_llms.bridge_all import model_info
enc = model_info["gpt-3.5-turbo"]['tokenizer']
def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
txt=file_content, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT)
page_one_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
txt=str(page_one), get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT//4)
from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
paper_fragments = breakdown_text_to_satisfy_token_limit(txt=file_content, limit=TOKEN_LIMIT_PER_FRAGMENT, llm_model=llm_kwargs['llm_model'])
page_one_fragments = breakdown_text_to_satisfy_token_limit(txt=str(page_one), limit=TOKEN_LIMIT_PER_FRAGMENT//4, llm_model=llm_kwargs['llm_model'])
# 为了更好的效果我们剥离Introduction之后的部分如果有
paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]

View File

@@ -91,14 +91,9 @@ def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot,
page_one = str(page_one).encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
# 递归地切割PDF文件
from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
from request_llms.bridge_all import model_info
enc = model_info["gpt-3.5-turbo"]['tokenizer']
def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
txt=file_content, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT)
page_one_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
txt=page_one, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT//4)
from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
paper_fragments = breakdown_text_to_satisfy_token_limit(txt=file_content, limit=TOKEN_LIMIT_PER_FRAGMENT, llm_model=llm_kwargs['llm_model'])
page_one_fragments = breakdown_text_to_satisfy_token_limit(txt=page_one, limit=TOKEN_LIMIT_PER_FRAGMENT//4, llm_model=llm_kwargs['llm_model'])
# 为了更好的效果我们剥离Introduction之后的部分如果有
paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]

View File

@@ -18,14 +18,9 @@ def 解析PDF(file_name, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
TOKEN_LIMIT_PER_FRAGMENT = 2500
from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
from request_llms.bridge_all import model_info
enc = model_info["gpt-3.5-turbo"]['tokenizer']
def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
txt=file_content, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT)
page_one_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
txt=str(page_one), get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT//4)
from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
paper_fragments = breakdown_text_to_satisfy_token_limit(txt=file_content, limit=TOKEN_LIMIT_PER_FRAGMENT, llm_model=llm_kwargs['llm_model'])
page_one_fragments = breakdown_text_to_satisfy_token_limit(txt=str(page_one), limit=TOKEN_LIMIT_PER_FRAGMENT//4, llm_model=llm_kwargs['llm_model'])
# 为了更好的效果我们剥离Introduction之后的部分如果有
paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]
@@ -45,7 +40,7 @@ def 解析PDF(file_name, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
for i in range(n_fragment):
NUM_OF_WORD = MAX_WORD_TOTAL // n_fragment
i_say = f"Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {paper_fragments[i]}"
i_say_show_user = f"[{i+1}/{n_fragment}] Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {paper_fragments[i][:200]}"
i_say_show_user = f"[{i+1}/{n_fragment}] Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {paper_fragments[i][:200]} ...."
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user, # i_say=真正给chatgpt的提问 i_say_show_user=给用户看的提问
llm_kwargs, chatbot,
history=["The main idea of the previous section is?", last_iteration_result], # 迭代上一次的结果

View File

@@ -12,13 +12,6 @@ class PaperFileGroup():
self.sp_file_index = []
self.sp_file_tag = []
# count_token
from request_llms.bridge_all import model_info
enc = model_info["gpt-3.5-turbo"]['tokenizer']
def get_token_num(txt): return len(
enc.encode(txt, disallowed_special=()))
self.get_token_num = get_token_num
def run_file_split(self, max_token_limit=1900):
"""
将长文本分离开来
@@ -29,9 +22,8 @@ class PaperFileGroup():
self.sp_file_index.append(index)
self.sp_file_tag.append(self.file_paths[index])
else:
from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
segments = breakdown_txt_to_satisfy_token_limit_for_pdf(
file_content, self.get_token_num, max_token_limit)
from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
segments = breakdown_text_to_satisfy_token_limit(file_content, max_token_limit)
for j, segment in enumerate(segments):
self.sp_file_contents.append(segment)
self.sp_file_index.append(index)

View File

@@ -129,7 +129,7 @@ services:
runtime: nvidia
devices:
- /dev/nvidia0:/dev/nvidia0
# 与宿主的网络融合
network_mode: "host"
command: >
@@ -163,7 +163,7 @@ services:
runtime: nvidia
devices:
- /dev/nvidia0:/dev/nvidia0
# 与宿主的网络融合
network_mode: "host"
@@ -229,4 +229,3 @@ services:
# 不使用代理网络拉取最新代码
command: >
bash -c "python3 -u main.py"

View File

@@ -1,2 +1 @@
# 此Dockerfile不再维护请前往docs/GithubAction+ChatGLM+Moss

View File

@@ -1 +1 @@
# 此Dockerfile不再维护请前往docs/GithubAction+JittorLLMs
# 此Dockerfile不再维护请前往docs/GithubAction+JittorLLMs

View File

@@ -0,0 +1,53 @@
# docker build -t gpt-academic-all-capacity -f docs/GithubAction+AllCapacity --network=host --build-arg http_proxy=http://localhost:10881 --build-arg https_proxy=http://localhost:10881 .
# docker build -t gpt-academic-all-capacity -f docs/GithubAction+AllCapacityBeta --network=host .
# docker run -it --net=host gpt-academic-all-capacity bash
# 从NVIDIA源从而支持显卡检查宿主的nvidia-smi中的cuda版本必须>=11.3
FROM fuqingxu/11.3.1-runtime-ubuntu20.04-with-texlive:latest
# use python3 as the system default python
WORKDIR /gpt
RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.8
# # 非必要步骤更换pip源 (以下三行,可以删除)
# RUN echo '[global]' > /etc/pip.conf && \
# echo 'index-url = https://mirrors.aliyun.com/pypi/simple/' >> /etc/pip.conf && \
# echo 'trusted-host = mirrors.aliyun.com' >> /etc/pip.conf
# 下载pytorch
RUN python3 -m pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
# 准备pip依赖
RUN python3 -m pip install openai numpy arxiv rich
RUN python3 -m pip install colorama Markdown pygments pymupdf
RUN python3 -m pip install python-docx moviepy pdfminer
RUN python3 -m pip install zh_langchain==0.2.1 pypinyin
RUN python3 -m pip install rarfile py7zr
RUN python3 -m pip install aliyun-python-sdk-core==2.13.3 pyOpenSSL webrtcvad scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git
# 下载分支
WORKDIR /gpt
RUN git clone --depth=1 https://github.com/binary-husky/gpt_academic.git
WORKDIR /gpt/gpt_academic
RUN git clone --depth=1 https://github.com/OpenLMLab/MOSS.git request_llms/moss
RUN python3 -m pip install -r requirements.txt
RUN python3 -m pip install -r request_llms/requirements_moss.txt
RUN python3 -m pip install -r request_llms/requirements_qwen.txt
RUN python3 -m pip install -r request_llms/requirements_chatglm.txt
RUN python3 -m pip install -r request_llms/requirements_newbing.txt
RUN python3 -m pip install nougat-ocr
# 预热Tiktoken模块
RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
# 安装知识库插件的额外依赖
RUN apt-get update && apt-get install libgl1 -y
RUN pip3 install transformers protobuf langchain sentence-transformers faiss-cpu nltk beautifulsoup4 bitsandbytes tabulate icetk --upgrade
RUN pip3 install unstructured[all-docs] --upgrade
RUN python3 -c 'from check_proxy import warm_up_vectordb; warm_up_vectordb()'
RUN rm -rf /usr/local/lib/python3.8/dist-packages/tests
# COPY .cache /root/.cache
# COPY config_private.py config_private.py
# 启动
CMD ["python3", "-u", "main.py"]

View File

@@ -15,7 +15,7 @@ WORKDIR /gpt
RUN pip3 install openai numpy arxiv rich
RUN pip3 install colorama Markdown pygments pymupdf
RUN pip3 install python-docx pdfminer
RUN pip3 install python-docx pdfminer
RUN pip3 install nougat-ocr
# 装载项目文件

View File

@@ -17,10 +17,10 @@ RUN apt-get update && apt-get install libgl1 -y
RUN pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cpu
RUN pip3 install transformers protobuf langchain sentence-transformers faiss-cpu nltk beautifulsoup4 bitsandbytes tabulate icetk --upgrade
RUN pip3 install unstructured[all-docs] --upgrade
RUN python3 -c 'from check_proxy import warm_up_vectordb; warm_up_vectordb()'
# 可选步骤,用于预热模块
RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
RUN python3 -c 'from check_proxy import warm_up_vectordb; warm_up_vectordb()'
# 启动
CMD ["python3", "-u", "main.py"]

View File

@@ -2,9 +2,9 @@
> **Note**
>
> This README was translated by GPT (via a plugin of this project) and the translation may not be 100% reliable; please review the results carefully.
>
> 2023.11.7: When installing dependencies, please select the versions **specified** in `requirements.txt`. Installation command: `pip install -r requirements.txt`.
# <div align=center><img src="logo.png" width="40"> GPT Academic</div>
@@ -12,14 +12,14 @@
**If you like this project, please give it a Star. To translate this project into an arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).**
> **Note**
>
> 1. Please note that only the **highlighted** plugins (buttons) support reading files, and some plugins are in the **dropdown menu** of the plugin area. In addition, we welcome and prioritize PRs for any new plugin.
>
> 2. The functions of each file in this project are described in detail in the [self-analysis report `self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/GPTAcademic项目自译解报告). You can click a relevant function plugin at any time to call GPT and regenerate the project's self-analysis report. FAQ: [`wiki`](https://github.com/binary-husky/gpt_academic/wiki). [Standard installation method](#installation) | [One-click installation script](https://github.com/binary-husky/gpt_academic/releases) | [Configuration instructions](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明).
>
> 3. This project is compatible with, and encourages trying, Chinese large language models such as ChatGLM. Multiple API keys can coexist; provide them in the configuration file like `API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`. To switch `API_KEY` temporarily, enter the temporary `API_KEY` in the input area and press Enter to apply it.
<div align="center">
@@ -46,7 +46,7 @@
⭐AutoGen multi-agent plugin | [Function plugin] With Microsoft AutoGen, explore the possibilities of multi-agent emergence!
Dark theme switch | Switch to the dark theme by adding ```/?__theme=dark``` to the end of the browser URL
More LLM model support | Supports GPT3.5, GPT4, [Tsinghua ChatGLM2](https://github.com/THUDM/ChatGLM2-6B) and [Fudan MOSS](https://github.com/OpenLMLab/MOSS)
⭐ChatGLM2 fine-tuning | Supports loading a fine-tuned ChatGLM2 model and provides a ChatGLM2 fine-tuning assistant plugin
More LLM models, [HuggingFace deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) support | Adds the Newbing (new Bing) interface and the Jittorllms family supporting [LLaMA](https://github.com/facebookresearch/llama) and [盘古α](https://openi.org.cn/pangu/)
⭐void-terminal pip package | Call all function plugins of this project from Python without the GUI (under development)
⭐Void Terminal plugin | [Function plugin] Dispatch this project's other tasks directly in natural language
@@ -200,8 +200,8 @@ docker-compose up
```
"Super English-to-Arabic translation": {
# Prefix: added before your input, e.g., to describe your request such as translation, code explanation, polishing, etc.
"Prefix": "Please translate the following text into Arabic, then use a Markdown table to explain the technical terms mentioned in the text:\n\n",
# Suffix: added after your input, e.g., can be used with the prefix to wrap your input in quotation marks.
"Suffix": "",
},
@@ -341,4 +341,3 @@ https://github.com/oobabooga/one-click-installers
# More:
https://github.com/gradio-app/gradio
https://github.com/fghrsh/live2d_demo


@@ -18,11 +18,11 @@ To translate this project to arbitrary language with GPT, read and run [`multi_l
> 1.Please note that only plugins (buttons) highlighted in **bold** support reading files, and some plugins are located in the **dropdown menu** in the plugin area. Additionally, we welcome and process any new plugins with the **highest priority** through PRs.
>
> 2.The functionalities of each file in this project are described in detail in the [self-analysis report `self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/GPTAcademic项目自译解报告). As the version iterates, you can also click on the relevant function plugin at any time to call GPT to regenerate the project's self-analysis report. Common questions are in the [`wiki`](https://github.com/binary-husky/gpt_academic/wiki). [Regular installation method](#installation) | [One-click installation script](https://github.com/binary-husky/gpt_academic/releases) | [Configuration instructions](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明).
>
> 3.This project is compatible with and encourages the use of domestic large-scale language models such as ChatGLM. Multiple api-keys can be used together. You can fill in the configuration file with `API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"` to temporarily switch `API_KEY` during input, enter the temporary `API_KEY`, and then press enter to apply it.
<div align="center">
@@ -126,7 +126,7 @@ python -m pip install -r requirements.txt # This step is the same as the pip ins
【Optional Step】If you need to support THU ChatGLM2 or Fudan MOSS as backends, you need to install additional dependencies (Prerequisites: Familiar with Python + Familiar with Pytorch + Sufficient computer configuration):
```sh
# 【Optional Step I】Support THU ChatGLM2. Note: If you encounter the "Call ChatGLM fail unable to load ChatGLM parameters" error, refer to the following: 1. The default installation above is for torch+cpu version. To use cuda, uninstall torch and reinstall torch+cuda; 2. If the model cannot be loaded due to insufficient local configuration, you can modify the model accuracy in request_llm/bridge_chatglm.py. Change AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
python -m pip install -r request_llms/requirements_chatglm.txt
# 【Optional Step II】Support Fudan MOSS
python -m pip install -r request_llms/requirements_moss.txt
@@ -204,8 +204,8 @@ For example:
```
"Super Translation": {
# Prefix: will be added before your input. For example, used to describe your request, such as translation, code explanation, proofreading, etc.
"Prefix": "Please translate the following paragraph into Chinese and then explain each proprietary term in the text using a markdown table:\n\n",
"Prefix": "Please translate the following paragraph into Chinese and then explain each proprietary term in the text using a markdown table:\n\n",
# Suffix: will be added after your input. For example, used to wrap your input in quotation marks along with the prefix.
"Suffix": "",
},
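The `Prefix`/`Suffix` mechanism above is simple string composition around the user's input. A minimal sketch of the idea (illustrative only; `apply_custom_prompt` is a hypothetical name, not the project's actual internal function):

```python
def apply_custom_prompt(user_input: str, prefix: str, suffix: str) -> str:
    # The configured prefix is prepended and the suffix appended verbatim.
    return f"{prefix}{user_input}{suffix}"

# Using the "Super Translation" entry above:
prompt = apply_custom_prompt(
    "Gradient descent minimizes a loss function.",
    prefix="Please translate the following paragraph into Chinese and then explain each proprietary term in the text using a markdown table:\n\n",
    suffix="",
)
```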
@@ -355,4 +355,3 @@ https://github.com/oobabooga/one-click-installers
# More:
https://github.com/gradio-app/gradio
https://github.com/fghrsh/live2d_demo
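The precision downgrade described in the optional ChatGLM step of the README above, written out as code (a sketch following the note's instructions; it assumes the `transformers` package is installed):

```python
from transformers import AutoModel, AutoTokenizer

# Swap the full chatglm-6b checkpoint for the quantized int4 variant when the
# local machine cannot load the full-precision parameters, as the note suggests.
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
```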


@@ -2,9 +2,9 @@
> **Note**
>
> This README was translated by GPT (implemented by this project's plugin) and is not 100% reliable. Please review the translation results carefully.
>
> November 7, 2023: When installing dependencies, please select the **specified** versions in `requirements.txt`. Installation command: `pip install -r requirements.txt`.
@@ -12,7 +12,7 @@
**If you like this project, please give it a star; if you have invented useful shortcuts or plugins, feel free to send pull requests!**
If you like this project, please give it a star.
To translate this project into an arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).
> **Note**
@@ -22,7 +22,7 @@ Pour traduire ce projet dans une langue arbitraire avec GPT, lisez et exécutez
> 2. The functions of each file in this project are described in detail in [the self-analysis report `self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/GPTAcademic个项目自译解报告). You can also click the corresponding function plugins at any time to call GPT and generate a self-analysis report of the project. FAQ: [wiki](https://github.com/binary-husky/gpt_academic/wiki). [Standard installation method](#installation) | [One-click installation script](https://github.com/binary-husky/gpt_academic/releases) | [Configuration instructions](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明).
>
> 3. This project is compatible with, and recommends experimenting with, Chinese large language models such as ChatGLM. Multiple API keys are supported; you can fill them into the configuration file like `API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`. To change the API key temporarily, enter the temporary API key in the input area, then press Enter to submit and activate it.
<div align="center">
@@ -128,7 +128,7 @@ python -m pip install -r requirements.txt # This step is the same as the pip ins
[Optional Steps] If you need to support Tsinghua ChatGLM2/Fudan MOSS as backends, you need to install additional dependencies (Prerequisites: Familiar with Python + Have used PyTorch + Sufficient computer configuration):
```sh
# [Optional Step I] Support Tsinghua ChatGLM2. Comment on this note: If you encounter the error "Call ChatGLM generated an error and cannot load the parameters of ChatGLM", refer to the following: 1: The default installation is the torch+cpu version. To use cuda, you need to uninstall torch and reinstall torch+cuda; 2: If the model cannot be loaded due to insufficient computer configuration, you can modify the model precision in request_llm/bridge_chatglm.py. Change AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True).
python -m pip install -r request_llms/requirements_chatglm.txt
# [Optional Step II] Support Fudan MOSS
python -m pip install -r request_llms/requirements_moss.txt
@@ -201,7 +201,7 @@ Par exemple:
"Advanced English-to-French translation": {
# Prefix: added before your input. For example, used to describe your request, such as translation, code explanation, polishing, etc.
"Prefix": "Please translate the following content into French, then use a markdown table to explain each English-specific term used in the text: \n\n",
# Suffix: added after your input. For example, together with the prefix, it can wrap your input in quotation marks.
"Suffix": "",
},
@@ -354,4 +354,3 @@ https://github.com/oobabooga/one-click-installers
# More:
https://github.com/gradio-app/gradio
https://github.com/fghrsh/live2d_demo


@@ -2,9 +2,9 @@
> **Note**
>
> This README was translated using GPT (via this project's plugin) and is not 100% reliable. Please check the translation results carefully.
>
> November 7, 2023: When installing dependencies, please select only the versions **specified** in `requirements.txt`. Installation command: `pip install -r requirements.txt`.
@@ -12,19 +12,19 @@
**If you like this project, please give it a star. If you have developed handy shortcuts or plugins, pull requests are welcome!**
If you like this project, please give it a star.
To translate this project into an arbitrary language with GPT, read [`multi_language.py`](multi_language.py) (experimental).
> **Note**
>
> 1. Please note that only the **highlighted** plugins (buttons) can read files. Some plugins are in the **dropdown menu** of the plugin area. In addition, we welcome every new plugin PR with the **highest priority**.
>
> 2. The functions of each file in this project are explained in detail in the [self-analysis report `self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/GPT-Academic-Selbstanalysebericht). You can click the relevant function plugins at any time and call GPT to regenerate the project's self-analysis report. FAQs can be found in the [`wiki`](https://github.com/binary-husky/gpt_academic/wiki). [Standard installation method](#installation) | [One-click installation script](https://github.com/binary-husky/gpt_academic/releases) | [Configuration guide](https://github.com/binary-husky/gpt_academic/wiki/Projekt-Konfigurationsanleitung).
>
> 3. This project is compatible with, and also supports, domestic language models such as ChatGLM. Several API keys can be used at once by specifying them in the configuration file like `API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`. To change the `API_KEY` temporarily, enter the temporary `API_KEY` in the input area and press Enter to apply it.
<div align="center">
@@ -93,7 +93,7 @@ Weitere Funktionen anzeigen (z. B. Bildgenerierung) …… | Siehe das Ende dies
</div>
# Installation
### Installation Method I: Run directly (Windows, Linux or MacOS)
1. Download the project
```sh
@@ -128,7 +128,7 @@ python -m pip install -r requirements.txt # This step is the same as installing
[Optional] If you need to support Tsinghua ChatGLM2/Fudan MOSS as the backend, you need to install additional dependencies (Prerequisites: Familiar with Python + Have used PyTorch + Strong computer configuration):
```sh
# [Optional Step I] Support Tsinghua ChatGLM2. Tsinghua ChatGLM note: If you encounter the error "Call ChatGLM fail cannot load ChatGLM parameters normally", refer to the following: 1: The default installation above is torch+cpu version. To use cuda, you need to uninstall torch and reinstall torch+cuda; 2: If you cannot load the model due to insufficient computer configuration, you can modify the model accuracy in request_llm/bridge_chatglm.py. Change AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
python -m pip install -r request_llms/requirements_chatglm.txt
# [Optional Step II] Support Fudan MOSS
python -m pip install -r request_llms/requirements_moss.txt
@@ -207,8 +207,8 @@ Beispiel:
```
"English-to-Chinese translation": {
# Prefix: added before your input. For example, to describe your request, such as translation, code explanation, polishing, etc.
"Prefix": "Please translate the following paragraph into Chinese and then explain each technical term in a Markdown table:\n\n",
# Suffix: added after your input. For example, to wrap your input in quotation marks.
"Suffix": "",
},
@@ -361,4 +361,3 @@ https://github.com/oobabooga/one-click-installers
# More:
https://github.com/gradio-app/gradio
https://github.com/fghrsh/live2d_demo


@@ -12,7 +12,7 @@
**If you like this project, please give it a star; if you have useful ideas or plugins, submit a pull request!**
If you like this project, give it a star.
To translate this project into an arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).
> **Note**
@@ -20,11 +20,11 @@ Per tradurre questo progetto in qualsiasi lingua con GPT, leggi ed esegui [`mult
> 1. Note that only the **highlighted** plugins (buttons) support reading files; some plugins are in the **dropdown menu** of the plugin area. In addition, we welcome and handle any new plugin PR with **top priority**.
>
> 2. The functions of each file in this project are described in detail in the [self-analysis report `self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/GPTAcademic项目自译解报告). As versions iterate, you can also click the relevant function plugins at any time to call GPT and regenerate the project's self-analysis report. FAQ: [`wiki`](https://github.com/binary-husky/gpt_academic/wiki) | [Standard installation method](#installazione) | [One-click installation script](https://github.com/binary-husky/gpt_academic/releases) | [Configuration](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明).
>
> 3. This project is compatible with, and encourages the use of, Chinese large language models such as ChatGLM. Multiple API keys can coexist; fill them into the configuration file like `API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`. When you need to replace `API_KEY` temporarily, enter the temporary `API_KEY` in the input area and press Enter to confirm.
<div align="center">
@@ -128,7 +128,7 @@ python -m pip install -r requirements.txt # Questo passaggio è identico alla pr
[Optional] To use Tsinghua ChatGLM2/Fudan MOSS as the backend, additional dependencies must be installed (prerequisites: familiar with Python + have used PyTorch + sufficiently powerful hardware):
```sh
# [Optional Step I] Support Tsinghua ChatGLM2. Note: if you hit the error "Call ChatGLM fail cannot load ChatGLM parameters", refer to the following: 1: the default installation is the torch+cpu version; to use cuda, uninstall torch and reinstall torch+cuda; 2: if the model cannot be loaded because the machine is underpowered, you can change the model precision in request_llm/bridge_chatglm.py, replacing AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) with AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
python -m pip install -r request_llms/requirements_chatglm.txt
# [Optional Step II] Support Fudan MOSS
python -m pip install -r request_llms/requirements_moss.txt
@@ -206,8 +206,8 @@ Ad esempio,
```
"Advanced Chinese-English translation": {
# Prefix: added before your input. For example, used to describe your request, such as translation, code explanation, polishing, etc.
"Prefix": "Please translate the following text into Chinese and explain each technical term used, one by one, in a markdown table:\n\n",
# Suffix: added after your input. For example, together with the prefix, it can wrap your input in quotation marks.
"Suffix": "",
},
@@ -224,7 +224,7 @@ La scrittura di plugin per questo progetto è facile e richiede solo conoscenze
# Updates
### I: Updates
1. Conversation-saving feature. Call `Save the current conversation` in the plugin area to save the current conversation as a readable and restorable HTML file.
In addition, call `Load conversation history` in the same plugin area (dropdown menu) to restore a previous conversation.
Tip: clicking `Load conversation history` without specifying a file shows the cached HTML history.
<div align="center">
@@ -358,4 +358,3 @@ https://github.com/oobabooga/one-click-installers
# More resources:
https://github.com/gradio-app/gradio
https://github.com/fghrsh/live2d_demo


@@ -2,9 +2,9 @@
> **Note**
>
> This README was generated by GPT translation (implemented by a plugin of this project); the translation is not 100% accurate, so please use it with care.
>
> November 7, 2023: When installing dependencies, please select the versions **specified** in `requirements.txt`. Installation command: `pip install -r requirements.txt`.
@@ -18,11 +18,11 @@ GPTを使用してこのプロジェクトを任意の言語に翻訳するに
> 1. Please note that only the **highlighted** plugin buttons can read files; some plugins are in the dropdown menu of the plugin area. We also welcome new plugin PRs and handle them with the highest priority.
>
> 2. The functions of each file in this project are described in detail in the [self-analysis report `self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/GPTAcademic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E5%A0%82). As versions evolve, you can also click the relevant function plugin at any time to regenerate the project's self-analysis report with GPT. For FAQs, see the [`wiki`](https://github.com/binary-husky/gpt_academic/wiki). [Standard installation method](#installation) | [One-click installation script](https://github.com/binary-husky/gpt_academic/releases) | [Configuration instructions](https://github.com/binary-husky/gpt_academic/wiki/Project-Configuration-Explain).
>
> 3. This project is also compatible with, and recommends trying, Chinese large language models such as [ChatGLM](https://www.chatglm.dev/). Multiple API keys can coexist; specify them in the configuration file like `API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`. To change the `API_KEY` temporarily, enter the temporary `API_KEY` in the input area and press Enter to submit it.
<div align="center">
@@ -189,7 +189,7 @@ Python環境に詳しくないWindowsユーザーは、[リリース](https://gi
"Super English-to-Chinese translation": {
# Prefix: added before your input, e.g., to describe your request, such as translation, code explanation, proofreading, etc.
"Prefix": "Please translate the following content into Chinese, and then explain each technical term one by one in a Markdown table:\n\n",
# Suffix: added after your input; together with the prefix, it can wrap your input in quotation marks.
"Suffix": "",
},
@@ -342,4 +342,3 @@ https://github.com/oobabooga/one-click-installers
# More:
https://github.com/gradio-app/gradio
https://github.com/fghrsh/live2d_demo


@@ -27,7 +27,7 @@ GPT를 사용하여 이 프로젝트를 임의의 언어로 번역하려면 [`mu
<div align="center">
@@ -130,7 +130,7 @@ python -m pip install -r requirements.txt # This step is the same as the pip ins
[Optional Step] If you need support for Tsinghua ChatGLM2/Fudan MOSS as the backend, you need to install additional dependencies (Prerequisites: Familiar with Python + Have used Pytorch + Sufficient computer configuration):
```sh
# [Optional Step I] Support for Tsinghua ChatGLM2. Note for Tsinghua ChatGLM: If you encounter the error "Call ChatGLM fail cannot load ChatGLM parameters", refer to the following: 1: The default installation above is torch+cpu version. To use cuda, uninstall torch and reinstall torch+cuda; 2: If you cannot load the model due to insufficient computer configuration, you can modify the model precision in request_llm/bridge_chatglm.py, change AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
python -m pip install -r request_llms/requirements_chatglm.txt
# [Optional Step II] Support for Fudan MOSS
python -m pip install -r request_llms/requirements_moss.txt
@@ -208,8 +208,8 @@ Please visit the [cloud server remote deployment wiki](https://github.com/binary
```
"초급영문 번역": {
# 접두사, 입력 내용 앞에 추가됩니다. 예를 들어 요구 사항을 설명하는 데 사용됩니다. 예를 들어 번역, 코드 설명, 교정 등
"Prefix": "다음 내용을 한국어로 번역하고 전문 용어에 대한 설명을 적용한 마크다운 표를 사용하세요:\n\n",
"Prefix": "다음 내용을 한국어로 번역하고 전문 용어에 대한 설명을 적용한 마크다운 표를 사용하세요:\n\n",
# 접미사, 입력 내용 뒤에 추가됩니다. 예를 들어 접두사와 함께 입력 내용을 따옴표로 감쌀 수 있습니다.
"Suffix": "",
},
@@ -361,4 +361,3 @@ https://github.com/oobabooga/one-click-installers
# More:
https://github.com/gradio-app/gradio
https://github.com/fghrsh/live2d_demo


@@ -2,9 +2,9 @@
> **Note**
>
> This README was translated by GPT (implemented by a plugin of this project) and is not 100% reliable. Please check the translation results carefully.
>
> November 7, 2023: When installing dependencies, please select the **specified versions** in `requirements.txt`. Installation command: `pip install -r requirements.txt`.
# <div align=center><img src="logo.png" width="40"> GPT Academic</div>
@@ -15,12 +15,12 @@ Para traduzir este projeto para qualquer idioma utilizando o GPT, leia e execute
> **Note**
>
> 1. Note that only the plugins (buttons) marked in **bold** can read files; some plugins are in the **dropdown menu** of the plugin area. We also welcome, with top priority, any new plugin via PR.
>
> 2. The functionalities of each file in this project are explained in detail in the [self-analysis `self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/GPTAcademic项目自译解报告). As versions iterate, you can also click the relevant function plugins at any time to call GPT and regenerate the project's self-analysis report. FAQ: [`wiki`](https://github.com/binary-husky/gpt_academic/wiki) | [Standard installation method](#installation) | [One-click installation script](https://github.com/binary-husky/gpt_academic/releases) | [Configuration explanation](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明).
>
> 3. This project is compatible with, and encourages the use of, Chinese language models such as ChatGLM. Several api-keys can be used simultaneously, specified in the configuration file like `API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`. When you need to change the `API_KEY` temporarily, enter the temporary `API_KEY` in the input area and press Enter for it to take effect.
<div align="center">
Features (⭐ = recently added feature) | Description
@@ -89,7 +89,7 @@ Apresentação de mais novas funcionalidades (geração de imagens, etc.) ... |
</div>
# Installation
### Installation Method I: Run directly (Windows, Linux or MacOS)
1. Download the project
```sh
@@ -124,7 +124,7 @@ python -m pip install -r requirements.txt # Este passo é igual ao da instalaç
[Optional] If you want support for Tsinghua ChatGLM2/Fudan MOSS, you will need to install extra dependencies (prerequisites: familiar with Python + have used PyTorch + a sufficiently powerful machine):
```sh
# [Optional Step I] Support Tsinghua ChatGLM2. Note on Tsinghua ChatGLM2: if you encounter the error "Call ChatGLM fail cannot load ChatGLM parameters normally", refer to the following: 1: the version installed by default is torch+cpu; to use cuda, uninstall torch and reinstall a torch+cuda version; 2: if the machine configuration is not enough to load the model, you can change the model precision in request_llm/bridge_chatglm.py, replacing all occurrences of AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) with AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
python -m pip install -r request_llms/requirements_chatglm.txt
# [Optional Step II] Support Fudan MOSS
python -m pip install -r request_llms/requirements_moss.txt
@@ -202,8 +202,8 @@ Por exemplo:
```
"Super English-to-Chinese translation": {
# Prefix: added before your input. For example, used to describe your request, such as translating, explaining code, reviewing, etc.
"Prefix": "Please translate the paragraph below into Chinese and explain each technical term in a markdown table:\n\n",
# Suffix: added after your input. For example, together with the prefix, it can wrap your input in quotation marks.
"Suffix": "",
},
@@ -355,4 +355,3 @@ https://github.com/oobabooga/one-click-installers
# More:
https://github.com/gradio-app/gradio
https://github.com/fghrsh/live2d_demo


@@ -2,9 +2,9 @@
> **Note**
>
> This README was translated with GPT (implemented by a plugin of this project) and may not be entirely reliable; please check the translation results carefully.
>
> November 7, 2023: When installing dependencies, please select the **specified versions** from `requirements.txt`. Installation command: `pip install -r requirements.txt`.
@@ -17,12 +17,12 @@
>
> 1. Please note that only plugins (buttons) highlighted in **bold** support reading files; some plugins are in the **dropdown menu** of the plugin area. In addition, we gladly welcome and handle PRs for any new plugins with the **highest priority**.
>
> 2. The functions of each file in this project are described in detail in the [project self-analysis report `self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/GPTAcademic项目自译解报告). With each new release, you can also click the corresponding function plugin at any time to call GPT and regenerate the project's self-analysis summary. FAQ: [`wiki`](https://github.com/binary-husky/gpt_academic/wiki) | [standard installation methods](#installation) | [one-click installation script](https://github.com/binary-husky/gpt_academic/releases) | [configuration instructions](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明).
>
> 3. This project is compatible with, and strongly encourages trying, ChatGLM and other large language models made in China. Multiple API keys can be used at once; specify them in the configuration file like `API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`. To replace the `API_KEY` temporarily, enter the temporary `API_KEY` in the input box and press Enter to confirm it.
<div align="center">
@@ -204,8 +204,8 @@ docker-compose up
```
"Super English-to-Russian translation": {
# Prefix: added before your input. For example, used to describe your request, such as translation, code explanation, editing, etc.
"Prefix": "Please translate the following paragraph into Russian, and then explain each term using a Markdown table:\n\n",
# Suffix: added after your input. For example, it can be used with the prefix to wrap your input in quotation marks.
"Suffix": "",
},
@@ -335,7 +335,7 @@ GPT Academic Группа QQ разработчиков: `610599535`
```
The code draws on many functions from other excellent projects, in no particular order:
# Tsinghua ChatGLM2-6B:
https://github.com/THUDM/ChatGLM2-6B
# Tsinghua memory-limited linear models:
@@ -358,4 +358,3 @@ https://github.com/oobabooga/one-click-installers
# More:
https://github.com/gradio-app/gradio
https://github.com/fghrsh/live2d_demo


@@ -17,18 +17,18 @@ nano config.py
- # To run under a secondary path
- # CUSTOM_PATH = get_conf('CUSTOM_PATH')
- # if CUSTOM_PATH != "/":
- # from toolbox import run_gradio_in_subpath
- # run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH)
- # else:
- # demo.launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png")
+ To run under a secondary path
+ CUSTOM_PATH = get_conf('CUSTOM_PATH')
+ if CUSTOM_PATH != "/":
+ from toolbox import run_gradio_in_subpath
+ run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH)
+ else:
+ demo.launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png")
if __name__ == "__main__":
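The corresponding `config.py` entry would look something like this (the path value is an example, not a project default):

```python
# config.py (excerpt): serve the app under a secondary path instead of "/"
CUSTOM_PATH = "/gpt_academic/"
```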


@@ -7,13 +7,27 @@ sample = """
"""
import re
def preprocess_newbing_out(s):
pattern = r'\^(\d+)\^' # match ^number^
pattern2 = r'\[(\d+)\]' # match [number]
sub = lambda m: '\['+m.group(1)+'\]' # use the matched number in the replacement
result = re.sub(pattern, sub, s) # perform the substitution
if '[1]' in result:
result += '<br/><hr style="border-top: dotted 1px #44ac5c;"><br/><small>' + "<br/>".join([re.sub(pattern2, sub, r) for r in result.split('\n') if r.startswith('[')]) + '</small>'
pattern = r"\^(\d+)\^" # 匹配^数字^
pattern2 = r"\[(\d+)\]" # 匹配^数字^
def sub(m):
return "\\[" + m.group(1) + "\\]" # 将匹配到的数字作为替换值
result = re.sub(pattern, sub, s)  # perform the substitution
if "[1]" in result:
result += (
'<br/><hr style="border-top: dotted 1px #44ac5c;"><br/><small>'
+ "<br/>".join(
[
re.sub(pattern2, sub, r)
for r in result.split("\n")
if r.startswith("[")
]
)
+ "</small>"
)
return result
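For illustration, this is what `preprocess_newbing_out` does to Newbing-style citation markers (the sample string is made up):

```python
# ^n^ citation markers become escaped brackets so Markdown renders them literally.
demo = "Transformers dominate NLP ^1^ and vision ^2^."
print(preprocess_newbing_out(demo))
# -> Transformers dominate NLP \[1\] and vision \[2\].
```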
@@ -28,37 +42,39 @@ def close_up_code_segment_during_stream(gpt_reply):
str: returns a new string with the closing ``` appended to any unfinished code segment.
"""
if '```' not in gpt_reply:
if "```" not in gpt_reply:
return gpt_reply
if gpt_reply.endswith('```'):
if gpt_reply.endswith("```"):
return gpt_reply
# having excluded the two cases above, count the fence markers
segments = gpt_reply.split('```')
segments = gpt_reply.split("```")
n_mark = len(segments) - 1
if n_mark % 2 == 1:
# print('currently emitting a code segment!')
return gpt_reply+'\n```'
return gpt_reply + "\n```"
else:
return gpt_reply
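A quick illustration of the fence-closing behavior during streaming (the sample input is made up):

```python
# While a reply is still streaming, an opened ``` block may not be closed yet;
# the helper appends a closing fence so the partial output still renders.
partial = "Here is the function:\n```python\nprint('hi')"
fixed = close_up_code_segment_during_stream(partial)
assert fixed.endswith("\n```")
```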
import markdown
from latex2mathml.converter import convert as tex2mathml
from functools import wraps, lru_cache
def markdown_convertion(txt):
"""
Convert Markdown-formatted text to HTML. If it contains math formulas, convert the formulas to HTML first.
"""
pre = '<div class="markdown-body">'
suf = '</div>'
suf = "</div>"
if txt.startswith(pre) and txt.endswith(suf):
# print('warning: the input string has already been converted; converting it twice may cause problems')
return txt  # already converted; no need to convert again
markdown_extension_configs = {
'mdx_math': {
'enable_dollar_delimiter': True,
'use_gitlab_delimiters': False,
"mdx_math": {
"enable_dollar_delimiter": True,
"use_gitlab_delimiters": False,
},
}
find_equation_pattern = r'<script type="math/tex(?:.*?)>(.*?)</script>'
@@ -72,19 +88,19 @@ def markdown_convertion(txt):
def replace_math_no_render(match):
content = match.group(1)
if 'mode=display' in match.group(0):
content = content.replace('\n', '</br>')
return f"<font color=\"#00FF00\">$$</font><font color=\"#FF00FF\">{content}</font><font color=\"#00FF00\">$$</font>"
if "mode=display" in match.group(0):
content = content.replace("\n", "</br>")
return f'<font color="#00FF00">$$</font><font color="#FF00FF">{content}</font><font color="#00FF00">$$</font>'
else:
return f"<font color=\"#00FF00\">$</font><font color=\"#FF00FF\">{content}</font><font color=\"#00FF00\">$</font>"
return f'<font color="#00FF00">$</font><font color="#FF00FF">{content}</font><font color="#00FF00">$</font>'
def replace_math_render(match):
content = match.group(1)
if 'mode=display' in match.group(0):
if '\\begin{aligned}' in content:
content = content.replace('\\begin{aligned}', '\\begin{array}')
content = content.replace('\\end{aligned}', '\\end{array}')
content = content.replace('&', ' ')
if "mode=display" in match.group(0):
if "\\begin{aligned}" in content:
content = content.replace("\\begin{aligned}", "\\begin{array}")
content = content.replace("\\end{aligned}", "\\end{array}")
content = content.replace("&", " ")
content = tex2mathml_catch_exception(content, display="block")
return content
else:
@@ -94,37 +110,58 @@ def markdown_convertion(txt):
"""
Fix an mdx_math bug (a redundant <script> appears when a begin command is wrapped in single $).
"""
content = content.replace('<script type="math/tex">\n<script type="math/tex; mode=display">', '<script type="math/tex; mode=display">')
content = content.replace('</script>\n</script>', '</script>')
content = content.replace(
'<script type="math/tex">\n<script type="math/tex; mode=display">',
'<script type="math/tex; mode=display">',
)
content = content.replace("</script>\n</script>", "</script>")
return content
if ('$' in txt) and ('```' not in txt):  # contains $ math markers and no ``` code-fence markers
if ("$" in txt) and ("```" not in txt):  # contains $ math markers and no ``` code-fence markers
# convert everything to html format
split = markdown.markdown(text='---')
convert_stage_1 = markdown.markdown(text=txt, extensions=['mdx_math', 'fenced_code', 'tables', 'sane_lists'], extension_configs=markdown_extension_configs)
split = markdown.markdown(text="---")
convert_stage_1 = markdown.markdown(
text=txt,
extensions=["mdx_math", "fenced_code", "tables", "sane_lists"],
extension_configs=markdown_extension_configs,
)
convert_stage_1 = markdown_bug_hunt(convert_stage_1)
# re.DOTALL: Make the '.' special character match any character at all, including a newline; without this flag, '.' will match anything except a newline. Corresponds to the inline flag (?s).
# 1. convert to easy-to-copy tex (do not render math)
convert_stage_2_1, n = re.subn(find_equation_pattern, replace_math_no_render, convert_stage_1, flags=re.DOTALL)
convert_stage_2_1, n = re.subn(
find_equation_pattern,
replace_math_no_render,
convert_stage_1,
flags=re.DOTALL,
)
# 2. convert to rendered equation
convert_stage_2_2, n = re.subn(find_equation_pattern, replace_math_render, convert_stage_1, flags=re.DOTALL)
convert_stage_2_2, n = re.subn(
find_equation_pattern, replace_math_render, convert_stage_1, flags=re.DOTALL
)
# cat them together
return pre + convert_stage_2_1 + f'{split}' + convert_stage_2_2 + suf
return pre + convert_stage_2_1 + f"{split}" + convert_stage_2_2 + suf
else:
return pre + markdown.markdown(txt, extensions=['fenced_code', 'codehilite', 'tables', 'sane_lists']) + suf
return (
pre
+ markdown.markdown(
txt, extensions=["fenced_code", "codehilite", "tables", "sane_lists"]
)
+ suf
)
sample = preprocess_newbing_out(sample)
sample = close_up_code_segment_during_stream(sample)
sample = markdown_convertion(sample)
with open('tmp.html', 'w', encoding='utf8') as f:
f.write("""
with open("tmp.html", "w", encoding="utf8") as f:
f.write(
"""
<head>
<title>My Website</title>
<link rel="stylesheet" type="text/css" href="style.css">
</head>
""")
"""
)
f.write(sample)


@@ -2863,7 +2863,7 @@
"加载API_KEY": "Loading API_KEY",
"协助您编写代码": "Assist you in writing code",
"我可以为您提供以下服务": "I can provide you with the following services",
"排队中请稍 ...": "Please wait in line ...",
"排队中请稍 ...": "Please wait in line ...",
"建议您使用英文提示词": "It is recommended to use English prompts",
"不能支撑AutoGen运行": "Cannot support AutoGen operation",
"帮助您解决编程问题": "Help you solve programming problems",


@@ -2106,4 +2106,4 @@
"改变输入参数的顺序与结构": "入力パラメータの順序と構造を変更する",
"正在精细切分latex文件": "LaTeXファイルを細かく分割しています",
"读取文件": "ファイルを読み込んでいます"
}


@@ -98,4 +98,4 @@
"图片生成_DALLE2": "ImageGeneration_DALLE2",
"图片生成_DALLE3": "ImageGeneration_DALLE3",
"图片修改_DALLE2": "ImageModification_DALLE2"
}


@@ -61,4 +61,3 @@ VI 两种音频监听模式切换时,需要刷新页面才有效。
VII Pitfall: the recording feature cannot be opened when running on a non-localhost host without https: https://blog.csdn.net/weixin_39461487/article/details/109594434
## 5. Click "Real-time audio capture" in the function plugin area, or any other audio interaction feature


@@ -8,8 +8,8 @@ try {
live2d_settings['modelId'] = 5; // default model ID
live2d_settings['modelTexturesId'] = 1; // default texture ID
live2d_settings['modelStorage'] = false; // do not store the model ID
live2d_settings['waifuSize'] = '210x187';
live2d_settings['waifuTipsSize'] = '187x52';
live2d_settings['canSwitchModel'] = true;
live2d_settings['canSwitchTextures'] = true;
live2d_settings['canSwitchHitokoto'] = false;


@@ -123,4 +123,4 @@
<glyph unicode="&#xe65e;" d="M512 748.8l211.2 179.2 300.8-198.4-204.8-166.4-307.2 185.6zM1024 396.8l-300.8-198.4-211.2 172.8 300.8 185.6 211.2-160zM300.8 198.4l-300.8 198.4 204.8 166.4 307.2-192-211.2-172.8zM0 729.6l300.8 198.4 211.2-179.2-300.8-192-211.2 172.8zM512 332.8l211.2-179.2 89.6 57.6v-64l-300.8-179.2-300.8 179.2v64l89.6-51.2 211.2 172.8z" />
<glyph unicode="&#xe65f;" d="M864 249.6c-38.4 0-64 32-64 64v256c0 38.4 32 64 64 64 38.4 0 64-32 64-64v-256c0-32-25.6-64-64-64zM697.6 102.4h-38.4v-108.8c0-38.4-25.6-64-57.6-64s-57.6 25.6-57.6 64v108.8h-70.4v-108.8c0-38.4-25.6-64-57.6-64s-57.6 25.6-57.6 64v108.8h-32c-19.2 0-38.4 19.2-38.4 44.8v428.8h448v-422.4c0-32-12.8-51.2-38.4-51.2zM736 633.6h-448c0 89.6 32 153.6 76.8 192l-70.4 83.2c-6.4 12.8-6.4 25.6 0 38.4 12.8 12.8 25.6 12.8 38.4 0l83.2-96c32 12.8 64 19.2 96 19.2s70.4-6.4 96-19.2l83.2 96c12.8 12.8 25.6 12.8 38.4 0s12.8-32 0-38.4l-70.4-83.2c44.8-32 76.8-102.4 76.8-192zM441.6 761.6c-12.8 0-25.6-12.8-25.6-32s12.8-32 25.6-32 25.6 12.8 25.6 32-12.8 32-25.6 32zM582.4 761.6c-12.8 0-25.6-12.8-25.6-32s12.8-32 25.6-32 25.6 19.2 25.6 32-12.8 32-25.6 32zM160 249.6c-38.4 0-64 32-64 64v256c0 38.4 25.6 64 64 64s64-32 64-64v-256c0-32-25.6-64-64-64z" />
<glyph unicode="&#xe660;" d="M921.6 211.2c-32-153.6-115.2-211.2-147.2-249.6-32-25.6-121.6-25.6-153.6-6.4-38.4 25.6-134.4 25.6-166.4 0-44.8-32-115.2-19.2-128-12.8-256 179.2-352 716.8 12.8 774.4 64 12.8 134.4-32 134.4-32 51.2-25.6 70.4-12.8 115.2 6.4 96 44.8 243.2 44.8 313.6-76.8-147.2-96-153.6-294.4 19.2-403.2zM716.8 960c12.8-70.4-64-224-204.8-230.4-12.8 38.4 32 217.6 204.8 230.4z" />
</font></defs></svg>


@@ -1 +1 @@
https://github.com/fghrsh/live2d_demo


@@ -5,11 +5,11 @@ window.live2d_settings = Array(); /*
      /`ー'    L//`ヽ、 Live2D mascot settings
     /  ,  /|  ,  ,    ', Version 1.4.2
   イ  / /-/  L_ ハ ヽ!  i Update 2018.11.12
    レ ヘ 7イ  レ'ァ-ト、!ハ|  |
     !,/7 '0'   ´0iソ|   |   
     |.从"  _   ,,,, / |./   | Add a Live2D mascot to your webpage
     レ'| i.、,,__ _,.イ /  .i  | https://www.fghrsh.net/post/123.html
      レ'| | / k__/レ'ヽ, ハ. |
       | |/i 〈|/  i ,.ヘ | i | Thanks
      .|/ /    ヘ!   | journey-ad / https://github.com/journey-ad/live2d_src
        kヽ>、ハ   _,.ヘ、   /、! xiazeyu / https://github.com/xiazeyu/live2d-widget.js
@@ -77,11 +77,11 @@ String.prototype.render = function(context) {
return this.replace(tokenReg, function (word, slash1, token, slash2) {
if (slash1 || slash2) { return word.replace('\\', ''); }
var variables = token.replace(/\s/g, '').split('.');
var currentObject = context;
var i, length, variable;
for (i = 0, length = variables.length; i < length; ++i) {
variable = variables[i];
currentObject = currentObject[variable];
@@ -101,9 +101,9 @@ function showMessage(text, timeout, flag) {
if(flag || sessionStorage.getItem('waifu-text') === '' || sessionStorage.getItem('waifu-text') === null){
if(Array.isArray(text)) text = text[Math.floor(Math.random() * text.length + 1)-1];
if (live2d_settings.showF12Message) console.log('[Message]', text.replace(/<[^<>]+>/g,''));
if(flag) sessionStorage.setItem('waifu-text', text);
$('.waifu-tips').stop();
$('.waifu-tips').html(text).fadeTo(200, 1);
if (timeout === undefined) timeout = 5000;
@@ -121,15 +121,15 @@ function hideMessage(timeout) {
function initModel(waifuPath, type) {
/* console welcome message */
eval(function(p,a,c,k,e,r){e=function(c){return(c<a?'':e(parseInt(c/a)))+((c=c%a)>35?String.fromCharCode(c+29):c.toString(36))};if(!''.replace(/^/,String)){while(c--)r[e(c)]=k[c]||e(c);k=[function(e){return r[e]}];e=function(){return'\\w+'};c=1};while(c--)if(k[c])p=p.replace(new RegExp('\\b'+e(c)+'\\b','g'),k[c]);return p}('8.d(" ");8.d("\\U,.\\y\\5.\\1\\1\\1\\1/\\1,\\u\\2 \\H\\n\\1\\1\\1\\1\\1\\b \', !-\\r\\j-i\\1/\\1/\\g\\n\\1\\1\\1 \\1 \\a\\4\\f\'\\1\\1\\1 L/\\a\\4\\5\\2\\n\\1\\1 \\1 /\\1 \\a,\\1 /|\\1 ,\\1 ,\\1\\1\\1 \',\\n\\1\\1\\1\\q \\1/ /-\\j/\\1\\h\\E \\9 \\5!\\1 i\\n\\1\\1\\1 \\3 \\6 7\\q\\4\\c\\1 \\3\'\\s-\\c\\2!\\t|\\1 |\\n\\1\\1\\1\\1 !,/7 \'0\'\\1\\1 \\X\\w| \\1 |\\1\\1\\1\\n\\1\\1\\1\\1 |.\\x\\"\\1\\l\\1\\1 ,,,, / |./ \\1 |\\n\\1\\1\\1\\1 \\3\'| i\\z.\\2,,A\\l,.\\B / \\1.i \\1|\\n\\1\\1\\1\\1\\1 \\3\'| | / C\\D/\\3\'\\5,\\1\\9.\\1|\\n\\1\\1\\1\\1\\1\\1 | |/i \\m|/\\1 i\\1,.\\6 |\\F\\1|\\n\\1\\1\\1\\1\\1\\1.|/ /\\1\\h\\G \\1 \\6!\\1\\1\\b\\1|\\n\\1\\1\\1 \\1 \\1 k\\5>\\2\\9 \\1 o,.\\6\\2 \\1 /\\2!\\n\\1\\1\\1\\1\\1\\1 !\'\\m//\\4\\I\\g\', \\b \\4\'7\'\\J\'\\n\\1\\1\\1\\1\\1\\1 \\3\'\\K|M,p,\\O\\3|\\P\\n\\1\\1\\1\\1\\1 \\1\\1\\1\\c-,/\\1|p./\\n\\1\\1\\1\\1\\1 \\1\\1\\1\'\\f\'\\1\\1!o,.:\\Q \\R\\S\\T v"+e.V+" / W "+e.N);8.d(" ");',60,60,'|u3000|uff64|uff9a|uff40|u30fd|uff8d||console|uff8a|uff0f|uff3c|uff84|log|live2d_settings|uff70|u00b4|uff49||u2010||u3000_|u3008||_|___|uff72|u2500|uff67|u30cf|u30fc||u30bd|u4ece|u30d8|uff1e|__|u30a4|k_|uff17_|u3000L_|u3000i|uff1a|u3009|uff34|uff70r|u30fdL__||___i|l2dVerDate|u30f3|u30ce|nLive2D|u770b|u677f|u5a18|u304f__|l2dVersion|FGHRSH|u00b40i'.split('|'),0,{}));
/* check that jQuery is available */
if (typeof($.ajax) != 'function') typeof(jQuery.ajax) == 'function' ? window.$ = jQuery : console.log('[Error] JQuery is not defined.');
/* load the mascot styles */
live2d_settings.waifuSize = live2d_settings.waifuSize.split('x');
live2d_settings.waifuTipsSize = live2d_settings.waifuTipsSize.split('x');
live2d_settings.waifuEdgeSide = live2d_settings.waifuEdgeSide.split(':');
$("#live2d").attr("width",live2d_settings.waifuSize[0]);
$("#live2d").attr("height",live2d_settings.waifuSize[1]);
$(".waifu-tips").width(live2d_settings.waifuTipsSize[0]);
@@ -138,32 +138,32 @@ function initModel(waifuPath, type) {
$(".waifu-tips").css("font-size",live2d_settings.waifuFontSize);
$(".waifu-tool").css("font-size",live2d_settings.waifuToolFont);
$(".waifu-tool span").css("line-height",live2d_settings.waifuToolLine);
if (live2d_settings.waifuEdgeSide[0] == 'left') $(".waifu").css("left",live2d_settings.waifuEdgeSide[1]+'px');
else if (live2d_settings.waifuEdgeSide[0] == 'right') $(".waifu").css("right",live2d_settings.waifuEdgeSide[1]+'px');
window.waifuResize = function() { $(window).width() <= Number(live2d_settings.waifuMinWidth.replace('px','')) ? $(".waifu").hide() : $(".waifu").show(); };
if (live2d_settings.waifuMinWidth != 'disable') { waifuResize(); $(window).resize(function() {waifuResize()}); }
try {
if (live2d_settings.waifuDraggable == 'axis-x') $(".waifu").draggable({ axis: "x", revert: live2d_settings.waifuDraggableRevert });
else if (live2d_settings.waifuDraggable == 'unlimited') $(".waifu").draggable({ revert: live2d_settings.waifuDraggableRevert });
else $(".waifu").css("transition", 'all .3s ease-in-out');
} catch(err) { console.log('[Error] JQuery UI is not defined.') }
live2d_settings.homePageUrl = live2d_settings.homePageUrl == 'auto' ? window.location.protocol+'//'+window.location.hostname+'/' : live2d_settings.homePageUrl;
if (window.location.protocol == 'file:' && live2d_settings.modelAPI.substr(0,2) == '//') live2d_settings.modelAPI = 'http:'+live2d_settings.modelAPI;
$('.waifu-tool .fui-home').click(function (){
//window.location = 'https://www.fghrsh.net/';
window.location = live2d_settings.homePageUrl;
});
$('.waifu-tool .fui-info-circle').click(function (){
//window.open('https://imjad.cn/archives/lab/add-dynamic-poster-girl-with-live2d-to-your-blog-02');
window.open(live2d_settings.aboutPageUrl);
});
if (typeof(waifuPath) == "object") loadTipsMessage(waifuPath); else {
$.ajax({
cache: true,
@@ -172,7 +172,7 @@ function initModel(waifuPath, type) {
success: function (result){ loadTipsMessage(result); }
});
}
if (!live2d_settings.showToolMenu) $('.waifu-tool').hide();
if (!live2d_settings.canCloseLive2d) $('.waifu-tool .fui-cross').hide();
if (!live2d_settings.canSwitchModel) $('.waifu-tool .fui-eye').hide();
@@ -185,7 +185,7 @@ function initModel(waifuPath, type) {
if (waifuPath === undefined) waifuPath = '';
var modelId = localStorage.getItem('modelId');
var modelTexturesId = localStorage.getItem('modelTexturesId');
if (!live2d_settings.modelStorage || modelId == null) {
var modelId = live2d_settings.modelId;
var modelTexturesId = live2d_settings.modelTexturesId;
@@ -204,7 +204,7 @@ function loadModel(modelId, modelTexturesId=0) {
function loadTipsMessage(result) {
window.waifu_tips = result;
$.each(result.mouseover, function (index, tips){
$(document).on("mouseover", tips.selector, function (){
var text = getRandText(tips.text);
@@ -223,50 +223,50 @@ function loadTipsMessage(result) {
var now = new Date();
var after = tips.date.split('-')[0];
var before = tips.date.split('-')[1] || after;
if((after.split('/')[0] <= now.getMonth()+1 && now.getMonth()+1 <= before.split('/')[0]) &&
(after.split('/')[1] <= now.getDate() && now.getDate() <= before.split('/')[1])){
var text = getRandText(tips.text);
text = text.render({year: now.getFullYear()});
showMessage(text, 6000, true);
}
});
if (live2d_settings.showF12OpenMsg) {
re.toString = function() {
showMessage(getRandText(result.waifu.console_open_msg), 5000, true);
return '';
};
}
if (live2d_settings.showCopyMessage) {
$(document).on('copy', function() {
showMessage(getRandText(result.waifu.copy_message), 5000, true);
});
}
$('.waifu-tool .fui-photo').click(function(){
showMessage(getRandText(result.waifu.screenshot_message), 5000, true);
window.Live2D.captureName = live2d_settings.screenshotCaptureName;
window.Live2D.captureFrame = true;
});
$('.waifu-tool .fui-cross').click(function(){
sessionStorage.setItem('waifu-dsiplay', 'none');
showMessage(getRandText(result.waifu.hidden_message), 1300, true);
window.setTimeout(function() {$('.waifu').hide();}, 1300);
});
window.showWelcomeMessage = function(result) {
showMessage('欢迎使用GPT-Academic', 6000);
}; if (live2d_settings.showWelcomeMessage) showWelcomeMessage(result);
var waifu_tips = result.waifu;
function loadOtherModel() {
var modelId = modelStorageGetItem('modelId');
var modelRandMode = live2d_settings.modelRandMode;
$.ajax({
cache: modelRandMode == 'switch' ? true : false,
url: live2d_settings.modelAPI+modelRandMode+'/?id='+modelId,
@@ -279,12 +279,12 @@ function loadTipsMessage(result) {
}
});
}
function loadRandTextures() {
var modelId = modelStorageGetItem('modelId');
var modelTexturesId = modelStorageGetItem('modelTexturesId');
var modelTexturesRandMode = live2d_settings.modelTexturesRandMode;
$.ajax({
cache: modelTexturesRandMode == 'switch' ? true : false,
url: live2d_settings.modelAPI+modelTexturesRandMode+'_textures/?id='+modelId+'-'+modelTexturesId,
@@ -297,32 +297,32 @@ function loadTipsMessage(result) {
}
});
}
function modelStorageGetItem(key) { return live2d_settings.modelStorage ? localStorage.getItem(key) : sessionStorage.getItem(key); }
/* detect user activity and show a hitokoto quote when idle */
if (live2d_settings.showHitokoto) {
window.getActed = false; window.hitokotoTimer = 0; window.hitokotoInterval = false;
$(document).mousemove(function(e){getActed = true;}).keydown(function(){getActed = true;});
setInterval(function(){ if (!getActed) ifActed(); else elseActed(); }, 1000);
}
function ifActed() {
if (!hitokotoInterval) {
hitokotoInterval = true;
hitokotoTimer = window.setInterval(showHitokotoActed, 30000);
}
}
function elseActed() {
getActed = hitokotoInterval = false;
window.clearInterval(hitokotoTimer);
}
function showHitokotoActed() {
if ($(document)[0].visibilityState == 'visible') showHitokoto();
}
function showHitokoto() {
switch(live2d_settings.hitokotoAPI) {
case 'lwl12.com':
@@ -366,7 +366,7 @@ function loadTipsMessage(result) {
});
}
}
$('.waifu-tool .fui-eye').click(function (){loadOtherModel()});
$('.waifu-tool .fui-user').click(function (){loadRandTextures()});
$('.waifu-tool .fui-chat').click(function (){showHitokoto()});


@@ -31,7 +31,7 @@
},
"model_message": {
"1": ["来自 Potion Maker 的 Pio 酱 ~"],
"2": ["来自 Potion Maker 的 Tia 酱 ~"]
"2": ["来自 Potion Maker 的 Tia 酱 ~"]
},
"hitokoto_api_message": {
"lwl12.com": ["这句一言来自 <span style=\"color:#0099cc;\">『{source}』</span>", ",是 <span style=\"color:#0099cc;\">{creator}</span> 投稿的", "。"],
@@ -111,4 +111,4 @@
{ "date": "11/05-11/12", "text": ["今年的<span style=\"color:#0099cc;\">双十一</span>是和谁一起过的呢~"] },
{ "date": "12/20-12/31", "text": ["这几天是<span style=\"color:#0099cc;\">圣诞节</span>,主人肯定又去剁手买买买了~"] }
]
}


@@ -287,4 +287,4 @@
}
.fui-user:before {
content: "\e631";
}

main.py

@@ -1,14 +1,25 @@
import os; os.environ['no_proxy'] = '*' # avoid unexpected pollution from proxy networks
import pickle
import base64
help_menu_description = \
"""Github源代码开源和更新[地址🚀](https://github.com/binary-husky/gpt_academic),
感谢热情的[开发者们❤️](https://github.com/binary-husky/gpt_academic/graphs/contributors).
</br></br>常见问题请查阅[项目Wiki](https://github.com/binary-husky/gpt_academic/wiki),
如遇到Bug请前往[Bug反馈](https://github.com/binary-husky/gpt_academic/issues).
</br></br>普通对话使用说明: 1. 输入问题; 2. 点击提交
</br></br>基础功能区使用说明: 1. 输入文本; 2. 点击任意基础功能区按钮
</br></br>函数插件区使用说明: 1. 输入路径/问题, 或者上传文件; 2. 点击任意函数插件区按钮
</br></br>虚空终端使用说明: 点击虚空终端, 然后根据提示输入指令, 再次点击虚空终端
</br></br>如何保存对话: 点击保存当前的对话按钮
</br></br>如何语音对话: 请阅读Wiki
</br></br>如何临时更换API_KEY: 在输入区输入临时API_KEY后提交网页刷新后失效"""
def main():
import gradio as gr
if gr.__version__ not in ['3.32.6']:
if gr.__version__ not in ['3.32.6', '3.32.7']:
raise ModuleNotFoundError("使用项目内置Gradio获取最优体验! 请运行 `pip install -r requirements.txt` 指令安装内置Gradio及其他依赖, 详情信息见requirements.txt.")
from request_llms.bridge_all import predict
from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf, ArgsGeneralWrapper, load_chat_cookies, DummyWith
# It is recommended to copy a config_private.py to hold your secrets, such as API keys and proxy URLs, to avoid accidentally pushing them to GitHub
# It is recommended to copy a config_private.py to hold your secrets, such as API keys and proxy URLs
proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION = get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION')
CHATBOT_HEIGHT, LAYOUT, AVAIL_LLM_MODELS, AUTO_CLEAR_TXT = get_conf('CHATBOT_HEIGHT', 'LAYOUT', 'AVAIL_LLM_MODELS', 'AUTO_CLEAR_TXT')
ENABLE_AUDIO, AUTO_CLEAR_TXT, PATH_LOGGING, AVAIL_THEMES, THEME = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT', 'PATH_LOGGING', 'AVAIL_THEMES', 'THEME')
@@ -18,21 +29,11 @@ def main():
# If WEB_PORT is -1, pick a random web port
PORT = find_free_port() if WEB_PORT <= 0 else WEB_PORT
from check_proxy import get_current_version
from themes.theme import adjust_theme, advanced_css, theme_declaration, load_dynamic_theme
from themes.theme import adjust_theme, advanced_css, theme_declaration
from themes.theme import js_code_for_css_changing, js_code_for_darkmode_init, js_code_for_toggle_darkmode, js_code_for_persistent_cookie_init
from themes.theme import load_dynamic_theme, to_cookie_str, from_cookie_str, init_cookie
title_html = f"<h1 align=\"center\">GPT 学术优化 {get_current_version()}</h1>{theme_declaration}"
description = "Github源代码开源和更新[地址🚀](https://github.com/binary-husky/gpt_academic), "
description += "感谢热情的[开发者们❤️](https://github.com/binary-husky/gpt_academic/graphs/contributors)."
description += "</br></br>常见问题请查阅[项目Wiki](https://github.com/binary-husky/gpt_academic/wiki), "
description += "如遇到Bug请前往[Bug反馈](https://github.com/binary-husky/gpt_academic/issues)."
description += "</br></br>普通对话使用说明: 1. 输入问题; 2. 点击提交"
description += "</br></br>基础功能区使用说明: 1. 输入文本; 2. 点击任意基础功能区按钮"
description += "</br></br>函数插件区使用说明: 1. 输入路径/问题, 或者上传文件; 2. 点击任意函数插件区按钮"
description += "</br></br>虚空终端使用说明: 点击虚空终端, 然后根据提示输入指令, 再次点击虚空终端"
description += "</br></br>如何保存对话: 点击保存当前的对话按钮"
description += "</br></br>如何语音对话: 请阅读Wiki"
description += "</br></br>如何临时更换API_KEY: 在输入区输入临时API_KEY后提交网页刷新后失效"
# query logging; Python 3.9+ recommended (the newer the better)
import logging, uuid
os.makedirs(PATH_LOGGING, exist_ok=True)
@@ -138,17 +139,17 @@ def main():
with gr.Row():
switchy_bt = gr.Button(r"请先从插件列表中选择", variant="secondary").style(size="sm")
with gr.Row():
with gr.Accordion("点击展开“文件上传区”。上传本地文件/压缩包供函数插件调用", open=False) as area_file_up:
with gr.Accordion("点击展开“文件下载区”", open=False) as area_file_up:
file_upload = gr.Files(label="任何文件, 推荐上传压缩文件(zip, tar)", file_count="multiple", elem_id="elem_upload")
with gr.Floating(init_x="0%", init_y="0%", visible=True, width=None, drag="forbidden"):
with gr.Floating(init_x="0%", init_y="0%", visible=True, width=None, drag="forbidden", elem_id="tooltip"):
with gr.Row():
with gr.Tab("上传文件", elem_id="interact-panel"):
gr.Markdown("请上传本地文件/压缩包供“函数插件区”功能调用。请注意: 上传文件后会自动把输入区修改为相应路径。")
file_upload_2 = gr.Files(label="任何文件, 推荐上传压缩文件(zip, tar)", file_count="multiple", elem_id="elem_upload_float")
with gr.Tab("更换模型 & Prompt", elem_id="interact-panel"):
with gr.Tab("更换模型", elem_id="interact-panel"):
md_dropdown = gr.Dropdown(AVAIL_LLM_MODELS, value=LLM_MODEL, label="更换LLM模型/请求源").style(container=False)
top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.01,interactive=True, label="Top-p (nucleus sampling)",)
temperature = gr.Slider(minimum=-0, maximum=2.0, value=1.0, step=0.01, interactive=True, label="Temperature",)
@@ -160,18 +161,11 @@ def main():
checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "浮动输入区", "输入清除键", "插件参数区"],
value=["基础功能区", "函数插件区"], label="显示/隐藏功能区", elem_id='cbs').style(container=False)
checkboxes_2 = gr.CheckboxGroup(["自定义菜单"],
value=[], label="显示/隐藏自定义菜单", elem_id='cbs').style(container=False)
value=[], label="显示/隐藏自定义菜单", elem_id='cbsc').style(container=False)
dark_mode_btn = gr.Button("切换界面明暗 ☀", variant="secondary").style(size="sm")
dark_mode_btn.click(None, None, None, _js="""() => {
if (document.querySelectorAll('.dark').length) {
document.querySelectorAll('.dark').forEach(el => el.classList.remove('dark'));
} else {
document.querySelector('body').classList.add('dark');
}
}""",
)
dark_mode_btn.click(None, None, None, _js=js_code_for_toggle_darkmode)
with gr.Tab("帮助", elem_id="interact-panel"):
gr.Markdown(description)
gr.Markdown(help_menu_description)
with gr.Floating(init_x="20%", init_y="50%", visible=False, width="40%", drag="top") as area_input_secondary:
with gr.Accordion("浮动输入区", open=True, elem_id="input-panel2"):
@@ -186,16 +180,6 @@ def main():
stopBtn2 = gr.Button("停止", variant="secondary"); stopBtn2.style(size="sm")
clearBtn2 = gr.Button("清除", variant="secondary", visible=False); clearBtn2.style(size="sm")
def to_cookie_str(d):
# Pickle the dictionary and encode it as a string
pickled_dict = pickle.dumps(d)
cookie_value = base64.b64encode(pickled_dict).decode('utf-8')
return cookie_value
def from_cookie_str(c):
# Decode the base64-encoded string and unpickle it into a dictionary
pickled_dict = base64.b64decode(c.encode('utf-8'))
return pickle.loads(pickled_dict)
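These two helpers (moved into `themes.theme`, per the import change above) round-trip a dict through pickle and base64 so it can be stored in a browser cookie. A quick sketch of the round trip (the values are illustrative):

```python
settings = {"custom_bnt": {"Title": "My Button"}}
cookie_value = to_cookie_str(settings)    # printable base64 string
restored = from_cookie_str(cookie_value)  # original dict again
assert restored == settings
```

Note that unpickling client-supplied data is only safe if the cookie cannot be tampered with; a signed serializer would be a more defensive design choice.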
with gr.Floating(init_x="20%", init_y="50%", visible=False, width="40%", drag="top") as area_customize:
with gr.Accordion("自定义菜单", open=True, elem_id="edit-panel"):
@@ -227,11 +211,11 @@ def main():
else:
ret.update({predefined_btns[basic_btn_dropdown_]: gr.update(visible=True, value=basic_fn_title)})
ret.update({cookies: cookies_})
try: persistent_cookie_ = from_cookie_str(persistent_cookie_)  # persistent cookie to dict
except: persistent_cookie_ = {}
persistent_cookie_["custom_bnt"] = customize_fn_overwrite_ # dict update new value
persistent_cookie_ = to_cookie_str(persistent_cookie_) # persistent cookie to dict
ret.update({persistent_cookie: persistent_cookie_}) # write persistent cookie
persistent_cookie_["custom_bnt"] = customize_fn_overwrite_ # dict update new value
persistent_cookie_ = to_cookie_str(persistent_cookie_)  # dict back to persistent cookie string
ret.update({persistent_cookie: persistent_cookie_}) # write persistent cookie
return ret
def reflesh_btn(persistent_cookie_, cookies_):
@@ -252,10 +236,11 @@ def main():
else: ret.update({predefined_btns[k]: gr.update(visible=True, value=v['Title'])})
return ret
basic_fn_load.click(reflesh_btn, [persistent_cookie, cookies],[cookies, *customize_btns.values(), *predefined_btns.values()])
basic_fn_load.click(reflesh_btn, [persistent_cookie, cookies], [cookies, *customize_btns.values(), *predefined_btns.values()])
h = basic_fn_confirm.click(assign_btn, [persistent_cookie, cookies, basic_btn_dropdown, basic_fn_title, basic_fn_prefix, basic_fn_suffix],
[persistent_cookie, cookies, *customize_btns.values(), *predefined_btns.values()])
h.then(None, [persistent_cookie], None, _js="""(persistent_cookie)=>{setCookie("persistent_cookie", persistent_cookie, 5);}""") # save persistent cookie
# save persistent cookie
h.then(None, [persistent_cookie], None, _js="""(persistent_cookie)=>{setCookie("persistent_cookie", persistent_cookie, 5);}""")
# 功能区显示开关与功能区的互动
def fn_area_visibility(a):
@@ -305,8 +290,8 @@ def main():
click_handle = btn.click(fn=ArgsGeneralWrapper(predict), inputs=[*input_combo, gr.State(True), gr.State(btn.value)], outputs=output_combo)
cancel_handles.append(click_handle)
# 文件上传区接收文件后与chatbot的互动
file_upload.upload(on_file_uploaded, [file_upload, chatbot, txt, txt2, checkboxes, cookies], [chatbot, txt, txt2, cookies])
file_upload_2.upload(on_file_uploaded, [file_upload_2, chatbot, txt, txt2, checkboxes, cookies], [chatbot, txt, txt2, cookies])
file_upload.upload(on_file_uploaded, [file_upload, chatbot, txt, txt2, checkboxes, cookies], [chatbot, txt, txt2, cookies]).then(None, None, None, _js=r"()=>{toast_push('上传完毕 ...'); cancel_loading_status();}")
file_upload_2.upload(on_file_uploaded, [file_upload_2, chatbot, txt, txt2, checkboxes, cookies], [chatbot, txt, txt2, cookies]).then(None, None, None, _js=r"()=>{toast_push('上传完毕 ...'); cancel_loading_status();}")
# 函数插件-固定按钮区
for k in plugins:
if not plugins[k].get("AsButton", True): continue
@@ -342,18 +327,7 @@ def main():
None,
[secret_css],
None,
_js="""(css) => {
var existingStyles = document.querySelectorAll("style[data-loaded-css]");
for (var i = 0; i < existingStyles.length; i++) {
var style = existingStyles[i];
style.parentNode.removeChild(style);
}
var styleElement = document.createElement('style');
styleElement.setAttribute('data-loaded-css', css);
styleElement.innerHTML = css;
document.head.appendChild(styleElement);
}
"""
_js=js_code_for_css_changing
)
# 随变按钮的回调函数注册
def route(request: gr.Request, k, *args, **kwargs):
@@ -385,27 +359,10 @@ def main():
rad.feed(cookies['uuid'].hex, audio)
audio_mic.stream(deal_audio, inputs=[audio_mic, cookies])
def init_cookie(cookies, chatbot):
# 为每一位访问的用户赋予一个独一无二的uuid编码
cookies.update({'uuid': uuid.uuid4()})
return cookies
demo.load(init_cookie, inputs=[cookies, chatbot], outputs=[cookies])
darkmode_js = """(dark) => {
dark = dark == "True";
if (document.querySelectorAll('.dark').length) {
if (!dark){
document.querySelectorAll('.dark').forEach(el => el.classList.remove('dark'));
}
} else {
if (dark){
document.querySelector('body').classList.add('dark');
}
}
}"""
load_cookie_js = """(persistent_cookie) => {
return getCookie("persistent_cookie");
}"""
demo.load(None, inputs=None, outputs=[persistent_cookie], _js=load_cookie_js)
darkmode_js = js_code_for_darkmode_init
demo.load(None, inputs=None, outputs=[persistent_cookie], _js=js_code_for_persistent_cookie_init)
demo.load(None, inputs=[dark_mode], outputs=None, _js=darkmode_js) # 配置暗色主题或亮色主题
demo.load(None, inputs=[gr.Textbox(LAYOUT, visible=False)], outputs=None, _js='(LAYOUT)=>{GptAcademicJavaScriptInit(LAYOUT);}')
@@ -418,7 +375,7 @@ def main():
def auto_updates(): time.sleep(0); auto_update()
def open_browser(): time.sleep(2); webbrowser.open_new_tab(f"http://localhost:{PORT}")
def warm_up_mods(): time.sleep(4); warm_up_modules()
def warm_up_mods(): time.sleep(6); warm_up_modules()
threading.Thread(target=auto_updates, name="self-upgrade", daemon=True).start() # 查看自动更新
threading.Thread(target=open_browser, name="open-browser", daemon=True).start() # 打开浏览器页面

View File

@@ -32,4 +32,4 @@ P.S. If you successfully integrate a new LLM by following the steps below, feel free to open a Pull R
5. Once your tests pass, make the final changes in `request_llms/bridge_all.py` to wire your model fully into the framework (one look at that file makes the required edits obvious; see the sketch below)
6. Update the `LLM_MODEL` configuration, then run `python main.py` to test the final result
6. Update the `LLM_MODEL` configuration, then run `python main.py` to test the final result
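Step 5 amounts to registering two callables per model in the `model_info` dict of `request_llms/bridge_all.py`, mirroring the `gemini-pro` entries added in the next diff. A hedged sketch of such an entry (the `bridge_my_model` module and `my-model` name are placeholders, not real files):

```python
# request_llms/bridge_all.py -- illustrative only
from .bridge_my_model import predict as my_model_ui                           # streaming, drives the chat UI
from .bridge_my_model import predict_no_ui_long_connection as my_model_noui  # blocking, used by plugins

model_info.update({
    "my-model": {
        "fn_with_ui": my_model_ui,
        "fn_without_ui": my_model_noui,
        "endpoint": None,                  # None for SDK-based models, a URL for raw-HTTP models
        "max_token": 4096,
        "tokenizer": tokenizer_gpt35,      # GPT-3.5 tokenizer reused as an approximate counter
        "token_cnt": get_token_num_gpt35,
    },
})
```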

View File

@@ -28,6 +28,9 @@ from .bridge_chatglm3 import predict as chatglm3_ui
from .bridge_qianfan import predict_no_ui_long_connection as qianfan_noui
from .bridge_qianfan import predict as qianfan_ui
from .bridge_google_gemini import predict as genai_ui
from .bridge_google_gemini import predict_no_ui_long_connection as genai_noui
colors = ['#FF00FF', '#00FFFF', '#FF0000', '#990099', '#009999', '#990044']
class LazyloadTiktoken(object):
@@ -246,6 +249,22 @@ model_info = {
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"gemini-pro": {
"fn_with_ui": genai_ui,
"fn_without_ui": genai_noui,
"endpoint": None,
"max_token": 1024 * 32,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"gemini-pro-vision": {
"fn_with_ui": genai_ui,
"fn_without_ui": genai_noui,
"endpoint": None,
"max_token": 1024 * 32,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
}
# -=-=-=-=-=-=- api2d 对齐支持 -=-=-=-=-=-=-
@@ -431,14 +450,14 @@ if "chatglm_onnx" in AVAIL_LLM_MODELS:
})
except:
print(trimmed_format_exc())
if "qwen" in AVAIL_LLM_MODELS:
if "qwen-local" in AVAIL_LLM_MODELS:
try:
from .bridge_qwen import predict_no_ui_long_connection as qwen_noui
from .bridge_qwen import predict as qwen_ui
from .bridge_qwen_local import predict_no_ui_long_connection as qwen_local_noui
from .bridge_qwen_local import predict as qwen_local_ui
model_info.update({
"qwen": {
"fn_with_ui": qwen_ui,
"fn_without_ui": qwen_noui,
"qwen-local": {
"fn_with_ui": qwen_local_ui,
"fn_without_ui": qwen_local_noui,
"endpoint": None,
"max_token": 4096,
"tokenizer": tokenizer_gpt35,
@@ -447,16 +466,32 @@ if "qwen" in AVAIL_LLM_MODELS:
})
except:
print(trimmed_format_exc())
if "chatgpt_website" in AVAIL_LLM_MODELS: # 接入一些逆向工程https://github.com/acheong08/ChatGPT-to-API/
if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-max" in AVAIL_LLM_MODELS: # zhipuai
try:
from .bridge_chatgpt_website import predict_no_ui_long_connection as chatgpt_website_noui
from .bridge_chatgpt_website import predict as chatgpt_website_ui
from .bridge_qwen import predict_no_ui_long_connection as qwen_noui
from .bridge_qwen import predict as qwen_ui
model_info.update({
"chatgpt_website": {
"fn_with_ui": chatgpt_website_ui,
"fn_without_ui": chatgpt_website_noui,
"endpoint": openai_endpoint,
"max_token": 4096,
"qwen-turbo": {
"fn_with_ui": qwen_ui,
"fn_without_ui": qwen_noui,
"endpoint": None,
"max_token": 6144,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"qwen-plus": {
"fn_with_ui": qwen_ui,
"fn_without_ui": qwen_noui,
"endpoint": None,
"max_token": 30720,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
"qwen-max": {
"fn_with_ui": qwen_ui,
"fn_without_ui": qwen_noui,
"endpoint": None,
"max_token": 28672,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
}

View File

@@ -51,7 +51,8 @@ def decode_chunk(chunk):
chunkjson = json.loads(chunk_decoded[6:])
has_choices = 'choices' in chunkjson
if has_choices: choice_valid = (len(chunkjson['choices']) > 0)
if has_choices and choice_valid: has_content = "content" in chunkjson['choices'][0]["delta"]
if has_choices and choice_valid: has_content = ("content" in chunkjson['choices'][0]["delta"])
if has_content: has_content = (chunkjson['choices'][0]["delta"]["content"] is not None)
if has_choices and choice_valid: has_role = "role" in chunkjson['choices'][0]["delta"]
except:
pass
@@ -101,20 +102,25 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
result = ''
json_data = None
while True:
try: chunk = next(stream_response).decode()
try: chunk = next(stream_response)
except StopIteration:
break
except requests.exceptions.ConnectionError:
chunk = next(stream_response).decode() # 失败了,重试一次?再失败就没办法了。
if len(chunk)==0: continue
if not chunk.startswith('data:'):
error_msg = get_full_error(chunk.encode('utf8'), stream_response).decode()
chunk = next(stream_response) # 失败了,重试一次?再失败就没办法了。
chunk_decoded, chunkjson, has_choices, choice_valid, has_content, has_role = decode_chunk(chunk)
if len(chunk_decoded)==0: continue
if not chunk_decoded.startswith('data:'):
error_msg = get_full_error(chunk, stream_response).decode()
if "reduce the length" in error_msg:
raise ConnectionAbortedError("OpenAI拒绝了请求:" + error_msg)
else:
raise RuntimeError("OpenAI拒绝了请求" + error_msg)
if ('data: [DONE]' in chunk): break # api2d 正常完成
json_data = json.loads(chunk.lstrip('data:'))['choices'][0]
if ('data: [DONE]' in chunk_decoded): break # api2d 正常完成
# 提前读取一些信息 (用于判断异常)
if has_choices and not choice_valid:
# 一些垃圾第三方接口的出现这样的错误
continue
json_data = chunkjson['choices'][0]
delta = json_data["delta"]
if len(delta) == 0: break
if "role" in delta: continue

View File

@@ -0,0 +1,109 @@
# encoding: utf-8
# @Time : 2023/12/21
# @Author : Spike
# @Descr :
import json
import re
import os
import time
from request_llms.com_google import GoogleChatInit
from toolbox import get_conf, update_ui, update_ui_lastest_msg, have_any_recent_upload_image_files, trimmed_format_exc
proxies, TIMEOUT_SECONDS, MAX_RETRY = get_conf('proxies', 'TIMEOUT_SECONDS', 'MAX_RETRY')
timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check proxy settings in config.py.' + \
'网络错误,检查代理服务器是否可用,以及代理设置的格式是否正确,格式须是[协议]://[地址]:[端口],缺一不可。'
def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None,
console_slience=False):
# 检查API_KEY
if get_conf("GEMINI_API_KEY") == "":
raise ValueError(f"请配置 GEMINI_API_KEY。")
genai = GoogleChatInit()
watch_dog_patience = 5 # 看门狗的耐心, 设置5秒即可
gpt_replying_buffer = ''
stream_response = genai.generate_chat(inputs, llm_kwargs, history, sys_prompt)
for response in stream_response:
results = response.decode()
match = re.search(r'"text":\s*"((?:[^"\\]|\\.)*)"', results, flags=re.DOTALL)
error_match = re.search(r'\"message\":\s*\"(.*?)\"', results, flags=re.DOTALL)
if match:
try:
paraphrase = json.loads('{"text": "%s"}' % match.group(1))
except:
raise ValueError(f"解析GEMINI消息出错。")
buffer = paraphrase['text']
gpt_replying_buffer += buffer
if len(observe_window) >= 1:
observe_window[0] = gpt_replying_buffer
if len(observe_window) >= 2:
if (time.time() - observe_window[1]) > watch_dog_patience: raise RuntimeError("程序终止。")
if error_match:
raise RuntimeError(f'{gpt_replying_buffer} 对话错误')
return gpt_replying_buffer
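Here `observe_window` doubles as an output channel and a watchdog: slot 0 receives the partial reply, slot 1 holds the caller's most recent heartbeat timestamp, and the function aborts once the heartbeat is older than `watch_dog_patience`. A minimal sketch of the caller side (simplified from how the framework's multi-threaded plugins drive bridge functions; assumes GEMINI_API_KEY is configured):

```python
import threading
import time

observe_window = ["", time.time()]          # [partial reply, last heartbeat]
worker = threading.Thread(
    target=predict_no_ui_long_connection,
    args=("你好", {"llm_model": "gemini-pro"}, [], "", observe_window),
)
worker.start()
while worker.is_alive():
    observe_window[1] = time.time()         # keep feeding the watchdog; stop feeding it to cancel within ~5 s
    print("partial:", observe_window[0])
    time.sleep(1)
```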
def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream=True, additional_fn=None):
# 检查API_KEY
if get_conf("GEMINI_API_KEY") == "":
yield from update_ui_lastest_msg(f"请配置 GEMINI_API_KEY。", chatbot=chatbot, history=history, delay=0)
return
if "vision" in llm_kwargs["llm_model"]:
have_recent_file, image_paths = have_any_recent_upload_image_files(chatbot)
def make_media_input(inputs, image_paths):
for image_path in image_paths:
inputs = inputs + f'<br/><br/><div align="center"><img src="file={os.path.abspath(image_path)}"></div>'
return inputs
if have_recent_file:
inputs = make_media_input(inputs, image_paths)
chatbot.append((inputs, ""))
yield from update_ui(chatbot=chatbot, history=history)
genai = GoogleChatInit()
retry = 0
while True:
try:
stream_response = genai.generate_chat(inputs, llm_kwargs, history, system_prompt)
break
except Exception as e:
retry += 1
chatbot[-1] = (chatbot[-1][0], trimmed_format_exc())
yield from update_ui(chatbot=chatbot, history=history, msg="请求失败") # 刷新界面
if retry > MAX_RETRY: return # give up only after MAX_RETRY attempts, so the retry counter actually takes effect
gpt_replying_buffer = ""
gpt_security_policy = ""
history.extend([inputs, ''])
for response in stream_response:
results = response.decode("utf-8") # 被这个解码给耍了。。
gpt_security_policy += results
match = re.search(r'"text":\s*"((?:[^"\\]|\\.)*)"', results, flags=re.DOTALL)
error_match = re.search(r'\"message\":\s*\"(.*)\"', results, flags=re.DOTALL)
if match:
try:
paraphrase = json.loads('{"text": "%s"}' % match.group(1))
except:
raise ValueError(f"解析GEMINI消息出错。")
gpt_replying_buffer += paraphrase['text'] # 使用 json 解析库进行处理
chatbot[-1] = (inputs, gpt_replying_buffer)
history[-1] = gpt_replying_buffer
yield from update_ui(chatbot=chatbot, history=history)
if error_match:
history = history[:-2] # 错误的不纳入对话
chatbot[-1] = (inputs, gpt_replying_buffer + f"对话错误请查看message\n\n```\n{error_match.group(1)}\n```")
yield from update_ui(chatbot=chatbot, history=history)
raise RuntimeError('对话错误')
if not gpt_replying_buffer:
history = history[:-2] # 错误的不纳入对话
chatbot[-1] = (inputs, gpt_replying_buffer + f"触发了Google的安全访问策略没有回答\n\n```\n{gpt_security_policy}\n```")
yield from update_ui(chatbot=chatbot, history=history)
if __name__ == '__main__':
llm_kwargs = {'llm_model': 'gemini-pro'}
result = predict('Write a long story about a magic backpack.', llm_kwargs, llm_kwargs, [])
for i in result:
print(i)

View File

@@ -1,59 +1,62 @@
model_name = "Qwen"
cmd_to_install = "`pip install -r request_llms/requirements_qwen.txt`"
import time
import os
from toolbox import update_ui, get_conf, update_ui_lastest_msg
from toolbox import check_packages, report_exception
from toolbox import ProxyNetworkActivate, get_conf
from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns
model_name = 'Qwen'
def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
"""
⭐多线程方法
函数的说明请见 request_llms/bridge_all.py
"""
watch_dog_patience = 5
response = ""
from .com_qwenapi import QwenRequestInstance
sri = QwenRequestInstance()
for response in sri.generate(inputs, llm_kwargs, history, sys_prompt):
if len(observe_window) >= 1:
observe_window[0] = response
if len(observe_window) >= 2:
if (time.time()-observe_window[1]) > watch_dog_patience: raise RuntimeError("程序终止。")
return response
# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 Local Model
# ------------------------------------------------------------------------------------------------------------------------
class GetQwenLMHandle(LocalLLMHandle):
def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
"""
⭐单线程方法
函数的说明请见 request_llms/bridge_all.py
"""
chatbot.append((inputs, ""))
yield from update_ui(chatbot=chatbot, history=history)
def load_model_info(self):
# 🏃‍♂️🏃‍♂️🏃‍♂️ 子进程执行
self.model_name = model_name
self.cmd_to_install = cmd_to_install
# 尝试导入依赖,如果缺少依赖,则给出安装建议
try:
check_packages(["dashscope"])
except:
yield from update_ui_lastest_msg(f"导入软件依赖失败。使用该模型需要额外依赖,安装方法```pip install --upgrade dashscope```。",
chatbot=chatbot, history=history, delay=0)
return
def load_model_and_tokenizer(self):
# 🏃‍♂️🏃‍♂️🏃‍♂️ 子进程执行
# from modelscope import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
with ProxyNetworkActivate('Download_LLM'):
model_id = get_conf('QWEN_MODEL_SELECTION')
self._tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True, resume_download=True)
# use fp16
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True).eval()
model.generation_config = GenerationConfig.from_pretrained(model_id, trust_remote_code=True) # 可指定不同的生成长度、top_p等相关超参
self._model = model
# 检查DASHSCOPE_API_KEY
if get_conf("DASHSCOPE_API_KEY") == "":
yield from update_ui_lastest_msg(f"请配置 DASHSCOPE_API_KEY。",
chatbot=chatbot, history=history, delay=0)
return
return self._model, self._tokenizer
if additional_fn is not None:
from core_functional import handle_core_functionality
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
def llm_stream_generator(self, **kwargs):
# 🏃‍♂️🏃‍♂️🏃‍♂️ 子进程执行
def adaptor(kwargs):
query = kwargs['query']
max_length = kwargs['max_length']
top_p = kwargs['top_p']
temperature = kwargs['temperature']
history = kwargs['history']
return query, max_length, top_p, temperature, history
# 开始接收回复
from .com_qwenapi import QwenRequestInstance
sri = QwenRequestInstance()
for response in sri.generate(inputs, llm_kwargs, history, system_prompt):
chatbot[-1] = (inputs, response)
yield from update_ui(chatbot=chatbot, history=history)
query, max_length, top_p, temperature, history = adaptor(kwargs)
for response in self._model.chat_stream(self._tokenizer, query, history=history):
yield response
def try_to_import_special_deps(self, **kwargs):
# import something that will raise error if the user does not install requirement_*.txt
# 🏃‍♂️🏃‍♂️🏃‍♂️ 主进程执行
import importlib
importlib.import_module('modelscope')
# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 GPT-Academic Interface
# ------------------------------------------------------------------------------------------------------------------------
predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetQwenLMHandle, model_name)
# 总结输出
if response == f"[Local Message] 等待{model_name}响应中 ...":
response = f"[Local Message] {model_name}响应异常 ..."
history.extend([inputs, response])
yield from update_ui(chatbot=chatbot, history=history)

View File

@@ -0,0 +1,59 @@
model_name = "Qwen_Local"
cmd_to_install = "`pip install -r request_llms/requirements_qwen_local.txt`"
from toolbox import ProxyNetworkActivate, get_conf
from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns
# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 Local Model
# ------------------------------------------------------------------------------------------------------------------------
class GetQwenLMHandle(LocalLLMHandle):
def load_model_info(self):
# 🏃‍♂️🏃‍♂️🏃‍♂️ 子进程执行
self.model_name = model_name
self.cmd_to_install = cmd_to_install
def load_model_and_tokenizer(self):
# 🏃‍♂️🏃‍♂️🏃‍♂️ 子进程执行
# from modelscope import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
with ProxyNetworkActivate('Download_LLM'):
model_id = get_conf('QWEN_LOCAL_MODEL_SELECTION')
self._tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True, resume_download=True)
# use fp16
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True).eval()
model.generation_config = GenerationConfig.from_pretrained(model_id, trust_remote_code=True) # 可指定不同的生成长度、top_p等相关超参
self._model = model
return self._model, self._tokenizer
def llm_stream_generator(self, **kwargs):
# 🏃‍♂️🏃‍♂️🏃‍♂️ 子进程执行
def adaptor(kwargs):
query = kwargs['query']
max_length = kwargs['max_length']
top_p = kwargs['top_p']
temperature = kwargs['temperature']
history = kwargs['history']
return query, max_length, top_p, temperature, history
query, max_length, top_p, temperature, history = adaptor(kwargs)
for response in self._model.chat_stream(self._tokenizer, query, history=history):
yield response
def try_to_import_special_deps(self, **kwargs):
# import something that will raise error if the user does not install requirement_*.txt
# 🏃‍♂️🏃‍♂️🏃‍♂️ 主进程执行
import importlib
importlib.import_module('modelscope')
# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 GPT-Academic Interface
# ------------------------------------------------------------------------------------------------------------------------
predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetQwenLMHandle, model_name)
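Everything model-specific lives in the three overridden methods, which run in a spawned child process; `get_local_llm_predict_fns` then wraps the handle into the two standard bridge callables. They are consumed like any other bridge, as in `tests/test_llms.py` further down (the first call downloads the model via `ProxyNetworkActivate`):

```python
from request_llms.bridge_qwen_local import predict_no_ui_long_connection

llm_kwargs = {"max_length": 4096, "top_p": 1, "temperature": 1}
result = predict_no_ui_long_connection(
    inputs="请问什么是质子?", llm_kwargs=llm_kwargs, history=["你好", "我好!"], sys_prompt=""
)
print("final result:", result)
```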

View File

@@ -26,7 +26,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
from .com_sparkapi import SparkRequestInstance
sri = SparkRequestInstance()
for response in sri.generate(inputs, llm_kwargs, history, sys_prompt):
for response in sri.generate(inputs, llm_kwargs, history, sys_prompt, use_image_api=False):
if len(observe_window) >= 1:
observe_window[0] = response
if len(observe_window) >= 2:
@@ -52,7 +52,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
# 开始接收回复
from .com_sparkapi import SparkRequestInstance
sri = SparkRequestInstance()
for response in sri.generate(inputs, llm_kwargs, history, system_prompt):
for response in sri.generate(inputs, llm_kwargs, history, system_prompt, use_image_api=True):
chatbot[-1] = (inputs, response)
yield from update_ui(chatbot=chatbot, history=history)

request_llms/com_google.py (new file, 228 lines)
View File

@@ -0,0 +1,228 @@
# encoding: utf-8
# @Time : 2023/12/25
# @Author : Spike
# @Descr :
import json
import os
import re
import requests
from typing import List, Dict, Tuple
from toolbox import get_conf, encode_image, get_pictures_list
proxies, TIMEOUT_SECONDS = get_conf("proxies", "TIMEOUT_SECONDS")
"""
========================================================================
第五部分 一些文件处理方法
files_filter_handler 根据type过滤文件
input_encode_handler 提取input中的文件并解析
file_manifest_filter_html 根据type过滤文件, 并解析为html or md 文本
link_mtime_to_md 文件增加本地时间参数,避免下载到缓存文件
html_view_blank 超链接
html_local_file 本地文件取相对路径
to_markdown_tabs 文件list 转换为 md tab
"""
def files_filter_handler(file_list):
new_list = []
filter_ = [
"png",
"jpg",
"jpeg",
"bmp",
"svg",
"webp",
"ico",
"tif",
"tiff",
"raw",
"eps",
]
for file in file_list:
file = str(file).replace("file=", "")
if os.path.exists(file):
if str(os.path.basename(file)).split(".")[-1] in filter_:
new_list.append(file)
return new_list
def input_encode_handler(inputs, llm_kwargs):
md_encode = [] # initialized up front so the function always returns an (inputs, md_encode) tuple
if llm_kwargs["most_recent_uploaded"].get("path"):
image_paths = get_pictures_list(llm_kwargs["most_recent_uploaded"]["path"])
for md_path in image_paths:
type_ = os.path.splitext(md_path)[1].replace(".", "")
type_ = "jpeg" if type_ == "jpg" else type_
md_encode.append({"data": encode_image(md_path), "type": type_})
return inputs, md_encode
def file_manifest_filter_html(file_list, filter_: list = None, md_type=False):
new_list = []
if not filter_:
filter_ = [
"png",
"jpg",
"jpeg",
"bmp",
"svg",
"webp",
"ico",
"tif",
"tiff",
"raw",
"eps",
]
for file in file_list:
if str(os.path.basename(file)).split(".")[-1] in filter_:
new_list.append(html_local_img(file, md=md_type))
elif os.path.exists(file):
new_list.append(link_mtime_to_md(file))
else:
new_list.append(file)
return new_list
def link_mtime_to_md(file):
link_local = html_local_file(file)
link_name = os.path.basename(file)
a = f"[{link_name}]({link_local}?{os.path.getmtime(file)})"
return a
def html_local_file(file):
base_path = os.path.dirname(__file__) # 项目目录
if os.path.exists(str(file)):
file = f'file={file.replace(base_path, ".")}'
return file
def html_local_img(__file, layout="left", max_width=None, max_height=None, md=True):
style = ""
if max_width is not None:
style += f"max-width: {max_width};"
if max_height is not None:
style += f"max-height: {max_height};"
__file = html_local_file(__file)
a = f'<div align="{layout}"><img src="{__file}" style="{style}"></div>'
if md:
a = f"![{__file}]({__file})"
return a
def to_markdown_tabs(head: list, tabs: list, alignment=":---:", column=False):
"""
Args:
head: list of column headers, e.g. []
tabs: table values in column-major order, e.g. [[col1], [col2], [col3], [col4]]
alignment: ":---" left-aligned, ":---:" center-aligned, "---:" right-aligned
column: True to keep data in columns, False to keep data in rows (default).
Returns:
A string representation of the markdown table.
"""
if column:
transposed_tabs = list(map(list, zip(*tabs)))
else:
transposed_tabs = tabs
# Find the maximum length among the columns
max_len = max(len(column) for column in transposed_tabs)
tab_format = "| %s "
tabs_list = "".join([tab_format % i for i in head]) + "|\n"
tabs_list += "".join([tab_format % alignment for i in head]) + "|\n"
for i in range(max_len):
row_data = [tab[i] if i < len(tab) else "" for tab in transposed_tabs]
row_data = file_manifest_filter_html(row_data, filter_=None)
tabs_list += "".join([tab_format % i for i in row_data]) + "|\n"
return tabs_list
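A usage sketch for `to_markdown_tabs` (plain string cells; a cell holding an existing local path would instead be rewritten into a link by `file_manifest_filter_html`):

```python
head = ["File", "Size"]
tabs = [["a.txt", "b.txt"], ["1KB", "2KB"]]   # column-major: [[col1], [col2]]
print(to_markdown_tabs(head, tabs))
# | File | Size |
# | :---: | :---: |
# | a.txt | 1KB |
# | b.txt | 2KB |
```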
class GoogleChatInit:
def __init__(self):
self.url_gemini = "https://generativelanguage.googleapis.com/v1beta/models/%m:streamGenerateContent?key=%k"
def generate_chat(self, inputs, llm_kwargs, history, system_prompt):
headers, payload = self.generate_message_payload(
inputs, llm_kwargs, history, system_prompt
)
response = requests.post(
url=self.url_gemini,
headers=headers,
data=json.dumps(payload),
stream=True,
proxies=proxies,
timeout=TIMEOUT_SECONDS,
)
return response.iter_lines()
def __conversation_user(self, user_input, llm_kwargs):
what_i_have_asked = {"role": "user", "parts": []}
if "vision" not in self.url_gemini:
input_ = user_input
encode_img = []
else:
input_, encode_img = input_encode_handler(user_input, llm_kwargs=llm_kwargs)
what_i_have_asked["parts"].append({"text": input_})
if encode_img:
for data in encode_img:
what_i_have_asked["parts"].append(
{
"inline_data": {
"mime_type": f"image/{data['type']}",
"data": data["data"],
}
}
)
return what_i_have_asked
def __conversation_history(self, history, llm_kwargs):
messages = []
conversation_cnt = len(history) // 2
if conversation_cnt:
for index in range(0, 2 * conversation_cnt, 2):
what_i_have_asked = self.__conversation_user(history[index], llm_kwargs)
what_gpt_answer = {
"role": "model",
"parts": [{"text": history[index + 1]}],
}
messages.append(what_i_have_asked)
messages.append(what_gpt_answer)
return messages
def generate_message_payload(
self, inputs, llm_kwargs, history, system_prompt
) -> Tuple[Dict, Dict]:
messages = [
# {"role": "system", "parts": [{"text": system_prompt}]}, # gemini 不允许对话轮次为偶数,所以这个没有用,看后续支持吧。。。
# {"role": "user", "parts": [{"text": ""}]},
# {"role": "model", "parts": [{"text": ""}]}
]
self.url_gemini = self.url_gemini.replace(
"%m", llm_kwargs["llm_model"]
).replace("%k", get_conf("GEMINI_API_KEY"))
header = {"Content-Type": "application/json"}
if "vision" not in self.url_gemini: # 不是vision 才处理history
messages.extend(
self.__conversation_history(history, llm_kwargs)
) # 处理 history
messages.append(self.__conversation_user(inputs, llm_kwargs)) # 处理用户对话
payload = {
"contents": messages,
"generationConfig": {
# "maxOutputTokens": 800,
"stopSequences": str(llm_kwargs.get("stop", "")).split(" "),
"temperature": llm_kwargs.get("temperature", 1),
"topP": llm_kwargs.get("top_p", 0.8),
"topK": 10,
},
}
return header, payload
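For a plain-text `gemini-pro` request, the payload assembled above ends up roughly like this (an illustrative dump with a one-turn history, assuming `GEMINI_API_KEY` is configured):

```python
header, payload = GoogleChatInit().generate_message_payload(
    "你好", {"llm_model": "gemini-pro", "top_p": 0.8, "temperature": 1}, ["早", "早上好!"], ""
)
# payload ≈ {
#     "contents": [
#         {"role": "user",  "parts": [{"text": "早"}]},
#         {"role": "model", "parts": [{"text": "早上好!"}]},
#         {"role": "user",  "parts": [{"text": "你好"}]},
#     ],
#     "generationConfig": {"stopSequences": [""], "temperature": 1, "topP": 0.8, "topK": 10},
# }
```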
if __name__ == "__main__":
google = GoogleChatInit()
# print(google.generate_message_payload('你好呀', {}, ['123123', '3123123'], ''))
# google.input_encode_handler('123123[123123](./123123), ![53425](./asfafa/fff.jpg)')

View File

@@ -0,0 +1,94 @@
from http import HTTPStatus
from toolbox import get_conf
import threading
import logging
timeout_bot_msg = '[Local Message] Request timeout. Network error.'
class QwenRequestInstance():
def __init__(self):
import dashscope
self.time_to_yield_event = threading.Event()
self.time_to_exit_event = threading.Event()
self.result_buf = ""
def validate_key():
DASHSCOPE_API_KEY = get_conf("DASHSCOPE_API_KEY")
if DASHSCOPE_API_KEY == '': return False
return True
if not validate_key():
raise RuntimeError('请配置 DASHSCOPE_API_KEY')
dashscope.api_key = get_conf("DASHSCOPE_API_KEY")
def generate(self, inputs, llm_kwargs, history, system_prompt):
# import _thread as thread
from dashscope import Generation
QWEN_MODEL = {
'qwen-turbo': Generation.Models.qwen_turbo,
'qwen-plus': Generation.Models.qwen_plus,
'qwen-max': Generation.Models.qwen_max,
}[llm_kwargs['llm_model']]
top_p = llm_kwargs.get('top_p', 0.8)
if top_p == 0: top_p += 1e-5
if top_p == 1: top_p -= 1e-5
self.result_buf = ""
responses = Generation.call(
model=QWEN_MODEL,
messages=generate_message_payload(inputs, llm_kwargs, history, system_prompt),
top_p=top_p,
temperature=llm_kwargs.get('temperature', 1.0),
result_format='message',
stream=True,
incremental_output=True
)
for response in responses:
if response.status_code == HTTPStatus.OK:
if response.output.choices[0].finish_reason == 'stop':
yield self.result_buf
break
elif response.output.choices[0].finish_reason == 'length':
self.result_buf += "[Local Message] 生成长度过长,后续输出被截断"
yield self.result_buf
break
else:
self.result_buf += response.output.choices[0].message.content
yield self.result_buf
else:
self.result_buf += f"[Local Message] 请求错误:状态码:{response.status_code},错误码:{response.code},消息:{response.message}"
yield self.result_buf
break
logging.info(f'[raw_input] {inputs}')
logging.info(f'[response] {self.result_buf}')
return self.result_buf
def generate_message_payload(inputs, llm_kwargs, history, system_prompt):
conversation_cnt = len(history) // 2
if system_prompt == '': system_prompt = 'Hello!'
messages = [{"role": "user", "content": system_prompt}, {"role": "assistant", "content": "Certainly!"}]
if conversation_cnt:
for index in range(0, 2*conversation_cnt, 2):
what_i_have_asked = {}
what_i_have_asked["role"] = "user"
what_i_have_asked["content"] = history[index]
what_gpt_answer = {}
what_gpt_answer["role"] = "assistant"
what_gpt_answer["content"] = history[index+1]
if what_i_have_asked["content"] != "":
if what_gpt_answer["content"] == "":
continue
if what_gpt_answer["content"] == timeout_bot_msg:
continue
messages.append(what_i_have_asked)
messages.append(what_gpt_answer)
else:
messages[-1]['content'] = what_gpt_answer['content']
what_i_ask_now = {}
what_i_ask_now["role"] = "user"
what_i_ask_now["content"] = inputs
messages.append(what_i_ask_now)
return messages
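DashScope expects strictly alternating user/assistant turns, which is presumably why the system prompt is smuggled in as a fake opening exchange rather than a dedicated `system` role. A worked example of the resulting message list:

```python
msgs = generate_message_payload("继续", {}, ["写一首诗", "好的,这是一首诗……"], "你是诗人")
# [
#     {"role": "user",      "content": "你是诗人"},      # system prompt as a fake user turn
#     {"role": "assistant", "content": "Certainly!"},
#     {"role": "user",      "content": "写一首诗"},
#     {"role": "assistant", "content": "好的,这是一首诗……"},
#     {"role": "user",      "content": "继续"},
# ]
```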

View File

@@ -72,12 +72,12 @@ class SparkRequestInstance():
self.result_buf = ""
def generate(self, inputs, llm_kwargs, history, system_prompt):
def generate(self, inputs, llm_kwargs, history, system_prompt, use_image_api=False):
llm_kwargs = llm_kwargs
history = history
system_prompt = system_prompt
import _thread as thread
thread.start_new_thread(self.create_blocking_request, (inputs, llm_kwargs, history, system_prompt))
thread.start_new_thread(self.create_blocking_request, (inputs, llm_kwargs, history, system_prompt, use_image_api))
while True:
self.time_to_yield_event.wait(timeout=1)
if self.time_to_yield_event.is_set():
@@ -86,7 +86,7 @@ class SparkRequestInstance():
return self.result_buf
def create_blocking_request(self, inputs, llm_kwargs, history, system_prompt):
def create_blocking_request(self, inputs, llm_kwargs, history, system_prompt, use_image_api):
if llm_kwargs['llm_model'] == 'sparkv2':
gpt_url = self.gpt_url_v2
elif llm_kwargs['llm_model'] == 'sparkv3':
@@ -94,10 +94,12 @@ class SparkRequestInstance():
else:
gpt_url = self.gpt_url
file_manifest = []
if llm_kwargs.get('most_recent_uploaded'):
if use_image_api and llm_kwargs.get('most_recent_uploaded'):
if llm_kwargs['most_recent_uploaded'].get('path'):
file_manifest = get_pictures_list(llm_kwargs['most_recent_uploaded']['path'])
gpt_url = self.gpt_url_img
if len(file_manifest) > 0:
print('正在使用讯飞图片理解API')
gpt_url = self.gpt_url_img
wsParam = Ws_Param(self.appid, self.api_key, self.api_secret, gpt_url)
websocket.enableTrace(False)
wsUrl = wsParam.create_url()
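The net effect of the new `use_image_api` flag: a lingering image upload can only reroute a request onto the image-understanding endpoint from the interactive chat path, while plugins and the no-UI bridge always stay on the text endpoint. A usage sketch (assuming Spark credentials are configured):

```python
from request_llms.com_sparkapi import SparkRequestInstance

llm_kwargs = {"llm_model": "sparkv3", "most_recent_uploaded": {"path": "private_upload/xxx"}}
sri = SparkRequestInstance()
# use_image_api=False: the earlier image upload is ignored and the text endpoint is used
for partial in sri.generate("你好", llm_kwargs, history=[], system_prompt="", use_image_api=False):
    print(partial)
```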

View File

@@ -183,11 +183,11 @@ class LocalLLMHandle(Process):
def stream_chat(self, **kwargs):
# ⭐run in main process
if self.get_state() == "`准备就绪`":
yield "`正在等待线程锁,排队中请稍 ...`"
yield "`正在等待线程锁,排队中请稍 ...`"
with self.threadLock:
if self.parent.poll():
yield "`排队中请稍 ...`"
yield "`排队中请稍 ...`"
self.clear_pending_messages()
self.parent.send(kwargs)
std_out = ""

View File

@@ -2,4 +2,4 @@ protobuf
cpm_kernels
torch>=1.10
mdtex2html
sentencepiece
sentencepiece

View File

@@ -6,5 +6,3 @@ sentencepiece
numpy
onnxruntime
sentencepiece
streamlit
streamlit-chat

View File

@@ -3,4 +3,4 @@ jtorch >= 0.1.3
torch
torchvision
pandas
jieba
jieba

View File

@@ -5,5 +5,3 @@ accelerate
matplotlib
huggingface_hub
triton
streamlit

View File

@@ -1,4 +1 @@
modelscope
transformers_stream_generator
auto-gptq
optimum
dashscope

View File

@@ -0,0 +1,5 @@
modelscope
transformers_stream_generator
auto-gptq
optimum
urllib3<2

View File

@@ -1 +1 @@
slack-sdk==3.21.3
slack-sdk==3.21.3

View File

@@ -1,8 +1,10 @@
./docs/gradio-3.32.6-py3-none-any.whl
pypdf2==2.12.1
zhipuai<2
tiktoken>=0.3.3
requests[socks]
pydantic==1.10.11
protobuf==3.18
transformers>=4.27.1
scipdf_parser>=0.52
python-markdown-math

View File

@@ -3,12 +3,14 @@
# """
def validate_path():
import os, sys
dir_name = os.path.dirname(__file__)
root_dir_assume = os.path.abspath(os.path.dirname(__file__) + '/..')
root_dir_assume = os.path.abspath(os.path.dirname(__file__) + "/..")
os.chdir(root_dir_assume)
sys.path.append(root_dir_assume)
validate_path() # validate path so you can run from base directory
validate_path() # validate path so you can run from base directory
if __name__ == "__main__":
# from request_llms.bridge_newbingfree import predict_no_ui_long_connection
# from request_llms.bridge_moss import predict_no_ui_long_connection
@@ -18,19 +20,19 @@ if __name__ == "__main__":
# from request_llms.bridge_internlm import predict_no_ui_long_connection
# from request_llms.bridge_deepseekcoder import predict_no_ui_long_connection
# from request_llms.bridge_qwen_7B import predict_no_ui_long_connection
from request_llms.bridge_qwen import predict_no_ui_long_connection
from request_llms.bridge_qwen_local import predict_no_ui_long_connection
# from request_llms.bridge_spark import predict_no_ui_long_connection
# from request_llms.bridge_zhipu import predict_no_ui_long_connection
# from request_llms.bridge_chatglm3 import predict_no_ui_long_connection
llm_kwargs = {
'max_length': 4096,
'top_p': 1,
'temperature': 1,
"max_length": 4096,
"top_p": 1,
"temperature": 1,
}
result = predict_no_ui_long_connection( inputs="请问什么是质子?",
llm_kwargs=llm_kwargs,
history=["你好", "我好!"],
sys_prompt="")
print('final result:', result)
result = predict_no_ui_long_connection(
inputs="请问什么是质子?", llm_kwargs=llm_kwargs, history=["你好", "我好!"], sys_prompt=""
)
print("final result:", result)

View File

@@ -29,16 +29,20 @@ md = """
请随时告诉我您的需求,我会尽力提供帮助。如果您有任何问题或需要解答的议题,请随时提问。
"""
def validate_path():
import os, sys
dir_name = os.path.dirname(__file__)
root_dir_assume = os.path.abspath(os.path.dirname(__file__) + '/..')
root_dir_assume = os.path.abspath(os.path.dirname(__file__) + "/..")
os.chdir(root_dir_assume)
sys.path.append(root_dir_assume)
validate_path() # validate path so you can run from base directory
validate_path() # validate path so you can run from base directory
from toolbox import markdown_convertion
html = markdown_convertion(md)
print(html)
with open('test.html', 'w', encoding='utf-8') as f:
f.write(html)
with open("test.html", "w", encoding="utf-8") as f:
f.write(html)

View File

@@ -4,16 +4,28 @@
import os, sys
def validate_path(): dir_name = os.path.dirname(__file__); root_dir_assume = os.path.abspath(dir_name + '/..'); os.chdir(root_dir_assume); sys.path.append(root_dir_assume)
validate_path() # 返回项目根路径
def validate_path():
dir_name = os.path.dirname(__file__)
root_dir_assume = os.path.abspath(dir_name + "/..")
os.chdir(root_dir_assume)
sys.path.append(root_dir_assume)
validate_path() # 返回项目根路径
if __name__ == "__main__":
from tests.test_utils import plugin_test
# plugin_test(plugin='crazy_functions.函数动态生成->函数动态生成', main_input='交换图像的蓝色通道和红色通道', advanced_arg={"file_path_arg": "./build/ants.jpg"})
# plugin_test(plugin='crazy_functions.Latex输出PDF结果->Latex翻译中文并重新编译PDF', main_input="2307.07522")
plugin_test(plugin='crazy_functions.Latex输出PDF结果->Latex翻译中文并重新编译PDF', main_input="G:/SEAFILE_LOCAL/50503047/我的资料库/学位/paperlatex/aaai/Fu_8368_with_appendix")
plugin_test(
plugin="crazy_functions.Latex输出PDF结果->Latex翻译中文并重新编译PDF",
main_input="G:/SEAFILE_LOCAL/50503047/我的资料库/学位/paperlatex/aaai/Fu_8368_with_appendix",
)
# plugin_test(plugin='crazy_functions.虚空终端->虚空终端', main_input='修改api-key为sk-jhoejriotherjep')
@@ -34,7 +46,7 @@ if __name__ == "__main__":
# plugin_test(plugin='crazy_functions.批量翻译PDF文档_多线程->批量翻译PDF文档', main_input='crazy_functions/test_project/pdf_and_word/aaai.pdf')
# plugin_test(plugin='crazy_functions.谷歌检索小助手->谷歌检索小助手', main_input="https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=auto+reinforcement+learning&btnG=")
# plugin_test(plugin='crazy_functions.总结word文档->总结word文档', main_input="crazy_functions/test_project/pdf_and_word")
# plugin_test(plugin='crazy_functions.下载arxiv论文翻译摘要->下载arxiv论文并翻译摘要', main_input="1812.10695")
@@ -53,12 +65,11 @@ if __name__ == "__main__":
# plugin_test(plugin='crazy_functions.知识库文件注入->读取知识库作答', main_input="What is the installation method")
# plugin_test(plugin='crazy_functions.知识库文件注入->读取知识库作答', main_input="远程云服务器部署?")
# plugin_test(plugin='crazy_functions.Latex输出PDF结果->Latex翻译中文并重新编译PDF', main_input="2210.03629")
# advanced_arg = {"advanced_arg":"--llm_to_learn=gpt-3.5-turbo --prompt_prefix='根据下面的服装类型提示想象一个穿着者对这个人外貌、身处的环境、内心世界、人设进行描写。要求100字以内用第二人称。' --system_prompt=''" }
# plugin_test(plugin='crazy_functions.chatglm微调工具->微调数据集生成', main_input='build/dev.json', advanced_arg=advanced_arg)
# advanced_arg = {"advanced_arg":"--pre_seq_len=128 --learning_rate=2e-2 --num_gpus=1 --json_dataset='t_code.json' --ptuning_directory='/home/hmp/ChatGLM2-6B/ptuning' " }
# plugin_test(plugin='crazy_functions.chatglm微调工具->启动微调', main_input='build/dev.json', advanced_arg=advanced_arg)

View File

@@ -9,45 +9,52 @@ from functools import wraps
import sys
import os
def chat_to_markdown_str(chat):
result = ""
for i, cc in enumerate(chat):
result += f'\n\n{cc[0]}\n\n{cc[1]}'
if i != len(chat)-1:
result += '\n\n---'
result += f"\n\n{cc[0]}\n\n{cc[1]}"
if i != len(chat) - 1:
result += "\n\n---"
return result
def silence_stdout(func):
@wraps(func)
def wrapper(*args, **kwargs):
_original_stdout = sys.stdout
sys.stdout = open(os.devnull, 'w')
sys.stdout.reconfigure(encoding='utf-8')
sys.stdout = open(os.devnull, "w")
sys.stdout.reconfigure(encoding="utf-8")
for q in func(*args, **kwargs):
sys.stdout = _original_stdout
yield q
sys.stdout = open(os.devnull, 'w')
sys.stdout.reconfigure(encoding='utf-8')
sys.stdout = open(os.devnull, "w")
sys.stdout.reconfigure(encoding="utf-8")
sys.stdout.close()
sys.stdout = _original_stdout
return wrapper
def silence_stdout_fn(func):
@wraps(func)
def wrapper(*args, **kwargs):
_original_stdout = sys.stdout
sys.stdout = open(os.devnull, 'w')
sys.stdout.reconfigure(encoding='utf-8')
sys.stdout = open(os.devnull, "w")
sys.stdout.reconfigure(encoding="utf-8")
result = func(*args, **kwargs)
sys.stdout.close()
sys.stdout = _original_stdout
return result
return wrapper
class VoidTerminal():
class VoidTerminal:
def __init__(self) -> None:
pass
vt = VoidTerminal()
vt.get_conf = silence_stdout_fn(get_conf)
vt.set_conf = silence_stdout_fn(set_conf)
@@ -56,9 +63,27 @@ vt.get_plugin_handle = silence_stdout_fn(get_plugin_handle)
vt.get_plugin_default_kwargs = silence_stdout_fn(get_plugin_default_kwargs)
vt.get_chat_handle = silence_stdout_fn(get_chat_handle)
vt.get_chat_default_kwargs = silence_stdout_fn(get_chat_default_kwargs)
vt.chat_to_markdown_str = (chat_to_markdown_str)
proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY = \
vt.get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY')
vt.chat_to_markdown_str = chat_to_markdown_str
(
proxies,
WEB_PORT,
LLM_MODEL,
CONCURRENT_COUNT,
AUTHENTICATION,
CHATBOT_HEIGHT,
LAYOUT,
API_KEY,
) = vt.get_conf(
"proxies",
"WEB_PORT",
"LLM_MODEL",
"CONCURRENT_COUNT",
"AUTHENTICATION",
"CHATBOT_HEIGHT",
"LAYOUT",
"API_KEY",
)
def plugin_test(main_input, plugin, advanced_arg=None, debug=True):
from rich.live import Live
@@ -69,9 +94,9 @@ def plugin_test(main_input, plugin, advanced_arg=None, debug=True):
plugin = vt.get_plugin_handle(plugin)
plugin_kwargs = vt.get_plugin_default_kwargs()
plugin_kwargs['main_input'] = main_input
plugin_kwargs["main_input"] = main_input
if advanced_arg is not None:
plugin_kwargs['plugin_kwargs'] = advanced_arg
plugin_kwargs["plugin_kwargs"] = advanced_arg
if debug:
my_working_plugin = (plugin)(**plugin_kwargs)
else:
@@ -81,4 +106,4 @@ def plugin_test(main_input, plugin, advanced_arg=None, debug=True):
for cookies, chat, hist, msg in my_working_plugin:
md_str = vt.chat_to_markdown_str(chat)
md = Markdown(md_str)
live.update(md, refresh=True)
live.update(md, refresh=True)

View File

@@ -4,14 +4,25 @@
import os, sys
def validate_path(): dir_name = os.path.dirname(__file__); root_dir_assume = os.path.abspath(dir_name + '/..'); os.chdir(root_dir_assume); sys.path.append(root_dir_assume)
validate_path() # 返回项目根路径
def validate_path():
dir_name = os.path.dirname(__file__)
root_dir_assume = os.path.abspath(dir_name + "/..")
os.chdir(root_dir_assume)
sys.path.append(root_dir_assume)
validate_path() # 返回项目根路径
if __name__ == "__main__":
from tests.test_utils import plugin_test
plugin_test(plugin='crazy_functions.知识库问答->知识库文件注入', main_input="./README.md")
plugin_test(plugin="crazy_functions.知识库问答->知识库文件注入", main_input="./README.md")
plugin_test(plugin='crazy_functions.知识库问答->读取知识库作答', main_input="What is the installation method")
plugin_test(
plugin="crazy_functions.知识库问答->读取知识库作答",
main_input="What is the installation method",
)
plugin_test(plugin='crazy_functions.知识库问答->读取知识库作答', main_input="远程云服务器部署?")
plugin_test(plugin="crazy_functions.知识库问答->读取知识库作答", main_input="远程云服务器部署?")

View File

@@ -94,6 +94,10 @@
background-color: var(--block-background-fill) !important;
}
#cbsc {
background-color: var(--block-background-fill) !important;
}
#interact-panel .form {
border: hidden
}
@@ -111,4 +115,4 @@
border: solid;
border-width: thin;
border-top-width: 0;
}
}

View File

@@ -1,9 +1,13 @@
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
// 第 1 部分: 工具函数
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
function gradioApp() {
// https://github.com/GaiZhenbiao/ChuanhuChatGPT/tree/main/web_assets/javascript
const elems = document.getElementsByTagName('gradio-app');
const elem = elems.length == 0 ? document : elems[0];
if (elem !== document) {
elem.getElementById = function(id) {
elem.getElementById = function (id) {
return document.getElementById(id);
};
}
@@ -12,31 +16,168 @@ function gradioApp() {
function setCookie(name, value, days) {
var expires = "";
if (days) {
var date = new Date();
date.setTime(date.getTime() + (days * 24 * 60 * 60 * 1000));
expires = "; expires=" + date.toUTCString();
var date = new Date();
date.setTime(date.getTime() + (days * 24 * 60 * 60 * 1000));
expires = "; expires=" + date.toUTCString();
}
document.cookie = name + "=" + value + expires + "; path=/";
}
function getCookie(name) {
var decodedCookie = decodeURIComponent(document.cookie);
var cookies = decodedCookie.split(';');
for (var i = 0; i < cookies.length; i++) {
var cookie = cookies[i].trim();
if (cookie.indexOf(name + "=") === 0) {
return cookie.substring(name.length + 1, cookie.length);
}
var cookie = cookies[i].trim();
if (cookie.indexOf(name + "=") === 0) {
return cookie.substring(name.length + 1, cookie.length);
}
}
return null;
}
}
let toastCount = 0;
function toast_push(msg, duration) {
duration = isNaN(duration) ? 3000 : duration;
const existingToasts = document.querySelectorAll('.toast');
existingToasts.forEach(toast => {
toast.style.top = `${parseInt(toast.style.top, 10) - 70}px`;
});
const m = document.createElement('div');
m.innerHTML = msg;
m.classList.add('toast');
m.style.cssText = `font-size: var(--text-md) !important; color: rgb(255, 255, 255); background-color: rgba(0, 0, 0, 0.6); padding: 10px 15px; border-radius: 4px; position: fixed; top: ${50 + toastCount * 70}%; left: 50%; transform: translateX(-50%); width: auto; text-align: center; transition: top 0.3s;`;
document.body.appendChild(m);
setTimeout(function () {
m.style.opacity = '0';
setTimeout(function () {
document.body.removeChild(m);
toastCount--;
}, 500);
}, duration);
toastCount++;
}
function toast_up(msg) {
var m = document.getElementById('toast_up');
if (m) {
document.body.removeChild(m); // remove the loader from the body
}
m = document.createElement('div');
m.id = 'toast_up';
m.innerHTML = msg;
m.style.cssText = "font-size: var(--text-md) !important; color: rgb(255, 255, 255); background-color: rgba(0, 0, 100, 0.6); padding: 10px 15px; margin: 0 0 0 -60px; border-radius: 4px; position: fixed; top: 50%; left: 50%; width: auto; text-align: center;";
document.body.appendChild(m);
}
function toast_down() {
var m = document.getElementById('toast_up');
if (m) {
document.body.removeChild(m); // remove the loader from the body
}
}
function begin_loading_status() {
// Create the loader div and add styling
var loader = document.createElement('div');
loader.id = 'Js_File_Loading';
var C1 = document.createElement('div');
var C2 = document.createElement('div');
// var C3 = document.createElement('span');
// C3.textContent = '上传中...'
// C3.style.position = "fixed";
// C3.style.top = "50%";
// C3.style.left = "50%";
// C3.style.width = "80px";
// C3.style.height = "80px";
// C3.style.margin = "-40px 0 0 -40px";
C1.style.position = "fixed";
C1.style.top = "50%";
C1.style.left = "50%";
C1.style.width = "80px";
C1.style.height = "80px";
C1.style.borderLeft = "12px solid #00f3f300";
C1.style.borderRight = "12px solid #00f3f300";
C1.style.borderTop = "12px solid #82aaff";
C1.style.borderBottom = "12px solid #82aaff"; // Added for effect
C1.style.borderRadius = "50%";
C1.style.margin = "-40px 0 0 -40px";
C1.style.animation = "spinAndPulse 2s linear infinite";
C2.style.position = "fixed";
C2.style.top = "50%";
C2.style.left = "50%";
C2.style.width = "40px";
C2.style.height = "40px";
C2.style.borderLeft = "12px solid #00f3f300";
C2.style.borderRight = "12px solid #00f3f300";
C2.style.borderTop = "12px solid #33c9db";
C2.style.borderBottom = "12px solid #33c9db"; // Added for effect
C2.style.borderRadius = "50%";
C2.style.margin = "-20px 0 0 -20px";
C2.style.animation = "spinAndPulse2 2s linear infinite";
loader.appendChild(C1);
loader.appendChild(C2);
// loader.appendChild(C3);
document.body.appendChild(loader); // Add the loader to the body
// Set the CSS animation keyframes for spin and pulse to be synchronized
var styleSheet = document.createElement('style');
styleSheet.id = 'Js_File_Loading_Style';
styleSheet.textContent = `
@keyframes spinAndPulse {
0% { transform: rotate(0deg) scale(1); }
25% { transform: rotate(90deg) scale(1.1); }
50% { transform: rotate(180deg) scale(1); }
75% { transform: rotate(270deg) scale(0.9); }
100% { transform: rotate(360deg) scale(1); }
}
@keyframes spinAndPulse2 {
0% { transform: rotate(-90deg);}
25% { transform: rotate(-180deg);}
50% { transform: rotate(-270deg);}
75% { transform: rotate(-360deg);}
100% { transform: rotate(-450deg);}
}
`;
document.head.appendChild(styleSheet);
}
function cancel_loading_status() {
// remove the loader from the body
var loadingElement = document.getElementById('Js_File_Loading');
if (loadingElement) {
document.body.removeChild(loadingElement);
}
var loadingStyle = document.getElementById('Js_File_Loading_Style');
if (loadingStyle) {
document.head.removeChild(loadingStyle);
}
// create new listen event
let clearButton = document.querySelectorAll('div[id*="elem_upload"] button[aria-label="Clear"]');
for (let button of clearButton) {
button.addEventListener('click', function () {
setTimeout(function () {
register_upload_event();
}, 50);
});
}
}
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
// 第 2 部分: 复制按钮
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
function addCopyButton(botElement) {
// https://github.com/GaiZhenbiao/ChuanhuChatGPT/tree/main/web_assets/javascript
// Copy bot button
@@ -45,11 +186,10 @@ function addCopyButton(botElement) {
const messageBtnColumnElement = botElement.querySelector('.message-btn-row');
if (messageBtnColumnElement) {
// Do something if .message-btn-column exists, for example, remove it
// messageBtnColumnElement.remove();
// if .message-btn-column exists
return;
}
var copyButton = document.createElement('button');
copyButton.classList.add('copy-bot-btn');
copyButton.setAttribute('aria-label', 'Copy');
@@ -98,47 +238,63 @@ function chatbotContentChanged(attempt = 1, force = false) {
}
}
function chatbotAutoHeight(){
// 自动调整高度
function update_height(){
var { panel_height_target, chatbot_height, chatbot } = get_elements(true);
if (panel_height_target!=chatbot_height)
{
var pixelString = panel_height_target.toString() + 'px';
chatbot.style.maxHeight = pixelString; chatbot.style.height = pixelString;
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
// 第 3 部分: chatbot动态高度调整
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
function chatbotAutoHeight() {
// 自动调整高度:立即
function update_height() {
var { height_target, chatbot_height, chatbot } = get_elements(true);
if (height_target != chatbot_height) {
var pixelString = height_target.toString() + 'px';
chatbot.style.maxHeight = pixelString; chatbot.style.height = pixelString;
}
}
function update_height_slow(){
var { panel_height_target, chatbot_height, chatbot } = get_elements();
if (panel_height_target!=chatbot_height)
{
new_panel_height = (panel_height_target - chatbot_height)*0.5 + chatbot_height;
if (Math.abs(new_panel_height - panel_height_target) < 10){
new_panel_height = panel_height_target;
// 自动调整高度:缓慢
function update_height_slow() {
var { height_target, chatbot_height, chatbot } = get_elements();
if (height_target != chatbot_height) {
// sign = (height_target - chatbot_height)/Math.abs(height_target - chatbot_height);
// speed = Math.max(Math.abs(height_target - chatbot_height), 1);
new_panel_height = (height_target - chatbot_height) * 0.5 + chatbot_height;
if (Math.abs(new_panel_height - height_target) < 10) {
new_panel_height = height_target;
}
// console.log(chatbot_height, panel_height_target, new_panel_height);
var pixelString = new_panel_height.toString() + 'px';
chatbot.style.maxHeight = pixelString; chatbot.style.height = pixelString;
chatbot.style.maxHeight = pixelString; chatbot.style.height = pixelString;
}
}
monitoring_input_box()
update_height();
setInterval(function() {
update_height_slow()
}, 50); // 每100毫秒执行一次
window.addEventListener('resize', function() { update_height(); });
window.addEventListener('scroll', function() { update_height_slow(); });
setInterval(function () { update_height_slow() }, 50); // 每50毫秒执行一次
}
function GptAcademicJavaScriptInit(LAYOUT = "LEFT-RIGHT") {
chatbotIndicator = gradioApp().querySelector('#gpt-chatbot > div.wrap');
var chatbotObserver = new MutationObserver(() => {
chatbotContentChanged(1);
});
chatbotObserver.observe(chatbotIndicator, { attributes: true, childList: true, subtree: true });
if (LAYOUT === "LEFT-RIGHT") {chatbotAutoHeight();}
swapped = false;
function swap_input_area() {
// Get the elements to be swapped
var element1 = document.querySelector("#input-panel");
var element2 = document.querySelector("#basic-panel");
// Get the parent of the elements
var parent = element1.parentNode;
// Get the next sibling of element2
var nextSibling = element2.nextSibling;
// Swap the elements
parent.insertBefore(element2, element1);
parent.insertBefore(element1, nextSibling);
if (swapped) {swapped = false;}
else {swapped = true;}
}
function get_elements(consider_state_panel=false) {
function get_elements(consider_state_panel = false) {
var chatbot = document.querySelector('#gpt-chatbot > div.wrap.svelte-18telvq');
if (!chatbot) {
chatbot = document.querySelector('#gpt-chatbot');
@@ -147,23 +303,96 @@ function get_elements(consider_state_panel=false) {
const panel2 = document.querySelector('#basic-panel').getBoundingClientRect()
const panel3 = document.querySelector('#plugin-panel').getBoundingClientRect();
// const panel4 = document.querySelector('#interact-panel').getBoundingClientRect();
const panel5 = document.querySelector('#input-panel2').getBoundingClientRect();
const panel_active = document.querySelector('#state-panel').getBoundingClientRect();
if (consider_state_panel || panel_active.height < 25){
if (consider_state_panel || panel_active.height < 25) {
document.state_panel_height = panel_active.height;
}
// 25 是chatbot的label高度, 16 是右侧的gap
var panel_height_target = panel1.height + panel2.height + panel3.height + 0 + 0 - 25 + 16*2;
var height_target = panel1.height + panel2.height + panel3.height + 0 + 0 - 25 + 16 * 2;
// 禁止动态的state-panel高度影响
panel_height_target = panel_height_target + (document.state_panel_height-panel_active.height)
var panel_height_target = parseInt(panel_height_target);
height_target = height_target + (document.state_panel_height - panel_active.height)
var height_target = parseInt(height_target);
var chatbot_height = chatbot.style.height;
// 交换输入区位置,使得输入区始终可用
if (!swapped){
if (panel1.top!=0 && (panel1.bottom + panel1.top)/2 < 0){ swap_input_area(); }
}
else if (swapped){
if (panel2.top!=0 && panel2.top > 0){ swap_input_area(); }
}
// 调整高度
const err_tor = 5;
if (Math.abs(panel1.left - chatbot.getBoundingClientRect().left) < err_tor){
// 是否处于窄屏模式
height_target = window.innerHeight * 0.6;
}else{
// 调整高度
const chatbot_height_exceed = 15;
const chatbot_height_exceed_m = 10;
b_panel = Math.max(panel1.bottom, panel2.bottom, panel3.bottom)
if (b_panel >= window.innerHeight - chatbot_height_exceed) {
height_target = window.innerHeight - chatbot.getBoundingClientRect().top - chatbot_height_exceed_m;
}
else if (b_panel < window.innerHeight * 0.75) {
height_target = window.innerHeight * 0.8;
}
}
var chatbot_height = parseInt(chatbot_height);
return { panel_height_target, chatbot_height, chatbot };
return { height_target, chatbot_height, chatbot };
}
function add_func_paste(input) {
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
// 第 4 部分: 粘贴、拖拽文件上传
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
var elem_upload = null;
var elem_upload_float = null;
var elem_input_main = null;
var elem_input_float = null;
var elem_chatbot = null;
var elem_upload_component_float = null;
var elem_upload_component = null;
var exist_file_msg = '⚠️请先删除上传区(左上方)中的历史文件,再尝试上传。'
function locate_upload_elems(){
elem_upload = document.getElementById('elem_upload')
elem_upload_float = document.getElementById('elem_upload_float')
elem_input_main = document.getElementById('user_input_main')
elem_input_float = document.getElementById('user_input_float')
elem_chatbot = document.getElementById('gpt-chatbot')
elem_upload_component_float = elem_upload_float.querySelector("input[type=file]");
elem_upload_component = elem_upload.querySelector("input[type=file]");
}
async function upload_files(files) {
let totalSizeMb = 0
elem_upload_component_float = elem_upload_float.querySelector("input[type=file]");
if (files && files.length > 0) {
// 执行具体的上传逻辑
if (elem_upload_component_float) {
for (let i = 0; i < files.length; i++) {
// 将从文件数组中获取的文件大小(单位为字节)转换为MB
totalSizeMb += files[i].size / 1024 / 1024;
}
// 检查文件总大小是否超过20MB
if (totalSizeMb > 20) {
toast_push('⚠️文件夹大于 20MB 🚀上传文件中', 3000);
}
let event = new Event("change");
Object.defineProperty(event, "target", { value: elem_upload_component_float, enumerable: true });
Object.defineProperty(event, "currentTarget", { value: elem_upload_component_float, enumerable: true });
Object.defineProperty(elem_upload_component_float, "files", { value: files, enumerable: true });
elem_upload_component_float.dispatchEvent(event);
} else {
console.log(exist_file_msg);
toast_push(exist_file_msg, 3000);
}
}
}
function register_func_paste(input) {
let paste_files = [];
if (input) {
input.addEventListener("paste", async function (e) {
@@ -180,7 +409,7 @@ function add_func_paste(input) {
}
if (paste_files.length > 0) {
// 按照文件列表执行批量上传逻辑
await paste_upload_files(paste_files);
await upload_files(paste_files);
paste_files = []
}
@@ -189,72 +418,110 @@ function add_func_paste(input) {
}
}
function register_func_drag(elem) {
    if (elem) {
        const dragEvents = ["dragover"];
        const leaveEvents = ["dragleave", "dragend", "drop"];
        const onDrag = function (e) {
            e.preventDefault();
            e.stopPropagation();
            if (elem_upload_float.querySelector("input[type=file]")) {
                toast_up('⚠️释放以上传文件')
            } else {
                toast_up(exist_file_msg)
            }
        };
        const onLeave = function (e) {
            toast_down();
            e.preventDefault();
            e.stopPropagation();
        };
        dragEvents.forEach(event => {
            elem.addEventListener(event, onDrag);
        });
        leaveEvents.forEach(event => {
            elem.addEventListener(event, onLeave);
        });
        elem.addEventListener("drop", async function (e) {
            const files = e.dataTransfer.files;
            await upload_files(files);
        });
    }
}
// toast notification helper
function toast_push(msg, duration) {
    duration = isNaN(duration) ? 3000 : duration;
    const m = document.createElement('div');
    m.innerHTML = msg;
    m.style.cssText = "font-size: var(--text-md) !important; color: rgb(255, 255, 255);background-color: rgba(0, 0, 0, 0.6);padding: 10px 15px;margin: 0 0 0 -60px;border-radius: 4px;position: fixed; top: 50%;left: 50%;width: auto; text-align: center;";
    document.body.appendChild(m);
    setTimeout(function () {
        var d = 0.5;
        m.style.opacity = '0';
        setTimeout(function () {
            document.body.removeChild(m)
        }, d * 1000);
    }, duration);
}
function elem_upload_component_pop_message(elem) {
if (elem) {
const dragEvents = ["dragover"];
const leaveEvents = ["dragleave", "dragend", "drop"];
dragEvents.forEach(event => {
elem.addEventListener(event, function (e) {
e.preventDefault();
e.stopPropagation();
if (elem_upload_float.querySelector("input[type=file]")) {
toast_up('⚠️释放以上传文件')
} else {
toast_up(exist_file_msg)
}
});
});
leaveEvents.forEach(event => {
elem.addEventListener(event, function (e) {
toast_down();
e.preventDefault();
e.stopPropagation();
});
});
elem.addEventListener("drop", async function (e) {
toast_push('正在上传中,请稍等。', 2000);
begin_loading_status();
});
}
}
function register_upload_event() {
locate_upload_elems();
if (elem_upload_float) {
_upload = document.querySelector("#elem_upload_float div.center.boundedheight.flex")
elem_upload_component_pop_message(_upload);
}
if (elem_upload_component_float) {
elem_upload_component_float.addEventListener('change', function (event) {
toast_push('正在上传中,请稍等。', 2000);
begin_loading_status();
});
}
if (elem_upload_component) {
elem_upload_component.addEventListener('change', function (event) {
toast_push('正在上传中,请稍等。', 2000);
begin_loading_status();
});
} else {
toast_push("⚠️upload component not found", 3000);
}
}
function monitoring_input_box() {
elem_upload = document.getElementById('elem_upload')
elem_upload_float = document.getElementById('elem_upload_float')
elem_input_main = document.getElementById('user_input_main')
elem_input_float = document.getElementById('user_input_float')
register_upload_event();
if (elem_input_main) {
if (elem_input_main.querySelector("textarea")) {
register_func_paste(elem_input_main.querySelector("textarea"))
}
}
if (elem_input_float) {
if (elem_input_float.querySelector("textarea")){
add_func_paste(elem_input_float.querySelector("textarea"))
if (elem_input_float.querySelector("textarea")) {
register_func_paste(elem_input_float.querySelector("textarea"))
}
}
if (elem_chatbot) {
register_func_drag(elem_chatbot)
}
}
@@ -263,3 +530,119 @@ window.addEventListener("DOMContentLoaded", function () {
// const ga = document.getElementsByTagName("gradio-app");
gradioApp().addEventListener("render", monitoring_input_box);
});
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
// Part 5: audio button style changes
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
function audio_fn_init() {
let audio_component = document.getElementById('elem_audio');
if (audio_component) {
let buttonElement = audio_component.querySelector('button');
let specificElement = audio_component.querySelector('.hide.sr-only');
specificElement.remove();
buttonElement.childNodes[1].nodeValue = '启动麦克风';
buttonElement.addEventListener('click', function (event) {
event.stopPropagation();
toast_push('您启动了麦克风!下一步请点击“实时语音对话”启动语音对话。');
});
// find the voice plugin button
let buttons = document.querySelectorAll('button');
let audio_button = null;
for (let button of buttons) {
if (button.textContent.includes('语音')) {
audio_button = button;
break;
}
}
if (audio_button) {
audio_button.addEventListener('click', function () {
toast_push('您点击了“实时语音对话”启动语音对话。');
});
let parent_element = audio_component.parentElement; // move audio_component inside audio_button
audio_button.appendChild(audio_component);
buttonElement.style.cssText = 'border-color: #00ffe0;border-width: 2px; height: 25px;'
parent_element.remove();
audio_component.style.cssText = 'width: 250px;right: 0px;display: inline-flex;flex-flow: row-reverse wrap;place-content: stretch space-between;align-items: center;background-color: #ffffff00;';
}
}
}
function minor_ui_adjustment() {
let cbsc_area = document.getElementById('cbsc');
cbsc_area.style.paddingTop = '15px';
var bar_btn_width = [];
// automatically hide toolbar buttons that overflow the available width
function auto_hide_toolbar() {
var qq = document.getElementById('tooltip');
var tab_nav = qq.getElementsByClassName('tab-nav');
if (tab_nav.length == 0){ return; }
var btn_list = tab_nav[0].getElementsByTagName('button')
if (btn_list.length == 0){ return; }
// page width
var page_width = document.documentElement.clientWidth;
// number of buttons that are always kept visible
const always_preserve = 2;
// right edge of the last always-visible button
var cur_right = btn_list[always_preserve-1].getBoundingClientRect().right;
if (bar_btn_width.length == 0){
// first run: record the width of every button
for (var i = 0; i < btn_list.length; i++) {
bar_btn_width.push(btn_list[i].getBoundingClientRect().width);
}
}
// process each remaining button
for (var i = always_preserve; i < btn_list.length; i++) {
var element = btn_list[i];
var element_right = element.getBoundingClientRect().right;
if (element_right!=0){ cur_right = element_right; }
if (element.style.display === 'none') {
if ((cur_right + bar_btn_width[i]) < (page_width * 0.37)) {
// show this button again
element.style.display = 'block';
// console.log('show');
return;
}else{
return;
}
} else {
if (cur_right > (page_width * 0.38)) {
// hide this button and all buttons to its right
for (var j = i; j < btn_list.length; j++) {
if (btn_list[j].style.display !== 'none') {
btn_list[j].style.display = 'none';
}
}
// console.log('show');
return;
}
}
}
}
setInterval(function () {
auto_hide_toolbar()
}, 200); // run every 200 ms
}
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
// Part 6: JS initialization
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
function GptAcademicJavaScriptInit(LAYOUT = "LEFT-RIGHT") {
audio_fn_init();
minor_ui_adjustment();
chatbotIndicator = gradioApp().querySelector('#gpt-chatbot > div.wrap');
var chatbotObserver = new MutationObserver(() => {
chatbotContentChanged(1);
});
chatbotObserver.observe(chatbotIndicator, { attributes: true, childList: true, subtree: true });
if (LAYOUT === "LEFT-RIGHT") { chatbotAutoHeight(); }
}


@@ -17,7 +17,7 @@
--button-primary-text-color-hover: #FFFFFF;
--button-secondary-text-color: #FFFFFF;
--button-secondary-text-color-hover: #FFFFFF;
--border-bottom-right-radius: 0px;
--border-bottom-left-radius: 0px;
@@ -51,8 +51,8 @@
--button-primary-border-color-hover: #3cff00;
--button-secondary-border-color: #3cff00;
--button-secondary-border-color-hover: #3cff00;
--body-background-fill: #000000;
--background-fill-primary: #000000;
--background-fill-secondary: #000000;
@@ -103,7 +103,7 @@
--button-primary-text-color-hover: #FFFFFF;
--button-secondary-text-color: #FFFFFF;
--button-secondary-text-color-hover: #FFFFFF;
--border-bottom-right-radius: 0px;
@@ -138,8 +138,8 @@
--button-primary-border-color-hover: #3cff00;
--button-secondary-border-color: #3cff00;
--button-secondary-border-color-hover: #3cff00;
--body-background-fill: #000000;
--background-fill-primary: #000000;
--background-fill-secondary: #000000;
@@ -479,4 +479,3 @@
.dark .codehilite .vi { color: #89DDFF } /* Name.Variable.Instance */
.dark .codehilite .vm { color: #82AAFF } /* Name.Variable.Magic */
.dark .codehilite .il { color: #F78C6C } /* Literal.Number.Integer.Long */


@@ -1,18 +1,26 @@
import os
import gradio as gr
from toolbox import get_conf
CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf('CODE_HIGHLIGHT', 'ADD_WAIFU', 'LAYOUT')
CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf("CODE_HIGHLIGHT", "ADD_WAIFU", "LAYOUT")
theme_dir = os.path.dirname(__file__)
def adjust_theme():
try:
color_er = gr.themes.utils.colors.fuchsia
set_theme = gr.themes.Default(
primary_hue=gr.themes.utils.colors.orange,
neutral_hue=gr.themes.utils.colors.gray,
font=["Helvetica", "Microsoft YaHei", "ui-sans-serif", "sans-serif", "system-ui"],
font_mono=["ui-monospace", "Consolas", "monospace"])
font=[
"Helvetica",
"Microsoft YaHei",
"ui-sans-serif",
"sans-serif",
"system-ui",
],
font_mono=["ui-monospace", "Consolas", "monospace"],
)
set_theme.set(
# Colors
input_background_fill_dark="*neutral_800",
@@ -59,9 +67,9 @@ def adjust_theme():
button_cancel_text_color_dark="white",
)
with open(os.path.join(theme_dir, 'common.js'), 'r', encoding='utf8') as f:
with open(os.path.join(theme_dir, "common.js"), "r", encoding="utf8") as f:
js = f"<script>{f.read()}</script>"
# add a cute Live2D mascot
if ADD_WAIFU:
js += """
@@ -69,21 +77,26 @@ def adjust_theme():
<script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
<script src="file=docs/waifu_plugin/autoload.js"></script>
"""
if not hasattr(gr, 'RawTemplateResponse'):
if not hasattr(gr, "RawTemplateResponse"):
gr.RawTemplateResponse = gr.routes.templates.TemplateResponse
gradio_original_template_fn = gr.RawTemplateResponse
def gradio_new_template_fn(*args, **kwargs):
res = gradio_original_template_fn(*args, **kwargs)
res.body = res.body.replace(b'</html>', f'{js}</html>'.encode("utf8"))
res.body = res.body.replace(b"</html>", f"{js}</html>".encode("utf8"))
res.init_headers()
return res
gr.routes.templates.TemplateResponse = gradio_new_template_fn # override gradio template
gr.routes.templates.TemplateResponse = (
gradio_new_template_fn # override gradio template
)
except:
set_theme = None
print('gradio版本较旧, 不能自定义字体和颜色')
print("gradio版本较旧, 不能自定义字体和颜色")
return set_theme
with open(os.path.join(theme_dir, 'contrast.css'), "r", encoding="utf-8") as f:
with open(os.path.join(theme_dir, "contrast.css"), "r", encoding="utf-8") as f:
advanced_css = f.read()
with open(os.path.join(theme_dir, 'common.css'), "r", encoding="utf-8") as f:
with open(os.path.join(theme_dir, "common.css"), "r", encoding="utf-8") as f:
advanced_css += f.read()

themes/cookies.py (new file)

@@ -303,4 +303,3 @@
.dark .codehilite .vi { color: #89DDFF } /* Name.Variable.Instance */
.dark .codehilite .vm { color: #82AAFF } /* Name.Variable.Magic */
.dark .codehilite .il { color: #F78C6C } /* Literal.Number.Integer.Long */


@@ -1,17 +1,26 @@
import os
import gradio as gr
from toolbox import get_conf
CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf('CODE_HIGHLIGHT', 'ADD_WAIFU', 'LAYOUT')
CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf("CODE_HIGHLIGHT", "ADD_WAIFU", "LAYOUT")
theme_dir = os.path.dirname(__file__)
def adjust_theme():
try:
color_er = gr.themes.utils.colors.fuchsia
set_theme = gr.themes.Default(
primary_hue=gr.themes.utils.colors.orange,
neutral_hue=gr.themes.utils.colors.gray,
font=["Helvetica", "Microsoft YaHei", "ui-sans-serif", "sans-serif", "system-ui"],
font_mono=["ui-monospace", "Consolas", "monospace"])
font=[
"Helvetica",
"Microsoft YaHei",
"ui-sans-serif",
"sans-serif",
"system-ui",
],
font_mono=["ui-monospace", "Consolas", "monospace"],
)
set_theme.set(
# Colors
input_background_fill_dark="*neutral_800",
@@ -58,7 +67,7 @@ def adjust_theme():
button_cancel_text_color_dark="white",
)
with open(os.path.join(theme_dir, 'common.js'), 'r', encoding='utf8') as f:
with open(os.path.join(theme_dir, "common.js"), "r", encoding="utf8") as f:
js = f"<script>{f.read()}</script>"
# add a cute Live2D mascot
@@ -68,21 +77,26 @@ def adjust_theme():
<script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
<script src="file=docs/waifu_plugin/autoload.js"></script>
"""
if not hasattr(gr, 'RawTemplateResponse'):
if not hasattr(gr, "RawTemplateResponse"):
gr.RawTemplateResponse = gr.routes.templates.TemplateResponse
gradio_original_template_fn = gr.RawTemplateResponse
def gradio_new_template_fn(*args, **kwargs):
res = gradio_original_template_fn(*args, **kwargs)
res.body = res.body.replace(b'</html>', f'{js}</html>'.encode("utf8"))
res.body = res.body.replace(b"</html>", f"{js}</html>".encode("utf8"))
res.init_headers()
return res
gr.routes.templates.TemplateResponse = gradio_new_template_fn # override gradio template
gr.routes.templates.TemplateResponse = (
gradio_new_template_fn # override gradio template
)
except:
set_theme = None
print('gradio版本较旧, 不能自定义字体和颜色')
print("gradio版本较旧, 不能自定义字体和颜色")
return set_theme
with open(os.path.join(theme_dir, 'default.css'), "r", encoding="utf-8") as f:
with open(os.path.join(theme_dir, "default.css"), "r", encoding="utf-8") as f:
advanced_css = f.read()
with open(os.path.join(theme_dir, 'common.css'), "r", encoding="utf-8") as f:
with open(os.path.join(theme_dir, "common.css"), "r", encoding="utf-8") as f:
advanced_css += f.read()


@@ -2,29 +2,36 @@ import logging
import os
import gradio as gr
from toolbox import get_conf, ProxyNetworkActivate
CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf('CODE_HIGHLIGHT', 'ADD_WAIFU', 'LAYOUT')
CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf("CODE_HIGHLIGHT", "ADD_WAIFU", "LAYOUT")
theme_dir = os.path.dirname(__file__)
def dynamic_set_theme(THEME):
set_theme = gr.themes.ThemeClass()
with ProxyNetworkActivate('Download_Gradio_Theme'):
logging.info('正在下载Gradio主题请稍等。')
if THEME.startswith('Huggingface-'): THEME = THEME.lstrip('Huggingface-')
if THEME.startswith('huggingface-'): THEME = THEME.lstrip('huggingface-')
with ProxyNetworkActivate("Download_Gradio_Theme"):
logging.info("正在下载Gradio主题请稍等。")
if THEME.startswith("Huggingface-"):
THEME = THEME.lstrip("Huggingface-")
if THEME.startswith("huggingface-"):
THEME = THEME.lstrip("huggingface-")
set_theme = set_theme.from_hub(THEME.lower())
return set_theme
def adjust_theme():
try:
set_theme = gr.themes.ThemeClass()
with ProxyNetworkActivate('Download_Gradio_Theme'):
logging.info('正在下载Gradio主题请稍等。')
THEME = get_conf('THEME')
if THEME.startswith('Huggingface-'): THEME = THEME.lstrip('Huggingface-')
if THEME.startswith('huggingface-'): THEME = THEME.lstrip('huggingface-')
with ProxyNetworkActivate("Download_Gradio_Theme"):
logging.info("正在下载Gradio主题请稍等。")
THEME = get_conf("THEME")
if THEME.startswith("Huggingface-"):
THEME = THEME.lstrip("Huggingface-")
if THEME.startswith("huggingface-"):
THEME = THEME.lstrip("huggingface-")
set_theme = set_theme.from_hub(THEME.lower())
with open(os.path.join(theme_dir, 'common.js'), 'r', encoding='utf8') as f:
with open(os.path.join(theme_dir, "common.js"), "r", encoding="utf8") as f:
js = f"<script>{f.read()}</script>"
# add a cute Live2D mascot
@@ -34,20 +41,26 @@ def adjust_theme():
<script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
<script src="file=docs/waifu_plugin/autoload.js"></script>
"""
if not hasattr(gr, 'RawTemplateResponse'):
if not hasattr(gr, "RawTemplateResponse"):
gr.RawTemplateResponse = gr.routes.templates.TemplateResponse
gradio_original_template_fn = gr.RawTemplateResponse
def gradio_new_template_fn(*args, **kwargs):
res = gradio_original_template_fn(*args, **kwargs)
res.body = res.body.replace(b'</html>', f'{js}</html>'.encode("utf8"))
res.body = res.body.replace(b"</html>", f"{js}</html>".encode("utf8"))
res.init_headers()
return res
gr.routes.templates.TemplateResponse = gradio_new_template_fn # override gradio template
except Exception as e:
gr.routes.templates.TemplateResponse = (
gradio_new_template_fn # override gradio template
)
except Exception:
set_theme = None
from toolbox import trimmed_format_exc
logging.error('gradio版本较旧, 不能自定义字体和颜色:', trimmed_format_exc())
logging.error("gradio版本较旧, 不能自定义字体和颜色:", trimmed_format_exc())
return set_theme
with open(os.path.join(theme_dir, 'common.css'), "r", encoding="utf-8") as f:
with open(os.path.join(theme_dir, "common.css"), "r", encoding="utf-8") as f:
advanced_css = f.read()


@@ -197,12 +197,12 @@ footer {
}
textarea.svelte-1pie7s6 {
background: #e7e6e6 !important;
width: 96% !important;
width: 100% !important;
}
.dark textarea.svelte-1pie7s6 {
background: var(--input-background-fill) !important;
width: 96% !important;
width: 100% !important;
}
.dark input[type=number].svelte-1cl284s {
@@ -256,13 +256,13 @@ textarea.svelte-1pie7s6 {
max-height: 95% !important;
overflow-y: auto !important;
}*/
.app.svelte-1mya07g.svelte-1mya07g {
/* .app.svelte-1mya07g.svelte-1mya07g {
max-width: 100%;
position: relative;
padding: var(--size-4);
width: 100%;
height: 100%;
}
} */
.gradio-container-3-32-2 h1 {
font-weight: 700 !important;
@@ -508,12 +508,14 @@ ol:not(.options), ul:not(.options) {
[data-testid = "bot"] {
max-width: 85%;
border-bottom-left-radius: 0 !important;
box-shadow: 2px 2px 0px 1px rgba(0, 0, 0, 0.06);
background-color: var(--message-bot-background-color-light) !important;
}
[data-testid = "user"] {
max-width: 85%;
width: auto !important;
border-bottom-right-radius: 0 !important;
box-shadow: 2px 2px 0px 1px rgba(0, 0, 0, 0.06);
background-color: var(--message-user-background-color-light) !important;
}
.dark [data-testid = "bot"] {


@@ -38,4 +38,4 @@ function setSlider() {
window.addEventListener("DOMContentLoaded", () => {
set_elements();
});


@@ -1,9 +1,11 @@
import os
import gradio as gr
from toolbox import get_conf
CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf('CODE_HIGHLIGHT', 'ADD_WAIFU', 'LAYOUT')
CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf("CODE_HIGHLIGHT", "ADD_WAIFU", "LAYOUT")
theme_dir = os.path.dirname(__file__)
def adjust_theme():
try:
set_theme = gr.themes.Soft(
@@ -50,7 +52,6 @@ def adjust_theme():
c900="#2B2B2B",
c950="#171717",
),
radius_size=gr.themes.sizes.radius_sm,
).set(
button_primary_background_fill="*primary_500",
@@ -75,7 +76,7 @@ def adjust_theme():
chatbot_code_background_color_dark="*neutral_950",
)
with open(os.path.join(theme_dir, 'common.js'), 'r', encoding='utf8') as f:
with open(os.path.join(theme_dir, "common.js"), "r", encoding="utf8") as f:
js = f"<script>{f.read()}</script>"
# add a cute Live2D mascot
@@ -86,24 +87,29 @@ def adjust_theme():
<script src="file=docs/waifu_plugin/autoload.js"></script>
"""
with open(os.path.join(theme_dir, 'green.js'), 'r', encoding='utf8') as f:
with open(os.path.join(theme_dir, "green.js"), "r", encoding="utf8") as f:
js += f"<script>{f.read()}</script>"
if not hasattr(gr, 'RawTemplateResponse'):
if not hasattr(gr, "RawTemplateResponse"):
gr.RawTemplateResponse = gr.routes.templates.TemplateResponse
gradio_original_template_fn = gr.RawTemplateResponse
def gradio_new_template_fn(*args, **kwargs):
res = gradio_original_template_fn(*args, **kwargs)
res.body = res.body.replace(b'</html>', f'{js}</html>'.encode("utf8"))
res.body = res.body.replace(b"</html>", f"{js}</html>".encode("utf8"))
res.init_headers()
return res
gr.routes.templates.TemplateResponse = gradio_new_template_fn # override gradio template
gr.routes.templates.TemplateResponse = (
gradio_new_template_fn # override gradio template
)
except:
set_theme = None
print('gradio版本较旧, 不能自定义字体和颜色')
print("gradio版本较旧, 不能自定义字体和颜色")
return set_theme
with open(os.path.join(theme_dir, 'green.css'), "r", encoding="utf-8") as f:
with open(os.path.join(theme_dir, "green.css"), "r", encoding="utf-8") as f:
advanced_css = f.read()
with open(os.path.join(theme_dir, 'common.css'), "r", encoding="utf-8") as f:
with open(os.path.join(theme_dir, "common.css"), "r", encoding="utf-8") as f:
advanced_css += f.read()


@@ -1,23 +1,120 @@
import gradio as gr
import pickle
import base64
import uuid
from toolbox import get_conf
THEME = get_conf('THEME')
"""
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Part 1
Utilities for loading themes
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
"""
def load_dynamic_theme(THEME):
adjust_dynamic_theme = None
if THEME == 'Chuanhu-Small-and-Beautiful':
if THEME == "Chuanhu-Small-and-Beautiful":
from .green import adjust_theme, advanced_css
theme_declaration = "<h2 align=\"center\" class=\"small\">[Chuanhu-Small-and-Beautiful主题]</h2>"
elif THEME == 'High-Contrast':
theme_declaration = (
'<h2 align="center" class="small">[Chuanhu-Small-and-Beautiful主题]</h2>'
)
elif THEME == "High-Contrast":
from .contrast import adjust_theme, advanced_css
theme_declaration = ""
elif '/' in THEME:
elif "/" in THEME:
from .gradios import adjust_theme, advanced_css
from .gradios import dynamic_set_theme
adjust_dynamic_theme = dynamic_set_theme(THEME)
theme_declaration = ""
else:
from .default import adjust_theme, advanced_css
theme_declaration = ""
return adjust_theme, advanced_css, theme_declaration, adjust_dynamic_theme
adjust_theme, advanced_css, theme_declaration, _ = load_dynamic_theme(THEME)
adjust_theme, advanced_css, theme_declaration, _ = load_dynamic_theme(get_conf("THEME"))
"""
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Part 2
Cookie-related utilities
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
"""
def init_cookie(cookies, chatbot):
# assign a unique uuid to every visiting user
cookies.update({"uuid": uuid.uuid4()})
return cookies
def to_cookie_str(d):
# Pickle the dictionary and encode it as a string
pickled_dict = pickle.dumps(d)
cookie_value = base64.b64encode(pickled_dict).decode("utf-8")
return cookie_value
def from_cookie_str(c):
# Decode the base64-encoded string and unpickle it into a dictionary
pickled_dict = base64.b64decode(c.encode("utf-8"))
return pickle.loads(pickled_dict)
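# Illustrative round trip (a sketch, not part of the diff): to_cookie_str and
# from_cookie_str are inverses, so a settings dict survives pickling, base64
# encoding, and decoding. demo_settings is a hypothetical value.
demo_settings = {"uuid": "demo-uuid", "llm_model": "gpt-3.5-turbo"}
assert from_cookie_str(to_cookie_str(demo_settings)) == demo_settings
# Caveat: unpickling client-supplied cookie data is only safe if the cookie
# cannot be tampered with; a hardened version would sign or whitelist the payload.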
"""
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Part 3
Embedded JavaScript code
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
"""
js_code_for_css_changing = """(css) => {
var existingStyles = document.querySelectorAll("body > gradio-app > div > style")
for (var i = 0; i < existingStyles.length; i++) {
var style = existingStyles[i];
style.parentNode.removeChild(style);
}
var existingStyles = document.querySelectorAll("style[data-loaded-css]");
for (var i = 0; i < existingStyles.length; i++) {
var style = existingStyles[i];
style.parentNode.removeChild(style);
}
var styleElement = document.createElement('style');
styleElement.setAttribute('data-loaded-css', 'placeholder');
styleElement.innerHTML = css;
document.body.appendChild(styleElement);
}
"""
js_code_for_darkmode_init = """(dark) => {
dark = dark == "True";
if (document.querySelectorAll('.dark').length) {
if (!dark){
document.querySelectorAll('.dark').forEach(el => el.classList.remove('dark'));
}
} else {
if (dark){
document.querySelector('body').classList.add('dark');
}
}
}
"""
js_code_for_toggle_darkmode = """() => {
if (document.querySelectorAll('.dark').length) {
document.querySelectorAll('.dark').forEach(el => el.classList.remove('dark'));
} else {
document.querySelector('body').classList.add('dark');
}
}"""
js_code_for_persistent_cookie_init = """(persistent_cookie) => {
return getCookie("persistent_cookie");
}
"""


@@ -11,8 +11,10 @@ import glob
import math
from latex2mathml.converter import convert as tex2mathml
from functools import wraps, lru_cache
pj = os.path.join
default_user_name = 'default_user'
"""
========================================================================
Part 1
@@ -26,6 +28,7 @@ default_user_name = 'default_user'
========================================================================
"""
class ChatBotWithCookies(list):
def __init__(self, cookie):
"""
@@ -67,18 +70,18 @@ def ArgsGeneralWrapper(f):
else:
user_name = default_user_name
cookies.update({
'top_p':top_p,
'top_p': top_p,
'api_key': cookies['api_key'],
'llm_model': llm_model,
'temperature':temperature,
'temperature': temperature,
'user_name': user_name,
})
llm_kwargs = {
'api_key': cookies['api_key'],
'llm_model': llm_model,
'top_p':top_p,
'top_p': top_p,
'max_length': max_length,
'temperature':temperature,
'temperature': temperature,
'client_ip': request.client.host,
'most_recent_uploaded': cookies.get('most_recent_uploaded')
}
@@ -87,7 +90,7 @@ def ArgsGeneralWrapper(f):
}
chatbot_with_cookie = ChatBotWithCookies(cookies)
chatbot_with_cookie.write_list(chatbot)
if cookies.get('lock_plugin', None) is None:
# normal state
if len(args) == 0:  # plugin channel
@@ -103,8 +106,10 @@ def ArgsGeneralWrapper(f):
final_cookies = chatbot_with_cookie.get_cookies()
# len(args) != 0 means the "Submit" chat channel or a basic-function channel
if len(args) != 0 and 'files_to_promote' in final_cookies and len(final_cookies['files_to_promote']) > 0:
chatbot_with_cookie.append(["检测到**滞留的缓存文档**,请及时处理。", "请及时点击“**保存当前对话**”获取所有滞留文档。"])
chatbot_with_cookie.append(
["检测到**滞留的缓存文档**,请及时处理。", "请及时点击“**保存当前对话**”获取所有滞留文档。"])
yield from update_ui(chatbot_with_cookie, final_cookies['history'], msg="检测到被滞留的缓存文档")
return decorated
@@ -129,6 +134,7 @@ def update_ui(chatbot, history, msg='正常', **kwargs): # 刷新界面
yield cookies, chatbot_gr, history, msg
def update_ui_lastest_msg(lastmsg, chatbot, history, delay=1):  # refresh the UI
    """
    Refresh the user interface
@@ -147,6 +153,7 @@ def trimmed_format_exc():
replace_path = "."
return str.replace(current_path, replace_path)
def CatchException(f):
"""
Decorator: catch any exception raised in f, wrap it in a generator, and display it in the chat.
@@ -164,9 +171,9 @@ def CatchException(f):
if len(chatbot_with_cookie) == 0:
chatbot_with_cookie.clear()
chatbot_with_cookie.append(["插件调度异常", "异常原因"])
chatbot_with_cookie[-1] = (chatbot_with_cookie[-1][0],
f"[Local Message] 插件调用出错: \n\n{tb_str} \n\n当前代理可用性: \n\n{check_proxy(proxies)}")
yield from update_ui(chatbot=chatbot_with_cookie, history=history, msg=f'异常 {e}') # 刷新界面
chatbot_with_cookie[-1] = (chatbot_with_cookie[-1][0], f"[Local Message] 插件调用出错: \n\n{tb_str} \n")
yield from update_ui(chatbot=chatbot_with_cookie, history=history, msg=f'异常 {e}') # 刷新界面
return decorated
@@ -209,6 +216,7 @@ def HotReload(f):
========================================================================
"""
def get_reduce_token_percent(text):
"""
* This function will be deprecated in the future
@@ -220,9 +228,9 @@ def get_reduce_token_percent(text):
EXCEED_ALLO = 500  # leave some headroom, otherwise replies break when the remaining budget is too small
max_limit = float(match[0]) - EXCEED_ALLO
current_tokens = float(match[1])
ratio = max_limit/current_tokens
ratio = max_limit / current_tokens
assert ratio > 0 and ratio < 1
return ratio, str(int(current_tokens-max_limit))
return ratio, str(int(current_tokens - max_limit))
except:
return 0.5, '不详'
@@ -242,7 +250,7 @@ def write_history_to_file(history, file_basename=None, file_fullname=None, auto_
with open(file_fullname, 'w', encoding='utf8') as f:
f.write('# GPT-Academic Report\n')
for i, content in enumerate(history):
try:
if type(content) != str: content = str(content)
except:
continue
@@ -268,8 +276,6 @@ def regular_txt_to_markdown(text):
return text
def report_exception(chatbot, history, a, b):
"""
Append error information to the chatbot
@@ -286,7 +292,7 @@ def text_divide_paragraph(text):
suf = '</div>'
if text.startswith(pre) and text.endswith(suf):
return text
if '```' in text:
# careful input
return text
@@ -312,7 +318,7 @@ def markdown_convertion(txt):
if txt.startswith(pre) and txt.endswith(suf):
# print('warning: got an already-converted string; converting it twice may cause problems')
return txt # already converted, no need to convert again
markdown_extension_configs = {
'mdx_math': {
'enable_dollar_delimiter': True,
@@ -352,7 +358,8 @@ def markdown_convertion(txt):
"""
Fix an mdx_math bug: a redundant <script> appears when a begin command is wrapped in single $.
"""
content = content.replace('<script type="math/tex">\n<script type="math/tex; mode=display">', '<script type="math/tex; mode=display">')
content = content.replace('<script type="math/tex">\n<script type="math/tex; mode=display">',
'<script type="math/tex; mode=display">')
content = content.replace('</script>\n</script>', '</script>')
return content
@@ -363,16 +370,16 @@ def markdown_convertion(txt):
if '```' in txt and '```reference' not in txt: return False
if '$' not in txt and '\\[' not in txt: return False
mathpatterns = {
r'(?<!\\|\$)(\$)([^\$]+)(\$)': {'allow_multi_lines': False}, #  $...$
r'(?<!\\)(\$\$)([^\$]+)(\$\$)': {'allow_multi_lines': True}, # $$...$$
r'(?<!\\)(\\\[)(.+?)(\\\])': {'allow_multi_lines': False}, # \[...\]
# r'(?<!\\)(\\\()(.+?)(\\\))': {'allow_multi_lines': False}, # \(...\)
# r'(?<!\\)(\\begin{([a-z]+?\*?)})(.+?)(\\end{\2})': {'allow_multi_lines': True}, # \begin...\end
# r'(?<!\\)(\$`)([^`]+)(`\$)': {'allow_multi_lines': False}, # $`...`$
r'(?<!\\|\$)(\$)([^\$]+)(\$)': {'allow_multi_lines': False}, #  $...$
r'(?<!\\)(\$\$)([^\$]+)(\$\$)': {'allow_multi_lines': True}, # $$...$$
r'(?<!\\)(\\\[)(.+?)(\\\])': {'allow_multi_lines': False}, # \[...\]
# r'(?<!\\)(\\\()(.+?)(\\\))': {'allow_multi_lines': False}, # \(...\)
# r'(?<!\\)(\\begin{([a-z]+?\*?)})(.+?)(\\end{\2})': {'allow_multi_lines': True}, # \begin...\end
# r'(?<!\\)(\$`)([^`]+)(`\$)': {'allow_multi_lines': False}, # $`...`$
}
matches = []
for pattern, property in mathpatterns.items():
flags = re.ASCII|re.DOTALL if property['allow_multi_lines'] else re.ASCII
flags = re.ASCII | re.DOTALL if property['allow_multi_lines'] else re.ASCII
matches.extend(re.findall(pattern, txt, flags))
if len(matches) == 0: return False
contain_any_eq = False
@@ -380,16 +387,16 @@ def markdown_convertion(txt):
for match in matches:
if len(match) != 3: return False
eq_canidate = match[1]
if illegal_pattern.search(eq_canidate):
return False
else:
contain_any_eq = True
return contain_any_eq
def fix_markdown_indent(txt):
# fix markdown indent
if (' - ' not in txt) or ('. ' not in txt):
    return txt # do not need to fix, fast escape
# walk through the lines and fix non-standard indentation
lines = txt.split("\n")
pattern = re.compile(r'^\s+-')
@@ -401,7 +408,7 @@ def markdown_convertion(txt):
stripped_string = line.lstrip()
num_spaces = len(line) - len(stripped_string)
if (num_spaces % 4) == 3:
num_spaces_should_be = math.ceil(num_spaces/4) * 4
num_spaces_should_be = math.ceil(num_spaces / 4) * 4
lines[i] = ' ' * num_spaces_should_be + stripped_string
return '\n'.join(lines)
@@ -409,7 +416,8 @@ def markdown_convertion(txt):
if is_equation(txt): # has $-delimited formulas and no ``` code-fence markers
# convert everything to html format
split = markdown.markdown(text='---')
convert_stage_1 = markdown.markdown(text=txt, extensions=['sane_lists', 'tables', 'mdx_math', 'fenced_code'], extension_configs=markdown_extension_configs)
convert_stage_1 = markdown.markdown(text=txt, extensions=['sane_lists', 'tables', 'mdx_math', 'fenced_code'],
extension_configs=markdown_extension_configs)
convert_stage_1 = markdown_bug_hunt(convert_stage_1)
# 1. convert to easy-to-copy tex (do not render math)
convert_stage_2_1, n = re.subn(find_equation_pattern, replace_math_no_render, convert_stage_1, flags=re.DOTALL)
@@ -441,8 +449,7 @@ def close_up_code_segment_during_stream(gpt_reply):
segments = gpt_reply.split('```')
n_mark = len(segments) - 1
if n_mark % 2 == 1:
# print('输出代码片段中!')
return gpt_reply+'\n```'
return gpt_reply + '\n```' # 输出代码片段中!
else:
return gpt_reply
@@ -533,7 +540,7 @@ def find_recent_files(directory):
current_time = time.time()
one_minute_ago = current_time - 60
recent_files = []
if not os.path.exists(directory):
os.makedirs(directory, exist_ok=True)
for filename in os.listdir(directory):
file_path = pj(directory, filename)
@@ -559,6 +566,7 @@ def file_already_in_downloadzone(file, user_path):
except:
return False
def promote_file_to_downloadzone(file, rename_file=None, chatbot=None):
# copy the file into the download zone
import shutil
@@ -581,9 +589,12 @@ def promote_file_to_downloadzone(file, rename_file=None, chatbot=None):
if not os.path.exists(new_path): shutil.copyfile(file, new_path)
# record the file in the chatbot cookie
if chatbot is not None:
if 'files_to_promote' in chatbot._cookies: current = chatbot._cookies['files_to_promote']
else: current = []
chatbot._cookies.update({'files_to_promote': [new_path] + current})
if 'files_to_promote' in chatbot._cookies:
current = chatbot._cookies['files_to_promote']
else:
current = []
if new_path not in current:  # avoid adding the same file twice
chatbot._cookies.update({'files_to_promote': [new_path] + current})
return new_path
@@ -604,8 +615,10 @@ def del_outdated_uploads(outdate_time_seconds, target_path_base=None):
for subdirectory in glob.glob(f'{user_upload_dir}/*'):
subdirectory_time = os.path.getmtime(subdirectory)
if subdirectory_time < one_hour_ago:
try: shutil.rmtree(subdirectory)
except: pass
try:
shutil.rmtree(subdirectory)
except:
pass
return
@@ -678,9 +691,9 @@ def on_file_uploaded(request: gradio.Request, files, chatbot, txt, txt2, checkbo
time_tag = gen_time_str()
target_path_base = get_upload_folder(user_name, tag=time_tag)
os.makedirs(target_path_base, exist_ok=True)
# 移除过时的旧文件从而节省空间&保护隐私
outdate_time_seconds = 3600 # 一小时
outdate_time_seconds = 3600 # 一小时
del_outdated_uploads(outdate_time_seconds, get_upload_folder(user_name))
# move each file to the target path
@@ -689,21 +702,20 @@ def on_file_uploaded(request: gradio.Request, files, chatbot, txt, txt2, checkbo
file_origin_name = os.path.basename(file.orig_name)
this_file_path = pj(target_path_base, file_origin_name)
shutil.move(file.name, this_file_path)
upload_msg += extract_archive(file_path=this_file_path, dest_dir=this_file_path+'.extract')
if "浮动输入区" in checkboxes:
txt, txt2 = "", target_path_base
else:
txt, txt2 = target_path_base, ""
upload_msg += extract_archive(file_path=this_file_path, dest_dir=this_file_path + '.extract')
# collect the moved files and emit a message
moved_files = [fp for fp in glob.glob(f'{target_path_base}/**/*', recursive=True)]
moved_files_str = to_markdown_tabs(head=['文件'], tabs=[moved_files])
chatbot.append(['我上传了文件,请查收',
f'[Local Message] 收到以下文件: \n\n{moved_files_str}' +
f'\n\n调用路径参数已自动修正到: \n\n{txt}' +
f'\n\n现在您点击任意函数插件时,以上文件将被作为输入参数'+upload_msg])
f'\n\n现在您点击任意函数插件时,以上文件将被作为输入参数' + upload_msg])
txt, txt2 = target_path_base, ""
if "浮动输入区" in checkboxes:
txt, txt2 = txt2, txt
# record the most recent upload
cookies.update({
'most_recent_uploaded': {
@@ -731,34 +743,40 @@ def on_report_generated(cookies, files, chatbot):
chatbot.append(['报告如何远程获取?', f'报告已经添加到右侧“文件上传区”(可能处于折叠状态),请查收。{file_links}'])
return cookies, report_files, chatbot
def load_chat_cookies():
API_KEY, LLM_MODEL, AZURE_API_KEY = get_conf('API_KEY', 'LLM_MODEL', 'AZURE_API_KEY')
AZURE_CFG_ARRAY, NUM_CUSTOM_BASIC_BTN = get_conf('AZURE_CFG_ARRAY', 'NUM_CUSTOM_BASIC_BTN')
# deal with azure openai key
if is_any_api_key(AZURE_API_KEY):
if is_any_api_key(API_KEY): API_KEY = API_KEY + ',' + AZURE_API_KEY
else: API_KEY = AZURE_API_KEY
if is_any_api_key(API_KEY):
API_KEY = API_KEY + ',' + AZURE_API_KEY
else:
API_KEY = AZURE_API_KEY
if len(AZURE_CFG_ARRAY) > 0:
for azure_model_name, azure_cfg_dict in AZURE_CFG_ARRAY.items():
if not azure_model_name.startswith('azure'):
raise ValueError("AZURE_CFG_ARRAY中配置的模型必须以azure开头")
AZURE_API_KEY_ = azure_cfg_dict["AZURE_API_KEY"]
if is_any_api_key(AZURE_API_KEY_):
if is_any_api_key(API_KEY): API_KEY = API_KEY + ',' + AZURE_API_KEY_
else: API_KEY = AZURE_API_KEY_
if is_any_api_key(API_KEY):
API_KEY = API_KEY + ',' + AZURE_API_KEY_
else:
API_KEY = AZURE_API_KEY_
customize_fn_overwrite_ = {}
for k in range(NUM_CUSTOM_BASIC_BTN):
customize_fn_overwrite_.update({
"自定义按钮" + str(k+1):{
"Title": r"",
"Prefix": r"请在自定义菜单中定义提示词前缀.",
"Suffix": r"请在自定义菜单中定义提示词后缀",
"Title": r"",
"Prefix": r"请在自定义菜单中定义提示词前缀.",
"Suffix": r"请在自定义菜单中定义提示词后缀",
}
})
return {'api_key': API_KEY, 'llm_model': LLM_MODEL, 'customize_fn_overwrite': customize_fn_overwrite_}
def is_openai_api_key(key):
CUSTOM_API_KEY_PATTERN = get_conf('CUSTOM_API_KEY_PATTERN')
if len(CUSTOM_API_KEY_PATTERN) != 0:
@@ -767,14 +785,17 @@ def is_openai_api_key(key):
API_MATCH_ORIGINAL = re.match(r"sk-[a-zA-Z0-9]{48}$", key)
return bool(API_MATCH_ORIGINAL)
def is_azure_api_key(key):
API_MATCH_AZURE = re.match(r"[a-zA-Z0-9]{32}$", key)
return bool(API_MATCH_AZURE)
def is_api2d_key(key):
API_MATCH_API2D = re.match(r"fk[a-zA-Z0-9]{6}-[a-zA-Z0-9]{32}$", key)
return bool(API_MATCH_API2D)
def is_any_api_key(key):
if ',' in key:
keys = key.split(',')
@@ -784,24 +805,26 @@ def is_any_api_key(key):
else:
return is_openai_api_key(key) or is_api2d_key(key) or is_azure_api_key(key)
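# Illustrative checks (a sketch with dummy strings shaped like each key format,
# not real credentials; assumes CUSTOM_API_KEY_PATTERN is left empty):
assert is_openai_api_key("sk-" + "a" * 48)            # sk- followed by 48 alphanumerics
assert is_api2d_key("fk" + "b" * 6 + "-" + "c" * 32)  # fk??????-<32 alphanumerics>
assert is_azure_api_key("d" * 32)                     # exactly 32 alphanumerics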
def what_keys(keys):
avail_key_list = {'OpenAI Key':0, "Azure Key":0, "API2D Key":0}
avail_key_list = {'OpenAI Key': 0, "Azure Key": 0, "API2D Key": 0}
key_list = keys.split(',')
for k in key_list:
    if is_openai_api_key(k):
        avail_key_list['OpenAI Key'] += 1
for k in key_list:
    if is_api2d_key(k):
        avail_key_list['API2D Key'] += 1
for k in key_list:
    if is_azure_api_key(k):
        avail_key_list['Azure Key'] += 1
return f"检测到: OpenAI Key {avail_key_list['OpenAI Key']} 个, Azure Key {avail_key_list['Azure Key']} 个, API2D Key {avail_key_list['API2D Key']}"
def select_api_key(keys, llm_model):
import random
avail_key_list = []
@@ -825,6 +848,7 @@ def select_api_key(keys, llm_model):
api_key = random.choice(avail_key_list)  # random load balancing
return api_key
def read_env_variable(arg, default_value):
"""
The environment variable can be `GPT_ACADEMIC_CONFIG` (takes precedence) or simply `CONFIG`
@@ -842,10 +866,10 @@ def read_env_variable(arg, default_value):
set GPT_ACADEMIC_AUTHENTICATION=[("username", "password"), ("username2", "password2")]
"""
from colorful import print亮红, print亮绿
arg_with_prefix = "GPT_ACADEMIC_" + arg
if arg_with_prefix in os.environ:
arg_with_prefix = "GPT_ACADEMIC_" + arg
if arg_with_prefix in os.environ:
env_arg = os.environ[arg_with_prefix]
elif arg in os.environ:
elif arg in os.environ:
env_arg = os.environ[arg]
else:
raise KeyError
@@ -855,7 +879,7 @@ def read_env_variable(arg, default_value):
env_arg = env_arg.strip()
if env_arg == 'True': r = True
elif env_arg == 'False': r = False
else: print('enter True or False, but have:', env_arg); r = default_value
else: print('Enter True or False, but have:', env_arg); r = default_value
elif isinstance(default_value, int):
r = int(env_arg)
elif isinstance(default_value, float):
@@ -879,13 +903,14 @@ def read_env_variable(arg, default_value):
print亮绿(f"[ENV_VAR] 成功读取环境变量{arg}")
return r
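# Usage sketch (illustrative, not part of the diff): the GPT_ACADEMIC_-prefixed
# variable takes precedence, and the string value is coerced to the type of
# default_value (a bool here).
import os
os.environ["GPT_ACADEMIC_DARK_MODE"] = "True"
os.environ["DARK_MODE"] = "False"
assert read_env_variable("DARK_MODE", default_value=False) is True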
@lru_cache(maxsize=128)
def read_single_conf_with_lru_cache(arg):
from colorful import print亮红, print亮绿, print亮蓝
try:
# priority 1: read configuration from environment variables
default_ref = getattr(importlib.import_module('config'), arg)  # read the default value as a type-conversion reference
r = read_env_variable(arg, default_ref)
except:
try:
# priority 2: read configuration from config_private
@@ -898,7 +923,7 @@ def read_single_conf_with_lru_cache(arg):
if arg == 'API_URL_REDIRECT':
oai_rd = r.get("https://api.openai.com/v1/chat/completions", None) # API_URL_REDIRECT填写格式是错误的请阅读`https://github.com/binary-husky/gpt_academic/wiki/项目配置说明`
if oai_rd and not oai_rd.endswith('/completions'):
print亮红( "\n\n[API_URL_REDIRECT] API_URL_REDIRECT填错了。请阅读`https://github.com/binary-husky/gpt_academic/wiki/项目配置说明`。如果您确信自己没填错,无视此消息即可。")
print亮红("\n\n[API_URL_REDIRECT] API_URL_REDIRECT填错了。请阅读`https://github.com/binary-husky/gpt_academic/wiki/项目配置说明`。如果您确信自己没填错,无视此消息即可。")
time.sleep(5)
if arg == 'API_KEY':
print亮蓝(f"[API_KEY] 本项目现已支持OpenAI和Azure的api-key。也支持同时填写多个api-key如API_KEY=\"openai-key1,openai-key2,azure-key3\"")
@@ -906,9 +931,9 @@ def read_single_conf_with_lru_cache(arg):
if is_any_api_key(r):
print亮绿(f"[API_KEY] 您的 API_KEY 是: {r[:15]}*** API_KEY 导入成功")
else:
print亮红( "[API_KEY] 您的 API_KEY 不满足任何一种已知的密钥格式请在config文件中修改API密钥之后再运行。")
print亮红("[API_KEY] 您的 API_KEY 不满足任何一种已知的密钥格式请在config文件中修改API密钥之后再运行。")
if arg == 'proxies':
if not read_single_conf_with_lru_cache('USE_PROXY'): r = None  # check USE_PROXY so that proxies cannot take effect on its own
if r is None:
print亮红('[PROXY] 网络代理状态未配置。无代理状态下很可能无法访问OpenAI家族的模型。建议检查USE_PROXY选项是否修改。')
else:
@@ -952,17 +977,20 @@ class DummyWith():
When the context starts, __enter__() is called before the block runs;
when the context ends, __exit__() is called.
"""
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
return
def run_gradio_in_subpath(demo, auth, port, custom_path):
"""
Mount the Gradio app under the specified sub-path.
"""
def is_path_legal(path: str)->bool:
def is_path_legal(path: str) -> bool:
'''
check path for sub url
path: path to check
@@ -987,7 +1015,7 @@ def run_gradio_in_subpath(demo, auth, port, custom_path):
app = FastAPI()
if custom_path != "/":
@app.get("/")
def read_main():
return {"message": f"Gradio is running at: {custom_path}"}
app = gr.mount_gradio_app(app, demo, path=custom_path)
uvicorn.run(app, host="0.0.0.0", port=port) # , auth=auth
@@ -998,23 +1026,28 @@ def clip_history(inputs, history, tokenizer, max_token_limit):
Reduce the length of the history by clipping:
this function searches for the longest entries and clips them, little by little,
until the token count of the history drops below the threshold.
"""
import numpy as np
from request_llms.bridge_all import model_info
def get_token_num(txt):
return len(tokenizer.encode(txt, disallowed_special=()))
input_token_num = get_token_num(inputs)
if max_token_limit < 5000: output_token_expect = 256 # 4k & 2k models
elif max_token_limit < 9000: output_token_expect = 512 # 8k models
else: output_token_expect = 1024 # 16k & 32k models
if input_token_num < max_token_limit * 3 / 4:
# when the input takes up less than 3/4 of the token limit, clip the history
# 1. reserve room for the input
max_token_limit = max_token_limit - input_token_num
# 2. reserve room for the output
max_token_limit = max_token_limit - 128
max_token_limit = max_token_limit - output_token_expect
# 3. if the remaining budget is too small, clear the history entirely
if max_token_limit < 128:
if max_token_limit < output_token_expect:
history = []
return history
else:
@@ -1033,14 +1066,15 @@ def clip_history(inputs, history, tokenizer, max_token_limit):
while n_token > max_token_limit:
where = np.argmax(everything_token)
encoded = tokenizer.encode(everything[where], disallowed_special=())
clipped_encoded = encoded[:len(encoded)-delta]
everything[where] = tokenizer.decode(clipped_encoded)[:-1] # -1 to remove the may-be illegal char
clipped_encoded = encoded[:len(encoded) - delta]
everything[where] = tokenizer.decode(clipped_encoded)[:-1] # -1 to remove the may-be illegal char
everything_token[where] = get_token_num(everything[where])
n_token = get_token_num('\n'.join(everything))
history = everything[1:]
return history
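# Behavior sketch (illustrative; assumes a tiktoken-style tokenizer with
# encode/decode and that this runs inside the project environment, since
# clip_history imports request_llms.bridge_all): the longest entry is trimmed
# first, and if even the reserved output budget cannot fit, the history is dropped.
import tiktoken
enc = tiktoken.get_encoding("gpt2")
demo_history = ["short question", "a very long answer " * 200]  # hypothetical data
clipped = clip_history("new input", demo_history, enc, max_token_limit=2048)
assert len("\n".join(clipped)) <= len("\n".join(demo_history))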
"""
========================================================================
Part 3
@@ -1052,6 +1086,7 @@ def clip_history(inputs, history, tokenizer, max_token_limit):
========================================================================
"""
def zip_folder(source_folder, dest_folder, zip_name):
import zipfile
import os
@@ -1083,15 +1118,18 @@ def zip_folder(source_folder, dest_folder, zip_name):
print(f"Zip file created at {zip_file}")
def zip_result(folder):
t = gen_time_str()
zip_folder(folder, get_log_folder(), f'{t}-result.zip')
return pj(get_log_folder(), f'{t}-result.zip')
def gen_time_str():
import time
return time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime())
def get_log_folder(user=default_user_name, plugin_name='shared'):
if user is None: user = default_user_name
PATH_LOGGING = get_conf('PATH_LOGGING')
@@ -1102,29 +1140,36 @@ def get_log_folder(user=default_user_name, plugin_name='shared'):
if not os.path.exists(_dir): os.makedirs(_dir)
return _dir
def get_upload_folder(user=default_user_name, tag=None):
PATH_PRIVATE_UPLOAD = get_conf('PATH_PRIVATE_UPLOAD')
if user is None: user = default_user_name
if tag is None or len(tag)==0:
if tag is None or len(tag) == 0:
target_path_base = pj(PATH_PRIVATE_UPLOAD, user)
else:
target_path_base = pj(PATH_PRIVATE_UPLOAD, user, tag)
return target_path_base
def is_the_upload_folder(string):
PATH_PRIVATE_UPLOAD = get_conf('PATH_PRIVATE_UPLOAD')
pattern = r'^PATH_PRIVATE_UPLOAD[\\/][A-Za-z0-9_-]+[\\/]\d{4}-\d{2}-\d{2}-\d{2}-\d{2}-\d{2}$'
pattern = pattern.replace('PATH_PRIVATE_UPLOAD', PATH_PRIVATE_UPLOAD)
if re.match(pattern, string): return True
else: return False
if re.match(pattern, string):
return True
else:
return False
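# Illustrative check (assumes the default PATH_PRIVATE_UPLOAD = 'private_upload'):
# a legal upload folder looks like <PATH_PRIVATE_UPLOAD>/<user>/<YYYY-MM-DD-hh-mm-ss>.
assert is_the_upload_folder("private_upload/default_user/2024-01-16-20-08-17")
assert not is_the_upload_folder("private_upload/default_user/not-a-timestamp")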
def get_user(chatbotwithcookies):
return chatbotwithcookies._cookies.get('user_name', default_user_name)
class ProxyNetworkActivate():
"""
A context manager named ProxyNetworkActivate, used to route a short block of code through the proxy
"""
def __init__(self, task=None) -> None:
self.task = task
if not task:
@@ -1152,32 +1197,36 @@ class ProxyNetworkActivate():
if 'HTTPS_PROXY' in os.environ: os.environ.pop('HTTPS_PROXY')
return
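# Usage sketch: network calls issued inside the block see HTTP_PROXY/HTTPS_PROXY
# (when the named task is proxy-enabled in the config); the variables are removed on exit.
with ProxyNetworkActivate("Download_Gradio_Theme"):
    import requests  # any request made here would be routed through the configured proxy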
def objdump(obj, file='objdump.tmp'):
import pickle
with open(file, 'wb+') as f:
pickle.dump(obj, f)
return
def objload(file='objdump.tmp'):
import pickle, os
if not os.path.exists(file):
return
with open(file, 'rb') as f:
return pickle.load(f)
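# Round-trip sketch for the two debugging helpers above:
objdump({"step": 1, "note": "checkpoint"}, file="objdump.tmp")
assert objload(file="objdump.tmp") == {"step": 1, "note": "checkpoint"}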
def Singleton(cls):
"""
A singleton decorator
"""
_instance = {}
def _singleton(*args, **kargs):
if cls not in _instance:
_instance[cls] = cls(*args, **kargs)
return _instance[cls]
return _singleton
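# Usage sketch: every call site receives the same instance of a @Singleton class.
# DemoRegistry is a hypothetical class for illustration.
@Singleton
class DemoRegistry:
    def __init__(self):
        self.items = []

assert DemoRegistry() is DemoRegistry()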
"""
========================================================================
Part 4
@@ -1191,6 +1240,7 @@ def Singleton(cls):
========================================================================
"""
def set_conf(key, value):
from toolbox import read_single_conf_with_lru_cache, get_conf
read_single_conf_with_lru_cache.cache_clear()
@@ -1199,10 +1249,12 @@ def set_conf(key, value):
altered = get_conf(key)
return altered
def set_multi_conf(dic):
for k, v in dic.items(): set_conf(k, v)
return
def get_plugin_handle(plugin_name):
"""
e.g. plugin_name = 'crazy_functions.批量Markdown翻译->Markdown翻译指定语言'
@@ -1214,12 +1266,14 @@ def get_plugin_handle(plugin_name):
f_hot_reload = getattr(importlib.import_module(module, fn_name), fn_name)
return f_hot_reload
def get_chat_handle():
"""
"""
from request_llms.bridge_all import predict_no_ui_long_connection
return predict_no_ui_long_connection
def get_plugin_default_kwargs():
"""
"""
@@ -1228,9 +1282,9 @@ def get_plugin_default_kwargs():
llm_kwargs = {
'api_key': cookies['api_key'],
'llm_model': cookies['llm_model'],
'top_p':1.0,
'top_p': 1.0,
'max_length': None,
'temperature':1.0,
'temperature': 1.0,
}
chatbot = ChatBotWithCookies(llm_kwargs)
@@ -1241,11 +1295,12 @@ def get_plugin_default_kwargs():
"plugin_kwargs": {},
"chatbot_with_cookie": chatbot,
"history": [],
"system_prompt": "You are a good AI.",
"system_prompt": "You are a good AI.",
"web_port": None
}
return DEFAULT_FN_GROUPS_kwargs
def get_chat_default_kwargs():
"""
"""
@@ -1253,9 +1308,9 @@ def get_chat_default_kwargs():
llm_kwargs = {
'api_key': cookies['api_key'],
'llm_model': cookies['llm_model'],
'top_p':1.0,
'top_p': 1.0,
'max_length': None,
'temperature':1.0,
'temperature': 1.0,
}
default_chat_kwargs = {
"inputs": "Hello there, are you ready?",
@@ -1278,15 +1333,15 @@ def get_pictures_list(path):
def have_any_recent_upload_image_files(chatbot):
_5min = 5 * 60
if chatbot is None: return False, None  # chatbot is None
most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
if not most_recent_uploaded: return False, None  # most_recent_uploaded is None
if time.time() - most_recent_uploaded["time"] < _5min:
most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
path = most_recent_uploaded['path']
file_manifest = get_pictures_list(path)
if len(file_manifest) == 0: return False, None
return True, file_manifest  # most_recent_uploaded is new
else:
return False, None # most_recent_uploaded is too old
@@ -1301,6 +1356,7 @@ def get_max_token(llm_kwargs):
from request_llms.bridge_all import model_info
return model_info[llm_kwargs['llm_model']]['max_token']
def check_packages(packages=[]):
import importlib.util
for p in packages:


@@ -1,5 +1,5 @@
{
"version": 3.62,
"version": 3.65,
"show_feature": true,
"new_feature": "修复若干隐蔽的内存BUG <-> 修复多用户冲突问题 <-> 接入Deepseek Coder <-> AutoGen多智能体插件测试版 <-> 修复本地模型在Windows下的加载BUG <-> 支持文心一言v4和星火v3 <-> 支持GLM3和智谱的API <-> 解决本地模型并发BUG <-> 支持动态追加基础功能按钮"
"new_feature": "支持Gemini-pro <-> 支持直接拖拽文件到上传区 <-> 支持将图片粘贴到输入区 <-> 修复若干隐蔽的内存BUG <-> 修复多用户冲突问题 <-> 接入Deepseek Coder <-> AutoGen多智能体插件测试版"
}