Compare commits

...

5 Commits

Author SHA1 Message Date
雷欧(林平凡)
0055ea2df7 Merge branch 'master' of https://github.com/binary-husky/gpt_academic
2025-03-04 14:16:24 +08:00
Steven Moder
4a79aa6a93 typo: Fix typos and rename functions across multiple files (#2130)
* typo: Fix typos and rename functions across multiple files

This commit addresses several minor issues:
- Corrected spelling of function names (e.g., `update_ui_lastest_msg` to `update_ui_latest_msg`)
- Fixed typos in comments and variable names
- Corrected capitalization in some strings (e.g., "ArXiv" instead of "Arixv")
- Renamed some variables for consistency
- Corrected some console-related parameter names (e.g., `console_slience` to `console_silence`)

The changes span multiple files across the project, including request LLM bridges, crazy functions, and utility modules.

* fix: f-string expression part cannot include a backslash (#2139)

* raise error when the uploaded tar contain hard/soft link (#2136)

* minor bug fix

* fine tune reasoning css

* upgrade internet gpt plugin

* Update README.md

* fix GHSA-gqp5-wm97-qxcv

* typo fix

* update readme

---------

Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>
Co-authored-by: binary-husky <qingxu.fu@outlook.com>
2025-03-02 02:16:10 +08:00
binary-husky
5dffe8627f fix GHSA-gqp5-wm97-qxcv 2025-03-02 01:58:45 +08:00
binary-husky
2aefef26db Update README.md 2025-02-21 19:51:09 +08:00
binary-husky
957da731db upgrade internet gpt plugin 2025-02-13 00:19:43 +08:00
81 changed files with 586 additions and 396 deletions

View File

@@ -1,5 +1,5 @@
 > [!IMPORTANT]
-> `master主分支`最新动态(2025.2.4): 增加deepseek-r1支持
+> `master主分支`最新动态(2025.3.2): 修复大量代码typo / 联网组件支持Jina的api / 增加deepseek-r1支持
 > `frontier开发分支`最新动态(2024.12.9): 更新对话时间线功能优化xelatex论文翻译
 > `wiki文档`最新动态(2024.12.5): 更新ollama接入指南
 >
@@ -8,7 +8,7 @@
 > 2024.10.10: 突发停电,紧急恢复了提供[whl包](https://drive.google.com/drive/folders/14kR-3V-lIbvGxri4AHc8TpiA1fqsw7SK?usp=sharing)的文件服务器
 > 2024.5.1: 加入Doc2x翻译PDF论文的功能[查看详情](https://github.com/binary-husky/gpt_academic/wiki/Doc2x)
 > 2024.3.11: 全力支持Qwen、GLM、DeepseekCoder等中文大语言模型 SoVits语音克隆模块[查看详情](https://www.bilibili.com/video/BV1Rp421S7tF/)
-> 2024.1.17: 安装依赖时,请选择`requirements.txt`中**指定的版本**。 安装命令:`pip install -r requirements.txt`。本项目完全开源免费,您可通过订阅[在线服务](https://github.com/binary-husky/gpt_academic/wiki/online)的方式鼓励本项目的发展。
+> 2024.1.17: 安装依赖时,请选择`requirements.txt`中**指定的版本**。 安装命令:`pip install -r requirements.txt`。
 <br>
@@ -428,7 +428,6 @@ timeline LR
 1. `master` 分支: 主分支,稳定版
 2. `frontier` 分支: 开发分支,测试版
 3. 如何[接入其他大模型](request_llms/README.md)
-4. 访问GPT-Academic的[在线服务并支持我们](https://github.com/binary-husky/gpt_academic/wiki/online)
 ### V参考与学习

View File

@@ -344,6 +344,8 @@ NUM_CUSTOM_BASIC_BTN = 4
 DAAS_SERVER_URLS = [ f"https://niuziniu-biligpt{i}.hf.space/stream" for i in range(1,5) ]

+# 在互联网搜索组件中负责将搜索结果整理成干净的Markdown
+JINA_API_KEY = ""

 """
 --------------- 配置关联关系说明 ---------------
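The new `JINA_API_KEY` entry defaults to an empty string, which doubles as a feature flag: the search component only takes the Jina path when a key has been filled in. A minimal sketch of that gating pattern, with `get_conf` stubbed for illustration (the real helper lives in `toolbox` and reads `config.py`):

```python
# A sketch of how an empty-string default lets JINA_API_KEY act as a feature
# flag. get_conf is a stand-in here; in gpt_academic it reads config.py.
def get_conf(name, _conf={"JINA_API_KEY": ""}):
    return _conf[name]

def pick_scraper() -> str:
    if get_conf("JINA_API_KEY"):   # empty string is falsy -> old path by default
        return "jina"
    return "requests"

assert pick_scraper() == "requests"
```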

View File

@@ -113,7 +113,7 @@ def get_crazy_functions():
     "Group": "学术",
     "Color": "stop",
     "AsButton": True,
-    "Info": "Arixv论文精细翻译 | 输入参数arxiv论文的ID比如1812.10695",
+    "Info": "ArXiv论文精细翻译 | 输入参数arxiv论文的ID比如1812.10695",
     "Function": HotReload(Latex翻译中文并重新编译PDF), # 当注册Class后Function旧接口仅会在“虚空终端”中起作用
     "Class": Arxiv_Localize, # 新一代插件需要注册Class
 },
@@ -352,7 +352,7 @@ def get_crazy_functions():
     "ArgsReminder": r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
                     r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
                     r'If the term "agent" is used in this section, it should be translated to "智能体". ',
-    "Info": "Arixv论文精细翻译 | 输入参数arxiv论文的ID比如1812.10695",
+    "Info": "ArXiv论文精细翻译 | 输入参数arxiv论文的ID比如1812.10695",
     "Function": HotReload(Latex翻译中文并重新编译PDF), # 当注册Class后Function旧接口仅会在“虚空终端”中起作用
     "Class": Arxiv_Localize, # 新一代插件需要注册Class
 },
@@ -434,36 +434,6 @@ def get_crazy_functions():
         logger.error(trimmed_format_exc())
         logger.error("Load function plugin failed")

-    # try:
-    #     from crazy_functions.联网的ChatGPT import 连接网络回答问题
-    #     function_plugins.update(
-    #         {
-    #             "连接网络回答问题(输入问题后点击该插件,需要访问谷歌)": {
-    #                 "Group": "对话",
-    #                 "Color": "stop",
-    #                 "AsButton": False,  # 加入下拉菜单中
-    #                 # "Info": "连接网络回答问题(需要访问谷歌)| 输入参数是一个问题",
-    #                 "Function": HotReload(连接网络回答问题),
-    #             }
-    #         }
-    #     )
-    #     from crazy_functions.联网的ChatGPT_bing版 import 连接bing搜索回答问题
-    #     function_plugins.update(
-    #         {
-    #             "连接网络回答问题中文Bing版输入问题后点击该插件": {
-    #                 "Group": "对话",
-    #                 "Color": "stop",
-    #                 "AsButton": False,  # 加入下拉菜单中
-    #                 "Info": "连接网络回答问题需要访问中文Bing| 输入参数是一个问题",
-    #                 "Function": HotReload(连接bing搜索回答问题),
-    #             }
-    #         }
-    #     )
-    # except:
-    #     logger.error(trimmed_format_exc())
-    #     logger.error("Load function plugin failed")

     try:
         from crazy_functions.SourceCode_Analyse import 解析任意code项目
@@ -771,6 +741,9 @@ def get_multiplex_button_functions():
     "常规对话":
         "",
+    "查互联网后回答":
+        "查互联网后回答",
     "多模型对话":
         "询问多个GPT模型", # 映射到上面的 `询问多个GPT模型` 插件

View File

@@ -7,7 +7,7 @@ from bs4 import BeautifulSoup
 from functools import lru_cache
 from itertools import zip_longest
 from check_proxy import check_proxy
-from toolbox import CatchException, update_ui, get_conf, update_ui_lastest_msg
+from toolbox import CatchException, update_ui, get_conf, update_ui_latest_msg
 from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive, input_clipping
 from request_llms.bridge_all import model_info
 from request_llms.bridge_all import predict_no_ui_long_connection
@@ -49,7 +49,7 @@ def search_optimizer(
     mutable = ["", time.time(), ""]
     llm_kwargs["temperature"] = 0.8
     try:
-        querys_json = predict_no_ui_long_connection(
+        query_json = predict_no_ui_long_connection(
             inputs=query,
             llm_kwargs=llm_kwargs,
             history=[],
@@ -57,31 +57,31 @@ def search_optimizer(
             observe_window=mutable,
         )
     except Exception:
-        querys_json = "1234"
+        query_json = "null"
     #* 尝试解码优化后的搜索结果
-    querys_json = re.sub(r"```json|```", "", querys_json)
+    query_json = re.sub(r"```json|```", "", query_json)
     try:
-        querys = json.loads(querys_json)
+        queries = json.loads(query_json)
     except Exception:
         #* 如果解码失败,降低温度再试一次
         try:
             llm_kwargs["temperature"] = 0.4
-            querys_json = predict_no_ui_long_connection(
+            query_json = predict_no_ui_long_connection(
                 inputs=query,
                 llm_kwargs=llm_kwargs,
                 history=[],
                 sys_prompt=sys_prompt,
                 observe_window=mutable,
             )
-            querys_json = re.sub(r"```json|```", "", querys_json)
-            querys = json.loads(querys_json)
+            query_json = re.sub(r"```json|```", "", query_json)
+            queries = json.loads(query_json)
         except Exception:
             #* 如果再次失败,直接返回原始问题
-            querys = [query]
+            queries = [query]
     links = []
     success = 0
     Exceptions = ""
-    for q in querys:
+    for q in queries:
         try:
             link = searxng_request(q, proxies, categories, searxng_url, engines=engines)
             if len(link) > 0:
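Beyond the renames, this hunk is a parse-or-retry-or-fallback ladder: parse the model's JSON, retry once at a lower temperature if decoding fails, then fall back to the raw query. A self-contained sketch of the same pattern (the LLM call is stubbed; names are illustrative, not the project's API):

```python
import json, re

def ask_llm(query: str, temperature: float) -> str:
    # stub standing in for predict_no_ui_long_connection; may return fenced JSON
    return '```json\n["quantum computing", "qubit error correction"]\n```'

def optimize_query(query: str) -> list:
    for temperature in (0.8, 0.4):             # second pass retries, more deterministic
        raw = ask_llm(query, temperature)
        raw = re.sub(r"```json|```", "", raw)  # strip markdown fences, as the diff does
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            continue
    return [query]                             # final fallback: search the raw question

print(optimize_query("how do qubits fail?"))
```

Changing the dummy failure value from `"1234"` to `"null"` also matters: `"1234"` parses as valid JSON (the number 1234), silently skipping the retry path, while `"null"` still parses but at least signals the failure explicitly.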
@@ -175,10 +175,17 @@ def scrape_text(url, proxies) -> str:
     Returns:
         str: The scraped text
     """
+    from loguru import logger
     headers = {
         'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36',
         'Content-Type': 'text/plain',
     }
+    # 首先采用Jina进行文本提取
+    if get_conf("JINA_API_KEY"):
+        try: return jina_scrape_text(url)
+        except: logger.debug("Jina API 请求失败,回到旧方法")
     try:
         response = requests.get(url, headers=headers, proxies=proxies, timeout=8)
         if response.encoding == "ISO-8859-1": response.encoding = response.apparent_encoding
@@ -193,21 +200,39 @@ def scrape_text(url, proxies) -> str:
     text = "\n".join(chunk for chunk in chunks if chunk)
     return text

+def jina_scrape_text(url) -> str:
+    "jina_39727421c8fa4e4fa9bd698e5211feaaDyGeVFESNrRaepWiLT0wmHYJSh-d"
+    headers = {
+        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36',
+        'Content-Type': 'text/plain',
+        "X-Retain-Images": "none",
+        "Authorization": f'Bearer {get_conf("JINA_API_KEY")}'
+    }
+    response = requests.get("https://r.jina.ai/" + url, headers=headers, proxies=None, timeout=8)
+    if response.status_code != 200:
+        raise ValueError("Jina API 请求失败,开始尝试旧方法!" + response.text)
+    if response.encoding == "ISO-8859-1": response.encoding = response.apparent_encoding
+    result = response.text
+    result = result.replace("\\[", "[").replace("\\]", "]").replace("\\(", "(").replace("\\)", ")")
+    return response.text

 def internet_search_with_analysis_prompt(prompt, analysis_prompt, llm_kwargs, chatbot):
     from toolbox import get_conf
     proxies = get_conf('proxies')
     categories = 'general'
     searxng_url = None # 使用默认的searxng_url
     engines = None # 使用默认的搜索引擎
-    yield from update_ui_lastest_msg(lastmsg=f"检索中: {prompt} ...", chatbot=chatbot, history=[], delay=1)
+    yield from update_ui_latest_msg(lastmsg=f"检索中: {prompt} ...", chatbot=chatbot, history=[], delay=1)
     urls = searxng_request(prompt, proxies, categories, searxng_url, engines=engines)
-    yield from update_ui_lastest_msg(lastmsg=f"依次访问搜索到的网站 ...", chatbot=chatbot, history=[], delay=1)
+    yield from update_ui_latest_msg(lastmsg=f"依次访问搜索到的网站 ...", chatbot=chatbot, history=[], delay=1)
     if len(urls) == 0:
         return None
     max_search_result = 5 # 最多收纳多少个网页的结果
     history = []
     for index, url in enumerate(urls[:max_search_result]):
-        yield from update_ui_lastest_msg(lastmsg=f"依次访问搜索到的网站: {url['link']} ...", chatbot=chatbot, history=[], delay=1)
+        yield from update_ui_latest_msg(lastmsg=f"依次访问搜索到的网站: {url['link']} ...", chatbot=chatbot, history=[], delay=1)
         res = scrape_text(url['link'], proxies)
         prefix = f"第{index}份搜索结果 [源自{url['source'][0]}搜索] {url['title'][:25]}"
         history.extend([prefix, res])
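`jina_scrape_text` is a thin client for the Jina Reader proxy, which returns a Markdown rendering of the page behind `https://r.jina.ai/<url>`. A trimmed standalone sketch of the same call, with error handling reduced to the essentials (the token comes from your own configuration):

```python
import requests

def jina_read(url: str, api_key: str, timeout: int = 8) -> str:
    """Fetch a Markdown rendering of `url` via the Jina Reader proxy."""
    headers = {
        "X-Retain-Images": "none",             # drop inline images, keep text
        "Authorization": f"Bearer {api_key}",
    }
    response = requests.get("https://r.jina.ai/" + url, headers=headers, timeout=timeout)
    if response.status_code != 200:
        raise ValueError("Jina Reader request failed: " + response.text)
    if response.encoding == "ISO-8859-1":
        response.encoding = response.apparent_encoding
    return response.text
```

Note that the committed helper builds an un-escaped `result` but still returns `response.text`, so its `.replace` cleanup has no effect as written; the raised `ValueError` is what triggers the caller's fallback to the old `requests`-based scraper.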
@@ -222,7 +247,7 @@ def internet_search_with_analysis_prompt(prompt, analysis_prompt, llm_kwargs, ch
         llm_kwargs=llm_kwargs,
         history=history,
         sys_prompt="请从搜索结果中抽取信息,对最相关的两个搜索结果进行总结,然后回答问题。",
-        console_slience=False,
+        console_silence=False,
     )
     return gpt_say
@@ -246,23 +271,52 @@ def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
     urls = search_optimizer(txt, proxies, optimizer_history, llm_kwargs, optimizer, categories, searxng_url, engines)
     history = []
     if len(urls) == 0:
-        chatbot.append((f"结论:{txt}",
-                        "[Local Message] 受到限制无法从searxng获取信息请尝试更换搜索引擎。"))
+        chatbot.append((f"结论:{txt}", "[Local Message] 受到限制无法从searxng获取信息请尝试更换搜索引擎。"))
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     # ------------- < 第2步依次访问网页 > -------------
+    from concurrent.futures import ThreadPoolExecutor
+    from textwrap import dedent
     max_search_result = 5 # 最多收纳多少个网页的结果
     if optimizer == "开启(增强)":
         max_search_result = 8
-    chatbot.append(["联网检索中 ...", None])
-    for index, url in enumerate(urls[:max_search_result]):
-        res = scrape_text(url['link'], proxies)
-        prefix = f"第{index}份搜索结果 [源自{url['source'][0]}搜索] {url['title'][:25]}"
-        history.extend([prefix, res])
-        res_squeeze = res.replace('\n', '...')
-        chatbot[-1] = [prefix + "\n\n" + res_squeeze[:500] + "......", None]
-        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
+    template = dedent("""
+        <details>
+        <summary>{TITLE}</summary>
+        <div class="search_result">{URL}</div>
+        <div class="search_result">{CONTENT}</div>
+        </details>
+        """)
+    buffer = ""
+    # 创建线程池
+    with ThreadPoolExecutor(max_workers=5) as executor:
+        # 提交任务到线程池
+        futures = []
+        for index, url in enumerate(urls[:max_search_result]):
+            future = executor.submit(scrape_text, url['link'], proxies)
+            futures.append((index, future, url))
+        # 处理完成的任务
+        for index, future, url in futures:
+            # 开始
+            prefix = f"正在加载 第{index+1}份搜索结果 [源自{url['source'][0]}搜索] {url['title'][:25]}"
+            string_structure = template.format(TITLE=prefix, URL=url['link'], CONTENT="正在加载,请稍后 ......")
+            yield from update_ui_latest_msg(lastmsg=(buffer + string_structure), chatbot=chatbot, history=history, delay=0.1) # 刷新界面
+            # 获取结果
+            res = future.result()
+            # 显示结果
+            prefix = f"第{index+1}份搜索结果 [源自{url['source'][0]}搜索] {url['title'][:25]}"
+            string_structure = template.format(TITLE=prefix, URL=url['link'], CONTENT=res[:1000] + "......")
+            buffer += string_structure
+            # 更新历史
+            history.extend([prefix, res])
+            yield from update_ui_latest_msg(lastmsg=buffer, chatbot=chatbot, history=history, delay=0.1) # 刷新界面
     # ------------- < 第3步ChatGPT综合 > -------------
     if (optimizer != "开启(增强)"):

View File

@@ -38,11 +38,12 @@ class NetworkGPT_Wrap(GptAcademicPluginTemplate):
         }
         return gui_definition

-    def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+    def execute(txt, llm_kwargs, plugin_kwargs:dict, chatbot, history, system_prompt, user_request):
         """
         执行插件
         """
-        if plugin_kwargs["categories"] == "网页": plugin_kwargs["categories"] = "general"
-        if plugin_kwargs["categories"] == "学术论文": plugin_kwargs["categories"] = "science"
+        if plugin_kwargs.get("categories", None) == "网页": plugin_kwargs["categories"] = "general"
+        elif plugin_kwargs.get("categories", None) == "学术论文": plugin_kwargs["categories"] = "science"
+        else: plugin_kwargs["categories"] = "general"
         yield from 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
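The fix swaps a KeyError-prone `plugin_kwargs["categories"]` lookup for `.get` with an explicit default branch, so a missing or unknown label degrades to `"general"`. An equivalent table-driven sketch, with a hypothetical constant name:

```python
# Hypothetical alternative: one mapping keeps GUI labels and searxng category
# names together, and .get() supplies the default for missing/unknown labels.
CATEGORY_MAP = {"网页": "general", "学术论文": "science"}

def normalize_categories(plugin_kwargs: dict) -> None:
    label = plugin_kwargs.get("categories")
    plugin_kwargs["categories"] = CATEGORY_MAP.get(label, "general")
```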

View File

@@ -1,5 +1,5 @@
 from toolbox import update_ui, trimmed_format_exc, get_conf, get_log_folder, promote_file_to_downloadzone, check_repeat_upload, map_file_to_sha256
-from toolbox import CatchException, report_exception, update_ui_lastest_msg, zip_result, gen_time_str
+from toolbox import CatchException, report_exception, update_ui_latest_msg, zip_result, gen_time_str
 from functools import partial
 from loguru import logger
@@ -41,7 +41,7 @@ def switch_prompt(pfg, mode, more_requirement):
     return inputs_array, sys_prompt_array

-def desend_to_extracted_folder_if_exist(project_folder):
+def descend_to_extracted_folder_if_exist(project_folder):
     """
     Descend into the extracted folder if it exists, otherwise return the original folder.
@@ -130,7 +130,7 @@ def arxiv_download(chatbot, history, txt, allow_cache=True):
     if not txt.startswith('https://arxiv.org/abs/'):
         msg = f"解析arxiv网址失败, 期望格式例如: https://arxiv.org/abs/1707.06690。实际得到格式: {url_}"
-        yield from update_ui_lastest_msg(msg, chatbot=chatbot, history=history) # 刷新界面
+        yield from update_ui_latest_msg(msg, chatbot=chatbot, history=history) # 刷新界面
         return msg, None
     # <-------------- set format ------------->
     arxiv_id = url_.split('/abs/')[-1]
@@ -156,16 +156,16 @@ def arxiv_download(chatbot, history, txt, allow_cache=True):
         return False

     if os.path.exists(dst) and allow_cache:
-        yield from update_ui_lastest_msg(f"调用缓存 {arxiv_id}", chatbot=chatbot, history=history) # 刷新界面
+        yield from update_ui_latest_msg(f"调用缓存 {arxiv_id}", chatbot=chatbot, history=history) # 刷新界面
         success = True
     else:
-        yield from update_ui_lastest_msg(f"开始下载 {arxiv_id}", chatbot=chatbot, history=history) # 刷新界面
+        yield from update_ui_latest_msg(f"开始下载 {arxiv_id}", chatbot=chatbot, history=history) # 刷新界面
         success = fix_url_and_download()
-        yield from update_ui_lastest_msg(f"下载完成 {arxiv_id}", chatbot=chatbot, history=history) # 刷新界面
+        yield from update_ui_latest_msg(f"下载完成 {arxiv_id}", chatbot=chatbot, history=history) # 刷新界面
     if not success:
-        yield from update_ui_lastest_msg(f"下载失败 {arxiv_id}", chatbot=chatbot, history=history)
+        yield from update_ui_latest_msg(f"下载失败 {arxiv_id}", chatbot=chatbot, history=history)
         raise tarfile.ReadError(f"论文下载失败 {arxiv_id}")

     # <-------------- extract file ------------->
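The renamed helpers sit inside a cache-or-download flow: reuse the tarball if it is already on disk, otherwise download it, and raise `tarfile.ReadError` so the caller can show a manual-download hint. A condensed sketch of that control flow (paths and downloader are stubbed, not the project's real helpers):

```python
import os, tarfile

def fetch_arxiv_source(arxiv_id: str, dst: str, allow_cache: bool = True) -> str:
    def download() -> bool:
        return False  # stub standing in for fix_url_and_download()

    if os.path.exists(dst) and allow_cache:
        return dst                       # cache hit: reuse the previous tarball
    if not download():
        # the caller catches tarfile.ReadError and tells the user to fetch the
        # LaTeX source manually from the arXiv "Other Formats" page
        raise tarfile.ReadError(f"论文下载失败 {arxiv_id}")
    return dst
```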
@@ -288,7 +288,7 @@ def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, histo
     return

     # <-------------- if is a zip/tar file ------------->
-    project_folder = desend_to_extracted_folder_if_exist(project_folder)
+    project_folder = descend_to_extracted_folder_if_exist(project_folder)

     # <-------------- move latex project away from temp folder ------------->
     from shared_utils.fastapi_server import validate_path_safety
@@ -365,7 +365,7 @@ def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot,
     try:
         txt, arxiv_id = yield from arxiv_download(chatbot, history, txt, allow_cache)
     except tarfile.ReadError as e:
-        yield from update_ui_lastest_msg(
+        yield from update_ui_latest_msg(
             "无法自动下载该论文的Latex源码请前往arxiv打开此论文下载页面点other Formats然后download source手动下载latex源码包。接下来调用本地Latex翻译插件即可。",
             chatbot=chatbot, history=history)
         return
@@ -404,7 +404,7 @@ def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot,
     return

     # <-------------- if is a zip/tar file ------------->
-    project_folder = desend_to_extracted_folder_if_exist(project_folder)
+    project_folder = descend_to_extracted_folder_if_exist(project_folder)

     # <-------------- move latex project away from temp folder ------------->
     from shared_utils.fastapi_server import validate_path_safety
@@ -518,7 +518,7 @@ def PDF翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, h
     # repeat, project_folder = check_repeat_upload(file_manifest[0], hash_tag)
     # if repeat:
-    #     yield from update_ui_lastest_msg(f"发现重复上传,请查收结果(压缩包)...", chatbot=chatbot, history=history)
+    #     yield from update_ui_latest_msg(f"发现重复上传,请查收结果(压缩包)...", chatbot=chatbot, history=history)
     #     try:
     #         translate_pdf = [f for f in glob.glob(f'{project_folder}/**/merge_translate_zh.pdf', recursive=True)][0]
     #         promote_file_to_downloadzone(translate_pdf, rename_file=None, chatbot=chatbot)
@@ -531,7 +531,7 @@ def PDF翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, h
     #         report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"发现重复上传,但是无法找到相关文件")
     #         yield from update_ui(chatbot=chatbot, history=history)
     # else:
-    #     yield from update_ui_lastest_msg(f"未发现重复上传", chatbot=chatbot, history=history)
+    #     yield from update_ui_latest_msg(f"未发现重复上传", chatbot=chatbot, history=history)

     # <-------------- convert pdf into tex ------------->
     chatbot.append([f"解析项目: {txt}", "正在将PDF转换为tex项目请耐心等待..."])
@@ -543,7 +543,7 @@ def PDF翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, h
     return False

     # <-------------- translate latex file into Chinese ------------->
-    yield from update_ui_lastest_msg("正在tex项目将翻译为中文...", chatbot=chatbot, history=history)
+    yield from update_ui_latest_msg("正在tex项目将翻译为中文...", chatbot=chatbot, history=history)
     file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
     if len(file_manifest) == 0:
         report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.tex文件: {txt}")
@@ -551,7 +551,7 @@ def PDF翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, h
     return

     # <-------------- if is a zip/tar file ------------->
-    project_folder = desend_to_extracted_folder_if_exist(project_folder)
+    project_folder = descend_to_extracted_folder_if_exist(project_folder)

     # <-------------- move latex project away from temp folder ------------->
     from shared_utils.fastapi_server import validate_path_safety
@@ -571,7 +571,7 @@ def PDF翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, h
         switch_prompt=_switch_prompt_)

     # <-------------- compile PDF ------------->
-    yield from update_ui_lastest_msg("正在将翻译好的项目tex项目编译为PDF...", chatbot=chatbot, history=history)
+    yield from update_ui_latest_msg("正在将翻译好的项目tex项目编译为PDF...", chatbot=chatbot, history=history)
     success = yield from 编译Latex(chatbot, history, main_file_original='merge',
                                    main_file_modified='merge_translate_zh', mode='translate_zh',
                                    work_folder_original=project_folder, work_folder_modified=project_folder,

View File

@@ -1,5 +1,5 @@
 from toolbox import CatchException, check_packages, get_conf
-from toolbox import update_ui, update_ui_lastest_msg, disable_auto_promotion
+from toolbox import update_ui, update_ui_latest_msg, disable_auto_promotion
 from toolbox import trimmed_format_exc_markdown
 from crazy_functions.crazy_utils import get_files_from_everything
 from crazy_functions.pdf_fns.parse_pdf import get_avail_grobid_url
@@ -57,9 +57,9 @@ def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
         yield from 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, grobid_url)
         return
-    if method == "ClASSIC":
+    if method == "Classic":
         # ------- 第三种方法,早期代码,效果不理想 -------
-        yield from update_ui_lastest_msg("GROBID服务不可用请检查config中的GROBID_URL。作为替代现在将执行效果稍差的旧版代码。", chatbot, history, delay=3)
+        yield from update_ui_latest_msg("GROBID服务不可用请检查config中的GROBID_URL。作为替代现在将执行效果稍差的旧版代码。", chatbot, history, delay=3)
         yield from 解析PDF_简单拆解(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
         return
@@ -77,7 +77,7 @@ def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
     if grobid_url is not None:
         yield from 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, grobid_url)
         return
-    yield from update_ui_lastest_msg("GROBID服务不可用请检查config中的GROBID_URL。作为替代现在将执行效果稍差的旧版代码。", chatbot, history, delay=3)
+    yield from update_ui_latest_msg("GROBID服务不可用请检查config中的GROBID_URL。作为替代现在将执行效果稍差的旧版代码。", chatbot, history, delay=3)
     yield from 解析PDF_简单拆解(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
     return

View File

@@ -19,7 +19,7 @@ class PDF_Tran(GptAcademicPluginTemplate):
             "additional_prompt":
                 ArgProperty(title="额外提示词", description="例如:对专有名词、翻译语气等方面的要求", default_value="", type="string").model_dump_json(), # 高级参数输入区,自动同步
             "pdf_parse_method":
-                ArgProperty(title="PDF解析方法", options=["DOC2X", "GROBID", "ClASSIC"], description="", default_value="GROBID", type="dropdown").model_dump_json(),
+                ArgProperty(title="PDF解析方法", options=["DOC2X", "GROBID", "Classic"], description="", default_value="GROBID", type="dropdown").model_dump_json(),
         }
         return gui_definition
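This dropdown and the dispatch in 批量翻译PDF文档 carry the same string literal, and the commit renames "ClASSIC" to "Classic" in both places at once. A hedged sketch of one way to keep the two copies from drifting apart (hypothetical constant, not the project's API):

```python
# Hypothetical guard against literal drift: define the options once and have
# both the GUI dropdown and the dispatch consult the same tuple.
PDF_PARSE_METHODS = ("DOC2X", "GROBID", "Classic")

def dispatch(method: str) -> str:
    if method not in PDF_PARSE_METHODS:
        raise ValueError(f"unknown parse method: {method!r}")
    return f"running {method} pipeline"
```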

View File

@@ -4,7 +4,7 @@ from typing import List
 from shared_utils.fastapi_server import validate_path_safety
 from toolbox import report_exception
-from toolbox import CatchException, update_ui, get_conf, get_log_folder, update_ui_lastest_msg
+from toolbox import CatchException, update_ui, get_conf, get_log_folder, update_ui_latest_msg
 from shared_utils.fastapi_server import validate_path_safety
 from crazy_functions.crazy_utils import input_clipping
 from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
@@ -92,7 +92,7 @@ def Rag问答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, u
         chatbot.append([txt, f'正在清空 ({current_context}) ...'])
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         rag_worker.purge_vector_store()
-        yield from update_ui_lastest_msg('已清空', chatbot, history, delay=0) # 刷新界面
+        yield from update_ui_latest_msg('已清空', chatbot, history, delay=0) # 刷新界面
         return

     # 3. Normal Q&A processing
@@ -109,10 +109,10 @@ def Rag问答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, u
     # 5. If input is clipped, add input to vector store before retrieve
     if input_is_clipped_flag:
-        yield from update_ui_lastest_msg('检测到长输入, 正在向量化 ...', chatbot, history, delay=0) # 刷新界面
+        yield from update_ui_latest_msg('检测到长输入, 正在向量化 ...', chatbot, history, delay=0) # 刷新界面
         # Save input to vector store
         rag_worker.add_text_to_vector_store(txt_origin)
-        yield from update_ui_lastest_msg('向量化完成 ...', chatbot, history, delay=0) # 刷新界面
+        yield from update_ui_latest_msg('向量化完成 ...', chatbot, history, delay=0) # 刷新界面

         if len(txt_origin) > REMEMBER_PREVIEW:
             HALF = REMEMBER_PREVIEW // 2
@@ -142,7 +142,7 @@ def Rag问答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, u
     )

     # 8. Remember Q&A
-    yield from update_ui_lastest_msg(
+    yield from update_ui_latest_msg(
         model_say + '</br></br>' + f'对话记忆中, 请稍等 ({current_context}) ...',
         chatbot, history, delay=0.5
     )
@@ -150,4 +150,4 @@ def Rag问答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, u
     history.extend([i_say, model_say])

     # 9. Final UI Update
-    yield from update_ui_lastest_msg(model_say, chatbot, history, delay=0, msg=tip)
+    yield from update_ui_latest_msg(model_say, chatbot, history, delay=0, msg=tip)

View File

@@ -1,5 +1,5 @@
 import pickle, os, random
-from toolbox import CatchException, update_ui, get_conf, get_log_folder, update_ui_lastest_msg
+from toolbox import CatchException, update_ui, get_conf, get_log_folder, update_ui_latest_msg
 from crazy_functions.crazy_utils import input_clipping
 from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
 from request_llms.bridge_all import predict_no_ui_long_connection
@@ -9,7 +9,7 @@ from loguru import logger
 from typing import List

-SOCIAL_NETWOK_WORKER_REGISTER = {}
+SOCIAL_NETWORK_WORKER_REGISTER = {}

 class SocialNetwork():
     def __init__(self):
@@ -78,7 +78,7 @@ class SocialNetworkWorker(SaveAndLoad):
             for f in friend.friends_list:
                 self.add_friend(f)
             msg = f"成功添加{len(friend.friends_list)}个联系人: {str(friend.friends_list)}"
-            yield from update_ui_lastest_msg(lastmsg=msg, chatbot=chatbot, history=history, delay=0)
+            yield from update_ui_latest_msg(lastmsg=msg, chatbot=chatbot, history=history, delay=0)

     def run(self, txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
@@ -104,12 +104,12 @@ class SocialNetworkWorker(SaveAndLoad):
         }
         try:
-            Explaination = '\n'.join([f'{k}: {v["explain_to_llm"]}' for k, v in self.tools_to_select.items()])
+            Explanation = '\n'.join([f'{k}: {v["explain_to_llm"]}' for k, v in self.tools_to_select.items()])
             class UserSociaIntention(BaseModel):
                 intention_type: str = Field(
                     description=
                         f"The type of user intention. You must choose from {self.tools_to_select.keys()}.\n\n"
-                        f"Explaination:\n{Explaination}",
+                        f"Explanation:\n{Explanation}",
                     default="SocialAdvice"
                 )
             pydantic_cls_instance, err_msg = select_tool(
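`UserSociaIntention` is a structured-output trick: the allowed tool names and their explanations are embedded in a pydantic `Field` description, and the LLM is asked to fill the model. A minimal self-contained sketch of the same idea, with an illustrative tool table and no LLM call:

```python
from pydantic import BaseModel, Field

# illustrative tools; in the plugin these come from self.tools_to_select
tools = {
    "SocialAdvice": "give advice about a contact",
    "AddContact":   "remember a new contact",
}
explanation = "\n".join(f"{k}: {v}" for k, v in tools.items())

class UserIntention(BaseModel):
    intention_type: str = Field(
        description=(
            f"The type of user intention. You must choose from {list(tools)}.\n\n"
            f"Explanation:\n{explanation}"
        ),
        default="SocialAdvice",
    )

# the JSON schema (including the tool list) is what gets shown to the LLM
print(UserIntention.model_json_schema()["properties"]["intention_type"]["description"])
```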
@@ -118,7 +118,7 @@ class SocialNetworkWorker(SaveAndLoad):
                 pydantic_cls=UserSociaIntention
             )
         except Exception as e:
-            yield from update_ui_lastest_msg(
+            yield from update_ui_latest_msg(
                 lastmsg=f"无法理解用户意图 {err_msg}",
                 chatbot=chatbot,
                 history=history,
@@ -150,10 +150,10 @@ def I人助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt,
     # 1. we retrieve worker from global context
     user_name = chatbot.get_user()
     checkpoint_dir=get_log_folder(user_name, plugin_name='experimental_rag')
-    if user_name in SOCIAL_NETWOK_WORKER_REGISTER:
-        social_network_worker = SOCIAL_NETWOK_WORKER_REGISTER[user_name]
+    if user_name in SOCIAL_NETWORK_WORKER_REGISTER:
+        social_network_worker = SOCIAL_NETWORK_WORKER_REGISTER[user_name]
     else:
-        social_network_worker = SOCIAL_NETWOK_WORKER_REGISTER[user_name] = SocialNetworkWorker(
+        social_network_worker = SOCIAL_NETWORK_WORKER_REGISTER[user_name] = SocialNetworkWorker(
             user_name,
             llm_kwargs,
             checkpoint_dir=checkpoint_dir,

View File

@@ -1,5 +1,5 @@
 import os, copy, time
-from toolbox import CatchException, report_exception, update_ui, zip_result, promote_file_to_downloadzone, update_ui_lastest_msg, get_conf, generate_file_link
+from toolbox import CatchException, report_exception, update_ui, zip_result, promote_file_to_downloadzone, update_ui_latest_msg, get_conf, generate_file_link
 from shared_utils.fastapi_server import validate_path_safety
 from crazy_functions.crazy_utils import input_clipping
 from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
@@ -117,7 +117,7 @@ def 注释源代码(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
             logger.error(f"文件: {fp} 的注释结果未能成功")
     file_links = generate_file_link(preview_html_list)
-    yield from update_ui_lastest_msg(
+    yield from update_ui_latest_msg(
         f"当前任务: <br/>{'<br/>'.join(tasks)}.<br/>" +
         f"剩余源文件数量: {remain}.<br/>" +
         f"已完成的文件: {sum(worker_done)}.<br/>" +

View File

@@ -7,7 +7,7 @@ from bs4 import BeautifulSoup
 from functools import lru_cache
 from itertools import zip_longest
 from check_proxy import check_proxy
-from toolbox import CatchException, update_ui, get_conf, promote_file_to_downloadzone, update_ui_lastest_msg, generate_file_link
+from toolbox import CatchException, update_ui, get_conf, promote_file_to_downloadzone, update_ui_latest_msg, generate_file_link
 from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive, input_clipping
 from request_llms.bridge_all import model_info
 from request_llms.bridge_all import predict_no_ui_long_connection
@@ -46,7 +46,7 @@ def download_video(bvid, user_name, chatbot, history):
     # pause a while
     tic_time = 8
     for i in range(tic_time):
-        yield from update_ui_lastest_msg(
+        yield from update_ui_latest_msg(
             lastmsg=f"即将下载音频。等待{tic_time-i}秒后自动继续, 点击“停止”键取消此操作。",
             chatbot=chatbot, history=[], delay=1)
@@ -61,13 +61,13 @@ def download_video(bvid, user_name, chatbot, history):
     # preview
     preview_list = [promote_file_to_downloadzone(fp) for fp in downloaded_files]
     file_links = generate_file_link(preview_list)
-    yield from update_ui_lastest_msg(f"已完成的文件: <br/>" + file_links, chatbot=chatbot, history=history, delay=0)
+    yield from update_ui_latest_msg(f"已完成的文件: <br/>" + file_links, chatbot=chatbot, history=history, delay=0)

     chatbot.append((None, f"即将下载视频。"))
     # pause a while
     tic_time = 16
     for i in range(tic_time):
-        yield from update_ui_lastest_msg(
+        yield from update_ui_latest_msg(
             lastmsg=f"即将下载视频。等待{tic_time-i}秒后自动继续, 点击“停止”键取消此操作。",
             chatbot=chatbot, history=[], delay=1)
@@ -78,7 +78,7 @@ def download_video(bvid, user_name, chatbot, history):
     # preview
     preview_list = [promote_file_to_downloadzone(fp) for fp in downloaded_files_part2]
     file_links = generate_file_link(preview_list)
-    yield from update_ui_lastest_msg(f"已完成的文件: <br/>" + file_links, chatbot=chatbot, history=history, delay=0)
+    yield from update_ui_latest_msg(f"已完成的文件: <br/>" + file_links, chatbot=chatbot, history=history, delay=0)

     # return
     return downloaded_files + downloaded_files_part2
@@ -110,7 +110,7 @@ def 多媒体任务(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
     # 结构化生成
     internet_search_keyword = user_wish
-    yield from update_ui_lastest_msg(lastmsg=f"发起互联网检索: {internet_search_keyword} ...", chatbot=chatbot, history=[], delay=1)
+    yield from update_ui_latest_msg(lastmsg=f"发起互联网检索: {internet_search_keyword} ...", chatbot=chatbot, history=[], delay=1)
     from crazy_functions.Internet_GPT import internet_search_with_analysis_prompt
     result = yield from internet_search_with_analysis_prompt(
         prompt=internet_search_keyword,
@@ -119,7 +119,7 @@ def 多媒体任务(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
         chatbot=chatbot
     )
-    yield from update_ui_lastest_msg(lastmsg=f"互联网检索结论: {result} \n\n 正在生成进一步检索方案 ...", chatbot=chatbot, history=[], delay=1)
+    yield from update_ui_latest_msg(lastmsg=f"互联网检索结论: {result} \n\n 正在生成进一步检索方案 ...", chatbot=chatbot, history=[], delay=1)
     rf_req = dedent(f"""
     The user wish to get the following resource:
     {user_wish}
@@ -132,7 +132,7 @@ def 多媒体任务(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
     rf_req = dedent(f"""
     The user wish to get the following resource:
     {user_wish}
-    Generate reseach keywords (less than 5 keywords) accordingly.
+    Generate research keywords (less than 5 keywords) accordingly.
     """)
     gpt_json_io = GptJsonIO(Query)
     inputs = rf_req + gpt_json_io.format_instructions
@@ -146,12 +146,12 @@ def 多媒体任务(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

     # 获取候选资源
-    candadate_dictionary: dict = get_video_resource(video_engine_keywords)
-    candadate_dictionary_as_str = json.dumps(candadate_dictionary, ensure_ascii=False, indent=4)
+    candidate_dictionary: dict = get_video_resource(video_engine_keywords)
+    candidate_dictionary_as_str = json.dumps(candidate_dictionary, ensure_ascii=False, indent=4)

     # 展示候选资源
-    candadate_display = "\n".join([f"{i+1}. {it['title']}" for i, it in enumerate(candadate_dictionary)])
-    chatbot.append((None, f"候选:\n\n{candadate_display}"))
+    candidate_display = "\n".join([f"{i+1}. {it['title']}" for i, it in enumerate(candidate_dictionary)])
+    chatbot.append((None, f"候选:\n\n{candidate_display}"))
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

     # 结构化生成
@@ -160,7 +160,7 @@ def 多媒体任务(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
     {user_wish}
     Select the most relevant and suitable video resource from the following search results:
-    {candadate_dictionary_as_str}
+    {candidate_dictionary_as_str}
     Note:
     1. The first several search video results are more likely to satisfy the user's wish.

View File

@@ -1,5 +1,5 @@
 from toolbox import CatchException, update_ui, gen_time_str, trimmed_format_exc, ProxyNetworkActivate
-from toolbox import report_exception, get_log_folder, update_ui_lastest_msg, Singleton
+from toolbox import report_exception, get_log_folder, update_ui_latest_msg, Singleton
 from crazy_functions.agent_fns.pipe import PluginMultiprocessManager, PipeCom
 from crazy_functions.agent_fns.general import AutoGenGeneral

View File

@@ -8,7 +8,7 @@ class EchoDemo(PluginMultiprocessManager):
         while True:
             msg = self.child_conn.recv() # PipeCom
             if msg.cmd == "user_input":
-                # wait futher user input
+                # wait father user input
                 self.child_conn.send(PipeCom("show", msg.content))
                 wait_success = self.subprocess_worker_wait_user_feedback(wait_msg="我准备好处理下一个问题了.")
                 if not wait_success:

View File

@@ -27,7 +27,7 @@ def gpt_academic_generate_oai_reply(
         llm_kwargs=llm_config,
         history=history,
         sys_prompt=self._oai_system_message[0]['content'],
-        console_slience=True
+        console_silence=True
     )
     assumed_done = reply.endswith('\nTERMINATE')
     return True, reply

View File

@@ -10,7 +10,7 @@ from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_
 # TODO: 解决缩进问题
 find_function_end_prompt = '''
-Below is a page of code that you need to read. This page may not yet complete, you job is to split this page to sperate functions, class functions etc.
+Below is a page of code that you need to read. This page may not yet complete, you job is to split this page to separate functions, class functions etc.
 - Provide the line number where the first visible function ends.
 - Provide the line number where the next visible function begins.
 - If there are no other functions in this page, you should simply return the line number of the last line.
@@ -59,7 +59,7 @@ OUTPUT:
-revise_funtion_prompt = '''
+revise_function_prompt = '''
 You need to read the following code, and revise the source code ({FILE_BASENAME}) according to following instructions:
 1. You should analyze the purpose of the functions (if there are any).
 2. You need to add docstring for the provided functions (if there are any).
@@ -117,7 +117,7 @@ def zip_result(folder):
 '''

-revise_funtion_prompt_chinese = '''
+revise_function_prompt_chinese = '''
 您需要阅读以下代码,并根据以下说明修订源代码({FILE_BASENAME}):
 1. 如果源代码中包含函数的话, 你应该分析给定函数实现了什么功能
 2. 如果源代码中包含函数的话, 你需要为函数添加docstring, docstring必须使用中文
@@ -188,9 +188,9 @@ class PythonCodeComment():
         self.language = language
         self.observe_window_update = observe_window_update
         if self.language == "chinese":
-            self.core_prompt = revise_funtion_prompt_chinese
+            self.core_prompt = revise_function_prompt_chinese
         else:
-            self.core_prompt = revise_funtion_prompt
+            self.core_prompt = revise_function_prompt
         self.path = None
         self.file_basename = None
         self.file_brief = ""
@@ -222,7 +222,7 @@ class PythonCodeComment():
             history=[],
             sys_prompt="",
             observe_window=[],
-            console_slience=True
+            console_silence=True
         )

     def extract_number(text):
@@ -316,7 +316,7 @@ class PythonCodeComment():
     def tag_code(self, fn, hint):
         code = fn
         _, n_indent = self.dedent(code)
-        indent_reminder = "" if n_indent == 0 else "(Reminder: as you can see, this piece of code has indent made up with {n_indent} whitespace, please preseve them in the OUTPUT.)"
+        indent_reminder = "" if n_indent == 0 else "(Reminder: as you can see, this piece of code has indent made up with {n_indent} whitespace, please preserve them in the OUTPUT.)"
         brief_reminder = "" if self.file_brief == "" else f"({self.file_basename} abstract: {self.file_brief})"
         hint_reminder = "" if hint is None else f"(Reminder: do not ignore or modify code such as `{hint}`, provide complete code in the OUTPUT.)"
         self.llm_kwargs['temperature'] = 0
@@ -333,7 +333,7 @@ class PythonCodeComment():
             history=[],
             sys_prompt="",
             observe_window=[],
-            console_slience=True
+            console_silence=True
         )

     def get_code_block(reply):
@@ -400,7 +400,7 @@ class PythonCodeComment():
         return revised

     def begin_comment_source_code(self, chatbot=None, history=None):
-        # from toolbox import update_ui_lastest_msg
+        # from toolbox import update_ui_latest_msg
         assert self.path is not None
         assert '.py' in self.path # must be python source code
         # write_target = self.path + '.revised.py'
@@ -409,10 +409,10 @@ class PythonCodeComment():
         # with open(self.path + '.revised.py', 'w+', encoding='utf8') as f:
         while True:
             try:
-                # yield from update_ui_lastest_msg(f"({self.file_basename}) 正在读取下一段代码片段:\n", chatbot=chatbot, history=history, delay=0)
+                # yield from update_ui_latest_msg(f"({self.file_basename}) 正在读取下一段代码片段:\n", chatbot=chatbot, history=history, delay=0)
                 next_batch, line_no_start, line_no_end = self.get_next_batch()
                 self.observe_window_update(f"正在处理{self.file_basename} - {line_no_start}/{len(self.full_context)}\n")
-                # yield from update_ui_lastest_msg(f"({self.file_basename}) 处理代码片段:\n\n{next_batch}", chatbot=chatbot, history=history, delay=0)
+                # yield from update_ui_latest_msg(f"({self.file_basename}) 处理代码片段:\n\n{next_batch}", chatbot=chatbot, history=history, delay=0)
                 hint = None
                 MAX_ATTEMPT = 2

View File

@@ -1,7 +1,7 @@
 import os
 import threading
 from loguru import logger
-from shared_utils.char_visual_effect import scolling_visual_effect
+from shared_utils.char_visual_effect import scrolling_visual_effect
 from toolbox import update_ui, get_conf, trimmed_format_exc, get_max_token, Singleton

 def input_clipping(inputs, history, max_token_limit, return_clip_flags=False):
@@ -256,7 +256,7 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
                 # 【第一种情况】:顺利完成
                 gpt_say = predict_no_ui_long_connection(
                     inputs=inputs, llm_kwargs=llm_kwargs, history=history,
-                    sys_prompt=sys_prompt, observe_window=mutable[index], console_slience=True
+                    sys_prompt=sys_prompt, observe_window=mutable[index], console_silence=True
                 )
                 mutable[index][2] = "已成功"
                 return gpt_say
@@ -326,7 +326,7 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
                 mutable[thread_index][1] = time.time()
             # 在前端打印些好玩的东西
             for thread_index, _ in enumerate(worker_done):
-                print_something_really_funny = f"[ ...`{scolling_visual_effect(mutable[thread_index][0], scroller_max_len)}`... ]"
+                print_something_really_funny = f"[ ...`{scrolling_visual_effect(mutable[thread_index][0], scroller_max_len)}`... ]"
                 observe_win.append(print_something_really_funny)
             # 在前端打印些好玩的东西
             stat_str = ''.join([f'`{mutable[thread_index][2]}`: {obs}\n\n'
@@ -389,11 +389,11 @@ def read_and_clean_pdf_text(fp):
         """
         提取文本块主字体
         """
-        fsize_statiscs = {}
+        fsize_statistics = {}
         for wtf in l['spans']:
-            if wtf['size'] not in fsize_statiscs: fsize_statiscs[wtf['size']] = 0
-            fsize_statiscs[wtf['size']] += len(wtf['text'])
-        return max(fsize_statiscs, key=fsize_statiscs.get)
+            if wtf['size'] not in fsize_statistics: fsize_statistics[wtf['size']] = 0
+            fsize_statistics[wtf['size']] += len(wtf['text'])
+        return max(fsize_statistics, key=fsize_statistics.get)

     def ffsize_same(a,b):
         """
@@ -433,11 +433,11 @@ def read_and_clean_pdf_text(fp):
     ############################## <第 2 步,获取正文主字体> ##################################
     try:
-        fsize_statiscs = {}
+        fsize_statistics = {}
         for span in meta_span:
-            if span[1] not in fsize_statiscs: fsize_statiscs[span[1]] = 0
-            fsize_statiscs[span[1]] += span[2]
-        main_fsize = max(fsize_statiscs, key=fsize_statiscs.get)
+            if span[1] not in fsize_statistics: fsize_statistics[span[1]] = 0
+            fsize_statistics[span[1]] += span[2]
+        main_fsize = max(fsize_statistics, key=fsize_statistics.get)
         if REMOVE_FOOT_NOTE:
             give_up_fize_threshold = main_fsize * REMOVE_FOOT_FFSIZE_PERCENT
     except:
@@ -610,9 +610,9 @@ class nougat_interface():
     def NOUGAT_parse_pdf(self, fp, chatbot, history):
-        from toolbox import update_ui_lastest_msg
-        yield from update_ui_lastest_msg("正在解析论文, 请稍候。进度:正在排队, 等待线程锁...",
+        from toolbox import update_ui_latest_msg
+        yield from update_ui_latest_msg("正在解析论文, 请稍候。进度:正在排队, 等待线程锁...",
                                          chatbot=chatbot, history=history, delay=0)
         self.threadLock.acquire()
         import glob, threading, os
@@ -620,7 +620,7 @@ class nougat_interface():
         dst = os.path.join(get_log_folder(plugin_name='nougat'), gen_time_str())
         os.makedirs(dst)
-        yield from update_ui_lastest_msg("正在解析论文, 请稍候。进度正在加载NOUGAT... 提示首次运行需要花费较长时间下载NOUGAT参数",
+        yield from update_ui_latest_msg("正在解析论文, 请稍候。进度正在加载NOUGAT... 提示首次运行需要花费较长时间下载NOUGAT参数",
                                          chatbot=chatbot, history=history, delay=0)
         command = ['nougat', '--out', os.path.abspath(dst), os.path.abspath(fp)]
         self.nougat_with_timeout(command, cwd=os.getcwd(), timeout=3600)

View File

@@ -1,4 +1,4 @@
-from toolbox import CatchException, update_ui, update_ui_lastest_msg
+from toolbox import CatchException, update_ui, update_ui_latest_msg
 from crazy_functions.multi_stage.multi_stage_utils import GptAcademicGameBaseState
 from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
 from request_llms.bridge_all import predict_no_ui_long_connection
@@ -13,7 +13,7 @@ class MiniGame_ASCII_Art(GptAcademicGameBaseState):
         else:
             if prompt.strip() == 'exit':
                 self.delete_game = True
-                yield from update_ui_lastest_msg(lastmsg=f"谜底是{self.obj},游戏结束。", chatbot=chatbot, history=history, delay=0.)
+                yield from update_ui_latest_msg(lastmsg=f"谜底是{self.obj},游戏结束。", chatbot=chatbot, history=history, delay=0.)
                 return
         chatbot.append([prompt, ""])
         yield from update_ui(chatbot=chatbot, history=history)
@@ -31,12 +31,12 @@ class MiniGame_ASCII_Art(GptAcademicGameBaseState):
             self.cur_task = 'identify user guess'
             res = get_code_block(raw_res)
             history += ['', f'the answer is {self.obj}', inputs, res]
-            yield from update_ui_lastest_msg(lastmsg=res, chatbot=chatbot, history=history, delay=0.)
+            yield from update_ui_latest_msg(lastmsg=res, chatbot=chatbot, history=history, delay=0.)
         elif self.cur_task == 'identify user guess':
             if is_same_thing(self.obj, prompt, self.llm_kwargs):
                 self.delete_game = True
-                yield from update_ui_lastest_msg(lastmsg="你猜对了!", chatbot=chatbot, history=history, delay=0.)
+                yield from update_ui_latest_msg(lastmsg="你猜对了!", chatbot=chatbot, history=history, delay=0.)
             else:
                 self.cur_task = 'identify user guess'
-                yield from update_ui_lastest_msg(lastmsg="猜错了再试试输入“exit”获取答案。", chatbot=chatbot, history=history, delay=0.)
+                yield from update_ui_latest_msg(lastmsg="猜错了再试试输入“exit”获取答案。", chatbot=chatbot, history=history, delay=0.)

View File

@@ -63,7 +63,7 @@ prompts_terminate = """小说的前文回顾:
 """
-from toolbox import CatchException, update_ui, update_ui_lastest_msg
+from toolbox import CatchException, update_ui, update_ui_latest_msg
 from crazy_functions.multi_stage.multi_stage_utils import GptAcademicGameBaseState
 from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
 from request_llms.bridge_all import predict_no_ui_long_connection
@@ -112,7 +112,7 @@ class MiniGame_ResumeStory(GptAcademicGameBaseState):
 if prompt.strip() == 'exit' or prompt.strip() == '结束剧情':
 # should we terminate game here?
 self.delete_game = True
-yield from update_ui_lastest_msg(lastmsg=f"游戏结束。", chatbot=chatbot, history=history, delay=0.)
+yield from update_ui_latest_msg(lastmsg=f"游戏结束。", chatbot=chatbot, history=history, delay=0.)
 return
 if '剧情收尾' in prompt:
 self.cur_task = 'story_terminate'
@@ -137,8 +137,8 @@ class MiniGame_ResumeStory(GptAcademicGameBaseState):
 )
 self.story.append(story_paragraph)
 # # 配图
-yield from update_ui_lastest_msg(lastmsg=story_paragraph + '<br/>正在生成插图中 ...', chatbot=chatbot, history=history, delay=0.)
+yield from update_ui_latest_msg(lastmsg=story_paragraph + '<br/>正在生成插图中 ...', chatbot=chatbot, history=history, delay=0.)
-yield from update_ui_lastest_msg(lastmsg=story_paragraph + '<br/>'+ self.generate_story_image(story_paragraph), chatbot=chatbot, history=history, delay=0.)
+yield from update_ui_latest_msg(lastmsg=story_paragraph + '<br/>'+ self.generate_story_image(story_paragraph), chatbot=chatbot, history=history, delay=0.)
 # # 构建后续剧情引导
 previously_on_story = ""
@@ -171,8 +171,8 @@ class MiniGame_ResumeStory(GptAcademicGameBaseState):
 )
 self.story.append(story_paragraph)
 # # 配图
-yield from update_ui_lastest_msg(lastmsg=story_paragraph + '<br/>正在生成插图中 ...', chatbot=chatbot, history=history, delay=0.)
+yield from update_ui_latest_msg(lastmsg=story_paragraph + '<br/>正在生成插图中 ...', chatbot=chatbot, history=history, delay=0.)
-yield from update_ui_lastest_msg(lastmsg=story_paragraph + '<br/>'+ self.generate_story_image(story_paragraph), chatbot=chatbot, history=history, delay=0.)
+yield from update_ui_latest_msg(lastmsg=story_paragraph + '<br/>'+ self.generate_story_image(story_paragraph), chatbot=chatbot, history=history, delay=0.)
 # # 构建后续剧情引导
 previously_on_story = ""
@@ -204,8 +204,8 @@ class MiniGame_ResumeStory(GptAcademicGameBaseState):
 chatbot, history_, self.sys_prompt_
 )
 # # 配图
-yield from update_ui_lastest_msg(lastmsg=story_paragraph + '<br/>正在生成插图中 ...', chatbot=chatbot, history=history, delay=0.)
+yield from update_ui_latest_msg(lastmsg=story_paragraph + '<br/>正在生成插图中 ...', chatbot=chatbot, history=history, delay=0.)
-yield from update_ui_lastest_msg(lastmsg=story_paragraph + '<br/>'+ self.generate_story_image(story_paragraph), chatbot=chatbot, history=history, delay=0.)
+yield from update_ui_latest_msg(lastmsg=story_paragraph + '<br/>'+ self.generate_story_image(story_paragraph), chatbot=chatbot, history=history, delay=0.)
 # terminate game
 self.delete_game = True

View File

@@ -2,7 +2,7 @@ import time
 import importlib
 from toolbox import trimmed_format_exc, gen_time_str, get_log_folder
 from toolbox import CatchException, update_ui, gen_time_str, trimmed_format_exc, is_the_upload_folder
-from toolbox import promote_file_to_downloadzone, get_log_folder, update_ui_lastest_msg
+from toolbox import promote_file_to_downloadzone, get_log_folder, update_ui_latest_msg
 import multiprocessing
 def get_class_name(class_string):

View File

@@ -102,10 +102,10 @@ class GptJsonIO():
 logging.info(f'Repairing json{response}')
 repair_prompt = self.generate_repair_prompt(broken_json = response, error=repr(e))
 result = self.generate_output(gpt_gen_fn(repair_prompt, self.format_instructions))
-logging.info('Repaire json success.')
+logging.info('Repair json success.')
 except Exception as e:
 # 没辙了,放弃治疗
-logging.info('Repaire json fail.')
+logging.info('Repair json fail.')
 raise JsonStringError('Cannot repair json.', str(e))
 return result
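
The `Repair json` path above is a one-shot self-repair loop: if the model's reply fails to parse, the parse error and the broken payload are fed back to the model and the new reply is parsed again. A minimal sketch of the pattern, with a stand-in `gpt_gen_fn` callable (the real `GptJsonIO` builds the repair prompt from its format instructions):

```python
import json

def parse_with_repair(response: str, gpt_gen_fn):
    """Try to parse model output as JSON; on failure, ask the model to fix it once."""
    try:
        return json.loads(response)
    except json.JSONDecodeError as e:
        # Feed the broken payload and the error back to the model (assumed signature).
        repair_prompt = f"Fix this broken JSON and output JSON only.\nError: {e!r}\n{response}"
        return json.loads(gpt_gen_fn(repair_prompt))
```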

View File

@@ -3,7 +3,7 @@ import re
 import shutil
 import numpy as np
 from loguru import logger
-from toolbox import update_ui, update_ui_lastest_msg, get_log_folder, gen_time_str
+from toolbox import update_ui, update_ui_latest_msg, get_log_folder, gen_time_str
 from toolbox import get_conf, promote_file_to_downloadzone
 from crazy_functions.latex_fns.latex_toolbox import PRESERVE, TRANSFORM
 from crazy_functions.latex_fns.latex_toolbox import set_forbidden_text, set_forbidden_text_begin_end, set_forbidden_text_careful_brace
@@ -20,7 +20,7 @@ def split_subprocess(txt, project_folder, return_dict, opts):
 """
 break down latex file to a linked list,
 each node use a preserve flag to indicate whether it should
-be proccessed by GPT.
+be processed by GPT.
 """
 text = txt
 mask = np.zeros(len(txt), dtype=np.uint8) + TRANSFORM
@@ -85,14 +85,14 @@ class LatexPaperSplit():
 """
 break down latex file to a linked list,
 each node use a preserve flag to indicate whether it should
-be proccessed by GPT.
+be processed by GPT.
 """
 def __init__(self) -> None:
 self.nodes = None
 self.msg = "*{\\scriptsize\\textbf{警告该PDF由GPT-Academic开源项目调用大语言模型+Latex翻译插件一键生成" + \
 "版权归原文作者所有。翻译内容可靠性无保障,请仔细鉴别并以原文为准。" + \
 "项目Github地址 \\url{https://github.com/binary-husky/gpt_academic/}。"
-# 请您不要删除或修改这行警告除非您是论文的原作者如果您是论文原作者欢迎加REAME中的QQ联系开发者
+# 请您不要删除或修改这行警告除非您是论文的原作者如果您是论文原作者欢迎加README中的QQ联系开发者
 self.msg_declare = "为了防止大语言模型的意外谬误产生扩散影响,禁止移除或修改此警告。}}\\\\"
 self.title = "unknown"
 self.abstract = "unknown"
@@ -151,7 +151,7 @@ class LatexPaperSplit():
 """
 break down latex file to a linked list,
 each node use a preserve flag to indicate whether it should
-be proccessed by GPT.
+be processed by GPT.
 P.S. use multiprocessing to avoid timeout error
 """
 import multiprocessing
@@ -351,7 +351,7 @@ def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_f
 max_try = 32
 chatbot.append([f"正在编译PDF文档", f'编译已经开始。当前工作路径为{work_folder}如果程序停顿5分钟以上请直接去该路径下取回翻译结果或者重启之后再度尝试 ...']); yield from update_ui(chatbot=chatbot, history=history)
 chatbot.append([f"正在编译PDF文档", '...']); yield from update_ui(chatbot=chatbot, history=history); time.sleep(1); chatbot[-1] = list(chatbot[-1]) # 刷新界面
-yield from update_ui_lastest_msg('编译已经开始...', chatbot, history) # 刷新Gradio前端界面
+yield from update_ui_latest_msg('编译已经开始...', chatbot, history) # 刷新Gradio前端界面
 # 检查是否需要使用xelatex
 def check_if_need_xelatex(tex_path):
 try:
@@ -396,32 +396,32 @@ def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_f
 shutil.copyfile(may_exist_bbl, target_bbl)
 # https://stackoverflow.com/questions/738755/dont-make-me-manually-abort-a-latex-compile-when-theres-an-error
-yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译原始PDF ...', chatbot, history) # 刷新Gradio前端界面
+yield from update_ui_latest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译原始PDF ...', chatbot, history) # 刷新Gradio前端界面
 ok = compile_latex_with_timeout(get_compile_command(compiler, main_file_original), work_folder_original)
-yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译转化后的PDF ...', chatbot, history) # 刷新Gradio前端界面
+yield from update_ui_latest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译转化后的PDF ...', chatbot, history) # 刷新Gradio前端界面
 ok = compile_latex_with_timeout(get_compile_command(compiler, main_file_modified), work_folder_modified)
 if ok and os.path.exists(pj(work_folder_modified, f'{main_file_modified}.pdf')):
 # 只有第二步成功,才能继续下面的步骤
-yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译BibTex ...', chatbot, history) # 刷新Gradio前端界面
+yield from update_ui_latest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译BibTex ...', chatbot, history) # 刷新Gradio前端界面
 if not os.path.exists(pj(work_folder_original, f'{main_file_original}.bbl')):
 ok = compile_latex_with_timeout(f'bibtex {main_file_original}.aux', work_folder_original)
 if not os.path.exists(pj(work_folder_modified, f'{main_file_modified}.bbl')):
 ok = compile_latex_with_timeout(f'bibtex {main_file_modified}.aux', work_folder_modified)
-yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译文献交叉引用 ...', chatbot, history) # 刷新Gradio前端界面
+yield from update_ui_latest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译文献交叉引用 ...', chatbot, history) # 刷新Gradio前端界面
 ok = compile_latex_with_timeout(get_compile_command(compiler, main_file_original), work_folder_original)
 ok = compile_latex_with_timeout(get_compile_command(compiler, main_file_modified), work_folder_modified)
 ok = compile_latex_with_timeout(get_compile_command(compiler, main_file_original), work_folder_original)
 ok = compile_latex_with_timeout(get_compile_command(compiler, main_file_modified), work_folder_modified)
 if mode!='translate_zh':
-yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 使用latexdiff生成论文转化前后对比 ...', chatbot, history) # 刷新Gradio前端界面
+yield from update_ui_latest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 使用latexdiff生成论文转化前后对比 ...', chatbot, history) # 刷新Gradio前端界面
 logger.info( f'latexdiff --encoding=utf8 --append-safecmd=subfile {work_folder_original}/{main_file_original}.tex {work_folder_modified}/{main_file_modified}.tex --flatten > {work_folder}/merge_diff.tex')
 ok = compile_latex_with_timeout(f'latexdiff --encoding=utf8 --append-safecmd=subfile {work_folder_original}/{main_file_original}.tex {work_folder_modified}/{main_file_modified}.tex --flatten > {work_folder}/merge_diff.tex', os.getcwd())
-yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 正在编译对比PDF ...', chatbot, history) # 刷新Gradio前端界面
+yield from update_ui_latest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 正在编译对比PDF ...', chatbot, history) # 刷新Gradio前端界面
 ok = compile_latex_with_timeout(get_compile_command(compiler, 'merge_diff'), work_folder)
 ok = compile_latex_with_timeout(f'bibtex merge_diff.aux', work_folder)
 ok = compile_latex_with_timeout(get_compile_command(compiler, 'merge_diff'), work_folder)
@@ -435,13 +435,13 @@ def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_f
 results_ += f"原始PDF编译是否成功: {original_pdf_success};"
 results_ += f"转化PDF编译是否成功: {modified_pdf_success};"
 results_ += f"对比PDF编译是否成功: {diff_pdf_success};"
-yield from update_ui_lastest_msg(f'第{n_fix}编译结束:<br/>{results_}...', chatbot, history) # 刷新Gradio前端界面
+yield from update_ui_latest_msg(f'第{n_fix}编译结束:<br/>{results_}...', chatbot, history) # 刷新Gradio前端界面
 if diff_pdf_success:
 result_pdf = pj(work_folder_modified, f'merge_diff.pdf') # get pdf path
 promote_file_to_downloadzone(result_pdf, rename_file=None, chatbot=chatbot) # promote file to web UI
 if modified_pdf_success:
-yield from update_ui_lastest_msg(f'转化PDF编译已经成功, 正在尝试生成对比PDF, 请稍候 ...', chatbot, history) # 刷新Gradio前端界面
+yield from update_ui_latest_msg(f'转化PDF编译已经成功, 正在尝试生成对比PDF, 请稍候 ...', chatbot, history) # 刷新Gradio前端界面
 result_pdf = pj(work_folder_modified, f'{main_file_modified}.pdf') # get pdf path
 origin_pdf = pj(work_folder_original, f'{main_file_original}.pdf') # get pdf path
 if os.path.exists(pj(work_folder, '..', 'translation')):
@@ -472,7 +472,7 @@ def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_f
 work_folder_modified=work_folder_modified,
 fixed_line=fixed_line
 )
-yield from update_ui_lastest_msg(f'由于最为关键的转化PDF编译失败, 将根据报错信息修正tex源文件并重试, 当前报错的latex代码处于第{buggy_lines}行 ...', chatbot, history) # 刷新Gradio前端界面
+yield from update_ui_latest_msg(f'由于最为关键的转化PDF编译失败, 将根据报错信息修正tex源文件并重试, 当前报错的latex代码处于第{buggy_lines}行 ...', chatbot, history) # 刷新Gradio前端界面
 if not can_retry: break
 return False # 失败啦
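
Context for the corrected docstrings ("processed by GPT"): the splitter builds a byte-level mask over the LaTeX source, marks protected regions PRESERVE, and only the remaining TRANSFORM runs are handed to the model, exactly as the `mask = np.zeros(len(txt), ...) + TRANSFORM` line in the first hunk suggests. A rough sketch of that masking step, assuming a single forbidden pattern and the simplified constants below:

```python
import re
from itertools import groupby
import numpy as np

PRESERVE, TRANSFORM = 0, 1  # simplified stand-ins for the project's constants

def mask_forbidden(text: str, pattern: str) -> np.ndarray:
    """Mark regex matches as PRESERVE; everything else stays TRANSFORM."""
    mask = np.full(len(text), TRANSFORM, dtype=np.uint8)
    for m in re.finditer(pattern, text, flags=re.DOTALL):
        mask[m.start():m.end()] = PRESERVE
    return mask

tex = r"\begin{equation}E=mc^2\end{equation} Translated prose goes here."
mask = mask_forbidden(tex, r"\\begin\{equation\}.*?\\end\{equation\}")

# Split the source into (segment, preserved?) nodes by runs of equal mask values.
segments, idx = [], 0
for flag, run in groupby(mask):
    n = len(list(run))
    segments.append((tex[idx:idx + n], flag == PRESERVE))
    idx += n
# -> the equation node is preserved; only the trailing prose would go to the model
```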

View File

@@ -168,7 +168,7 @@ def set_forbidden_text(text, mask, pattern, flags=0):
 def reverse_forbidden_text(text, mask, pattern, flags=0, forbid_wrapper=True):
 """
 Move area out of preserve area (make text editable for GPT)
-count the number of the braces so as to catch compelete text area.
+count the number of the braces so as to catch complete text area.
 e.g.
 \begin{abstract} blablablablablabla. \end{abstract}
 """
@@ -188,7 +188,7 @@ def reverse_forbidden_text(text, mask, pattern, flags=0, forbid_wrapper=True):
 def set_forbidden_text_careful_brace(text, mask, pattern, flags=0):
 """
 Add a preserve text area in this paper (text become untouchable for GPT).
-count the number of the braces so as to catch compelete text area.
+count the number of the braces so as to catch complete text area.
 e.g.
 \caption{blablablablabla\texbf{blablabla}blablabla.}
 """
@@ -214,7 +214,7 @@ def reverse_forbidden_text_careful_brace(
 ):
 """
 Move area out of preserve area (make text editable for GPT)
-count the number of the braces so as to catch compelete text area.
+count the number of the braces so as to catch complete text area.
 e.g.
 \caption{blablablablabla\texbf{blablabla}blablabla.}
 """
@@ -287,23 +287,23 @@ def find_main_tex_file(file_manifest, mode):
 在多Tex文档中寻找主文件必须包含documentclass返回找到的第一个。
 P.S. 但愿没人把latex模板放在里面传进来 (6.25 加入判定latex模板的代码)
 """
-canidates = []
+candidates = []
 for texf in file_manifest:
 if os.path.basename(texf).startswith("merge"):
 continue
 with open(texf, "r", encoding="utf8", errors="ignore") as f:
 file_content = f.read()
 if r"\documentclass" in file_content:
-canidates.append(texf)
+candidates.append(texf)
 else:
 continue
-if len(canidates) == 0:
+if len(candidates) == 0:
 raise RuntimeError("无法找到一个主Tex文件包含documentclass关键字")
-elif len(canidates) == 1:
+elif len(candidates) == 1:
-return canidates[0]
+return candidates[0]
-else: # if len(canidates) >= 2 通过一些Latex模板中常见但通常不会出现在正文的单词对不同latex源文件扣分取评分最高者返回
+else: # if len(candidates) >= 2 通过一些Latex模板中常见但通常不会出现在正文的单词对不同latex源文件扣分取评分最高者返回
-canidates_score = []
+candidates_score = []
 # 给出一些判定模板文档的词作为扣分项
 unexpected_words = [
 "\\LaTeX",
@@ -316,19 +316,19 @@ def find_main_tex_file(file_manifest, mode):
 "reviewers",
 ]
 expected_words = ["\\input", "\\ref", "\\cite"]
-for texf in canidates:
+for texf in candidates:
-canidates_score.append(0)
+candidates_score.append(0)
 with open(texf, "r", encoding="utf8", errors="ignore") as f:
 file_content = f.read()
 file_content = rm_comments(file_content)
 for uw in unexpected_words:
 if uw in file_content:
-canidates_score[-1] -= 1
+candidates_score[-1] -= 1
 for uw in expected_words:
 if uw in file_content:
-canidates_score[-1] += 1
+candidates_score[-1] += 1
-select = np.argmax(canidates_score) # 取评分最高者返回
+select = np.argmax(candidates_score) # 取评分最高者返回
-return canidates[select]
+return candidates[select]
 def rm_comments(main_file):
@@ -374,7 +374,7 @@ def find_tex_file_ignore_case(fp):
 def merge_tex_files_(project_foler, main_file, mode):
 """
-Merge Tex project recrusively
+Merge Tex project recursively
 """
 main_file = rm_comments(main_file)
 for s in reversed([q for q in re.finditer(r"\\input\{(.*?)\}", main_file, re.M)]):
@@ -429,7 +429,7 @@ def find_title_and_abs(main_file):
 def merge_tex_files(project_foler, main_file, mode):
 """
-Merge Tex project recrusively
+Merge Tex project recursively
 P.S. 顺便把CTEX塞进去以支持中文
 P.S. 顺便把Latex的注释去除
 """

View File

@@ -1,4 +1,4 @@
-from toolbox import update_ui, get_conf, promote_file_to_downloadzone, update_ui_lastest_msg, generate_file_link
+from toolbox import update_ui, get_conf, promote_file_to_downloadzone, update_ui_latest_msg, generate_file_link
 from shared_utils.docker_as_service_api import stream_daas
 from shared_utils.docker_as_service_api import DockerServiceApiComModel
 import random
@@ -25,7 +25,7 @@ def download_video(video_id, only_audio, user_name, chatbot, history):
 status_buf += "\n\n"
 status_buf += "DaaS file attach: \n\n"
 status_buf += str(output_manifest['server_file_attach'])
-yield from update_ui_lastest_msg(status_buf, chatbot, history)
+yield from update_ui_latest_msg(status_buf, chatbot, history)
 return output_manifest['server_file_attach']

View File

@@ -1,6 +1,6 @@
 from pydantic import BaseModel, Field
 from typing import List
-from toolbox import update_ui_lastest_msg, disable_auto_promotion
+from toolbox import update_ui_latest_msg, disable_auto_promotion
 from toolbox import CatchException, update_ui, get_conf, select_api_key, get_log_folder
 from request_llms.bridge_all import predict_no_ui_long_connection
 from crazy_functions.json_fns.pydantic_io import GptJsonIO, JsonStringError

View File

@@ -113,7 +113,7 @@ def translate_pdf(article_dict, llm_kwargs, chatbot, fp, generated_conclusion_fi
 return [txt]
 else:
 # raw_token_num > TOKEN_LIMIT_PER_FRAGMENT
-# find a smooth token limit to achieve even seperation
+# find a smooth token limit to achieve even separation
 count = int(math.ceil(raw_token_num / TOKEN_LIMIT_PER_FRAGMENT))
 token_limit_smooth = raw_token_num // count + count
 return breakdown_text_to_satisfy_token_limit(txt, limit=token_limit_smooth, llm_model=llm_kwargs['llm_model'])
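
The corrected comment ("even separation") matters here: rather than cutting at the hard limit and leaving a small tail fragment, the splitter first counts how many fragments are needed and then sizes them evenly. Worked through with an assumed TOKEN_LIMIT_PER_FRAGMENT of 1024:

```python
import math

TOKEN_LIMIT_PER_FRAGMENT = 1024  # assumed value, for illustration only
raw_token_num = 2500

count = int(math.ceil(raw_token_num / TOKEN_LIMIT_PER_FRAGMENT))  # ceil(2.44) = 3
token_limit_smooth = raw_token_num // count + count               # 833 + 3 = 836
# Splitting at ~836 tokens yields three near-equal pieces instead of 1024+1024+452.
```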

View File

@@ -1,6 +1,6 @@
 import os
 from toolbox import CatchException, report_exception, get_log_folder, gen_time_str, check_packages
-from toolbox import update_ui, promote_file_to_downloadzone, update_ui_lastest_msg, disable_auto_promotion
+from toolbox import update_ui, promote_file_to_downloadzone, update_ui_latest_msg, disable_auto_promotion
 from toolbox import write_history_to_file, promote_file_to_downloadzone, get_conf, extract_archive
 from crazy_functions.pdf_fns.parse_pdf import parse_pdf, translate_pdf

View File

@@ -14,17 +14,17 @@ def extract_text_from_files(txt, chatbot, history):
 final_result(list):文本内容
 page_one(list):第一页内容/摘要
 file_manifest(list):文件路径
-excption(string):需要用户手动处理的信息,如没出错则保持为空
+exception(string):需要用户手动处理的信息,如没出错则保持为空
 """
 final_result = []
 page_one = []
 file_manifest = []
-excption = ""
+exception = ""
 if txt == "":
 final_result.append(txt)
-return False, final_result, page_one, file_manifest, excption #如输入区内容不是文件则直接返回输入区内容
+return False, final_result, page_one, file_manifest, exception #如输入区内容不是文件则直接返回输入区内容
 #查找输入区内容中的文件
 file_pdf,pdf_manifest,folder_pdf = get_files_from_everything(txt, '.pdf')
@@ -33,20 +33,20 @@ def extract_text_from_files(txt, chatbot, history):
 file_doc,doc_manifest,folder_doc = get_files_from_everything(txt, '.doc')
 if file_doc:
-excption = "word"
+exception = "word"
-return False, final_result, page_one, file_manifest, excption
+return False, final_result, page_one, file_manifest, exception
 file_num = len(pdf_manifest) + len(md_manifest) + len(word_manifest)
 if file_num == 0:
 final_result.append(txt)
-return False, final_result, page_one, file_manifest, excption #如输入区内容不是文件则直接返回输入区内容
+return False, final_result, page_one, file_manifest, exception #如输入区内容不是文件则直接返回输入区内容
 if file_pdf:
 try: # 尝试导入依赖,如果缺少依赖,则给出安装建议
 import fitz
 except:
-excption = "pdf"
+exception = "pdf"
-return False, final_result, page_one, file_manifest, excption
+return False, final_result, page_one, file_manifest, exception
 for index, fp in enumerate(pdf_manifest):
 file_content, pdf_one = read_and_clean_pdf_text(fp) # 尝试按照章节切割PDF
 file_content = file_content.encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
@@ -72,8 +72,8 @@ def extract_text_from_files(txt, chatbot, history):
 try: # 尝试导入依赖,如果缺少依赖,则给出安装建议
 from docx import Document
 except:
-excption = "word_pip"
+exception = "word_pip"
-return False, final_result, page_one, file_manifest, excption
+return False, final_result, page_one, file_manifest, exception
 for index, fp in enumerate(word_manifest):
 doc = Document(fp)
 file_content = '\n'.join([p.text for p in doc.paragraphs])
@@ -82,4 +82,4 @@ def extract_text_from_files(txt, chatbot, history):
 final_result.append(file_content)
 file_manifest.append(os.path.relpath(fp, folder_word))
-return True, final_result, page_one, file_manifest, excption
+return True, final_result, page_one, file_manifest, exception
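
The `.docx` branch above uses python-docx; in isolation, that extraction step looks like this (assuming `pip install python-docx`):

```python
from docx import Document  # package name on PyPI: python-docx

def read_docx_text(fp: str) -> str:
    """Concatenate all paragraph text from a .docx file, one paragraph per line."""
    doc = Document(fp)
    return "\n".join(p.text for p in doc.paragraphs)
```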

View File

@@ -60,7 +60,7 @@ def similarity_search_with_score_by_vector(
 self, embedding: List[float], k: int = 4
 ) -> List[Tuple[Document, float]]:
-def seperate_list(ls: List[int]) -> List[List[int]]:
+def separate_list(ls: List[int]) -> List[List[int]]:
 lists = []
 ls1 = [ls[0]]
 for i in range(1, len(ls)):
@@ -82,7 +82,7 @@ def similarity_search_with_score_by_vector(
 continue
 _id = self.index_to_docstore_id[i]
 doc = self.docstore.search(_id)
-if not self.chunk_conent:
+if not self.chunk_content:
 if not isinstance(doc, Document):
 raise ValueError(f"Could not find document for id {_id}, got {doc}")
 doc.metadata["score"] = int(scores[0][j])
@@ -104,12 +104,12 @@ def similarity_search_with_score_by_vector(
 id_set.add(l)
 if break_flag:
 break
-if not self.chunk_conent:
+if not self.chunk_content:
 return docs
 if len(id_set) == 0 and self.score_threshold > 0:
 return []
 id_list = sorted(list(id_set))
-id_lists = seperate_list(id_list)
+id_lists = separate_list(id_list)
 for id_seq in id_lists:
 for id in id_seq:
 if id == id_seq[0]:
@@ -132,7 +132,7 @@ class LocalDocQA:
 embeddings: object = None
 top_k: int = VECTOR_SEARCH_TOP_K
 chunk_size: int = CHUNK_SIZE
-chunk_conent: bool = True
+chunk_content: bool = True
 score_threshold: int = VECTOR_SEARCH_SCORE_THRESHOLD
 def init_cfg(self,
@@ -209,16 +209,16 @@ class LocalDocQA:
 # query 查询内容
 # vs_path 知识库路径
-# chunk_conent 是否启用上下文关联
+# chunk_content 是否启用上下文关联
 # score_threshold 搜索匹配score阈值
 # vector_search_top_k 搜索知识库内容条数默认搜索5条结果
 # chunk_sizes 匹配单段内容的连接上下文长度
-def get_knowledge_based_conent_test(self, query, vs_path, chunk_conent,
+def get_knowledge_based_content_test(self, query, vs_path, chunk_content,
 score_threshold=VECTOR_SEARCH_SCORE_THRESHOLD,
 vector_search_top_k=VECTOR_SEARCH_TOP_K, chunk_size=CHUNK_SIZE,
 text2vec=None):
 self.vector_store = FAISS.load_local(vs_path, text2vec)
-self.vector_store.chunk_conent = chunk_conent
+self.vector_store.chunk_content = chunk_content
 self.vector_store.score_threshold = score_threshold
 self.vector_store.chunk_size = chunk_size
@@ -241,7 +241,7 @@ class LocalDocQA:
-def construct_vector_store(vs_id, vs_path, files, sentence_size, history, one_conent, one_content_segmentation, text2vec):
+def construct_vector_store(vs_id, vs_path, files, sentence_size, history, one_content, one_content_segmentation, text2vec):
 for file in files:
 assert os.path.exists(file), "输入文件不存在:" + file
 import nltk
@@ -297,7 +297,7 @@ class knowledge_archive_interface():
 files=file_manifest,
 sentence_size=100,
 history=[],
-one_conent="",
+one_content="",
 one_content_segmentation="",
 text2vec = self.get_chinese_text2vec(),
 )
@@ -319,19 +319,19 @@ class knowledge_archive_interface():
 files=[],
 sentence_size=100,
 history=[],
-one_conent="",
+one_content="",
 one_content_segmentation="",
 text2vec = self.get_chinese_text2vec(),
 )
 VECTOR_SEARCH_SCORE_THRESHOLD = 0
 VECTOR_SEARCH_TOP_K = 4
 CHUNK_SIZE = 512
-resp, prompt = self.qa_handle.get_knowledge_based_conent_test(
+resp, prompt = self.qa_handle.get_knowledge_based_content_test(
 query = txt,
 vs_path = self.kai_path,
 score_threshold=VECTOR_SEARCH_SCORE_THRESHOLD,
 vector_search_top_k=VECTOR_SEARCH_TOP_K,
-chunk_conent=True,
+chunk_content=True,
 chunk_size=CHUNK_SIZE,
 text2vec = self.get_chinese_text2vec(),
 )
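
The renamed `separate_list` groups a sorted list of chunk ids into runs of consecutive integers, so that neighbouring chunks retrieved from the FAISS index can be stitched back into one context window. A minimal equivalent:

```python
def separate_list(ids):
    """[1, 2, 3, 7, 8, 12] -> [[1, 2, 3], [7, 8], [12]]"""
    groups = [[ids[0]]]
    for prev, cur in zip(ids, ids[1:]):
        if cur == prev + 1:
            groups[-1].append(cur)  # extend the current consecutive run
        else:
            groups.append([cur])    # gap found: start a new run
    return groups
```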

View File

@@ -1,6 +1,6 @@
 from pydantic import BaseModel, Field
 from typing import List
-from toolbox import update_ui_lastest_msg, disable_auto_promotion
+from toolbox import update_ui_latest_msg, disable_auto_promotion
 from request_llms.bridge_all import predict_no_ui_long_connection
 from crazy_functions.json_fns.pydantic_io import GptJsonIO, JsonStringError
 import copy, json, pickle, os, sys, time
@@ -9,14 +9,14 @@ import copy, json, pickle, os, sys, time
 def read_avail_plugin_enum():
 from crazy_functional import get_crazy_functions
 plugin_arr = get_crazy_functions()
-# remove plugins with out explaination
+# remove plugins with out explanation
 plugin_arr = {k:v for k, v in plugin_arr.items() if ('Info' in v) and ('Function' in v)}
 plugin_arr_info = {"F_{:04d}".format(i):v["Info"] for i, v in enumerate(plugin_arr.values(), start=1)}
 plugin_arr_dict = {"F_{:04d}".format(i):v for i, v in enumerate(plugin_arr.values(), start=1)}
 plugin_arr_dict_parse = {"F_{:04d}".format(i):v for i, v in enumerate(plugin_arr.values(), start=1)}
 plugin_arr_dict_parse.update({f"F_{i}":v for i, v in enumerate(plugin_arr.values(), start=1)})
 prompt = json.dumps(plugin_arr_info, ensure_ascii=False, indent=2)
-prompt = "\n\nThe defination of PluginEnum:\nPluginEnum=" + prompt
+prompt = "\n\nThe definition of PluginEnum:\nPluginEnum=" + prompt
 return prompt, plugin_arr_dict, plugin_arr_dict_parse
 def wrap_code(txt):
@@ -55,7 +55,7 @@ def execute_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prom
 plugin_selection: str = Field(description="The most related plugin from one of the PluginEnum.", default="F_0000")
 reason_of_selection: str = Field(description="The reason why you should select this plugin.", default="This plugin satisfy user requirement most")
 # ⭐ ⭐ ⭐ 选择插件
-yield from update_ui_lastest_msg(lastmsg=f"正在执行任务: {txt}\n\n查找可用插件中...", chatbot=chatbot, history=history, delay=0)
+yield from update_ui_latest_msg(lastmsg=f"正在执行任务: {txt}\n\n查找可用插件中...", chatbot=chatbot, history=history, delay=0)
 gpt_json_io = GptJsonIO(Plugin)
 gpt_json_io.format_instructions = "The format of your output should be a json that can be parsed by json.loads.\n"
 gpt_json_io.format_instructions += """Output example: {"plugin_selection":"F_1234", "reason_of_selection":"F_1234 plugin satisfy user requirement most"}\n"""
@@ -74,13 +74,13 @@ def execute_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prom
 msg += "请求的Prompt为\n" + wrap_code(get_inputs_show_user(inputs, plugin_arr_enum_prompt))
 msg += "语言模型回复为:\n" + wrap_code(gpt_reply)
 msg += "\n但您可以尝试再试一次\n"
-yield from update_ui_lastest_msg(lastmsg=msg, chatbot=chatbot, history=history, delay=2)
+yield from update_ui_latest_msg(lastmsg=msg, chatbot=chatbot, history=history, delay=2)
 return
 if plugin_sel.plugin_selection not in plugin_arr_dict_parse:
 msg = f"抱歉, 找不到合适插件执行该任务, 或者{llm_kwargs['llm_model']}无法理解您的需求。"
 msg += f"语言模型{llm_kwargs['llm_model']}选择了不存在的插件:\n" + wrap_code(gpt_reply)
 msg += "\n但您可以尝试再试一次\n"
-yield from update_ui_lastest_msg(lastmsg=msg, chatbot=chatbot, history=history, delay=2)
+yield from update_ui_latest_msg(lastmsg=msg, chatbot=chatbot, history=history, delay=2)
 return
 # ⭐ ⭐ ⭐ 确认插件参数
@@ -90,7 +90,7 @@ def execute_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prom
 appendix_info = get_recent_file_prompt_support(chatbot)
 plugin = plugin_arr_dict_parse[plugin_sel.plugin_selection]
-yield from update_ui_lastest_msg(lastmsg=f"正在执行任务: {txt}\n\n提取插件参数...", chatbot=chatbot, history=history, delay=0)
+yield from update_ui_latest_msg(lastmsg=f"正在执行任务: {txt}\n\n提取插件参数...", chatbot=chatbot, history=history, delay=0)
 class PluginExplicit(BaseModel):
 plugin_selection: str = plugin_sel.plugin_selection
 plugin_arg: str = Field(description="The argument of the plugin.", default="")
@@ -109,6 +109,6 @@ def execute_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prom
 fn = plugin['Function']
 fn_name = fn.__name__
 msg = f'{llm_kwargs["llm_model"]}为您选择了插件: `{fn_name}`\n\n插件说明:{plugin["Info"]}\n\n插件参数:{plugin_sel.plugin_arg}\n\n假如偏离了您的要求,按停止键终止。'
-yield from update_ui_lastest_msg(lastmsg=msg, chatbot=chatbot, history=history, delay=2)
+yield from update_ui_latest_msg(lastmsg=msg, chatbot=chatbot, history=history, delay=2)
 yield from fn(plugin_sel.plugin_arg, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, -1)
 return
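
The plugin router above asks the model for JSON matching a pydantic schema and parses the reply back into a typed object; the `Plugin` fields shown in the hunk carry the schema descriptions the model sees. A stripped-down sketch of that contract (the real `GptJsonIO` additionally auto-repairs malformed replies):

```python
import json
from pydantic import BaseModel, Field

class Plugin(BaseModel):
    plugin_selection: str = Field(description="One id from PluginEnum.", default="F_0000")
    reason_of_selection: str = Field(description="Why this plugin fits.", default="")

# Pretend this string came back from the language model.
raw_reply = '{"plugin_selection": "F_1234", "reason_of_selection": "matches the task"}'
plugin_sel = Plugin(**json.loads(raw_reply))  # raises if the reply violates the schema
```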

View File

@@ -1,6 +1,6 @@
 from pydantic import BaseModel, Field
 from typing import List
-from toolbox import update_ui_lastest_msg, get_conf
+from toolbox import update_ui_latest_msg, get_conf
 from request_llms.bridge_all import predict_no_ui_long_connection
 from crazy_functions.json_fns.pydantic_io import GptJsonIO
 import copy, json, pickle, os, sys
@@ -9,7 +9,7 @@ import copy, json, pickle, os, sys
 def modify_configuration_hot(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention):
 ALLOW_RESET_CONFIG = get_conf('ALLOW_RESET_CONFIG')
 if not ALLOW_RESET_CONFIG:
-yield from update_ui_lastest_msg(
+yield from update_ui_latest_msg(
 lastmsg=f"当前配置不允许被修改如需激活本功能请在config.py中设置ALLOW_RESET_CONFIG=True后重启软件。",
 chatbot=chatbot, history=history, delay=2
 )
@@ -30,7 +30,7 @@ def modify_configuration_hot(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
 new_option_value: str = Field(description="the new value of the option", default=None)
 # ⭐ ⭐ ⭐ 分析用户意图
-yield from update_ui_lastest_msg(lastmsg=f"正在执行任务: {txt}\n\n读取新配置中", chatbot=chatbot, history=history, delay=0)
+yield from update_ui_latest_msg(lastmsg=f"正在执行任务: {txt}\n\n读取新配置中", chatbot=chatbot, history=history, delay=0)
 gpt_json_io = GptJsonIO(ModifyConfigurationIntention)
 inputs = "Analyze how to change configuration according to following user input, answer me with json: \n\n" + \
 ">> " + txt.rstrip('\n').replace('\n','\n>> ') + '\n\n' + \
@@ -44,11 +44,11 @@ def modify_configuration_hot(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
 ok = (explicit_conf in txt)
 if ok:
-yield from update_ui_lastest_msg(
+yield from update_ui_latest_msg(
 lastmsg=f"正在执行任务: {txt}\n\n新配置{explicit_conf}={user_intention.new_option_value}",
 chatbot=chatbot, history=history, delay=1
 )
-yield from update_ui_lastest_msg(
+yield from update_ui_latest_msg(
 lastmsg=f"正在执行任务: {txt}\n\n新配置{explicit_conf}={user_intention.new_option_value}\n\n正在修改配置中",
 chatbot=chatbot, history=history, delay=2
 )
@@ -57,25 +57,25 @@ def modify_configuration_hot(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
 from toolbox import set_conf
 set_conf(explicit_conf, user_intention.new_option_value)
-yield from update_ui_lastest_msg(
+yield from update_ui_latest_msg(
 lastmsg=f"正在执行任务: {txt}\n\n配置修改完成,重新页面即可生效。", chatbot=chatbot, history=history, delay=1
 )
 else:
-yield from update_ui_lastest_msg(
+yield from update_ui_latest_msg(
 lastmsg=f"失败,如果需要配置{explicit_conf},您需要明确说明并在指令中提到它。", chatbot=chatbot, history=history, delay=5
 )
 def modify_configuration_reboot(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention):
 ALLOW_RESET_CONFIG = get_conf('ALLOW_RESET_CONFIG')
 if not ALLOW_RESET_CONFIG:
-yield from update_ui_lastest_msg(
+yield from update_ui_latest_msg(
 lastmsg=f"当前配置不允许被修改如需激活本功能请在config.py中设置ALLOW_RESET_CONFIG=True后重启软件。",
 chatbot=chatbot, history=history, delay=2
 )
 return
 yield from modify_configuration_hot(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention)
-yield from update_ui_lastest_msg(
+yield from update_ui_latest_msg(
 lastmsg=f"正在执行任务: {txt}\n\n配置修改完成,五秒后即将重启!若出现报错请无视即可。", chatbot=chatbot, history=history, delay=5
 )
 os.execl(sys.executable, sys.executable, *sys.argv)
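
`modify_configuration_reboot` ends with the standard self-restart idiom: `os.execl` replaces the current process image with a fresh interpreter running the same argv, so the new configuration is loaded from scratch. In isolation, the pattern looks like this (the delay is an assumption mirroring the five-second notice above):

```python
import os
import sys
import time

def restart_self(delay_s: float = 5.0):
    """Re-exec this program in place; nothing after os.execl ever runs."""
    time.sleep(delay_s)
    os.execl(sys.executable, sys.executable, *sys.argv)
```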

View File

@@ -5,7 +5,7 @@ class VoidTerminalState():
 self.reset_state()
 def reset_state(self):
-self.has_provided_explaination = False
+self.has_provided_explanation = False
 def lock_plugin(self, chatbot):
 chatbot._cookies['lock_plugin'] = 'crazy_functions.虚空终端->虚空终端'

View File

@@ -1,4 +1,4 @@
-from toolbox import CatchException, update_ui, update_ui_lastest_msg
+from toolbox import CatchException, update_ui, update_ui_latest_msg
 from crazy_functions.multi_stage.multi_stage_utils import GptAcademicGameBaseState
 from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
 from request_llms.bridge_all import predict_no_ui_long_connection

View File

@@ -15,7 +15,7 @@ Testing:
 from toolbox import CatchException, update_ui, gen_time_str, trimmed_format_exc, is_the_upload_folder
-from toolbox import promote_file_to_downloadzone, get_log_folder, update_ui_lastest_msg
+from toolbox import promote_file_to_downloadzone, get_log_folder, update_ui_latest_msg
 from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive, get_plugin_arg
 from crazy_functions.crazy_utils import input_clipping, try_install_deps
 from crazy_functions.gen_fns.gen_fns_shared import is_function_successfully_generated
@@ -27,7 +27,7 @@ import time
 import glob
 import multiprocessing
-templete = """
+template = """
 ```python
 import ... # Put dependencies here, e.g. import numpy as np.
@@ -77,10 +77,10 @@ def gpt_interact_multi_step(txt, file_type, llm_kwargs, chatbot, history):
 # 第二步
 prompt_compose = [
-"If previous stage is successful, rewrite the function you have just written to satisfy following templete: \n",
+"If previous stage is successful, rewrite the function you have just written to satisfy following template: \n",
-templete
+template
 ]
-i_say = "".join(prompt_compose); inputs_show_user = "If previous stage is successful, rewrite the function you have just written to satisfy executable templete. "
+i_say = "".join(prompt_compose); inputs_show_user = "If previous stage is successful, rewrite the function you have just written to satisfy executable template. "
 gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
 inputs=i_say, inputs_show_user=inputs_show_user,
 llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
@@ -164,18 +164,18 @@ def 函数动态生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
 if get_plugin_arg(plugin_kwargs, key="file_path_arg", default=False):
 file_path = get_plugin_arg(plugin_kwargs, key="file_path_arg", default=None)
 file_list.append(file_path)
-yield from update_ui_lastest_msg(f"当前文件: {file_path}", chatbot, history, 1)
+yield from update_ui_latest_msg(f"当前文件: {file_path}", chatbot, history, 1)
 elif have_any_recent_upload_files(chatbot):
 file_dir = get_recent_file_prompt_support(chatbot)
 file_list = glob.glob(os.path.join(file_dir, '**/*'), recursive=True)
-yield from update_ui_lastest_msg(f"当前文件处理列表: {file_list}", chatbot, history, 1)
+yield from update_ui_latest_msg(f"当前文件处理列表: {file_list}", chatbot, history, 1)
 else:
 chatbot.append(["文件检索", "没有发现任何近期上传的文件。"])
-yield from update_ui_lastest_msg("没有发现任何近期上传的文件。", chatbot, history, 1)
+yield from update_ui_latest_msg("没有发现任何近期上传的文件。", chatbot, history, 1)
 return # 2. 如果没有文件
 if len(file_list) == 0:
 chatbot.append(["文件检索", "没有发现任何近期上传的文件。"])
-yield from update_ui_lastest_msg("没有发现任何近期上传的文件。", chatbot, history, 1)
+yield from update_ui_latest_msg("没有发现任何近期上传的文件。", chatbot, history, 1)
 return # 2. 如果没有文件
 # 读取文件
@@ -183,7 +183,7 @@ def 函数动态生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
 # 粗心检查
 if is_the_upload_folder(txt):
-yield from update_ui_lastest_msg(f"请在输入框内填写需求, 然后再次点击该插件! 至于您的文件,不用担心, 文件路径 {txt} 已经被记忆. ", chatbot, history, 1)
+yield from update_ui_latest_msg(f"请在输入框内填写需求, 然后再次点击该插件! 至于您的文件,不用担心, 文件路径 {txt} 已经被记忆. ", chatbot, history, 1)
 return
 # 开始干正事
@@ -195,7 +195,7 @@ def 函数动态生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
 code, installation_advance, txt, file_type, llm_kwargs, chatbot, history = \
 yield from gpt_interact_multi_step(txt, file_type, llm_kwargs, chatbot, history)
 chatbot.append(["代码生成阶段结束", ""])
-yield from update_ui_lastest_msg(f"正在验证上述代码的有效性 ...", chatbot, history, 1)
+yield from update_ui_latest_msg(f"正在验证上述代码的有效性 ...", chatbot, history, 1)
 # ⭐ 分离代码块
 code = get_code_block(code)
 # ⭐ 检查模块
@@ -206,11 +206,11 @@ def 函数动态生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
 if not traceback: traceback = trimmed_format_exc()
 # 处理异常
 if not traceback: traceback = trimmed_format_exc()
-yield from update_ui_lastest_msg(f"第{j+1}/{MAX_TRY} 次代码生成尝试, 失败了~ 别担心, 我们5秒后再试一次... \n\n此次我们的错误追踪是\n```\n{traceback}\n```\n", chatbot, history, 5)
+yield from update_ui_latest_msg(f"第{j+1}/{MAX_TRY} 次代码生成尝试, 失败了~ 别担心, 我们5秒后再试一次... \n\n此次我们的错误追踪是\n```\n{traceback}\n```\n", chatbot, history, 5)
 # 代码生成结束, 开始执行
 TIME_LIMIT = 15
-yield from update_ui_lastest_msg(f"开始创建新进程并执行代码! 时间限制 {TIME_LIMIT} 秒. 请等待任务完成... ", chatbot, history, 1)
+yield from update_ui_latest_msg(f"开始创建新进程并执行代码! 时间限制 {TIME_LIMIT} 秒. 请等待任务完成... ", chatbot, history, 1)
 manager = multiprocessing.Manager()
 return_dict = manager.dict()
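
The `TIME_LIMIT` step above runs freshly generated code in a child process so that a hang can be killed instead of freezing the UI, with a managed dict carrying the result back. A minimal sketch of that sandbox pattern (the 15-second limit mirrors the hunk; the worker body is a stand-in for the generated function):

```python
import multiprocessing

def _worker(return_dict):
    # Stand-in for the dynamically generated function being exercised.
    return_dict["result"] = sum(range(10**6))

if __name__ == "__main__":
    manager = multiprocessing.Manager()
    return_dict = manager.dict()
    p = multiprocessing.Process(target=_worker, args=(return_dict,))
    p.start()
    p.join(timeout=15)      # TIME_LIMIT seconds
    if p.is_alive():
        p.terminate()       # hung or too slow: kill the child process
        print("timed out")
    else:
        print(return_dict.get("result"))
```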

View File

@@ -8,7 +8,7 @@
 import time
 from toolbox import CatchException, update_ui, gen_time_str, trimmed_format_exc, ProxyNetworkActivate
-from toolbox import get_conf, select_api_key, update_ui_lastest_msg, Singleton
+from toolbox import get_conf, select_api_key, update_ui_latest_msg, Singleton
 from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive, get_plugin_arg
 from crazy_functions.crazy_utils import input_clipping, try_install_deps
 from crazy_functions.agent_fns.persistent import GradioMultiuserManagerForPersistentClasses

View File

@@ -1,5 +1,5 @@
 from toolbox import CatchException, report_exception, get_log_folder, gen_time_str
-from toolbox import update_ui, promote_file_to_downloadzone, update_ui_lastest_msg, disable_auto_promotion
+from toolbox import update_ui, promote_file_to_downloadzone, update_ui_latest_msg, disable_auto_promotion
 from toolbox import write_history_to_file, promote_file_to_downloadzone
 from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
 from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency

View File

@@ -166,7 +166,7 @@ class PointWithTrace(Scene):
 ```
-# do not use get_graph, this funciton is deprecated
+# do not use get_graph, this function is deprecated
 class ExampleFunctionGraph(Scene):
 def construct(self):

View File

@@ -324,16 +324,16 @@ def 生成多种Mermaid图表(
 if os.path.exists(txt): # 如输入区无内容则直接解析历史记录
 from crazy_functions.pdf_fns.parse_word import extract_text_from_files
-file_exist, final_result, page_one, file_manifest, excption = (
+file_exist, final_result, page_one, file_manifest, exception = (
 extract_text_from_files(txt, chatbot, history)
 )
 else:
 file_exist = False
-excption = ""
+exception = ""
 file_manifest = []
-if excption != "":
+if exception != "":
-if excption == "word":
+if exception == "word":
 report_exception(
 chatbot,
 history,
@@ -341,7 +341,7 @@ def 生成多种Mermaid图表(
 b=f"找到了.doc文件但是该文件格式不被支持请先转化为.docx格式。",
 )
-elif excption == "pdf":
+elif exception == "pdf":
 report_exception(
 chatbot,
 history,
@@ -349,7 +349,7 @@ def 生成多种Mermaid图表(
 b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。",
 )
-elif excption == "word_pip":
+elif exception == "word_pip":
 report_exception(
 chatbot,
 history,

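The `elif` chain over exception codes above could equally be written table-driven. A sketch, assuming `report_exception(chatbot, history, a=..., b=...)` keeps the keyword signature seen in the hunk (the `word_pip` message wording is assumed):

```python
# Hypothetical table-driven version of the exception-code dispatch above.
EXCEPTION_HINTS = {
    "word":     "找到了.doc文件,但是该文件格式不被支持,请先转化为.docx格式。",
    "pdf":      "导入软件依赖失败。使用该模块需要额外依赖,安装方法:pip install --upgrade pymupdf。",
    "word_pip": "导入软件依赖失败,需要安装python-docx。",  # assumed wording
}

def report_parse_exception(exception, chatbot, history, report_exception):
    # report_exception is passed in; its a=/b= signature is assumed from the hunk
    hint = EXCEPTION_HINTS.get(exception)
    if hint is not None:
        report_exception(chatbot, history, a="解析文件失败", b=hint)
```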

@@ -1,4 +1,4 @@
-from toolbox import CatchException, update_ui, ProxyNetworkActivate, update_ui_lastest_msg, get_log_folder, get_user
+from toolbox import CatchException, update_ui, ProxyNetworkActivate, update_ui_latest_msg, get_log_folder, get_user
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive, get_files_from_everything
from loguru import logger
install_msg ="""
@@ -42,7 +42,7 @@ def 知识库文件注入(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# from crazy_functions.crazy_utils import try_install_deps
# try_install_deps(['zh_langchain==0.2.1', 'pypinyin'], reload_m=['pypinyin', 'zh_langchain'])
-# yield from update_ui_lastest_msg("安装完成,您可以再次重试。", chatbot, history)
+# yield from update_ui_latest_msg("安装完成,您可以再次重试。", chatbot, history)
return
# < --------------------读取文件--------------- >
@@ -95,7 +95,7 @@ def 读取知识库作答(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# from crazy_functions.crazy_utils import try_install_deps
# try_install_deps(['zh_langchain==0.2.1', 'pypinyin'], reload_m=['pypinyin', 'zh_langchain'])
-# yield from update_ui_lastest_msg("安装完成,您可以再次重试。", chatbot, history)
+# yield from update_ui_latest_msg("安装完成,您可以再次重试。", chatbot, history)
return
# < ------------------- --------------- >


@@ -47,7 +47,7 @@ explain_msg = """
from pydantic import BaseModel, Field
from typing import List
from toolbox import CatchException, update_ui, is_the_upload_folder
-from toolbox import update_ui_lastest_msg, disable_auto_promotion
+from toolbox import update_ui_latest_msg, disable_auto_promotion
from request_llms.bridge_all import predict_no_ui_long_connection
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from crazy_functions.crazy_utils import input_clipping
@@ -113,19 +113,19 @@ def 虚空终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
# 用简单的关键词检测用户意图
is_certain, _ = analyze_intention_with_simple_rules(txt)
if is_the_upload_folder(txt):
-state.set_state(chatbot=chatbot, key='has_provided_explaination', value=False)
+state.set_state(chatbot=chatbot, key='has_provided_explanation', value=False)
appendix_msg = "\n\n**很好,您已经上传了文件**,现在请您描述您的需求。"
-if is_certain or (state.has_provided_explaination):
+if is_certain or (state.has_provided_explanation):
# 如果意图明确,跳过提示环节
-state.set_state(chatbot=chatbot, key='has_provided_explaination', value=True)
+state.set_state(chatbot=chatbot, key='has_provided_explanation', value=True)
state.unlock_plugin(chatbot=chatbot)
yield from update_ui(chatbot=chatbot, history=history)
yield from 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
return
else:
# 如果意图模糊,提示
-state.set_state(chatbot=chatbot, key='has_provided_explaination', value=True)
+state.set_state(chatbot=chatbot, key='has_provided_explanation', value=True)
state.lock_plugin(chatbot=chatbot)
chatbot.append(("虚空终端状态:", explain_msg+appendix_msg))
yield from update_ui(chatbot=chatbot, history=history)
@@ -141,7 +141,7 @@ def 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
# ⭐ ⭐ ⭐ 分析用户意图
is_certain, user_intention = analyze_intention_with_simple_rules(txt)
if not is_certain:
-yield from update_ui_lastest_msg(
+yield from update_ui_latest_msg(
lastmsg=f"正在执行任务: {txt}\n\n分析用户意图中", chatbot=chatbot, history=history, delay=0)
gpt_json_io = GptJsonIO(UserIntention)
rf_req = "\nchoose from ['ModifyConfiguration', 'ExecutePlugin', 'Chat']"
@@ -154,13 +154,13 @@ def 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
user_intention = gpt_json_io.generate_output_auto_repair(analyze_res, run_gpt_fn)
lastmsg=f"正在执行任务: {txt}\n\n用户意图理解: 意图={explain_intention_to_user[user_intention.intention_type]}",
except JsonStringError as e:
-yield from update_ui_lastest_msg(
+yield from update_ui_latest_msg(
lastmsg=f"正在执行任务: {txt}\n\n用户意图理解: 失败 当前语言模型({llm_kwargs['llm_model']})不能理解您的意图", chatbot=chatbot, history=history, delay=0)
return
else:
pass
-yield from update_ui_lastest_msg(
+yield from update_ui_latest_msg(
lastmsg=f"正在执行任务: {txt}\n\n用户意图理解: 意图={explain_intention_to_user[user_intention.intention_type]}",
chatbot=chatbot, history=history, delay=0)

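`generate_output_auto_repair` in the hunk above asks the model for JSON matching a pydantic schema and retries with the validation error when parsing fails. A simplified, self-contained stand-in for that pattern (not the real `GptJsonIO` internals; `run_gpt_fn(prompt, sys_prompt)` is assumed to return the model's raw text, as in the hunk):

```python
# Simplified stand-in for the GptJsonIO validate-then-repair pattern.
import json
from pydantic import BaseModel, Field, ValidationError

class UserIntention(BaseModel):
    intention_type: str = Field(description="ModifyConfiguration | ExecutePlugin | Chat")

def parse_intention(raw: str, run_gpt_fn) -> UserIntention:
    for attempt in range(2):
        try:
            return UserIntention(**json.loads(raw))
        except (json.JSONDecodeError, ValidationError) as e:
            if attempt == 1:
                raise  # analogous to JsonStringError in the hunk above
            # "auto repair": feed the error back and ask for corrected JSON
            raw = run_gpt_fn(f"Fix this JSON so that it validates ({e}): {raw}", "")
```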

@@ -42,7 +42,7 @@ class AsyncGptTask():
MAX_TOKEN_ALLO = 2560
i_say, history = input_clipping(i_say, history, max_token_limit=MAX_TOKEN_ALLO)
gpt_say_partial = predict_no_ui_long_connection(inputs=i_say, llm_kwargs=llm_kwargs, history=history, sys_prompt=sys_prompt,
-observe_window=observe_window[index], console_slience=True)
+observe_window=observe_window[index], console_silence=True)
except ConnectionAbortedError as token_exceed_err:
logger.error('至少一个线程任务Token溢出而失败', e)
except Exception as e:


@@ -1,6 +1,6 @@
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from toolbox import CatchException, report_exception, promote_file_to_downloadzone
-from toolbox import update_ui, update_ui_lastest_msg, disable_auto_promotion, write_history_to_file
+from toolbox import update_ui, update_ui_latest_msg, disable_auto_promotion, write_history_to_file
import logging
import requests
import time
@@ -156,7 +156,7 @@ def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
history = []
meta_paper_info_list = yield from get_meta_information(txt, chatbot, history)
if len(meta_paper_info_list) == 0:
-yield from update_ui_lastest_msg(lastmsg='获取文献失败,可能触发了google反爬虫机制。',chatbot=chatbot, history=history, delay=0)
+yield from update_ui_latest_msg(lastmsg='获取文献失败,可能触发了google反爬虫机制。',chatbot=chatbot, history=history, delay=0)
return
batchsize = 5
for batch in range(math.ceil(len(meta_paper_info_list)/batchsize)):

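The batching loop in the hunk walks the paper list in fixed-size slices of 5; spelled out with explicit slicing:

```python
# Equivalent slicing for the batch loop above (stand-in data).
import math

meta_paper_info_list = list(range(12))
batchsize = 5
for batch in range(math.ceil(len(meta_paper_info_list) / batchsize)):
    this_batch = meta_paper_info_list[batch * batchsize : (batch + 1) * batchsize]
    # ... each batch would be summarized by the LLM here ...
```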

@@ -1141,7 +1141,7 @@
"内容太长了都会触发token数量溢出的错误": "An error of token overflow will be triggered if the content is too long", "内容太长了都会触发token数量溢出的错误": "An error of token overflow will be triggered if the content is too long",
"chatbot 为WebUI中显示的对话列表": "chatbot is the conversation list displayed in WebUI", "chatbot 为WebUI中显示的对话列表": "chatbot is the conversation list displayed in WebUI",
"修改它": "Modify it", "修改它": "Modify it",
"然后yeild出去": "Then yield it out", "然后yield出去": "Then yield it out",
"可以直接修改对话界面内容": "You can directly modify the conversation interface content", "可以直接修改对话界面内容": "You can directly modify the conversation interface content",
"additional_fn代表点击的哪个按钮": "additional_fn represents which button is clicked", "additional_fn代表点击的哪个按钮": "additional_fn represents which button is clicked",
"按钮见functional.py": "See functional.py for buttons", "按钮见functional.py": "See functional.py for buttons",
@@ -1732,7 +1732,7 @@
"或者重启之后再度尝试": "Or try again after restarting", "或者重启之后再度尝试": "Or try again after restarting",
"免费": "Free", "免费": "Free",
"仅在Windows系统进行了测试": "Tested only on Windows system", "仅在Windows系统进行了测试": "Tested only on Windows system",
"欢迎加REAME中的QQ联系开发者": "Feel free to contact the developer via QQ in REAME", "欢迎加README中的QQ联系开发者": "Feel free to contact the developer via QQ in README",
"当前知识库内的有效文件": "Valid files in the current knowledge base", "当前知识库内的有效文件": "Valid files in the current knowledge base",
"您可以到Github Issue区": "You can go to the Github Issue area", "您可以到Github Issue区": "You can go to the Github Issue area",
"刷新Gradio前端界面": "Refresh the Gradio frontend interface", "刷新Gradio前端界面": "Refresh the Gradio frontend interface",
@@ -1759,7 +1759,7 @@
"报错信息如下. 如果是与网络相关的问题": "Error message as follows. If it is related to network issues", "报错信息如下. 如果是与网络相关的问题": "Error message as follows. If it is related to network issues",
"功能描述": "Function description", "功能描述": "Function description",
"禁止移除或修改此警告": "Removal or modification of this warning is prohibited", "禁止移除或修改此警告": "Removal or modification of this warning is prohibited",
"Arixv翻译": "Arixv translation", "ArXiv翻译": "ArXiv translation",
"读取优先级": "Read priority", "读取优先级": "Read priority",
"包含documentclass关键字": "Contains the documentclass keyword", "包含documentclass关键字": "Contains the documentclass keyword",
"根据文本使用GPT模型生成相应的图像": "Generate corresponding images using GPT model based on the text", "根据文本使用GPT模型生成相应的图像": "Generate corresponding images using GPT model based on the text",
@@ -1998,7 +1998,7 @@
"开始最终总结": "Start final summary", "开始最终总结": "Start final summary",
"openai的官方KEY需要伴随组织编码": "Openai's official KEY needs to be accompanied by organizational code", "openai的官方KEY需要伴随组织编码": "Openai's official KEY needs to be accompanied by organizational code",
"将子线程的gpt结果写入chatbot": "Write the GPT result of the sub-thread into the chatbot", "将子线程的gpt结果写入chatbot": "Write the GPT result of the sub-thread into the chatbot",
"Arixv论文精细翻译": "Fine translation of Arixv paper", "ArXiv论文精细翻译": "Fine translation of ArXiv paper",
"开始接收chatglmft的回复": "Start receiving replies from chatglmft", "开始接收chatglmft的回复": "Start receiving replies from chatglmft",
"请先将.doc文档转换为.docx文档": "Please convert .doc documents to .docx documents first", "请先将.doc文档转换为.docx文档": "Please convert .doc documents to .docx documents first",
"避免多用户干扰": "Avoid multiple user interference", "避免多用户干扰": "Avoid multiple user interference",
@@ -2360,7 +2360,7 @@
"请在config.py中设置ALLOW_RESET_CONFIG=True后重启软件": "Please set ALLOW_RESET_CONFIG=True in config.py and restart the software", "请在config.py中设置ALLOW_RESET_CONFIG=True后重启软件": "Please set ALLOW_RESET_CONFIG=True in config.py and restart the software",
"按照自然语言描述生成一个动画 | 输入参数是一段话": "Generate an animation based on natural language description | Input parameter is a sentence", "按照自然语言描述生成一个动画 | 输入参数是一段话": "Generate an animation based on natural language description | Input parameter is a sentence",
"你的hf用户名如qingxu98": "Your hf username is qingxu98", "你的hf用户名如qingxu98": "Your hf username is qingxu98",
"Arixv论文精细翻译 | 输入参数arxiv论文的ID": "Fine translation of Arixv paper | Input parameter is the ID of arxiv paper", "ArXiv论文精细翻译 | 输入参数arxiv论文的ID": "Fine translation of ArXiv paper | Input parameter is the ID of arxiv paper",
"无法获取 abstract": "Unable to retrieve abstract", "无法获取 abstract": "Unable to retrieve abstract",
"尽可能地仅用一行命令解决我的要求": "Try to solve my request using only one command", "尽可能地仅用一行命令解决我的要求": "Try to solve my request using only one command",
"提取插件参数": "Extract plugin parameters", "提取插件参数": "Extract plugin parameters",


@@ -753,7 +753,7 @@
"手动指定和筛选源代码文件类型": "ソースコードファイルタイプを手動で指定およびフィルタリングする", "手动指定和筛选源代码文件类型": "ソースコードファイルタイプを手動で指定およびフィルタリングする",
"更多函数插件": "その他の関数プラグイン", "更多函数插件": "その他の関数プラグイン",
"看门狗的耐心": "監視犬の忍耐力", "看门狗的耐心": "監視犬の忍耐力",
"然后yeild出去": "そして出力する", "然后yield出去": "そして出力する",
"拆分过长的IPynb文件": "長すぎるIPynbファイルを分割する", "拆分过长的IPynb文件": "長すぎるIPynbファイルを分割する",
"1. 把input的余量留出来": "1. 入力の余裕を残す", "1. 把input的余量留出来": "1. 入力の余裕を残す",
"请求超时": "リクエストがタイムアウトしました", "请求超时": "リクエストがタイムアウトしました",
@@ -1803,7 +1803,7 @@
"默认值为1000": "デフォルト値は1000です", "默认值为1000": "デフォルト値は1000です",
"写出文件": "ファイルに書き出す", "写出文件": "ファイルに書き出す",
"生成的视频文件路径": "生成されたビデオファイルのパス", "生成的视频文件路径": "生成されたビデオファイルのパス",
"Arixv论文精细翻译": "Arixv論文の詳細な翻訳", "ArXiv论文精细翻译": "ArXiv論文の詳細な翻訳",
"用latex编译为PDF对修正处做高亮": "LaTeXでコンパイルしてPDFに修正をハイライトする", "用latex编译为PDF对修正处做高亮": "LaTeXでコンパイルしてPDFに修正をハイライトする",
"点击“停止”键可终止程序": "「停止」ボタンをクリックしてプログラムを終了できます", "点击“停止”键可终止程序": "「停止」ボタンをクリックしてプログラムを終了できます",
"否则将导致每个人的Claude问询历史互相渗透": "さもないと、各人のClaudeの問い合わせ履歴が相互に侵入します", "否则将导致每个人的Claude问询历史互相渗透": "さもないと、各人のClaudeの問い合わせ履歴が相互に侵入します",
@@ -1987,7 +1987,7 @@
"前面是中文逗号": "前面是中文逗号", "前面是中文逗号": "前面是中文逗号",
"的依赖": "的依赖", "的依赖": "的依赖",
"材料如下": "材料如下", "材料如下": "材料如下",
"欢迎加REAME中的QQ联系开发者": "欢迎加REAME中的QQ联系开发者", "欢迎加README中的QQ联系开发者": "欢迎加README中的QQ联系开发者",
"开始下载": "開始ダウンロード", "开始下载": "開始ダウンロード",
"100字以内": "100文字以内", "100字以内": "100文字以内",
"创建request": "リクエストの作成", "创建request": "リクエストの作成",


@@ -771,7 +771,7 @@
"查询代理的地理位置": "查詢代理的地理位置", "查询代理的地理位置": "查詢代理的地理位置",
"是否在输入过长时": "是否在輸入過長時", "是否在输入过长时": "是否在輸入過長時",
"chatGPT分析报告": "chatGPT分析報告", "chatGPT分析报告": "chatGPT分析報告",
"然后yeild出去": "然後yield出去", "然后yield出去": "然後yield出去",
"用户取消了程序": "使用者取消了程式", "用户取消了程序": "使用者取消了程式",
"琥珀色": "琥珀色", "琥珀色": "琥珀色",
"这里是特殊函数插件的高级参数输入区": "這裡是特殊函數插件的高級參數輸入區", "这里是特殊函数插件的高级参数输入区": "這裡是特殊函數插件的高級參數輸入區",
@@ -1587,7 +1587,7 @@
"否则将导致每个人的Claude问询历史互相渗透": "否則將導致每個人的Claude問詢歷史互相滲透", "否则将导致每个人的Claude问询历史互相渗透": "否則將導致每個人的Claude問詢歷史互相滲透",
"提问吧! 但注意": "提問吧!但注意", "提问吧! 但注意": "提問吧!但注意",
"待处理的word文档路径": "待處理的word文檔路徑", "待处理的word文档路径": "待處理的word文檔路徑",
"欢迎加REAME中的QQ联系开发者": "歡迎加REAME中的QQ聯繫開發者", "欢迎加README中的QQ联系开发者": "歡迎加README中的QQ聯繫開發者",
"建议暂时不要使用": "建議暫時不要使用", "建议暂时不要使用": "建議暫時不要使用",
"Latex没有安装": "Latex沒有安裝", "Latex没有安装": "Latex沒有安裝",
"在这里放一些网上搜集的demo": "在這裡放一些網上搜集的demo", "在这里放一些网上搜集的demo": "在這裡放一些網上搜集的demo",
@@ -1989,7 +1989,7 @@
"请耐心等待": "請耐心等待", "请耐心等待": "請耐心等待",
"在执行完成之后": "在執行完成之後", "在执行完成之后": "在執行完成之後",
"参数简单": "參數簡單", "参数简单": "參數簡單",
"Arixv论文精细翻译": "Arixv論文精細翻譯", "ArXiv论文精细翻译": "ArXiv論文精細翻譯",
"备份和下载": "備份和下載", "备份和下载": "備份和下載",
"当前报错的latex代码处于第": "當前報錯的latex代碼處於第", "当前报错的latex代码处于第": "當前報錯的latex代碼處於第",
"Markdown翻译": "Markdown翻譯", "Markdown翻译": "Markdown翻譯",


@@ -1265,9 +1265,9 @@ def LLM_CATCH_EXCEPTION(f):
""" """
装饰器函数,将错误显示出来 装饰器函数,将错误显示出来
""" """
def decorated(inputs:str, llm_kwargs:dict, history:list, sys_prompt:str, observe_window:list, console_slience:bool): def decorated(inputs:str, llm_kwargs:dict, history:list, sys_prompt:str, observe_window:list, console_silence:bool):
try: try:
return f(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience) return f(inputs, llm_kwargs, history, sys_prompt, observe_window, console_silence)
except Exception as e: except Exception as e:
tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n' tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
observe_window[0] = tb_str observe_window[0] = tb_str
@@ -1275,7 +1275,7 @@ def LLM_CATCH_EXCEPTION(f):
return decorated return decorated
def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list, sys_prompt:str, observe_window:list=[], console_slience:bool=False): def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list, sys_prompt:str, observe_window:list=[], console_silence:bool=False):
""" """
发送至LLM等待回复一次性完成不显示中间过程。但内部尽可能地用stream的方法避免中途网线被掐。 发送至LLM等待回复一次性完成不显示中间过程。但内部尽可能地用stream的方法避免中途网线被掐。
inputs inputs
@@ -1297,7 +1297,7 @@ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list, sys
if '&' not in model: if '&' not in model:
# 如果只询问“一个”大语言模型(多数情况): # 如果只询问“一个”大语言模型(多数情况):
method = model_info[model]["fn_without_ui"] method = model_info[model]["fn_without_ui"]
return method(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience) return method(inputs, llm_kwargs, history, sys_prompt, observe_window, console_silence)
else: else:
# 如果同时询问“多个”大语言模型这个稍微啰嗦一点但思路相同您不必读这个else分支 # 如果同时询问“多个”大语言模型这个稍微啰嗦一点但思路相同您不必读这个else分支
executor = ThreadPoolExecutor(max_workers=4) executor = ThreadPoolExecutor(max_workers=4)
@@ -1314,7 +1314,7 @@ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list, sys
method = model_info[model]["fn_without_ui"] method = model_info[model]["fn_without_ui"]
llm_kwargs_feedin = copy.deepcopy(llm_kwargs) llm_kwargs_feedin = copy.deepcopy(llm_kwargs)
llm_kwargs_feedin['llm_model'] = model llm_kwargs_feedin['llm_model'] = model
future = executor.submit(LLM_CATCH_EXCEPTION(method), inputs, llm_kwargs_feedin, history, sys_prompt, window_mutex[i], console_slience) future = executor.submit(LLM_CATCH_EXCEPTION(method), inputs, llm_kwargs_feedin, history, sys_prompt, window_mutex[i], console_silence)
futures.append(future) futures.append(future)
def mutex_manager(window_mutex, observe_window): def mutex_manager(window_mutex, observe_window):

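For callers, only the keyword spelling changes. A usage sketch based on the signature shown in the hunk (the `llm_kwargs` dict is assumed to come from the caller's context):

```python
# Usage sketch for the renamed keyword. observe_window mirrors partial
# output plus a watchdog timestamp; console_silence suppresses console echo.
import time

def ask_once(llm_kwargs):
    from request_llms.bridge_all import predict_no_ui_long_connection
    observe_window = ["", time.time()]    # [partial reply, last-alive timestamp]
    return predict_no_ui_long_connection(
        inputs="请总结这段话 ...",
        llm_kwargs=llm_kwargs,
        history=[],
        sys_prompt="You are a helpful assistant.",
        observe_window=observe_window,
        console_silence=True,             # new spelling; console_slience is gone
    )
```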

@@ -139,7 +139,7 @@ global glmft_handle
glmft_handle = None
#################################################################################
def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
-observe_window:list=[], console_slience:bool=False):
+observe_window:list=[], console_silence:bool=False):
"""
多线程方法
函数的说明请见 request_llms/bridge_all.py


@@ -125,7 +125,7 @@ def verify_endpoint(endpoint):
raise ValueError("Endpoint不正确, 请检查AZURE_ENDPOINT的配置! 当前的Endpoint为:" + endpoint) raise ValueError("Endpoint不正确, 请检查AZURE_ENDPOINT的配置! 当前的Endpoint为:" + endpoint)
return endpoint return endpoint
def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="", observe_window:list=None, console_slience:bool=False): def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="", observe_window:list=None, console_silence:bool=False):
""" """
发送至chatGPT等待回复一次性完成不显示中间过程。但内部用stream的方法避免中途网线被掐。 发送至chatGPT等待回复一次性完成不显示中间过程。但内部用stream的方法避免中途网线被掐。
inputs inputs
@@ -203,7 +203,7 @@ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[],
if (not has_content) and (not has_role): continue # raise RuntimeError("发现不标准的第三方接口:"+delta) if (not has_content) and (not has_role): continue # raise RuntimeError("发现不标准的第三方接口:"+delta)
if has_content: # has_role = True/False if has_content: # has_role = True/False
result += delta["content"] result += delta["content"]
if not console_slience: print(delta["content"], end='') if not console_silence: print(delta["content"], end='')
if observe_window is not None: if observe_window is not None:
# 观测窗,把已经获取的数据显示出去 # 观测窗,把已经获取的数据显示出去
if len(observe_window) >= 1: if len(observe_window) >= 1:
@@ -231,7 +231,7 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
inputs 是本次问询的输入 inputs 是本次问询的输入
top_p, temperature是chatGPT的内部调优参数 top_p, temperature是chatGPT的内部调优参数
history 是之前的对话列表注意无论是inputs还是history内容太长了都会触发token数量溢出的错误 history 是之前的对话列表注意无论是inputs还是history内容太长了都会触发token数量溢出的错误
chatbot 为WebUI中显示的对话列表修改它然后yeild出去可以直接修改对话界面内容 chatbot 为WebUI中显示的对话列表修改它然后yield出去可以直接修改对话界面内容
additional_fn代表点击的哪个按钮按钮见functional.py additional_fn代表点击的哪个按钮按钮见functional.py
""" """
from request_llms.bridge_all import model_info from request_llms.bridge_all import model_info


@@ -16,7 +16,7 @@ import base64
import glob
from loguru import logger
from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history, trimmed_format_exc, is_the_upload_folder, \
-update_ui_lastest_msg, get_max_token, encode_image, have_any_recent_upload_image_files, log_chat
+update_ui_latest_msg, get_max_token, encode_image, have_any_recent_upload_image_files, log_chat
proxies, TIMEOUT_SECONDS, MAX_RETRY, API_ORG, AZURE_CFG_ARRAY = \
@@ -67,7 +67,7 @@ def verify_endpoint(endpoint):
"""
return endpoint
-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False):
+def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_silence=False):
raise NotImplementedError
@@ -183,7 +183,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
if ('data: [DONE]' in chunk_decoded) or (len(chunkjson['choices'][0]["delta"]) == 0):
# 判定为数据流的结束,gpt_replying_buffer也写完了
lastmsg = chatbot[-1][-1] + f"\n\n\n\n「{llm_kwargs['llm_model']}调用结束,该模型不具备上下文对话能力,如需追问,请及时切换模型。」"
-yield from update_ui_lastest_msg(lastmsg, chatbot, history, delay=1)
+yield from update_ui_latest_msg(lastmsg, chatbot, history, delay=1)
log_chat(llm_model=llm_kwargs["llm_model"], input_str=inputs, output_str=gpt_replying_buffer)
break
# 处理数据流的主体


@@ -69,7 +69,7 @@ def decode_chunk(chunk):
return need_to_pass, chunkjson, is_last_chunk
-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False):
+def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_silence=False):
"""
发送至chatGPT,等待回复,一次性完成,不显示中间过程。但内部用stream的方法避免中途网线被掐。
inputs
@@ -151,7 +151,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
inputs 是本次问询的输入
top_p, temperature是chatGPT的内部调优参数
history 是之前的对话列表(注意无论是inputs还是history,内容太长了都会触发token数量溢出的错误)
-chatbot 为WebUI中显示的对话列表,修改它,然后yeild出去,可以直接修改对话界面内容
+chatbot 为WebUI中显示的对话列表,修改它,然后yield出去,可以直接修改对话界面内容
additional_fn代表点击的哪个按钮,按钮见functional.py
"""
if inputs == "": inputs = "空空如也的输入栏"


@@ -68,7 +68,7 @@ def verify_endpoint(endpoint):
raise ValueError("Endpoint不正确, 请检查AZURE_ENDPOINT的配置! 当前的Endpoint为:" + endpoint) raise ValueError("Endpoint不正确, 请检查AZURE_ENDPOINT的配置! 当前的Endpoint为:" + endpoint)
return endpoint return endpoint
def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="", observe_window:list=None, console_slience:bool=False): def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="", observe_window:list=None, console_silence:bool=False):
""" """
发送等待回复一次性完成不显示中间过程。但内部用stream的方法避免中途网线被掐。 发送等待回复一次性完成不显示中间过程。但内部用stream的方法避免中途网线被掐。
inputs inputs
@@ -111,7 +111,7 @@ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[],
if chunkjson['event_type'] == 'stream-start': continue if chunkjson['event_type'] == 'stream-start': continue
if chunkjson['event_type'] == 'text-generation': if chunkjson['event_type'] == 'text-generation':
result += chunkjson["text"] result += chunkjson["text"]
if not console_slience: print(chunkjson["text"], end='') if not console_silence: print(chunkjson["text"], end='')
if observe_window is not None: if observe_window is not None:
# 观测窗,把已经获取的数据显示出去 # 观测窗,把已经获取的数据显示出去
if len(observe_window) >= 1: if len(observe_window) >= 1:
@@ -132,7 +132,7 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
inputs 是本次问询的输入 inputs 是本次问询的输入
top_p, temperature是chatGPT的内部调优参数 top_p, temperature是chatGPT的内部调优参数
history 是之前的对话列表注意无论是inputs还是history内容太长了都会触发token数量溢出的错误 history 是之前的对话列表注意无论是inputs还是history内容太长了都会触发token数量溢出的错误
chatbot 为WebUI中显示的对话列表修改它然后yeild出去可以直接修改对话界面内容 chatbot 为WebUI中显示的对话列表修改它然后yield出去可以直接修改对话界面内容
additional_fn代表点击的哪个按钮按钮见functional.py additional_fn代表点击的哪个按钮按钮见functional.py
""" """
# if is_any_api_key(inputs): # if is_any_api_key(inputs):


@@ -8,7 +8,7 @@ import os
import time
from request_llms.com_google import GoogleChatInit
from toolbox import ChatBotWithCookies
-from toolbox import get_conf, update_ui, update_ui_lastest_msg, have_any_recent_upload_image_files, trimmed_format_exc, log_chat, encode_image
+from toolbox import get_conf, update_ui, update_ui_latest_msg, have_any_recent_upload_image_files, trimmed_format_exc, log_chat, encode_image
proxies, TIMEOUT_SECONDS, MAX_RETRY = get_conf('proxies', 'TIMEOUT_SECONDS', 'MAX_RETRY')
timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check proxy settings in config.py.' + \
@@ -16,7 +16,7 @@ timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check
def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="", observe_window:list=[],
-console_slience:bool=False):
+console_silence:bool=False):
# 检查API_KEY
if get_conf("GEMINI_API_KEY") == "":
raise ValueError(f"请配置 GEMINI_API_KEY。")
@@ -60,7 +60,7 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
# 检查API_KEY
if get_conf("GEMINI_API_KEY") == "":
-yield from update_ui_lastest_msg(f"请配置 GEMINI_API_KEY。", chatbot=chatbot, history=history, delay=0)
+yield from update_ui_latest_msg(f"请配置 GEMINI_API_KEY。", chatbot=chatbot, history=history, delay=0)
return
# 适配润色区域


@@ -55,7 +55,7 @@ class GetGLMHandle(Process):
if self.jittorllms_model is None:
device = get_conf('LOCAL_MODEL_DEVICE')
from .jittorllms.models import get_model
-# availabel_models = ["chatglm", "pangualpha", "llama", "chatrwkv"]
+# available_models = ["chatglm", "pangualpha", "llama", "chatrwkv"]
args_dict = {'model': 'llama'}
print('self.jittorllms_model = get_model(types.SimpleNamespace(**args_dict))')
self.jittorllms_model = get_model(types.SimpleNamespace(**args_dict))
@@ -107,7 +107,7 @@ global llama_glm_handle
llama_glm_handle = None
#################################################################################
def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
-observe_window:list=[], console_slience:bool=False):
+observe_window:list=[], console_silence:bool=False):
"""
多线程方法
函数的说明请见 request_llms/bridge_all.py


@@ -55,7 +55,7 @@ class GetGLMHandle(Process):
if self.jittorllms_model is None:
device = get_conf('LOCAL_MODEL_DEVICE')
from .jittorllms.models import get_model
-# availabel_models = ["chatglm", "pangualpha", "llama", "chatrwkv"]
+# available_models = ["chatglm", "pangualpha", "llama", "chatrwkv"]
args_dict = {'model': 'pangualpha'}
print('self.jittorllms_model = get_model(types.SimpleNamespace(**args_dict))')
self.jittorllms_model = get_model(types.SimpleNamespace(**args_dict))
@@ -107,7 +107,7 @@ global pangu_glm_handle
pangu_glm_handle = None
#################################################################################
def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
-observe_window:list=[], console_slience:bool=False):
+observe_window:list=[], console_silence:bool=False):
"""
多线程方法
函数的说明请见 request_llms/bridge_all.py


@@ -55,7 +55,7 @@ class GetGLMHandle(Process):
if self.jittorllms_model is None:
device = get_conf('LOCAL_MODEL_DEVICE')
from .jittorllms.models import get_model
-# availabel_models = ["chatglm", "pangualpha", "llama", "chatrwkv"]
+# available_models = ["chatglm", "pangualpha", "llama", "chatrwkv"]
args_dict = {'model': 'chatrwkv'}
print('self.jittorllms_model = get_model(types.SimpleNamespace(**args_dict))')
self.jittorllms_model = get_model(types.SimpleNamespace(**args_dict))
@@ -107,7 +107,7 @@ global rwkv_glm_handle
rwkv_glm_handle = None
#################################################################################
def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
-observe_window:list=[], console_slience:bool=False):
+observe_window:list=[], console_silence:bool=False):
"""
多线程方法
函数的说明请见 request_llms/bridge_all.py


@@ -46,8 +46,8 @@ class GetLlamaHandle(LocalLLMHandle):
top_p = kwargs['top_p']
temperature = kwargs['temperature']
history = kwargs['history']
-console_slience = kwargs.get('console_slience', True)
-return query, max_length, top_p, temperature, history, console_slience
+console_silence = kwargs.get('console_silence', True)
+return query, max_length, top_p, temperature, history, console_silence
def convert_messages_to_prompt(query, history):
prompt = ""
@@ -57,7 +57,7 @@ class GetLlamaHandle(LocalLLMHandle):
prompt += f"\n[INST]{query}[/INST]"
return prompt
-query, max_length, top_p, temperature, history, console_slience = adaptor(kwargs)
+query, max_length, top_p, temperature, history, console_silence = adaptor(kwargs)
prompt = convert_messages_to_prompt(query, history)
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=--=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--=-=-
# code from transformers.llama
@@ -72,9 +72,9 @@ class GetLlamaHandle(LocalLLMHandle):
generated_text = ""
for new_text in streamer:
generated_text += new_text
-if not console_slience: print(new_text, end='')
+if not console_silence: print(new_text, end='')
yield generated_text.lstrip(prompt_tk_back).rstrip("</s>")
-if not console_slience: print()
+if not console_silence: print()
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=--=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--=-=-
def try_to_import_special_deps(self, **kwargs):

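The streaming loop above follows the standard HuggingFace `TextIteratorStreamer` pattern: `generate` runs in a background thread while the caller iterates the streamer. A generic, self-contained sketch (the model name is illustrative, not the project's configuration):

```python
# Stream tokens from a local HF model; echo to console unless silenced.
from threading import Thread
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

def stream_generate(prompt, console_silence=False, model_id="meta-llama/Llama-2-7b-chat-hf"):
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    inputs = tokenizer(prompt, return_tensors="pt")
    streamer = TextIteratorStreamer(tokenizer, skip_prompt=True)
    # generate() blocks, so it runs in its own thread while we consume tokens
    Thread(target=model.generate,
           kwargs=dict(**inputs, streamer=streamer, max_new_tokens=256)).start()
    generated_text = ""
    for new_text in streamer:
        generated_text += new_text
        if not console_silence:
            print(new_text, end="", flush=True)
    return generated_text
```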

@@ -169,7 +169,7 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
log_chat(llm_model=llm_kwargs["llm_model"], input_str=inputs, output_str=gpt_bro_result)
def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None,
-console_slience=False):
+console_silence=False):
gpt_bro_init = MoonShotInit()
watch_dog_patience = 60 # 看门狗的耐心, 设置10秒即可
stream_response = gpt_bro_init.generate_messages(inputs, llm_kwargs, history, sys_prompt, True)


@@ -95,7 +95,7 @@ class GetGLMHandle(Process):
- Its responses must not be vague, accusatory, rude, controversial, off-topic, or defensive.
- It should avoid giving subjective opinions but rely on objective facts or phrases like \"in this context a human might say...\", \"some people might think...\", etc.
- Its responses must also be positive, polite, interesting, entertaining, and engaging.
-- It can provide additional relevant details to answer in-depth and comprehensively covering mutiple aspects.
+- It can provide additional relevant details to answer in-depth and comprehensively covering multiple aspects.
- It apologizes and accepts the user's suggestion if the user corrects the incorrect answer generated by MOSS.
Capabilities and tools that MOSS can possess.
"""
@@ -172,7 +172,7 @@ global moss_handle
moss_handle = None
#################################################################################
def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
-observe_window:list=[], console_slience:bool=False):
+observe_window:list=[], console_silence:bool=False):
"""
多线程方法
函数的说明请见 request_llms/bridge_all.py


@@ -209,7 +209,7 @@ def predict_no_ui_long_connection(
history=[],
sys_prompt="",
observe_window=[],
-console_slience=False,
+console_silence=False,
):
"""
多线程方法


@@ -52,7 +52,7 @@ def decode_chunk(chunk):
pass
return chunk_decoded, chunkjson, is_last_chunk
-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False):
+def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_silence=False):
"""
发送至chatGPT,等待回复,一次性完成,不显示中间过程。但内部用stream的方法避免中途网线被掐。
inputs
@@ -99,7 +99,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
logger.info(f'[response] {result}')
break
result += chunkjson['message']["content"]
-if not console_slience: print(chunkjson['message']["content"], end='')
+if not console_silence: print(chunkjson['message']["content"], end='')
if observe_window is not None:
# 观测窗,把已经获取的数据显示出去
if len(observe_window) >= 1:
@@ -124,7 +124,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
inputs 是本次问询的输入
top_p, temperature是chatGPT的内部调优参数
history 是之前的对话列表(注意无论是inputs还是history,内容太长了都会触发token数量溢出的错误)
-chatbot 为WebUI中显示的对话列表,修改它,然后yeild出去,可以直接修改对话界面内容
+chatbot 为WebUI中显示的对话列表,修改它,然后yield出去,可以直接修改对话界面内容
additional_fn代表点击的哪个按钮,按钮见functional.py
"""
if inputs == "": inputs = "空空如也的输入栏"


@@ -119,7 +119,7 @@ def verify_endpoint(endpoint):
raise ValueError("Endpoint不正确, 请检查AZURE_ENDPOINT的配置! 当前的Endpoint为:" + endpoint) raise ValueError("Endpoint不正确, 请检查AZURE_ENDPOINT的配置! 当前的Endpoint为:" + endpoint)
return endpoint return endpoint
def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="", observe_window:list=None, console_slience:bool=False): def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="", observe_window:list=None, console_silence:bool=False):
""" """
发送至chatGPT等待回复一次性完成不显示中间过程。但内部用stream的方法避免中途网线被掐。 发送至chatGPT等待回复一次性完成不显示中间过程。但内部用stream的方法避免中途网线被掐。
inputs inputs
@@ -188,7 +188,7 @@ def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[],
if (not has_content) and (not has_role): continue # raise RuntimeError("发现不标准的第三方接口:"+delta) if (not has_content) and (not has_role): continue # raise RuntimeError("发现不标准的第三方接口:"+delta)
if has_content: # has_role = True/False if has_content: # has_role = True/False
result += delta["content"] result += delta["content"]
if not console_slience: print(delta["content"], end='') if not console_silence: print(delta["content"], end='')
if observe_window is not None: if observe_window is not None:
# 观测窗,把已经获取的数据显示出去 # 观测窗,把已经获取的数据显示出去
if len(observe_window) >= 1: if len(observe_window) >= 1:
@@ -213,7 +213,7 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
inputs 是本次问询的输入 inputs 是本次问询的输入
top_p, temperature是chatGPT的内部调优参数 top_p, temperature是chatGPT的内部调优参数
history 是之前的对话列表注意无论是inputs还是history内容太长了都会触发token数量溢出的错误 history 是之前的对话列表注意无论是inputs还是history内容太长了都会触发token数量溢出的错误
chatbot 为WebUI中显示的对话列表修改它然后yeild出去可以直接修改对话界面内容 chatbot 为WebUI中显示的对话列表修改它然后yield出去可以直接修改对话界面内容
additional_fn代表点击的哪个按钮按钮见functional.py additional_fn代表点击的哪个按钮按钮见functional.py
""" """
from request_llms.bridge_all import model_info from request_llms.bridge_all import model_info


@@ -121,7 +121,7 @@ def generate_from_baidu_qianfan(inputs, llm_kwargs, history, system_prompt):
def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
-observe_window:list=[], console_slience:bool=False):
+observe_window:list=[], console_silence:bool=False):
"""
⭐多线程方法
函数的说明请见 request_llms/bridge_all.py


@@ -1,12 +1,12 @@
import time
import os
-from toolbox import update_ui, get_conf, update_ui_lastest_msg
+from toolbox import update_ui, get_conf, update_ui_latest_msg
from toolbox import check_packages, report_exception, log_chat
model_name = 'Qwen'
def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
-observe_window:list=[], console_slience:bool=False):
+observe_window:list=[], console_silence:bool=False):
"""
⭐多线程方法
函数的说明请见 request_llms/bridge_all.py
@@ -35,13 +35,13 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
try:
check_packages(["dashscope"])
except:
-yield from update_ui_lastest_msg(f"导入软件依赖失败。使用该模型需要额外依赖,安装方法```pip install --upgrade dashscope```。",
+yield from update_ui_latest_msg(f"导入软件依赖失败。使用该模型需要额外依赖,安装方法```pip install --upgrade dashscope```。",
chatbot=chatbot, history=history, delay=0)
return
# 检查DASHSCOPE_API_KEY
if get_conf("DASHSCOPE_API_KEY") == "":
-yield from update_ui_lastest_msg(f"请配置 DASHSCOPE_API_KEY。",
+yield from update_ui_latest_msg(f"请配置 DASHSCOPE_API_KEY。",
chatbot=chatbot, history=history, delay=0)
return

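`check_packages` is used here as a guard that raises when a dependency is missing, so the caller can surface an install hint instead of a traceback. A sketch of what such a helper is assumed to do (the real implementation in `toolbox` may differ):

```python
# Assumed behavior of check_packages: import each name, fail loudly if absent.
import importlib

def check_packages(packages):
    for name in packages:
        importlib.import_module(name)   # raises ImportError if the package is absent

try:
    check_packages(["dashscope"])
except ImportError:
    print("请安装依赖: pip install --upgrade dashscope")
```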

@@ -1,5 +1,5 @@
import time
-from toolbox import update_ui, get_conf, update_ui_lastest_msg
+from toolbox import update_ui, get_conf, update_ui_latest_msg
from toolbox import check_packages, report_exception
model_name = '云雀大模型'
@@ -10,7 +10,7 @@ def validate_key():
return True
def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
-observe_window:list=[], console_slience:bool=False):
+observe_window:list=[], console_silence:bool=False):
"""
⭐ 多线程方法
函数的说明请见 request_llms/bridge_all.py
@@ -42,12 +42,12 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
try:
check_packages(["zhipuai"])
except:
-yield from update_ui_lastest_msg(f"导入软件依赖失败。使用该模型需要额外依赖,安装方法```pip install --upgrade zhipuai```。",
+yield from update_ui_latest_msg(f"导入软件依赖失败。使用该模型需要额外依赖,安装方法```pip install --upgrade zhipuai```。",
chatbot=chatbot, history=history, delay=0)
return
if validate_key() is False:
-yield from update_ui_lastest_msg(lastmsg="[Local Message] 请配置HUOSHAN_API_KEY", chatbot=chatbot, history=history, delay=0)
+yield from update_ui_latest_msg(lastmsg="[Local Message] 请配置HUOSHAN_API_KEY", chatbot=chatbot, history=history, delay=0)
return
if additional_fn is not None:


@@ -2,7 +2,7 @@
import time
import threading
import importlib
-from toolbox import update_ui, get_conf, update_ui_lastest_msg
+from toolbox import update_ui, get_conf, update_ui_latest_msg
from multiprocessing import Process, Pipe
model_name = '星火认知大模型'
@@ -14,7 +14,7 @@ def validate_key():
return True
def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
-observe_window:list=[], console_slience:bool=False):
+observe_window:list=[], console_silence:bool=False):
"""
⭐多线程方法
函数的说明请见 request_llms/bridge_all.py
@@ -43,7 +43,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
yield from update_ui(chatbot=chatbot, history=history)
if validate_key() is False:
-yield from update_ui_lastest_msg(lastmsg="[Local Message] 请配置讯飞星火大模型的XFYUN_APPID, XFYUN_API_KEY, XFYUN_API_SECRET", chatbot=chatbot, history=history, delay=0)
+yield from update_ui_latest_msg(lastmsg="[Local Message] 请配置讯飞星火大模型的XFYUN_APPID, XFYUN_API_KEY, XFYUN_API_SECRET", chatbot=chatbot, history=history, delay=0)
return
if additional_fn is not None:


@@ -225,7 +225,7 @@ def predict_no_ui_long_connection(
history=[],
sys_prompt="",
observe_window=None,
-console_slience=False,
+console_silence=False,
):
"""
多线程方法


@@ -1,6 +1,6 @@
import time
import os
-from toolbox import update_ui, get_conf, update_ui_lastest_msg, log_chat
+from toolbox import update_ui, get_conf, update_ui_latest_msg, log_chat
from toolbox import check_packages, report_exception, have_any_recent_upload_image_files
from toolbox import ChatBotWithCookies
@@ -13,7 +13,7 @@ def validate_key():
return True
def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
-observe_window:list=[], console_slience:bool=False):
+observe_window:list=[], console_silence:bool=False):
"""
⭐多线程方法
函数的说明请见 request_llms/bridge_all.py
@@ -49,7 +49,7 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
yield from update_ui(chatbot=chatbot, history=history)
if validate_key() is False:
-yield from update_ui_lastest_msg(lastmsg="[Local Message] 请配置ZHIPUAI_API_KEY", chatbot=chatbot, history=history, delay=0)
+yield from update_ui_latest_msg(lastmsg="[Local Message] 请配置ZHIPUAI_API_KEY", chatbot=chatbot, history=history, delay=0)
return
if additional_fn is not None:


@@ -91,7 +91,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
inputs 是本次问询的输入
top_p, temperature是chatGPT的内部调优参数
history 是之前的对话列表(注意无论是inputs还是history,内容太长了都会触发token数量溢出的错误)
-chatbot 为WebUI中显示的对话列表,修改它,然后yeild出去,可以直接修改对话界面内容
+chatbot 为WebUI中显示的对话列表,修改它,然后yield出去,可以直接修改对话界面内容
additional_fn代表点击的哪个按钮,按钮见functional.py
"""
if additional_fn is not None:
@@ -112,7 +112,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
mutable = ["", time.time()]
-def run_coorotine(mutable):
+def run_coroutine(mutable):
async def get_result(mutable):
# "tgui:galactica-1.3b@localhost:7860"
@@ -126,7 +126,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
break
asyncio.run(get_result(mutable))
-thread_listen = threading.Thread(target=run_coorotine, args=(mutable,), daemon=True)
+thread_listen = threading.Thread(target=run_coroutine, args=(mutable,), daemon=True)
thread_listen.start()
while thread_listen.is_alive():
@@ -142,7 +142,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
-def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience=False):
+def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, observe_window, console_silence=False):
raw_input = "What I would like to say is the following: " + inputs
prompt = raw_input
tgui_say = ""
@@ -151,7 +151,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, obser
addr, port = addr_port.split(':')
-def run_coorotine(observe_window):
+def run_coroutine(observe_window):
async def get_result(observe_window):
async for response in run(context=prompt, max_token=llm_kwargs['max_length'],
temperature=llm_kwargs['temperature'],
@@ -162,6 +162,6 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, obser
print('exit when no listener')
break
asyncio.run(get_result(observe_window))
-thread_listen = threading.Thread(target=run_coorotine, args=(observe_window,))
+thread_listen = threading.Thread(target=run_coroutine, args=(observe_window,))
thread_listen.start()
return observe_window[0]

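`run_coroutine` (fixed from `run_coorotine`) is this bridge's way of driving an async stream from synchronous code: the coroutine runs inside a daemon thread and writes partial results into a shared list that the caller polls. A reduced, runnable sketch of the same pattern (the producer here is a stand-in for the real websocket stream):

```python
# Drive an async producer in a daemon thread; poll a shared list for output.
import asyncio
import threading
import time

def start_async_listener(mutable):
    async def produce(mutable):
        for i in range(5):                    # stand-in for the streamed response
            await asyncio.sleep(0.2)
            mutable[0] += f"chunk{i} "
            mutable[1] = time.time()          # watchdog heartbeat
    def run_coroutine(mutable):
        asyncio.run(produce(mutable))
    t = threading.Thread(target=run_coroutine, args=(mutable,), daemon=True)
    t.start()
    return t

mutable = ["", time.time()]
thread_listen = start_async_listener(mutable)
while thread_listen.is_alive():
    time.sleep(0.1)                           # the UI would refresh here
print(mutable[0])
```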

@@ -1,6 +1,6 @@
import time
import os
-from toolbox import update_ui, get_conf, update_ui_lastest_msg, log_chat
+from toolbox import update_ui, get_conf, update_ui_latest_msg, log_chat
from toolbox import check_packages, report_exception, have_any_recent_upload_image_files
from toolbox import ChatBotWithCookies
@@ -18,7 +18,7 @@ def make_media_input(inputs, image_paths):
return inputs
def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
-observe_window:list=[], console_slience:bool=False):
+observe_window:list=[], console_silence:bool=False):
"""
⭐多线程方法
函数的说明请见 request_llms/bridge_all.py
@@ -57,12 +57,12 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
try:
check_packages(["zhipuai"])
except:
-yield from update_ui_lastest_msg(f"导入软件依赖失败。使用该模型需要额外依赖,安装方法```pip install --upgrade zhipuai```。",
+yield from update_ui_latest_msg(f"导入软件依赖失败。使用该模型需要额外依赖,安装方法```pip install --upgrade zhipuai```。",
chatbot=chatbot, history=history, delay=0)
return
if validate_key() is False:
-yield from update_ui_lastest_msg(lastmsg="[Local Message] 请配置ZHIPUAI_API_KEY", chatbot=chatbot, history=history, delay=0)
+yield from update_ui_latest_msg(lastmsg="[Local Message] 请配置ZHIPUAI_API_KEY", chatbot=chatbot, history=history, delay=0)
return
if additional_fn is not None:


@@ -216,7 +216,7 @@ class LocalLLMHandle(Process):
def get_local_llm_predict_fns(LLMSingletonClass, model_name, history_format='classic'):
load_message = f"{model_name}尚未加载,加载需要一段时间。注意,取决于`config.py`的配置,{model_name}消耗大量的内存(CPU)或显存(GPU),也许会导致低配计算机卡死 ……"
-def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="", observe_window:list=[], console_slience:bool=False):
+def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="", observe_window:list=[], console_silence:bool=False):
"""
refer to request_llms/bridge_all.py
"""


@@ -4,7 +4,7 @@ import traceback
import requests
from loguru import logger
-from toolbox import get_conf, is_the_upload_folder, update_ui, update_ui_lastest_msg
+from toolbox import get_conf, is_the_upload_folder, update_ui, update_ui_latest_msg
proxies, TIMEOUT_SECONDS, MAX_RETRY = get_conf(
"proxies", "TIMEOUT_SECONDS", "MAX_RETRY"
@@ -350,14 +350,14 @@ def get_predict_function(
chunk = next(stream_response)
except StopIteration:
if wait_counter != 0 and gpt_replying_buffer == "":
-yield from update_ui_lastest_msg(lastmsg="模型调用失败 ...", chatbot=chatbot, history=history, msg="failed")
+yield from update_ui_latest_msg(lastmsg="模型调用失败 ...", chatbot=chatbot, history=history, msg="failed")
break
except requests.exceptions.ConnectionError:
chunk = next(stream_response) # 失败了,重试一次?再失败就没办法了。
response_text, reasoning_content, finish_reason, decoded_chunk = decode_chunk(chunk)
if decoded_chunk == ': keep-alive':
wait_counter += 1
-yield from update_ui_lastest_msg(lastmsg="等待中 " + "".join(["."] * (wait_counter%10)), chatbot=chatbot, history=history, msg="waiting ...")
+yield from update_ui_latest_msg(lastmsg="等待中 " + "".join(["."] * (wait_counter%10)), chatbot=chatbot, history=history, msg="waiting ...")
continue
# 返回的数据流第一次为空,继续等待
if response_text == "" and (reasoning == False or reasoning_content == "") and finish_reason != "False":

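The `': keep-alive'` branch above exists because, in the server-sent-events convention, a line starting with a colon is a comment, and servers emit `: keep-alive` to hold the connection open before the first token arrives. A sketch of that handling in isolation (`update_waiting_ui` is a hypothetical callback standing in for the UI refresh):

```python
# Skip SSE comment lines used as keep-alives; pass real chunks downstream.
def consume_stream(stream_response, update_waiting_ui):
    wait_counter = 0
    for chunk in stream_response:
        decoded = chunk.decode("utf-8", "ignore").strip()
        if decoded == ": keep-alive":
            wait_counter += 1
            update_waiting_ui("等待中 " + "." * (wait_counter % 10))
            continue
        yield decoded    # a real data chunk; JSON parsing happens downstream
```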

@@ -8,7 +8,7 @@ def is_full_width_char(ch):
return True # CJK标点符号
return False
-def scolling_visual_effect(text, scroller_max_len):
+def scrolling_visual_effect(text, scroller_max_len):
text = text.\
replace('\n', '').replace('`', '.').replace(' ', '.').replace('<br/>', '.....').replace('$', '.')
place_take_cnt = 0

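`is_full_width_char`, referenced in the hunk header, can also be expressed with the standard library: East Asian Width classes "F" (Fullwidth) and "W" (Wide) cover the characters that occupy two terminal columns, CJK punctuation included. A sketch:

```python
# Standard-library full-width detection via East Asian Width classes.
import unicodedata

def is_full_width_char(ch: str) -> bool:
    return unicodedata.east_asian_width(ch) in ("F", "W")

assert is_full_width_char("中") and not is_full_width_char("a")
```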

@@ -85,7 +85,7 @@ def get_chat_default_kwargs():
"history": [], "history": [],
"sys_prompt": "You are AI assistant", "sys_prompt": "You are AI assistant",
"observe_window": None, "observe_window": None,
"console_slience": False, "console_silence": False,
} }
return default_chat_kwargs return default_chat_kwargs


@@ -88,6 +88,32 @@ def zip_extract_member_new(self, member, targetpath, pwd):
return targetpath return targetpath
def safe_extract_rar(file_path, dest_dir):
import rarfile
import posixpath
with rarfile.RarFile(file_path) as rf:
os.makedirs(dest_dir, exist_ok=True)
base_path = os.path.abspath(dest_dir)
for file_info in rf.infolist():
orig_filename = file_info.filename
filename = posixpath.normpath(orig_filename).lstrip('/')
# 路径遍历防护
if '..' in filename or filename.startswith('../'):
raise Exception(f"Attempted Path Traversal in {orig_filename}")
# 符号链接防护
if hasattr(file_info, 'is_symlink') and file_info.is_symlink():
raise Exception(f"Attempted Symlink in {orig_filename}")
# 构造完整目标路径
target_path = os.path.join(base_path, filename)
final_path = os.path.normpath(target_path)
# 最终路径校验
if not final_path.startswith(base_path):
raise Exception(f"Attempted Path Traversal in {orig_filename}")
rf.extractall(dest_dir)
def extract_archive(file_path, dest_dir): def extract_archive(file_path, dest_dir):
import zipfile import zipfile
import tarfile import tarfile
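The heart of the GHSA-gqp5-wm97-qxcv fix is the per-member path validation in `safe_extract_rar`. The helper below (`is_safe_member` is a hypothetical name) isolates that guard so it can be exercised without an actual rar archive:

```python
import os
import posixpath

def is_safe_member(orig_filename: str, dest_dir: str) -> bool:
    """Mirror of the traversal guard above: reject members that would
    extract outside dest_dir."""
    base_path = os.path.abspath(dest_dir)
    filename = posixpath.normpath(orig_filename).lstrip('/')
    if '..' in filename:                     # coarse check, as in the original
        return False
    final_path = os.path.normpath(os.path.join(base_path, filename))
    return final_path.startswith(base_path)

print(is_safe_member("docs/readme.txt", "out"))      # True
print(is_safe_member("../../etc/passwd", "out"))     # False
```

As in the original, the substring test also rejects odd-but-legal names such as `a..b.txt`, trading a few false positives for a simpler guard; symlinked members are rejected by a separate check.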
@@ -132,14 +158,11 @@ def extract_archive(file_path, dest_dir):
# 此外Windows上还需要安装winrar软件配置其Path环境变量如"C:\Program Files\WinRAR"才可以 # 此外Windows上还需要安装winrar软件配置其Path环境变量如"C:\Program Files\WinRAR"才可以
elif file_extension == ".rar": elif file_extension == ".rar":
try: try:
import rarfile import rarfile # 用来检查rarfile是否安装不要删除
safe_extract_rar(file_path, dest_dir)
with rarfile.RarFile(file_path) as rf:
rf.extractall(path=dest_dir)
logger.info("Successfully extracted rar archive to {}".format(dest_dir))
except: except:
logger.info("Rar format requires additional dependencies to install") logger.info("Rar format requires additional dependencies to install")
return "\n\n解压失败! 需要安装pip install rarfile来解压rar文件。建议使用zip压缩格式。" return "<br/><br/>解压失败! 需要安装pip install rarfile来解压rar文件。建议使用zip压缩格式。"
# 第三方库需要预先pip install py7zr # 第三方库需要预先pip install py7zr
elif file_extension == ".7z": elif file_extension == ".7z":
@@ -151,7 +174,7 @@ def extract_archive(file_path, dest_dir):
logger.info("Successfully extracted 7z archive to {}".format(dest_dir)) logger.info("Successfully extracted 7z archive to {}".format(dest_dir))
except: except:
logger.info("7z format requires additional dependencies to install") logger.info("7z format requires additional dependencies to install")
return "\n\n解压失败! 需要安装pip install py7zr来解压7z文件" return "<br/><br/>解压失败! 需要安装pip install py7zr来解压7z文件"
else: else:
return "" return ""
return "" return ""


@@ -45,28 +45,161 @@ Any folded content here. It requires an empty line just above it.
md =""" md ="""
在这种场景中,您希望机器 B 能够通过轮询机制来间接地“请求”机器 A而实际上机器 A 只能主动向机器 B 发出请求。这是一种典型的客户端-服务器轮询模式。下面是如何实现这种机制的详细步骤: <details>
<summary>第0份搜索结果 [源自google搜索] (汤姆·赫兰德):</summary>
<div class="search_result">https://baike.baidu.com/item/%E6%B1%A4%E5%A7%86%C2%B7%E8%B5%AB%E5%85%B0%E5%BE%B7/3687216</div>
<div class="search_result">Title: 汤姆·赫兰德
### 机器 B 的实现 URL Source: https://baike.baidu.com/item/%E6%B1%A4%E5%A7%86%C2%B7%E8%B5%AB%E5%85%B0%E5%BE%B7/3687216
1. **安装 FastAPI 和必要的依赖库** Markdown Content:
```bash 网页新闻贴吧知道网盘图片视频地图文库资讯采购百科
pip install fastapi uvicorn 百度首页
``` 登录
注册
进入词条
全站搜索
帮助
首页
秒懂百科
特色百科
知识专题
加入百科
百科团队
权威合作
个人中心
汤姆·赫兰德
播报
讨论
上传视频
英国男演员
汤姆·赫兰德Tom Holland1996年6月1日出生于英国英格兰泰晤士河畔金斯顿英国男演员。2008年出演音乐剧《跳出我天地》而崭露头角。2010年作为主演参加音乐剧《跳出我天地》的五周年特别演出。2012年10月11日主演的个人首部电影《海啸奇迹》上映并凭该电影获得第84届美国国家评论协会奖最具突破男演员奖。2016年10月15日与查理·汉纳姆、西耶娜·米勒合作出演的电影《 ... >>>
2. **创建 FastAPI 服务** 目录
```python 1早年经历
from fastapi import FastAPI 2演艺经历
from fastapi.responses import JSONResponse ▪影坛新星
from uuid import uuid4 ▪角色多变
from threading import Lock ▪跨界翘楚
import time 3个人生活
▪家庭
▪恋情
▪社交
4主要作品
▪参演电影
▪参演电视剧
▪配音作品
▪导演作品
▪杂志写真
5社会活动
6获奖记录
7人物评价
基本信息
汤姆·赫兰德Tom Holland1996年6月1日出生于英国英格兰泰晤士河畔金斯顿英国男演员。 [67]
2008年出演音乐剧《跳出我天地》而崭露头角 [1]。2010年作为主演参加音乐剧《跳出我天地》的五周年特别演出 [2]。2012年10月11日主演的个人首部电影《海啸奇迹》上映并凭该电影获得第84届美国国家评论协会奖最具突破男演员奖 [3]。2016年10月15日与查理·汉纳姆、西耶娜·米勒合作出演的电影《迷失Z城》在纽约电影节首映 [17]2017年主演的《蜘蛛侠英雄归来》上映他凭该电影获得第19届青少年选择奖最佳暑期电影男演员奖以及第70届英国电影和电视艺术学院奖最佳新星奖。 [72]2019年主演的电影《蜘蛛侠英雄远征》上映 [5]同年凭借该电影获得第21届青少年选择奖最佳夏日电影男演员奖 [6]。2024年4月汤姆·霍兰德主演的伦敦西区新版舞台剧《罗密欧与朱丽叶》公布演员名单。 [66]
2024年......</div>
</details>
app = FastAPI() <details>
<summary>第1份搜索结果 [源自google搜索] (汤姆·霍兰德):</summary>
<div class="search_result">https://zh.wikipedia.org/zh-hans/%E6%B1%A4%E5%A7%86%C2%B7%E8%B5%AB%E5%85%B0%E5%BE%B7</div>
<div class="search_result">Title: 汤姆·赫兰德
# 字典用于存储请求和状态 URL Source: https://zh.wikipedia.org/zh-hans/%E6%B1%A4%E5%A7%86%C2%B7%E8%B5%AB%E5%85%B0%E5%BE%B7
requests = {}
process_lock = Lock() Published Time: 2015-06-24T01:08:01Z
Markdown Content:
| 汤姆·霍兰德
Tom Holland |
| --- |
| [![Image 19](https://upload.wikimedia.org/wikipedia/commons/thumb/3/3c/Tom_Holland_by_Gage_Skidmore.jpg/220px-Tom_Holland_by_Gage_Skidmore.jpg)](https://zh.wikipedia.org/wiki/File:Tom_Holland_by_Gage_Skidmore.jpg)
2016年在[圣地牙哥国际漫画展](https://zh.wikipedia.org/wiki/%E8%81%96%E5%9C%B0%E7%89%99%E5%93%A5%E5%9C%8B%E9%9A%9B%E6%BC%AB%E7%95%AB%E5%B1%95 "圣地牙哥国际漫画展")的霍兰德
|
| 男演员 |
| 昵称 | 荷兰弟[\[1\]](https://zh.wikipedia.org/zh-hans/%E6%B1%A4%E5%A7%86%C2%B7%E8%B5%AB%E5%85%B0%E5%BE%B7#cite_note-1) |
| 出生 | 汤玛斯·史丹利·霍兰德
Thomas Stanley Holland[\[2\]](https://zh.wikipedia.org/zh-hans/%E6%B1%A4%E5%A7%86%C2%B7%E8%B5%AB%E5%85%B0%E5%BE%B7#cite_note-2)
1996年6月1日28岁
英国[英格兰](https://zh.wikipedia.org/wiki/%E8%8B%B1%E6%A0%BC%E8%98%AD "英格兰")[泰晤士河畔金斯顿](https://zh.wikipedia.org/wiki/%E6%B3%B0%E6%99%A4......</div>
</details>
<details>
<summary>第2份搜索结果 [源自google搜索] (为什么汤姆赫兰德被称为荷兰弟?):</summary>
<div class="search_result">https://www.zhihu.com/question/363988307</div>
<div class="search_result">Title: 为什么汤姆赫兰德被称为荷兰弟? - 知乎
URL Source: https://www.zhihu.com/question/363988307
Markdown Content:
要说漫威演员里面,谁是最牛的存在,不好说,各有各的看法,但要说谁是最能剧透的,毫无疑问,是我们的汤姆赫兰德荷兰弟,可以说,他算得上是把剧透给玩明白了,先后剧透了不少的电影桥段,以至于漫威后面像防贼一样防着人家荷兰弟,可大家知道吗?你永远想象不到荷兰弟的嘴巴到底有多能漏风?
![Image 9](https://pica.zhimg.com/50/v2-a0aa9972315519ec4975f974f01fc6ca_720w.jpg?source=1def8aca)
故事要回到《侏罗纪世界2》的筹备期间当时荷兰弟也参与了面试计划在剧中饰演一个角色原本这也没啥这都是好莱坞的传统了可是当时的导演胡安根本不知道荷兰弟的“风光伟绩”于是乎人家便屁颠屁颠把侏罗纪世界2的资料拿过来给荷兰弟虽然后面没有让荷兰弟出演这部电影但导演似乎忘了他的嘴巴是开过光的。
![Image 10](https://picx.zhimg.com/50/v2-1da72b482c6a44e1826abb430d95a062_720w.jpg?source=1def8aca)
荷兰弟把剧情刻在了脑子
......</div>
</details>
<details>
<summary>第3份搜索结果 [源自google搜索] (爱戴名表被喷配不上赞达亚荷兰弟曝近照气质大变26岁资产惊人):</summary>
<div class="search_result">https://www.sohu.com/a/580380519_120702487</div>
<div class="search_result">Title: 爱戴名表被喷配不上赞达亚荷兰弟曝近照气质大变26岁资产惊人_蜘蛛侠_手表_罗伯特·唐尼
URL Source: https://www.sohu.com/a/580380519_120702487
Markdown Content:
2022-08-27 19:00 来源: [BEGEEL宾爵表](https://www.sohu.com/a/580380519_120702487?spm=smpc.content-abroad.content.1.1739375950559fBhgNpP)
发布于:广东省
近日大家熟悉的荷兰弟也就演漫威超级英雄“蜘蛛侠”而走红的英国男星汤姆·赫兰德Tom Holland最近在没有任何预警的情况下宣布自己暂停使用社交媒体原因网络暴力已经严重影响到他的心理健康了。虽然自出演蜘蛛侠以来对荷兰弟的骂声就没停过但不可否认他确实是一位才貌双全的好演员同时也是一位拥有高雅品味的地道英伦绅士从他近年名表收藏的趋势也能略知一二。
![Image 37](https://p5.itc.cn/q_70/images03/20220827/86aca867047b4119ba96a59e33d2d387.jpeg)
2016年《美国队长3内战》上映汤姆·赫兰德扮演的“史上最嫩”蜘蛛侠也正式登场。这个美国普通学生由于意外被一只受过放射性感染的蜘蛛咬到并因此获得超能力化身邻居英雄蜘蛛侠警恶惩奸。和蜘蛛侠彼得·帕克一样当时年仅20岁的荷兰弟无论戏里戏外的穿搭都是少年感十足走的阳光邻家大男孩路线手上戴的最多的就是来自卡西欧的电子表还有来自Nixon sentry的手表千元级别甚至是百元级。
20岁的荷兰弟走的是邻家大男孩路线
![Image 38](https://p3.itc.cn/q_70/images03/20220827/aded82ecfb1d439a8fd4741b49a8eb9b.png)
随着荷兰弟主演的《蜘蛛侠英雄归来》上演第三代蜘蛛侠的话痨性格和年轻活力的形象瞬间圈粉无数。荷兰弟的知名度和演艺收入都大幅度增长他的穿衣品味也渐渐从稚嫩少年风转变成轻熟绅士风。从简单的T恤短袖搭配牛仔裤开始向更加丰富的造型发展其中变化最明显的就是他手腕上的表。
荷兰弟的衣品日......</div>
</details>
<details>
<summary>第4份搜索结果 [源自google搜索] (荷兰弟居然要休息一年,因演戏演到精神分裂…):</summary>
<div class="search_result">https://www.sohu.com/a/683718058_544020</div>
<div class="search_result">Title: 荷兰弟居然要休息一年因演戏演到精神分裂…_Holland_Tom_工作
URL Source: https://www.sohu.com/a/683718058_544020
Markdown Content:
荷兰弟居然要休息一年,因演戏演到精神分裂…\_Holland\_Tom\_工作
===============
* [](http://www.sohu.com/?spm=smpc.content-abroad.nav.1.1739375954055TcEvWsY)
* [新闻](http://news.sohu.com/?spm=smpc.content-abroad.nav.2.1739375954055TcEvWsY)
* [体育](http://sports.sohu.com/?spm=smpc.content-abroad.nav.3.1739375954055TcEvWsY)
* [汽车](http://auto.sohu.com/?spm=smpc.content-abroad.nav.4.1739375954055TcEvWsY)
* [房产](http://www.focus.cn/?spm=smpc.content-abroad.nav.5.1739375954055TcEvWsY)
* [旅游](http://travel.sohu.com/?spm=smpc.content-abroad.nav.6.1739375954055TcEvWsY)
* [教育](http://learning.sohu.com/?spm=smpc.content-abroad.nav.7.1739375954055TcEvWsY)
* [时尚](http://fashion.sohu.com/?spm=smpc.content-abroad.nav.8.1739375954055TcEvWsY)
* [科技](http://it.sohu.com/?spm=smpc.content-abroad.nav.9.1739375954055TcEvWsY)
* [财经](http://business.sohu.com/?spm=smpc.content-abroad.nav.10.17393759......</div>
</details>
""" """
def validate_path(): def validate_path():
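The new test string above is essentially the rendered output of the upgraded internet plugin. A hypothetical helper (`format_search_result`, not the plugin's actual code) shows how one search hit maps onto that `<details>` structure, which the `.search_result` CSS rule added later in this change set styles:

```python
def format_search_result(index: int, engine: str, query: str,
                         url: str, content: str) -> str:
    # Fold one search hit into a collapsible <details> block, matching
    # the markup used in the test string above.
    return (
        f"<details>\n"
        f"<summary>第{index}份搜索结果 [源自{engine}搜索] ({query}):</summary>\n"
        f'<div class="search_result">{url}</div>\n'
        f'<div class="search_result">{content}</div>\n'
        f"</details>\n"
    )

print(format_search_result(0, "google", "汤姆·赫兰德",
                           "https://baike.baidu.com/item/...",
                           "Title: 汤姆·赫兰德 ..."))
```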


@@ -48,8 +48,6 @@ if __name__ == "__main__":
# plugin_test(plugin='crazy_functions.下载arxiv论文翻译摘要->下载arxiv论文并翻译摘要', main_input="1812.10695") # plugin_test(plugin='crazy_functions.下载arxiv论文翻译摘要->下载arxiv论文并翻译摘要', main_input="1812.10695")
# plugin_test(plugin='crazy_functions.联网的ChatGPT->连接网络回答问题', main_input="谁是应急食品?")
# plugin_test(plugin='crazy_functions.解析JupyterNotebook->解析ipynb文件', main_input="crazy_functions/test_samples") # plugin_test(plugin='crazy_functions.解析JupyterNotebook->解析ipynb文件', main_input="crazy_functions/test_samples")
# plugin_test(plugin='crazy_functions.数学动画生成manim->动画生成', main_input="A ball split into 2, and then split into 4, and finally split into 8.") # plugin_test(plugin='crazy_functions.数学动画生成manim->动画生成', main_input="A ball split into 2, and then split into 4, and finally split into 8.")


@@ -48,8 +48,6 @@ if __name__ == "__main__":
# plugin_test(plugin='crazy_functions.下载arxiv论文翻译摘要->下载arxiv论文并翻译摘要', main_input="1812.10695") # plugin_test(plugin='crazy_functions.下载arxiv论文翻译摘要->下载arxiv论文并翻译摘要', main_input="1812.10695")
# plugin_test(plugin='crazy_functions.联网的ChatGPT->连接网络回答问题', main_input="谁是应急食品?")
# plugin_test(plugin='crazy_functions.解析JupyterNotebook->解析ipynb文件', main_input="crazy_functions/test_samples") # plugin_test(plugin='crazy_functions.解析JupyterNotebook->解析ipynb文件', main_input="crazy_functions/test_samples")
# plugin_test(plugin='crazy_functions.数学动画生成manim->动画生成', main_input="A ball split into 2, and then split into 4, and finally split into 8.") # plugin_test(plugin='crazy_functions.数学动画生成manim->动画生成', main_input="A ball split into 2, and then split into 4, and finally split into 8.")


@@ -9,7 +9,7 @@ from textwrap import dedent
# TODO: 解决缩进问题 # TODO: 解决缩进问题
find_function_end_prompt = ''' find_function_end_prompt = '''
Below is a page of code that you need to read. This page may not yet be complete; your job is to split this page into sperate functions, class functions etc. Below is a page of code that you need to read. This page may not yet be complete; your job is to split this page into separate functions, class functions etc.
- Provide the line number where the first visible function ends. - Provide the line number where the first visible function ends.
- Provide the line number where the next visible function begins. - Provide the line number where the next visible function begins.
- If there are no other functions in this page, you should simply return the line number of the last line. - If there are no other functions in this page, you should simply return the line number of the last line.
@@ -58,7 +58,7 @@ OUTPUT:
revise_funtion_prompt = ''' revise_function_prompt = '''
You need to read the following code, and revise the code according to following instructions: You need to read the following code, and revise the code according to following instructions:
1. You should analyze the purpose of the functions (if there are any). 1. You should analyze the purpose of the functions (if there are any).
2. You need to add docstring for the provided functions (if there are any). 2. You need to add docstring for the provided functions (if there are any).
@@ -147,7 +147,7 @@ class ContextWindowManager():
history=[], history=[],
sys_prompt="", sys_prompt="",
observe_window=[], observe_window=[],
console_slience=True console_silence=True
) )
def extract_number(text): def extract_number(text):
@@ -240,15 +240,15 @@ class ContextWindowManager():
def tag_code(self, fn): def tag_code(self, fn):
code = ''.join(fn) code = ''.join(fn)
_, n_indent = self.dedent(code) _, n_indent = self.dedent(code)
indent_reminder = "" if n_indent == 0 else "(Reminder: as you can see, this piece of code has indent made up with {n_indent} whitespace, please preseve them in the OUTPUT.)" indent_reminder = "" if n_indent == 0 else "(Reminder: as you can see, this piece of code has indent made up with {n_indent} whitespace, please preserve them in the OUTPUT.)"
self.llm_kwargs['temperature'] = 0 self.llm_kwargs['temperature'] = 0
result = predict_no_ui_long_connection( result = predict_no_ui_long_connection(
inputs=revise_funtion_prompt.format(THE_CODE=code, INDENT_REMINDER=indent_reminder), inputs=revise_function_prompt.format(THE_CODE=code, INDENT_REMINDER=indent_reminder),
llm_kwargs=self.llm_kwargs, llm_kwargs=self.llm_kwargs,
history=[], history=[],
sys_prompt="", sys_prompt="",
observe_window=[], observe_window=[],
console_slience=True console_silence=True
) )
def get_code_block(reply): def get_code_block(reply):
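One detail worth flagging in the `tag_code` hunk: the reminder literal contains `{n_indent}` but carries no `f` prefix, so as written the indent count is never interpolated. Below is a small sketch of the dedent step with the interpolation done explicitly; the `measure_dedent` helper is hypothetical, standing in for the class's `self.dedent`.

```python
from textwrap import dedent as _dedent

def measure_dedent(code: str):
    """Return the dedented code plus the common leading-space count."""
    lines = [l for l in code.splitlines() if l.strip()]
    n_indent = min(len(l) - len(l.lstrip(' ')) for l in lines) if lines else 0
    return _dedent(code), n_indent

code = "    def f():\n        return 1\n"
stripped, n_indent = measure_dedent(code)
indent_reminder = "" if n_indent == 0 else (
    f"(Reminder: as you can see, this piece of code has an indent made up of "
    f"{n_indent} whitespace characters, please preserve them in the OUTPUT.)"
)
print(n_indent)
print(indent_reminder)
```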


@@ -323,3 +323,12 @@
opacity: 0.8; opacity: 0.8;
} }
.search_result {
font-size: smaller;
font-style: italic;
margin: 0px;
padding: 1em;
line-height: 1.5;
text-wrap: wrap;
opacity: 0.8;
}


@@ -598,7 +598,7 @@ function(t) {
default.VIEW_LOGICAL_MAX_BOTTOM, w. default.VIEW_LOGICAL_MAX_BOTTOM, w.
default.VIEW_LOGICAL_MAX_TOP), B.setMaxScale(w. default.VIEW_LOGICAL_MAX_TOP), B.setMaxScale(w.
default.VIEW_MAX_SCALE), B.setMinScale(w. default.VIEW_MAX_SCALE), B.setMinScale(w.
default.VIEW_MIN_SCALE), U = new M.L2DMatrix44, U.multScale(1, i / e), G = new M.L2DMatrix44, G.multTranslate(-i / 2, -e / 2), G.multScale(2 / i, -2 / i), F = v(), (0, D.setContext)(F), !F) return console.error("Failed to create WebGL context."), void(window.WebGLRenderingContext && console.error("Your browser don't support WebGL, check https://get.webgl.org/ for futher information.")); default.VIEW_MIN_SCALE), U = new M.L2DMatrix44, U.multScale(1, i / e), G = new M.L2DMatrix44, G.multTranslate(-i / 2, -e / 2), G.multScale(2 / i, -2 / i), F = v(), (0, D.setContext)(F), !F) return console.error("Failed to create WebGL context."), void(window.WebGLRenderingContext && console.error("Your browser don't support WebGL, check https://get.webgl.org/ for further information."));
window.Live2D.setGL(F), F.clearColor(0, 0, 0, 0), a(t), s() window.Live2D.setGL(F), F.clearColor(0, 0, 0, 0), a(t), s()
} }
function s() { function s() {


@@ -183,7 +183,7 @@ def update_ui(chatbot:ChatBotWithCookies, history:list, msg:str="正常", **kwar
yield cookies, chatbot_gr, json_history, msg yield cookies, chatbot_gr, json_history, msg
def update_ui_lastest_msg(lastmsg:str, chatbot:ChatBotWithCookies, history:list, delay:float=1, msg:str="正常"): # 刷新界面 def update_ui_latest_msg(lastmsg:str, chatbot:ChatBotWithCookies, history:list, delay:float=1, msg:str="正常"): # 刷新界面
""" """
刷新用户界面 刷新用户界面
""" """
@@ -679,7 +679,7 @@ def run_gradio_in_subpath(demo, auth, port, custom_path):
return True return True
if len(path) == 0: if len(path) == 0:
logger.info( logger.info(
"ilegal custom path: {}\npath must not be empty\ndeploy on root url".format( "illegal custom path: {}\npath must not be empty\ndeploy on root url".format(
path path
) )
) )
@@ -690,14 +690,14 @@ def run_gradio_in_subpath(demo, auth, port, custom_path):
return True return True
return False return False
logger.info( logger.info(
"ilegal custom path: {}\npath should begin with '/'\ndeploy on root url".format( "illegal custom path: {}\npath should begin with '/'\ndeploy on root url".format(
path path
) )
) )
return False return False
if not is_path_legal(custom_path): if not is_path_legal(custom_path):
raise RuntimeError("Ilegal custom path") raise RuntimeError("Illegal custom path")
import uvicorn import uvicorn
import gradio as gr import gradio as gr
from fastapi import FastAPI from fastapi import FastAPI
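Finally, the two illegal-path messages above belong to a small validator inside `run_gradio_in_subpath`. Here is a compact sketch reconstructed from those log messages (hypothetical; the real helper may check more cases):

```python
def is_path_legal(path: str) -> bool:
    if path == "/":
        return True                          # root deployment is always fine
    if len(path) == 0:
        print("illegal custom path: {}\npath must not be empty\ndeploy on root url".format(path))
        return False
    if path.startswith("/"):
        return True                          # e.g. "/gpt_academic"
    print("illegal custom path: {}\npath should begin with '/'\ndeploy on root url".format(path))
    return False

for p in ("/", "", "/subpath", "nope"):
    print(repr(p), "->", is_path_legal(p))
```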