merge frontier branch (#1620)

* Zhipu SDK update: adapt to the latest Zhipu SDK and support GLM-4V (#1502)

* Adapt Google Gemini: extract files from the user input

* Adapt to the latest Zhipu SDK, support glm-4v

* requirements.txt fix

* pending history check

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>

* Update "生成多种Mermaid图表" plugin: Separate out the file reading function (#1520)

* Update crazy_functional.py with new functionality to deal with PDFs

* Update crazy_functional.py and Mermaid.py for plugin_kwargs

* Update crazy_functional.py with new chart type: mind map

* Update SELECT_PROMPT and i_say_show_user messages

* Update ArgsReminder message in get_crazy_functions() function

* Update to read md files and update PROMPTS

* Restore the PROMPTS, as testing found the initial version worked best

* Update Mermaid chart generation function

* version 3.71

* Resolve issue #1510

* Remove unnecessary text from sys_prompt in 解析历史输入 function

* Remove sys_prompt message in 解析历史输入 function

* Update bridge_all.py: supports gpt-4-turbo-preview (#1517)

* Update bridge_all.py: supports gpt-4-turbo-preview

supports gpt-4-turbo-preview

* Update bridge_all.py

---------

Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>

* Update config.py: supports gpt-4-turbo-preview (#1516)

* Update config.py: supports gpt-4-turbo-preview

supports gpt-4-turbo-preview

* Update config.py

---------

Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>

* Refactor 解析历史输入 function to handle file input

* Update Mermaid chart generation functionality

* rename files and functions

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
Co-authored-by: hongyi-zhao <hongyi.zhao@gmail.com>
Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>

* Integrate Mathpix OCR functionality (#1468)

* Update Latex输出PDF结果.py

Use Mathpix to translate PDFs into Chinese and recompile the PDF

* Update config.py

add mathpix appid & appkey

* Add 'PDF翻译中文并重新编译PDF' feature to plugins.

---------

Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>

* fix zhipuai

* check picture

* remove glm-4 due to bug

* Modify config

* Check MATHPIX_APPID

* Remove unnecessary code and update function_plugins dictionary

* capture non-standard token overflow

* bug fix #1524

* change mermaid style

* Support Mermaid zoom in/out and reset, via mouse wheel and drag (#1530)

* Support Mermaid zoom in/out and reset, via mouse wheel and drag

* Fine-tuning didn't work out; stage it for now

* update

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>

* ver 3.72

* change live2d

* save the status of `clear btn` in cookie

* Persist front-end selections

* js ui bug fix

* reset btn bug fix

* update live2d tips

* fix missing get_token_num method

* fix live2d toggle switch

* fix persistent custom btn with cookie

* fix zhipuai feedback with core functionality

* Refactor button update and clean up functions

* trailing space removal

* Fix missing MATHPIX_APPID and MATHPIX_APPKEY configuration

* Prompt fix, mind-map prompt optimization (#1537)

* Adapt Google Gemini: extract files from the user input

* Mind-map prompt optimization

* Fix missing MATHPIX_APPID and MATHPIX_APPKEY configuration

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>

* Optimize the "PDF翻译中文并重新编译PDF" plugin (#1602)

* Add gemini_endpoint to API_URL_REDIRECT (#1560)

* Add gemini_endpoint to API_URL_REDIRECT

* Update gemini-pro and gemini-pro-vision model_info endpoints

* Update to support new claude models (#1606)

* Add anthropic library and update claude models

* Update bridge_claude.py: add support for image input and fix some bugs

* Add Claude_3_Models variable to limit the number of images

* Refactor code to improve readability and maintainability

* minor claude bug fix

* more flexible one-api support

* reformat config

* fix one-api new access bug

* dummy

* compat non-standard api

* version 3.73

---------

Co-authored-by: XIao <46100050+Kilig947@users.noreply.github.com>
Co-authored-by: Menghuan1918 <menghuan2003@outlook.com>
Co-authored-by: hongyi-zhao <hongyi.zhao@gmail.com>
Co-authored-by: Hao Ma <893017927@qq.com>
Co-authored-by: zeyuan huang <599012428@qq.com>
Commit c3140ce344 (parent cd18663800) by binary-husky, 2024-03-11 17:26:09 +08:00, committed by GitHub.
85 changed files with 866 additions and 642 deletions.

View File

@@ -8,10 +8,10 @@
具备多线程调用能力的函数:在函数插件中被调用,灵活而简洁
2. predict_no_ui_long_connection(...)
"""
import tiktoken, copy
import tiktoken, copy, re
from functools import lru_cache
from concurrent.futures import ThreadPoolExecutor
from toolbox import get_conf, trimmed_format_exc, apply_gpt_academic_string_mask
from toolbox import get_conf, trimmed_format_exc, apply_gpt_academic_string_mask, read_one_api_model_name
from .bridge_chatgpt import predict_no_ui_long_connection as chatgpt_noui
from .bridge_chatgpt import predict as chatgpt_ui
@@ -61,6 +61,9 @@ API_URL_REDIRECT, AZURE_ENDPOINT, AZURE_ENGINE = get_conf("API_URL_REDIRECT", "A
openai_endpoint = "https://api.openai.com/v1/chat/completions"
api2d_endpoint = "https://openai.api2d.net/v1/chat/completions"
newbing_endpoint = "wss://sydney.bing.com/sydney/ChatHub"
gemini_endpoint = "https://generativelanguage.googleapis.com/v1beta/models"
claude_endpoint = "https://api.anthropic.com"
if not AZURE_ENDPOINT.endswith('/'): AZURE_ENDPOINT += '/'
azure_endpoint = AZURE_ENDPOINT + f'openai/deployments/{AZURE_ENGINE}/chat/completions?api-version=2023-05-15'
# 兼容旧版的配置
@@ -75,7 +78,8 @@ except:
if openai_endpoint in API_URL_REDIRECT: openai_endpoint = API_URL_REDIRECT[openai_endpoint]
if api2d_endpoint in API_URL_REDIRECT: api2d_endpoint = API_URL_REDIRECT[api2d_endpoint]
if newbing_endpoint in API_URL_REDIRECT: newbing_endpoint = API_URL_REDIRECT[newbing_endpoint]
if gemini_endpoint in API_URL_REDIRECT: gemini_endpoint = API_URL_REDIRECT[gemini_endpoint]
if claude_endpoint in API_URL_REDIRECT: claude_endpoint = API_URL_REDIRECT[claude_endpoint]
# 获取tokenizer
tokenizer_gpt35 = LazyloadTiktoken("gpt-3.5-turbo")
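The two new endpoints follow the existing API_URL_REDIRECT convention: if config.py maps an official endpoint to another URL, the mapped value replaces it at startup. A minimal, hedged config sketch; the proxy URLs below are placeholders, not values from this commit:

API_URL_REDIRECT = {
    # redirect Gemini traffic through a self-hosted reverse proxy (hypothetical URL)
    "https://generativelanguage.googleapis.com/v1beta/models": "https://my-proxy.example.com/v1beta/models",
    # redirect Claude traffic the same way (hypothetical URL)
    "https://api.anthropic.com": "https://my-proxy.example.com/anthropic",
}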
@@ -291,7 +295,7 @@ model_info = {
"gemini-pro": {
"fn_with_ui": genai_ui,
"fn_without_ui": genai_noui,
"endpoint": None,
"endpoint": gemini_endpoint,
"max_token": 1024 * 32,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
@@ -299,7 +303,7 @@ model_info = {
"gemini-pro-vision": {
"fn_with_ui": genai_ui,
"fn_without_ui": genai_noui,
"endpoint": None,
"endpoint": gemini_endpoint,
"max_token": 1024 * 32,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
@@ -349,25 +353,57 @@ for model in AVAIL_LLM_MODELS:
model_info.update({model: mi})
# -=-=-=-=-=-=- 以下部分是新加入的模型,可能附带额外依赖 -=-=-=-=-=-=-
if "claude-1-100k" in AVAIL_LLM_MODELS or "claude-2" in AVAIL_LLM_MODELS:
# claude家族
claude_models = ["claude-instant-1.2","claude-2.0","claude-2.1","claude-3-sonnet-20240229","claude-3-opus-20240229"]
if any(item in claude_models for item in AVAIL_LLM_MODELS):
from .bridge_claude import predict_no_ui_long_connection as claude_noui
from .bridge_claude import predict as claude_ui
model_info.update({
"claude-1-100k": {
"claude-instant-1.2": {
"fn_with_ui": claude_ui,
"fn_without_ui": claude_noui,
"endpoint": None,
"max_token": 8196,
"endpoint": claude_endpoint,
"max_token": 100000,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
})
model_info.update({
"claude-2": {
"claude-2.0": {
"fn_with_ui": claude_ui,
"fn_without_ui": claude_noui,
"endpoint": None,
"max_token": 8196,
"endpoint": claude_endpoint,
"max_token": 100000,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
})
model_info.update({
"claude-2.1": {
"fn_with_ui": claude_ui,
"fn_without_ui": claude_noui,
"endpoint": claude_endpoint,
"max_token": 200000,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
})
model_info.update({
"claude-3-sonnet-20240229": {
"fn_with_ui": claude_ui,
"fn_without_ui": claude_noui,
"endpoint": claude_endpoint,
"max_token": 200000,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
})
model_info.update({
"claude-3-opus-20240229": {
"fn_with_ui": claude_ui,
"fn_without_ui": claude_noui,
"endpoint": claude_endpoint,
"max_token": 200000,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
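Each entry registered above is consumed by the shared dispatcher, so a lookup like the following, an illustrative snippet assuming the request_llms package layout, now resolves to the Anthropic endpoint and the enlarged context window:

from request_llms.bridge_all import model_info

info = model_info["claude-3-sonnet-20240229"]
print(info["endpoint"])   # claude_endpoint, or its API_URL_REDIRECT override
print(info["max_token"])  # 200000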
@@ -675,22 +711,28 @@ if "deepseekcoder" in AVAIL_LLM_MODELS: # deepseekcoder
})
except:
print(trimmed_format_exc())
# if "skylark" in AVAIL_LLM_MODELS:
# try:
# from .bridge_skylark2 import predict_no_ui_long_connection as skylark_noui
# from .bridge_skylark2 import predict as skylark_ui
# model_info.update({
# "skylark": {
# "fn_with_ui": skylark_ui,
# "fn_without_ui": skylark_noui,
# "endpoint": None,
# "max_token": 4096,
# "tokenizer": tokenizer_gpt35,
# "token_cnt": get_token_num_gpt35,
# }
# })
# except:
# print(trimmed_format_exc())
# -=-=-=-=-=-=- one-api 对齐支持 -=-=-=-=-=-=-
for model in [m for m in AVAIL_LLM_MODELS if m.startswith("one-api-")]:
# 为了更灵活地接入one-api多模型管理界面设计了此接口例子AVAIL_LLM_MODELS = ["one-api-mixtral-8x7b(max_token=6666)"]
# 其中
# "one-api-" 是前缀(必要)
# "mixtral-8x7b" 是模型名(必要)
# "(max_token=6666)" 是配置(非必要)
try:
_, max_token_tmp = read_one_api_model_name(model)
except:
print(f"one-api模型 {model} 的 max_token 配置不是整数,请检查配置文件。")
continue
model_info.update({
model: {
"fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui,
"endpoint": openai_endpoint,
"max_token": max_token_tmp,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
})
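The "one-api-" naming convention above is parsed by read_one_api_model_name from toolbox; its implementation is not shown in this diff. A plausible sketch of the parsing it relies on (the function name and regex here are illustrative assumptions, not the project's code):

import re

def parse_one_api_model_name(model: str):
    # Split "one-api-<name>(max_token=<n>)" into the bare model name and its
    # max_token setting; fall back to 4096 when the "(max_token=...)" suffix is absent.
    m = re.match(r"one-api-(?P<name>.+?)(\(max_token=(?P<max_token>\d+)\))?$", model)
    if m is None:
        raise ValueError(f"unrecognized one-api model string: {model}")
    return m.group("name"), int(m.group("max_token") or 4096)

# parse_one_api_model_name("one-api-mixtral-8x7b(max_token=6666)") -> ("mixtral-8x7b", 6666)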
# <-- 用于定义和切换多个azure模型 -->

View File

@@ -56,15 +56,15 @@ class GetGLM2Handle(LocalLLMHandle):
query, max_length, top_p, temperature, history = adaptor(kwargs)
for response, history in self._model.stream_chat(self._tokenizer,
query,
history,
for response, history in self._model.stream_chat(self._tokenizer,
query,
history,
max_length=max_length,
top_p=top_p,
temperature=temperature,
):
yield response
def try_to_import_special_deps(self, **kwargs):
# import something that will raise error if the user does not install requirement_*.txt
# 🏃‍♂️🏃‍♂️🏃‍♂️ 主进程执行

View File

@@ -55,15 +55,15 @@ class GetGLM3Handle(LocalLLMHandle):
query, max_length, top_p, temperature, history = adaptor(kwargs)
for response, history in self._model.stream_chat(self._tokenizer,
query,
history,
for response, history in self._model.stream_chat(self._tokenizer,
query,
history,
max_length=max_length,
top_p=top_p,
temperature=temperature,
):
yield response
def try_to_import_special_deps(self, **kwargs):
# import something that will raise error if the user does not install requirement_*.txt
# 🏃‍♂️🏃‍♂️🏃‍♂️ 主进程执行

View File

@@ -37,7 +37,7 @@ class GetGLMFTHandle(Process):
self.check_dependency()
self.start()
self.threadLock = threading.Lock()
def check_dependency(self):
try:
import sentencepiece
@@ -101,7 +101,7 @@ class GetGLMFTHandle(Process):
break
except Exception as e:
retry += 1
if retry > 3:
if retry > 3:
self.child.send('[Local Message] Call ChatGLMFT fail 不能正常加载ChatGLMFT的参数。')
raise RuntimeError("不能正常加载ChatGLMFT的参数")
@@ -113,7 +113,7 @@ class GetGLMFTHandle(Process):
for response, history in self.chatglmft_model.stream_chat(self.chatglmft_tokenizer, **kwargs):
self.child.send(response)
# # 中途接收可能的终止指令(如果有的话)
# if self.child.poll():
# if self.child.poll():
# command = self.child.recv()
# if command == '[Terminate]': break
except:
@@ -133,7 +133,7 @@ class GetGLMFTHandle(Process):
else:
break
self.threadLock.release()
global glmft_handle
glmft_handle = None
#################################################################################
@@ -146,7 +146,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
if glmft_handle is None:
glmft_handle = GetGLMFTHandle()
if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + glmft_handle.info
if not glmft_handle.success:
if not glmft_handle.success:
error = glmft_handle.info
glmft_handle = None
raise RuntimeError(error)
@@ -161,7 +161,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
response = ""
for response in glmft_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
if len(observe_window) >= 1: observe_window[0] = response
if len(observe_window) >= 2:
if len(observe_window) >= 2:
if (time.time()-observe_window[1]) > watch_dog_patience:
raise RuntimeError("程序终止。")
return response
@@ -180,7 +180,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
glmft_handle = GetGLMFTHandle()
chatbot[-1] = (inputs, load_message + "\n\n" + glmft_handle.info)
yield from update_ui(chatbot=chatbot, history=[])
if not glmft_handle.success:
if not glmft_handle.success:
glmft_handle = None
return

View File

@@ -59,7 +59,7 @@ class GetONNXGLMHandle(LocalLLMHandle):
temperature=temperature,
):
yield answer
def try_to_import_special_deps(self, **kwargs):
# import something that will raise error if the user does not install requirement_*.txt
# 🏃‍♂️🏃‍♂️🏃‍♂️ 子进程执行

View File

@@ -21,7 +21,7 @@ import random
# config_private.py放自己的秘密如API和代理网址
# 读取时首先看是否存在私密的config_private配置文件不受git管控如果有则覆盖原config文件
from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history, trimmed_format_exc, is_the_upload_folder
from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history, trimmed_format_exc, is_the_upload_folder, read_one_api_model_name
proxies, TIMEOUT_SECONDS, MAX_RETRY, API_ORG, AZURE_CFG_ARRAY = \
get_conf('proxies', 'TIMEOUT_SECONDS', 'MAX_RETRY', 'API_ORG', 'AZURE_CFG_ARRAY')
@@ -358,6 +358,9 @@ def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
model = llm_kwargs['llm_model']
if llm_kwargs['llm_model'].startswith('api2d-'):
model = llm_kwargs['llm_model'][len('api2d-'):]
if llm_kwargs['llm_model'].startswith('one-api-'):
model = llm_kwargs['llm_model'][len('one-api-'):]
model, _ = read_one_api_model_name(model)
if model == "gpt-3.5-random": # 随机选择, 绕过openai访问频率限制
model = random.choice([

View File

@@ -27,7 +27,7 @@ timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check
def report_invalid_key(key):
if get_conf("BLOCK_INVALID_APIKEY"):
if get_conf("BLOCK_INVALID_APIKEY"):
# 实验性功能自动检测并屏蔽失效的KEY请勿使用
from request_llms.key_manager import ApiKeyManager
api_key = ApiKeyManager().add_key_to_blacklist(key)
@@ -51,13 +51,13 @@ def decode_chunk(chunk):
choice_valid = False
has_content = False
has_role = False
try:
try:
chunkjson = json.loads(chunk_decoded[6:])
has_choices = 'choices' in chunkjson
if has_choices: choice_valid = (len(chunkjson['choices']) > 0)
if has_choices and choice_valid: has_content = "content" in chunkjson['choices'][0]["delta"]
if has_choices and choice_valid: has_role = "role" in chunkjson['choices'][0]["delta"]
except:
except:
pass
return chunk_decoded, chunkjson, has_choices, choice_valid, has_content, has_role
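For reference, the kind of SSE line decode_chunk inspects looks like the following; the payload is illustrative, not captured from a real response:

import json

chunk = b'data: {"choices": [{"delta": {"role": "assistant", "content": "Hi"}}]}'
chunk_decoded = chunk.decode()
chunkjson = json.loads(chunk_decoded[6:])                      # strip the leading "data: "
has_choices = 'choices' in chunkjson                           # True
choice_valid = len(chunkjson['choices']) > 0                   # True
has_content = "content" in chunkjson['choices'][0]["delta"]    # True
has_role = "role" in chunkjson['choices'][0]["delta"]          # True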
@@ -103,7 +103,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
raw_input = inputs
logging.info(f'[raw_input] {raw_input}')
def make_media_input(inputs, image_paths):
def make_media_input(inputs, image_paths):
for image_path in image_paths:
inputs = inputs + f'<br/><br/><div align="center"><img src="file={os.path.abspath(image_path)}"></div>'
return inputs
@@ -122,7 +122,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
chatbot[-1] = (inputs, f"您提供的api-key不满足要求不包含任何可用于{llm_kwargs['llm_model']}的api-key。您可能选择了错误的模型或请求源。")
yield from update_ui(chatbot=chatbot, history=history, msg="api-key不满足要求") # 刷新界面
return
# 检查endpoint是否合法
try:
from .bridge_all import model_info
@@ -150,7 +150,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
if retry > MAX_RETRY: raise TimeoutError
gpt_replying_buffer = ""
is_head_of_the_stream = True
if stream:
stream_response = response.iter_lines()
@@ -162,21 +162,21 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
chunk_decoded = chunk.decode()
error_msg = chunk_decoded
# 首先排除一个one-api没有done数据包的第三方Bug情形
if len(gpt_replying_buffer.strip()) > 0 and len(error_msg) == 0:
if len(gpt_replying_buffer.strip()) > 0 and len(error_msg) == 0:
yield from update_ui(chatbot=chatbot, history=history, msg="检测到有缺陷的非OpenAI官方接口建议选择更稳定的接口。")
break
# 其他情况,直接返回报错
chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg, api_key)
yield from update_ui(chatbot=chatbot, history=history, msg="非OpenAI官方接口返回了错误:" + chunk.decode()) # 刷新界面
return
# 提前读取一些信息 (用于判断异常)
chunk_decoded, chunkjson, has_choices, choice_valid, has_content, has_role = decode_chunk(chunk)
if is_head_of_the_stream and (r'"object":"error"' not in chunk_decoded) and (r"content" not in chunk_decoded):
# 数据流的第一帧不携带content
is_head_of_the_stream = False; continue
if chunk:
try:
if has_choices and not choice_valid:
@@ -220,7 +220,7 @@ def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg,
openai_website = ' 请登录OpenAI查看详情 https://platform.openai.com/signup'
if "reduce the length" in error_msg:
if len(history) >= 2: history[-1] = ""; history[-2] = "" # 清除当前溢出的输入history[-2] 是本次输入, history[-1] 是本次输出
history = clip_history(inputs=inputs, history=history, tokenizer=model_info[llm_kwargs['llm_model']]['tokenizer'],
history = clip_history(inputs=inputs, history=history, tokenizer=model_info[llm_kwargs['llm_model']]['tokenizer'],
max_token_limit=(model_info[llm_kwargs['llm_model']]['max_token'])) # history至少释放二分之一
chatbot[-1] = (chatbot[-1][0], "[Local Message] Reduce the length. 本次输入过长, 或历史数据过长. 历史缓存数据已部分释放, 您可以请再次尝试. (若再次失败则更可能是因为输入过长.)")
elif "does not exist" in error_msg:
@@ -260,7 +260,7 @@ def generate_payload(inputs, llm_kwargs, history, system_prompt, image_paths):
"Authorization": f"Bearer {api_key}"
}
if API_ORG.startswith('org-'): headers.update({"OpenAI-Organization": API_ORG})
if llm_kwargs['llm_model'].startswith('azure-'):
if llm_kwargs['llm_model'].startswith('azure-'):
headers.update({"api-key": api_key})
if llm_kwargs['llm_model'] in AZURE_CFG_ARRAY.keys():
azure_api_key_unshared = AZURE_CFG_ARRAY[llm_kwargs['llm_model']]["AZURE_API_KEY"]
@@ -294,7 +294,7 @@ def generate_payload(inputs, llm_kwargs, history, system_prompt, image_paths):
payload = {
"model": model,
"messages": messages,
"messages": messages,
"temperature": llm_kwargs['temperature'], # 1.0,
"top_p": llm_kwargs['top_p'], # 1.0,
"n": 1,

View File

@@ -73,12 +73,12 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
result = ''
while True:
try: chunk = next(stream_response).decode()
except StopIteration:
except StopIteration:
break
except requests.exceptions.ConnectionError:
chunk = next(stream_response).decode() # 失败了,重试一次?再失败就没办法了。
if len(chunk)==0: continue
if not chunk.startswith('data:'):
if not chunk.startswith('data:'):
error_msg = get_full_error(chunk.encode('utf8'), stream_response).decode()
if "reduce the length" in error_msg:
raise ConnectionAbortedError("OpenAI拒绝了请求:" + error_msg)
@@ -89,14 +89,14 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
delta = json_data["delta"]
if len(delta) == 0: break
if "role" in delta: continue
if "content" in delta:
if "content" in delta:
result += delta["content"]
if not console_slience: print(delta["content"], end='')
if observe_window is not None:
if observe_window is not None:
# 观测窗,把已经获取的数据显示出去
if len(observe_window) >= 1: observe_window[0] += delta["content"]
# 看门狗,如果超过期限没有喂狗,则终止
if len(observe_window) >= 2:
if len(observe_window) >= 2:
if (time.time()-observe_window[1]) > watch_dog_patience:
raise RuntimeError("用户取消了程序。")
else: raise RuntimeError("意外Json结构"+delta)
@@ -132,7 +132,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
chatbot[-1] = (inputs, f"您提供的api-key不满足要求不包含任何可用于{llm_kwargs['llm_model']}的api-key。您可能选择了错误的模型或请求源。")
yield from update_ui(chatbot=chatbot, history=history, msg="api-key不满足要求") # 刷新界面
return
history.append(inputs); history.append("")
retry = 0
@@ -151,7 +151,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
if retry > MAX_RETRY: raise TimeoutError
gpt_replying_buffer = ""
is_head_of_the_stream = True
if stream:
stream_response = response.iter_lines()
@@ -165,12 +165,12 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
yield from update_ui(chatbot=chatbot, history=history, msg="非Openai官方接口返回了错误:" + chunk.decode()) # 刷新界面
return
# print(chunk.decode()[6:])
if is_head_of_the_stream and (r'"object":"error"' not in chunk.decode()):
# 数据流的第一帧不携带content
is_head_of_the_stream = False; continue
if chunk:
try:
chunk_decoded = chunk.decode()
@@ -203,7 +203,7 @@ def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
openai_website = ' 请登录OpenAI查看详情 https://platform.openai.com/signup'
if "reduce the length" in error_msg:
if len(history) >= 2: history[-1] = ""; history[-2] = "" # 清除当前溢出的输入history[-2] 是本次输入, history[-1] 是本次输出
history = clip_history(inputs=inputs, history=history, tokenizer=model_info[llm_kwargs['llm_model']]['tokenizer'],
history = clip_history(inputs=inputs, history=history, tokenizer=model_info[llm_kwargs['llm_model']]['tokenizer'],
max_token_limit=(model_info[llm_kwargs['llm_model']]['max_token'])) # history至少释放二分之一
chatbot[-1] = (chatbot[-1][0], "[Local Message] Reduce the length. 本次输入过长, 或历史数据过长. 历史缓存数据已部分释放, 您可以请再次尝试. (若再次失败则更可能是因为输入过长.)")
# history = [] # 清除历史
@@ -264,7 +264,7 @@ def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
payload = {
"model": llm_kwargs['llm_model'].strip('api2d-'),
"messages": messages,
"messages": messages,
"temperature": llm_kwargs['temperature'], # 1.0,
"top_p": llm_kwargs['top_p'], # 1.0,
"n": 1,

View File

@@ -11,13 +11,12 @@
"""
import os
import json
import time
import gradio as gr
import logging
import traceback
import requests
import importlib
from toolbox import get_conf, update_ui, trimmed_format_exc, encode_image, every_image_file_in_path
picture_system_prompt = "\n当回复图像时,必须说明正在回复哪张图像。所有图像仅在最后一个问题中提供,即使它们在历史记录中被提及。请使用'这是第X张图像:'的格式来指明您正在描述的是哪张图像。"
Claude_3_Models = ["claude-3-sonnet-20240229", "claude-3-opus-20240229"]
# config_private.py放自己的秘密如API和代理网址
# 读取时首先看是否存在私密的config_private配置文件不受git管控如果有则覆盖原config文件
@@ -56,7 +55,8 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
"""
from anthropic import Anthropic
watch_dog_patience = 5 # 看门狗的耐心, 设置5秒即可
prompt = generate_payload(inputs, llm_kwargs, history, system_prompt=sys_prompt, stream=True)
if inputs == "": inputs = "空空如也的输入栏"
message = generate_payload(inputs, llm_kwargs, history, stream=True, image_paths=None)
retry = 0
if len(ANTHROPIC_API_KEY) == 0:
raise RuntimeError("没有设置ANTHROPIC_API_KEY选项")
@@ -65,15 +65,16 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
try:
# make a POST request to the API endpoint, stream=False
from .bridge_all import model_info
anthropic = Anthropic(api_key=ANTHROPIC_API_KEY)
anthropic = Anthropic(api_key=ANTHROPIC_API_KEY, base_url=model_info[llm_kwargs['llm_model']]['endpoint'])
# endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
# with ProxyNetworkActivate()
stream = anthropic.completions.create(
prompt=prompt,
max_tokens_to_sample=4096, # The maximum number of tokens to generate before stopping.
stream = anthropic.messages.create(
messages=message,
max_tokens=4096, # The maximum number of tokens to generate before stopping.
model=llm_kwargs['llm_model'],
stream=True,
temperature = llm_kwargs['temperature']
temperature = llm_kwargs['temperature'],
system=sys_prompt
)
break
except Exception as e:
@@ -82,15 +83,19 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
if retry > MAX_RETRY: raise TimeoutError
if MAX_RETRY!=0: print(f'请求超时,正在重试 ({retry}/{MAX_RETRY}) ……')
result = ''
try:
try:
for completion in stream:
result += completion.completion
if not console_slience: print(completion.completion, end='')
if observe_window is not None:
if completion.type == "message_start" or completion.type == "content_block_start":
continue
elif completion.type == "message_stop" or completion.type == "content_block_stop" or completion.type == "message_delta":
break
result += completion.delta.text
if not console_slience: print(completion.delta.text, end='')
if observe_window is not None:
# 观测窗,把已经获取的数据显示出去
if len(observe_window) >= 1: observe_window[0] += completion.completion
if len(observe_window) >= 1: observe_window[0] += completion.delta.text
# 看门狗,如果超过期限没有喂狗,则终止
if len(observe_window) >= 2:
if len(observe_window) >= 2:
if (time.time()-observe_window[1]) > watch_dog_patience:
raise RuntimeError("用户取消了程序。")
except Exception as e:
@@ -98,6 +103,10 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
return result
def make_media_input(history,inputs,image_paths):
for image_path in image_paths:
inputs = inputs + f'<br/><br/><div align="center"><img src="file={os.path.abspath(image_path)}"></div>'
return inputs
def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
"""
@@ -109,23 +118,34 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
chatbot 为WebUI中显示的对话列表修改它然后yeild出去可以直接修改对话界面内容
additional_fn代表点击的哪个按钮按钮见functional.py
"""
if inputs == "": inputs = "空空如也的输入栏"
from anthropic import Anthropic
if len(ANTHROPIC_API_KEY) == 0:
chatbot.append((inputs, "没有设置ANTHROPIC_API_KEY"))
yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面
return
if additional_fn is not None:
from core_functional import handle_core_functionality
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
raw_input = inputs
logging.info(f'[raw_input] {raw_input}')
chatbot.append((inputs, ""))
yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面
have_recent_file, image_paths = every_image_file_in_path(chatbot)
if len(image_paths) > 20:
chatbot.append((inputs, "图片数量超过api上限(20张)"))
yield from update_ui(chatbot=chatbot, history=history, msg="等待响应")
return
if any([llm_kwargs['llm_model'] == model for model in Claude_3_Models]) and have_recent_file:
if inputs == "" or inputs == "空空如也的输入栏": inputs = "请描述给出的图片"
system_prompt += picture_system_prompt # 由于没有单独的参数保存包含图片的历史,所以只能通过提示词对第几张图片进行定位
chatbot.append((make_media_input(history,inputs, image_paths), ""))
yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面
else:
chatbot.append((inputs, ""))
yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面
try:
prompt = generate_payload(inputs, llm_kwargs, history, system_prompt, stream)
message = generate_payload(inputs, llm_kwargs, history, stream, image_paths)
except RuntimeError as e:
chatbot[-1] = (inputs, f"您提供的api-key不满足要求不包含任何可用于{llm_kwargs['llm_model']}的api-key。您可能选择了错误的模型或请求源。")
yield from update_ui(chatbot=chatbot, history=history, msg="api-key不满足要求") # 刷新界面
@@ -138,17 +158,17 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
try:
# make a POST request to the API endpoint, stream=True
from .bridge_all import model_info
anthropic = Anthropic(api_key=ANTHROPIC_API_KEY)
anthropic = Anthropic(api_key=ANTHROPIC_API_KEY, base_url=model_info[llm_kwargs['llm_model']]['endpoint'])
# endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
# with ProxyNetworkActivate()
stream = anthropic.completions.create(
prompt=prompt,
max_tokens_to_sample=4096, # The maximum number of tokens to generate before stopping.
stream = anthropic.messages.create(
messages=message,
max_tokens=4096, # The maximum number of tokens to generate before stopping.
model=llm_kwargs['llm_model'],
stream=True,
temperature = llm_kwargs['temperature']
temperature = llm_kwargs['temperature'],
system=system_prompt
)
break
except:
retry += 1
@@ -158,10 +178,14 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
if retry > MAX_RETRY: raise TimeoutError
gpt_replying_buffer = ""
for completion in stream:
if completion.type == "message_start" or completion.type == "content_block_start":
continue
elif completion.type == "message_stop" or completion.type == "content_block_stop" or completion.type == "message_delta":
break
try:
gpt_replying_buffer = gpt_replying_buffer + completion.completion
gpt_replying_buffer = gpt_replying_buffer + completion.delta.text
history[-1] = gpt_replying_buffer
chatbot[-1] = (history[-2], history[-1])
yield from update_ui(chatbot=chatbot, history=history, msg='正常') # 刷新界面
@@ -172,57 +196,52 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str}")
yield from update_ui(chatbot=chatbot, history=history, msg="Json异常" + tb_str) # 刷新界面
return
# https://github.com/jtsang4/claude-to-chatgpt/blob/main/claude_to_chatgpt/adapter.py
def convert_messages_to_prompt(messages):
prompt = ""
role_map = {
"system": "Human",
"user": "Human",
"assistant": "Assistant",
}
for message in messages:
role = message["role"]
content = message["content"]
transformed_role = role_map[role]
prompt += f"\n\n{transformed_role.capitalize()}: {content}"
prompt += "\n\nAssistant: "
return prompt
def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
def generate_payload(inputs, llm_kwargs, history, stream, image_paths):
"""
整合所有信息选择LLM模型生成http请求为发送请求做准备
"""
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT
conversation_cnt = len(history) // 2
messages = [{"role": "system", "content": system_prompt}]
messages = []
if conversation_cnt:
for index in range(0, 2*conversation_cnt, 2):
what_i_have_asked = {}
what_i_have_asked["role"] = "user"
what_i_have_asked["content"] = history[index]
what_i_have_asked["content"] = [{"type": "text", "text": history[index]}]
what_gpt_answer = {}
what_gpt_answer["role"] = "assistant"
what_gpt_answer["content"] = history[index+1]
if what_i_have_asked["content"] != "":
if what_gpt_answer["content"] == "": continue
if what_gpt_answer["content"] == timeout_bot_msg: continue
what_gpt_answer["content"] = [{"type": "text", "text": history[index+1]}]
if what_i_have_asked["content"][0]["text"] != "":
if what_i_have_asked["content"][0]["text"] == "": continue
if what_i_have_asked["content"][0]["text"] == timeout_bot_msg: continue
messages.append(what_i_have_asked)
messages.append(what_gpt_answer)
else:
messages[-1]['content'] = what_gpt_answer['content']
messages[-1]['content'][0]['text'] = what_gpt_answer['content'][0]['text']
what_i_ask_now = {}
what_i_ask_now["role"] = "user"
what_i_ask_now["content"] = inputs
if any([llm_kwargs['llm_model'] == model for model in Claude_3_Models]) and image_paths:
base64_images = []
for image_path in image_paths:
base64_images.append(encode_image(image_path))
what_i_ask_now = {}
what_i_ask_now["role"] = "user"
what_i_ask_now["content"] = []
for base64_image in base64_images:
what_i_ask_now["content"].append({
"type": "image",
"source": {
"type": "base64",
"media_type": "image/jpeg",
"data": base64_image,
}
})
what_i_ask_now["content"].append({"type": "text", "text": inputs})
else:
what_i_ask_now = {}
what_i_ask_now["role"] = "user"
what_i_ask_now["content"] = [{"type": "text", "text": inputs}]
messages.append(what_i_ask_now)
prompt = convert_messages_to_prompt(messages)
return prompt
return messages
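The calls above replace the legacy anthropic.completions.create prompt API with the Messages API. A minimal standalone sketch of the same streaming pattern, assuming only the anthropic SDK; the API key, model, and prompt are placeholders:

from anthropic import Anthropic

client = Anthropic(api_key="sk-ant-...")  # placeholder key

stream = client.messages.create(
    model="claude-3-sonnet-20240229",
    max_tokens=4096,
    system="You are a helpful assistant.",
    messages=[{"role": "user", "content": [{"type": "text", "text": "Hello"}]}],
    stream=True,
)

result = ""
for event in stream:
    # Skip the framing events and stop at the end of the message,
    # the same pattern used in bridge_claude.py above.
    if event.type in ("message_start", "content_block_start"):
        continue
    if event.type in ("message_stop", "content_block_stop", "message_delta"):
        break
    result += event.delta.text  # content_block_delta events carry the text
print(result)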

View File

@@ -88,7 +88,7 @@ class GetCoderLMHandle(LocalLLMHandle):
temperature = kwargs['temperature']
history = kwargs['history']
return query, max_length, top_p, temperature, history
query, max_length, top_p, temperature, history = adaptor(kwargs)
history.append({ 'role': 'user', 'content': query})
messages = history
@@ -97,14 +97,14 @@ class GetCoderLMHandle(LocalLLMHandle):
inputs = inputs[:, -max_length:]
inputs = inputs.to(self._model.device)
generation_kwargs = dict(
inputs=inputs,
inputs=inputs,
max_new_tokens=max_length,
do_sample=False,
top_p=top_p,
streamer = self._streamer,
top_k=50,
temperature=temperature,
num_return_sequences=1,
num_return_sequences=1,
eos_token_id=32021,
)
thread = Thread(target=self._model.generate, kwargs=generation_kwargs, daemon=True)

View File

@@ -20,7 +20,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
if get_conf("GEMINI_API_KEY") == "":
raise ValueError(f"请配置 GEMINI_API_KEY。")
genai = GoogleChatInit()
genai = GoogleChatInit(llm_kwargs)
watch_dog_patience = 5 # 看门狗的耐心, 设置5秒即可
gpt_replying_buffer = ''
stream_response = genai.generate_chat(inputs, llm_kwargs, history, sys_prompt)
@@ -61,7 +61,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
chatbot.append((inputs, "没有检测到任何近期上传的图像文件请上传jpg格式的图片此外请注意拓展名需要小写"))
yield from update_ui(chatbot=chatbot, history=history, msg="等待图片") # 刷新界面
return
def make_media_input(inputs, image_paths):
def make_media_input(inputs, image_paths):
for image_path in image_paths:
inputs = inputs + f'<br/><br/><div align="center"><img src="file={os.path.abspath(image_path)}"></div>'
return inputs
@@ -70,7 +70,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
chatbot.append((inputs, ""))
yield from update_ui(chatbot=chatbot, history=history)
genai = GoogleChatInit()
genai = GoogleChatInit(llm_kwargs)
retry = 0
while True:
try:

View File

@@ -82,7 +82,7 @@ class GetInternlmHandle(LocalLLMHandle):
history = kwargs['history']
real_prompt = combine_history(prompt, history)
return model, tokenizer, real_prompt, max_length, top_p, temperature
model, tokenizer, prompt, max_length, top_p, temperature = adaptor()
prefix_allowed_tokens_fn = None
logits_processor = None
@@ -183,7 +183,7 @@ class GetInternlmHandle(LocalLLMHandle):
outputs, model_kwargs, is_encoder_decoder=False
)
unfinished_sequences = unfinished_sequences.mul((min(next_tokens != i for i in eos_token_id)).long())
output_token_ids = input_ids[0].cpu().tolist()
output_token_ids = output_token_ids[input_length:]
for each_eos_token_id in eos_token_id:
@@ -196,7 +196,7 @@ class GetInternlmHandle(LocalLLMHandle):
if unfinished_sequences.max() == 0 or stopping_criteria(input_ids, scores):
return
# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 GPT-Academic Interface
# ------------------------------------------------------------------------------------------------------------------------

View File

@@ -20,7 +20,7 @@ class GetGLMHandle(Process):
self.check_dependency()
self.start()
self.threadLock = threading.Lock()
def check_dependency(self):
try:
import pandas
@@ -102,7 +102,7 @@ class GetGLMHandle(Process):
else:
break
self.threadLock.release()
global llama_glm_handle
llama_glm_handle = None
#################################################################################
@@ -115,7 +115,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
if llama_glm_handle is None:
llama_glm_handle = GetGLMHandle()
if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + llama_glm_handle.info
if not llama_glm_handle.success:
if not llama_glm_handle.success:
error = llama_glm_handle.info
llama_glm_handle = None
raise RuntimeError(error)
@@ -130,7 +130,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
for response in llama_glm_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
print(response)
if len(observe_window) >= 1: observe_window[0] = response
if len(observe_window) >= 2:
if len(observe_window) >= 2:
if (time.time()-observe_window[1]) > watch_dog_patience:
raise RuntimeError("程序终止。")
return response
@@ -149,7 +149,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
llama_glm_handle = GetGLMHandle()
chatbot[-1] = (inputs, load_message + "\n\n" + llama_glm_handle.info)
yield from update_ui(chatbot=chatbot, history=[])
if not llama_glm_handle.success:
if not llama_glm_handle.success:
llama_glm_handle = None
return

View File

@@ -20,7 +20,7 @@ class GetGLMHandle(Process):
self.check_dependency()
self.start()
self.threadLock = threading.Lock()
def check_dependency(self):
try:
import pandas
@@ -102,7 +102,7 @@ class GetGLMHandle(Process):
else:
break
self.threadLock.release()
global pangu_glm_handle
pangu_glm_handle = None
#################################################################################
@@ -115,7 +115,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
if pangu_glm_handle is None:
pangu_glm_handle = GetGLMHandle()
if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + pangu_glm_handle.info
if not pangu_glm_handle.success:
if not pangu_glm_handle.success:
error = pangu_glm_handle.info
pangu_glm_handle = None
raise RuntimeError(error)
@@ -130,7 +130,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
for response in pangu_glm_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
print(response)
if len(observe_window) >= 1: observe_window[0] = response
if len(observe_window) >= 2:
if len(observe_window) >= 2:
if (time.time()-observe_window[1]) > watch_dog_patience:
raise RuntimeError("程序终止。")
return response
@@ -149,7 +149,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
pangu_glm_handle = GetGLMHandle()
chatbot[-1] = (inputs, load_message + "\n\n" + pangu_glm_handle.info)
yield from update_ui(chatbot=chatbot, history=[])
if not pangu_glm_handle.success:
if not pangu_glm_handle.success:
pangu_glm_handle = None
return

View File

@@ -20,7 +20,7 @@ class GetGLMHandle(Process):
self.check_dependency()
self.start()
self.threadLock = threading.Lock()
def check_dependency(self):
try:
import pandas
@@ -102,7 +102,7 @@ class GetGLMHandle(Process):
else:
break
self.threadLock.release()
global rwkv_glm_handle
rwkv_glm_handle = None
#################################################################################
@@ -115,7 +115,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
if rwkv_glm_handle is None:
rwkv_glm_handle = GetGLMHandle()
if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + rwkv_glm_handle.info
if not rwkv_glm_handle.success:
if not rwkv_glm_handle.success:
error = rwkv_glm_handle.info
rwkv_glm_handle = None
raise RuntimeError(error)
@@ -130,7 +130,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
for response in rwkv_glm_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
print(response)
if len(observe_window) >= 1: observe_window[0] = response
if len(observe_window) >= 2:
if len(observe_window) >= 2:
if (time.time()-observe_window[1]) > watch_dog_patience:
raise RuntimeError("程序终止。")
return response
@@ -149,7 +149,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
rwkv_glm_handle = GetGLMHandle()
chatbot[-1] = (inputs, load_message + "\n\n" + rwkv_glm_handle.info)
yield from update_ui(chatbot=chatbot, history=[])
if not rwkv_glm_handle.success:
if not rwkv_glm_handle.success:
rwkv_glm_handle = None
return

View File

@@ -48,7 +48,7 @@ class GetLlamaHandle(LocalLLMHandle):
history = kwargs['history']
console_slience = kwargs.get('console_slience', True)
return query, max_length, top_p, temperature, history, console_slience
def convert_messages_to_prompt(query, history):
prompt = ""
for a, b in history:
@@ -56,7 +56,7 @@ class GetLlamaHandle(LocalLLMHandle):
prompt += "\n{b}" + b
prompt += f"\n[INST]{query}[/INST]"
return prompt
query, max_length, top_p, temperature, history, console_slience = adaptor(kwargs)
prompt = convert_messages_to_prompt(query, history)
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=--=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--=-=-
@@ -70,13 +70,13 @@ class GetLlamaHandle(LocalLLMHandle):
thread = Thread(target=self._model.generate, kwargs=generation_kwargs)
thread.start()
generated_text = ""
for new_text in streamer:
for new_text in streamer:
generated_text += new_text
if not console_slience: print(new_text, end='')
yield generated_text.lstrip(prompt_tk_back).rstrip("</s>")
if not console_slience: print()
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=--=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--=-=-
def try_to_import_special_deps(self, **kwargs):
# import something that will raise error if the user does not install requirement_*.txt
# 🏃‍♂️🏃‍♂️🏃‍♂️ 主进程执行

View File

@@ -18,7 +18,7 @@ class GetGLMHandle(Process):
if self.check_dependency():
self.start()
self.threadLock = threading.Lock()
def check_dependency(self): # 主进程执行
try:
import datasets, os
@@ -54,9 +54,9 @@ class GetGLMHandle(Process):
from models.tokenization_moss import MossTokenizer
parser = argparse.ArgumentParser()
parser.add_argument("--model_name", default="fnlp/moss-moon-003-sft-int4",
choices=["fnlp/moss-moon-003-sft",
"fnlp/moss-moon-003-sft-int8",
parser.add_argument("--model_name", default="fnlp/moss-moon-003-sft-int4",
choices=["fnlp/moss-moon-003-sft",
"fnlp/moss-moon-003-sft-int8",
"fnlp/moss-moon-003-sft-int4"], type=str)
parser.add_argument("--gpu", default="0", type=str)
args = parser.parse_args()
@@ -76,7 +76,7 @@ class GetGLMHandle(Process):
config = MossConfig.from_pretrained(model_path)
self.tokenizer = MossTokenizer.from_pretrained(model_path)
if num_gpus > 1:
if num_gpus > 1:
print("Waiting for all devices to be ready, it may take a few minutes...")
with init_empty_weights():
raw_model = MossForCausalLM._from_config(config, torch_dtype=torch.float16)
@@ -135,15 +135,15 @@ class GetGLMHandle(Process):
inputs = self.tokenizer(self.prompt, return_tensors="pt")
with torch.no_grad():
outputs = self.model.generate(
inputs.input_ids.cuda(),
attention_mask=inputs.attention_mask.cuda(),
max_length=2048,
do_sample=True,
top_k=40,
top_p=0.8,
inputs.input_ids.cuda(),
attention_mask=inputs.attention_mask.cuda(),
max_length=2048,
do_sample=True,
top_k=40,
top_p=0.8,
temperature=0.7,
repetition_penalty=1.02,
num_return_sequences=1,
num_return_sequences=1,
eos_token_id=106068,
pad_token_id=self.tokenizer.pad_token_id)
response = self.tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
@@ -167,7 +167,7 @@ class GetGLMHandle(Process):
else:
break
self.threadLock.release()
global moss_handle
moss_handle = None
#################################################################################
@@ -180,7 +180,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
if moss_handle is None:
moss_handle = GetGLMHandle()
if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + moss_handle.info
if not moss_handle.success:
if not moss_handle.success:
error = moss_handle.info
moss_handle = None
raise RuntimeError(error)
@@ -194,7 +194,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
response = ""
for response in moss_handle.stream_chat(query=inputs, history=history_feedin, sys_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
if len(observe_window) >= 1: observe_window[0] = response
if len(observe_window) >= 2:
if len(observe_window) >= 2:
if (time.time()-observe_window[1]) > watch_dog_patience:
raise RuntimeError("程序终止。")
return response
@@ -213,7 +213,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
moss_handle = GetGLMHandle()
chatbot[-1] = (inputs, load_message + "\n\n" + moss_handle.info)
yield from update_ui(chatbot=chatbot, history=[])
if not moss_handle.success:
if not moss_handle.success:
moss_handle = None
return
else:

View File

@@ -45,7 +45,7 @@ class GetQwenLMHandle(LocalLLMHandle):
for response in self._model.chat_stream(self._tokenizer, query, history=history):
yield response
def try_to_import_special_deps(self, **kwargs):
# import something that will raise error if the user does not install requirement_*.txt
# 🏃‍♂️🏃‍♂️🏃‍♂️ 主进程执行

View File

@@ -76,7 +76,7 @@ async def run(context, max_token, temperature, top_p, addr, port):
pass
elif content["msg"] in ["process_generating", "process_completed"]:
yield content["output"]["data"][0]
# You can search for your desired end indicator and
# You can search for your desired end indicator and
# stop generation by closing the websocket here
if (content["msg"] == "process_completed"):
break
@@ -117,12 +117,12 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
async def get_result(mutable):
# "tgui:galactica-1.3b@localhost:7860"
async for response in run(context=prompt, max_token=llm_kwargs['max_length'],
temperature=llm_kwargs['temperature'],
async for response in run(context=prompt, max_token=llm_kwargs['max_length'],
temperature=llm_kwargs['temperature'],
top_p=llm_kwargs['top_p'], addr=addr, port=port):
print(response[len(mutable[0]):])
mutable[0] = response
if (time.time() - mutable[1]) > 3:
if (time.time() - mutable[1]) > 3:
print('exit when no listener')
break
asyncio.run(get_result(mutable))
@@ -154,12 +154,12 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, obser
def run_coorotine(observe_window):
async def get_result(observe_window):
async for response in run(context=prompt, max_token=llm_kwargs['max_length'],
temperature=llm_kwargs['temperature'],
async for response in run(context=prompt, max_token=llm_kwargs['max_length'],
temperature=llm_kwargs['temperature'],
top_p=llm_kwargs['top_p'], addr=addr, port=port):
print(response[len(observe_window[0]):])
observe_window[0] = response
if (time.time() - observe_window[1]) > 5:
if (time.time() - observe_window[1]) > 5:
print('exit when no listener')
break
asyncio.run(get_result(observe_window))

View File

@@ -119,7 +119,7 @@ class ChatGLMModel():
past_key_values = { k: v for k, v in zip(past_names, past_key_values) }
next_token = self.sample_next_token(logits[0, -1], top_k=top_k, top_p=top_p, temperature=temperature)
output_tokens += [next_token]
if next_token == self.eop_token_id or len(output_tokens) > max_generated_tokens:

View File

@@ -114,8 +114,10 @@ def html_local_img(__file, layout="left", max_width=None, max_height=None, md=Tr
class GoogleChatInit:
def __init__(self):
self.url_gemini = "https://generativelanguage.googleapis.com/v1beta/models/%m:streamGenerateContent?key=%k"
def __init__(self, llm_kwargs):
from .bridge_all import model_info
endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
self.url_gemini = endpoint + "/%m:streamGenerateContent?key=%k"
def generate_chat(self, inputs, llm_kwargs, history, system_prompt):
headers, payload = self.generate_message_payload(
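After this change the Gemini streaming URL is assembled from the configurable endpoint instead of a hard-coded one. Roughly as follows; the endpoint value comes from the diff above, while the %m/%k substitution with the model name and GEMINI_API_KEY is an assumption about what generate_message_payload does later:

endpoint = "https://generativelanguage.googleapis.com/v1beta/models"
url_template = endpoint + "/%m:streamGenerateContent?key=%k"
# presumably filled in with the selected model and the configured key
url = url_template.replace("%m", "gemini-pro").replace("%k", "YOUR_GEMINI_API_KEY")
# -> https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:streamGenerateContent?key=YOUR_GEMINI_API_KEY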

View File

@@ -8,7 +8,7 @@ from toolbox import get_conf, encode_image, get_pictures_list
import logging, os
def input_encode_handler(inputs, llm_kwargs):
def input_encode_handler(inputs, llm_kwargs):
if llm_kwargs["most_recent_uploaded"].get("path"):
image_paths = get_pictures_list(llm_kwargs["most_recent_uploaded"]["path"])
md_encode = []

View File

@@ -2,12 +2,12 @@ import random
def Singleton(cls):
_instance = {}
def _singleton(*args, **kargs):
if cls not in _instance:
_instance[cls] = cls(*args, **kargs)
return _instance[cls]
return _singleton
@@ -16,7 +16,7 @@ class OpenAI_ApiKeyManager():
def __init__(self, mode='blacklist') -> None:
# self.key_avail_list = []
self.key_black_list = []
def add_key_to_blacklist(self, key):
self.key_black_list.append(key)

View File

@@ -90,7 +90,7 @@ class LocalLLMHandle(Process):
return self.state
def set_state(self, new_state):
# ⭐run in main process or 🏃‍♂️🏃‍♂️🏃‍♂️ run in child process
# ⭐run in main process or 🏃‍♂️🏃‍♂️🏃‍♂️ run in child process
if self.is_main_process:
self.state = new_state
else:
@@ -178,8 +178,8 @@ class LocalLLMHandle(Process):
r = self.parent.recv()
continue
break
return
return
def stream_chat(self, **kwargs):
# ⭐run in main process
if self.get_state() == "`准备就绪`":