typo: Fix typos and rename functions across multiple files (#2130)

* typo: Fix typos and rename functions across multiple files

This commit addresses several minor issues:
- Corrected spelling of function names (e.g., `update_ui_lastest_msg` to `update_ui_latest_msg`)
- Fixed typos in comments and variable names
- Corrected spelling and capitalization in some strings (e.g., "ArXiv" instead of "Arixv")
- Renamed some variables for consistency
- Corrected some console-related parameter names (e.g., `console_slience` to `console_silence`)

The changes span multiple files across the project, including the request_llms bridges, the crazy_functions plugins, and utility modules; a hedged compatibility sketch for the renamed helpers follows below.
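
As a purely illustrative aside, the sketch below shows one way a rename such as `update_ui_lastest_msg` → `update_ui_latest_msg` could be cushioned for external callers with a deprecated alias. It is not part of this commit; the placeholder signature and body are assumptions, not the project's actual helper.

```python
import warnings

def update_ui_latest_msg(*args, **kwargs):
    # Placeholder for the renamed helper; the real project code updates the chat UI here.
    ...

def update_ui_lastest_msg(*args, **kwargs):
    # Old, misspelled name kept as a thin wrapper so existing call sites keep working.
    warnings.warn(
        "update_ui_lastest_msg is deprecated; use update_ui_latest_msg instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return update_ui_latest_msg(*args, **kwargs)
```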

* fix: f-string expression part cannot include a backslash (#2139)

* raise an error when the uploaded tar contains hard/soft links (#2136); see the sketch after this list

* minor bug fix

* fine tune reasoning css

* upgrade internet gpt plugin

* Update README.md

* fix GHSA-gqp5-wm97-qxcv

* typo fix

* update readme
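
Regarding the tar item above (#2136): a minimal sketch, assuming only the standard-library `tarfile` module, of how an archive containing hard or symbolic links could be rejected before extraction. The function name and error message are illustrative, not the project's actual code.

```python
import tarfile

def assert_no_links_in_tar(path: str) -> None:
    # Reject archives containing hard or symbolic links, since such members can be
    # abused to write outside the intended extraction directory.
    with tarfile.open(path) as tf:
        for member in tf.getmembers():
            if member.issym() or member.islnk():
                raise RuntimeError(
                    f"Refusing to extract {path!r}: member {member.name!r} "
                    "is a symbolic or hard link"
                )
```

Running a check like this before `extractall()` keeps a crafted archive from planting links that escape the upload directory.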

---------

Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>
Co-authored-by: binary-husky <qingxu.fu@outlook.com>
Author: Steven Moder
Date: 2025-03-02 02:16:10 +08:00
Committed by: GitHub
Parent: 5dffe8627f
Commit: 4a79aa6a93
74 changed files with 325 additions and 325 deletions


@@ -55,7 +55,7 @@ class GetGLMHandle(Process):
 if self.jittorllms_model is None:
     device = get_conf('LOCAL_MODEL_DEVICE')
     from .jittorllms.models import get_model
-    # availabel_models = ["chatglm", "pangualpha", "llama", "chatrwkv"]
+    # available_models = ["chatglm", "pangualpha", "llama", "chatrwkv"]
     args_dict = {'model': 'llama'}
     print('self.jittorllms_model = get_model(types.SimpleNamespace(**args_dict))')
     self.jittorllms_model = get_model(types.SimpleNamespace(**args_dict))
@@ -107,7 +107,7 @@ global llama_glm_handle
 llama_glm_handle = None
 #################################################################################
 def predict_no_ui_long_connection(inputs:str, llm_kwargs:dict, history:list=[], sys_prompt:str="",
-                                  observe_window:list=[], console_slience:bool=False):
+                                  observe_window:list=[], console_silence:bool=False):
     """
     Multi-threaded method.
     For this function's documentation, see request_llms/bridge_all.py