version 3.75 (#1702)
* Update version to 3.74
* Add support for Yi Model API (#1635)
* Update to support the 01.AI (零一万物) models
* Remove newbing
* Modify config
---------
Co-authored-by: binary-husky <qingxu.fu@outlook.com>
* Refactor function signatures in bridge files
* Fix qwen api change
* Rename and ref functions
* Rename and move some cookie functions
* Add the haiku model and endpoint configuration notes (#1626)
* Haiku added
* Add haiku and endpoint configuration notes
* Haiku added
* Sync the notes to the latest endpoint
---------
Co-authored-by: binary-husky <qingxu.fu@outlook.com>
* File access authentication under the private_upload directory (#1596)
* File access authentication under the private_upload directory
* Minor fastapi adjustment
* Add logging functionality to enable saving conversation records
* Waiting to fix username retrieval
* Support 2nd web path
* Allow accessing the default user dir
---------
Co-authored-by: binary-husky <qingxu.fu@outlook.com>
* Remove yaml deps
* Fix favicon
* Fix abs path auth problem
* Forgot to write a return
* Add `dashscope` to deps
* Fix GHSA-v9q9-xj86-953p
* Patch unauthorized access caused by overlapping usernames (#1681)
* Add cohere model api access
* Cohere + can_multi_thread
* Fix block user access (fail)
* Fix fastapi bug
* Change cohere api endpoint
* Explain version
* Fix com_zhipuglm.py illegal temperature problem (#1687) (a hedged sketch of this kind of clamp follows this changelog)
* Update com_zhipuglm.py # fix: users hit an illegal-argument error for the temperature parameter in the zhipuai interface
* Allow storing the lm model dropdown
* Add a btn to reverse the previous reset
* Remove extra fns
* Add support for glm-4v model (#1700)
* Change the quantized loading method of chatglm3 (#1688)
Co-authored-by: zym9804 <ren990603@gmail.com>
* Save chat stage 1
* Consider the null cookie situation
* Activate audio when the copy button is clicked
* Miss some parts
* Move all to js
* Done first stage
* Add edge tts
* Bug fixes
* Remove console log
* Bug fixes
* Audio switch
* Update tts readme
* Remove tempfile when done
* Disable auto audio follow
* Avoid play queue update after shut up
* feat: minimizing common.js
* Improve tts functionality
* Determine whether the cached model is in choices
* Add support for Ollama (#1740)
* Print err when doc2x not successful
* Add icon
* Adjust url for doc2x key version
* Prepare merge
---------
Co-authored-by: Menghuan1918 <menghuan2003@outlook.com>
Co-authored-by: Skyzayre <120616113+Skyzayre@users.noreply.github.com>
Co-authored-by: XIao <46100050+Kilig947@users.noreply.github.com>
Co-authored-by: Yuki <903728862@qq.com>
Co-authored-by: zyren123 <91042213+zyren123@users.noreply.github.com>
Co-authored-by: zym9804 <ren990603@gmail.com>
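The #1687 item above fixes an illegal temperature value being sent to the ZhipuAI GLM endpoint. As a rough illustration only, a minimal clamp of the kind such a fix needs might look like the Python below; the function name and the exact (0, 1) bounds are assumptions for this sketch, not the code actually merged in com_zhipuglm.py:

def clamp_glm_temperature(temperature: float) -> float:
    # Assumption for this sketch: the GLM endpoint rejects 0 and values >= 1,
    # so keep whatever the UI slider sends strictly inside (0, 1).
    return min(max(temperature, 0.01), 0.99)

# A slider pinned to either extreme is mapped back into the accepted range.
print(clamp_glm_temperature(1.0))  # 0.99
print(clamp_glm_temperature(0.0))  # 0.01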
@@ -75,6 +75,10 @@ def predict(inputs:str, llm_kwargs:dict, plugin_kwargs:dict, chatbot:ChatBotWith
    llm_kwargs["llm_model"] = zhipuai_default_model

    if llm_kwargs["llm_model"] in ["glm-4v"]:
        if (len(inputs) + sum(len(temp) for temp in history) + 1047) > 2000:
            chatbot.append((inputs, "上下文长度超过glm-4v上限2000tokens,注意图片大约占用1,047个tokens"))
            yield from update_ui(chatbot=chatbot, history=history)
            return
        have_recent_file, image_paths = have_any_recent_upload_image_files(chatbot)
        if not have_recent_file:
            chatbot.append((inputs, "没有检测到任何近期上传的图像文件,请上传jpg格式的图片,此外,请注意拓展名需要小写"))
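In this hunk, the two appended messages warn that the context exceeds glm-4v's 2,000-token limit (the image alone takes roughly 1,047 tokens) and that no recently uploaded image file was found (upload a jpg and keep the extension lowercase). The hunk also calls have_any_recent_upload_image_files(chatbot); the repository's real helper is not shown on this page, but a hypothetical sketch, assuming the chatbot cookies record the path of the most recent upload, could look like this:

import glob
import os

def have_any_recent_upload_image_files(chatbot):
    # Hypothetical sketch, not the repository's actual helper.
    # Assumption: the chatbot cookies record the directory of the most recent upload.
    recent_dir = chatbot._cookies.get("most_recent_uploaded", {}).get("path", "")
    if not recent_dir or not os.path.isdir(recent_dir):
        return False, []
    # Extensions are matched case-sensitively here, consistent with the UI
    # message above asking for lowercase file extensions.
    image_paths = [
        p for p in glob.glob(os.path.join(recent_dir, "**", "*"), recursive=True)
        if p.endswith((".jpg", ".jpeg", ".png"))
    ]
    return len(image_paths) > 0, image_paths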