Add ChatGLM4 local deployment support and refactor ChatGLM bridge's path configuration (#2062)
* ✨ feat(request_llms and config.py): add support for local deployment of the ChatGLM4 model
* 🦄 refactor(bridge_chatglm3.py): make the ChatGLM3 model path customizable in config.py; remove obsolete quantization model options that had been commented out

---------

Co-authored-by: MarkDeia <17290550+MarkDeia@users.noreply.github.com>
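A minimal sketch of the path-customization logic this commit describes: prefer a user-configured local model directory (set in config.py) and fall back to the default Hugging Face repo id when none is given. The function and parameter names below are illustrative assumptions, not the repo's actual identifiers.

```python
from typing import Optional

def resolve_model_path(configured_path: Optional[str], default_repo: str) -> str:
    """Return the local model path if one was configured, else the hub repo id.

    Names here are hypothetical; the real bridge file may structure this differently.
    """
    if configured_path:  # a non-empty string set by the user in config.py
        return configured_path
    return default_repo

# usage: falls back to the hub repo id when config.py leaves the path empty
print(resolve_model_path("", "THUDM/chatglm3-6b"))
print(resolve_model_path("/models/chatglm3-6b", "THUDM/chatglm3-6b"))
```

This keeps the default behavior (downloading from the hub) unchanged for users who do not set a path, while letting offline deployments point at a pre-downloaded checkpoint.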
request_llms/requirements_chatglm4.txt | 7 +++++++ (new file)
@@ -0,0 +1,7 @@
+protobuf
+cpm_kernels
+torch>=1.10
+transformers>=4.44
+mdtex2html
+sentencepiece
+accelerate