
Commit ba8d0f8

Authored by liunux4odoo, qiankunli, liqiankun1111, zRzRzRzRzRzRzR, and glide-the
Release: v0.2.5 (#1620)

* Optimize configs (#1474): remove `llm_model_dict`; streamline the configs; fix `get_model_path`; change some default parameters and add a default Qianfan configuration; update `server_config.py.example`
* Fix merge conflict for #1474 (#1494)
* Fix the wrong ChatGPT `api_base_url`; users can override the default `api_base_url` in the online-model section of `model_config` (#1496)
* Improve the logic for listing and switching LLM models (#1497):
  1. Detect available but not-yet-running models more accurately
  2. Improve the WebUI's model-list display and switching logic
* Update `migrate.py` and `init_database.py` to strengthen the knowledge-base migration tool (#1498):
  1. Add `--update-in-db`: update the vector store from local files according to database records
  2. Add `--increament`: incrementally update the vector store from local files
  3. Add `--prune-db`: clean up vector-store entries whose local files have been deleted
  4. Add `--prune-folder`: clean up unused local files according to database records
  5. Drop `--update-info-only`: the vector-store info is now kept in the database, so the operation adds little value
  6. Add `--kb-name`: every operation can target the specified knowledge bases, defaulting to all local knowledge bases
  7. Add test cases for knowledge-base migration
  8. Remove the `save_vector_store` method from `milvus_kb_service`
* feat: support Volcano Ark (volc fangzhou); make it work properly, with error handling and test cases (#1501)
* First preliminary Agent implementation (#1503): initial version; add a `streaming` parameter; revise `weather.py`
* Add `configs/prompt_config.py` so users can customize prompt templates (#1504):
  1. Two templates ship by default: one for plain LLM chat, one for knowledge-base and search-engine chat
  2. `server/utils.py` provides `get_prompt_template` to fetch a named prompt template (supports hot reloading)
  3. The `chat`/`knowledge_base_chat`/`search_engine_chat` endpoints in `api.py` accept a `prompt_name` parameter
* Add parameter adaptation for other models
* Support loading a vector store by a passed-in name
* Search-engine Q&A now supports chat history; fix a history-passing bug in knowledge-base Q&A (user input leaked into `history` because the WebUI fetched the history twice; the API endpoint itself was fine)
* Add a toggle for langchain logging
* Move `wrap_done` & `get_ChatOpenAI` from `server.chat.utils` to `server.utils` (#1506)
* Fix the wrong cache key in the `faiss_pool` knowledge-base cache (#1507)
* Fix README anchor link (#1500)
* Fix duplicate variable and function names (#1509)
* Update README.md
* Fix #1519: work around an old streamlit-chatbox bug in the webui (the new release has compatibility issues) and pin the chatbox version (#1525); closes #1519
* [New feature] Online LLM support for Alibaba Cloud Tongyi Qianwen (#1534): add qwen-api; let the Qwen API honor the `temperature` parameter; add test cases; make the online-API SDKs optional dependencies
* Rework the serialize-to-disk logic
* Remove the dependency on volcengine
* Update `kb_doc_api`: use Form instead of Body when uploading files
* Switch all httpx requests to a shared `Client` for efficiency and easier future proxy setup; add this project's services to the no-proxy list to avoid FastChat server request errors (ineffective on Windows) (#1554)
* Update QR code
* Update readme_en, readme, requirements_api, requirements, model_config.py.example: test baichuan2-7b and refresh the related docs
* New features: 1. support the vllm inference-acceleration framework; 2. update the supported-model list
* Update startup, model_config.py.example, serve_config.py.example, and the FAQ
* Finish debugging vllm; adjust the vllm dependency in requirements and requirements_api; comment out the `device: cpu` setting for baichuan-7b in serve_config; update the vllm backend notes in the configs
* Add a GPT4-only Agent feature, to be extended; the Chinese README is written (#1611)
* Dev (#1613): fix a bug reported in an issue; lower the minimum temperature to 0 (negative values should not be allowed); set vllm per platform to avoid errors on Windows; fix langchain warnings for imports from root
* Fix WebUI bugs in knowledge-base rebuild and the chat UI (#1615): rebuilding a knowledge base no longer fails entirely when unsupported files are present; import the missing `CHUNK_SIZE` in migrate; fix the chat page's expander being stuck in the "running" state; simplify history retrieval
* Add the instruction template for the English bge embeddings, following the official docs (#1585)
* Dev (#1618): add partial Agent support; fix several startup-script bugs; update the GPU count configuration; fix configuration errors; update the README and run stability tests
* Update README 0928 (#1619)
* Bump the version number to v0.2.5

Co-authored-by: qiankunli <[email protected]>
Co-authored-by: liqiankun.1111 <[email protected]>
Co-authored-by: zR <[email protected]>
Co-authored-by: glide-the <[email protected]>
Co-authored-by: Water Zheng <[email protected]>
Co-authored-by: Jim Zhang <[email protected]>
Co-authored-by: Jim <[email protected]>
Co-authored-by: imClumsyPanda <[email protected]>
Co-authored-by: Leego <[email protected]>
Co-authored-by: hzg0601 <[email protected]>
Co-authored-by: WilliamChen-luckbob <[email protected]>
1 parent db169f6 commit ba8d0f8
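The prompt-template work in #1504 deserves a closer look before the diffs: `server/utils.py` exposes `get_prompt_template`, which supports hot reloading. Below is a minimal sketch of how such a lookup could work; the `PROMPT_TEMPLATES` dictionary name and the reload-per-call strategy are assumptions for illustration, not a quote of the project's code.

```python
import importlib

from configs import prompt_config  # user-editable template definitions (added in this release)


def get_prompt_template(name: str) -> str:
    """Return the prompt template registered under `name`.

    Reloading the module on every call means edits to
    configs/prompt_config.py take effect without a server restart
    ("hot reload"). PROMPT_TEMPLATES is an assumed dict name here.
    """
    importlib.reload(prompt_config)
    return prompt_config.PROMPT_TEMPLATES[name]
```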


69 files changed (+2949, -821 lines)

README.md (+60, -13)
```diff
@@ -57,6 +57,25 @@ docker run -d --gpus all -p 80:8501 registry.cn-beijing.aliyuncs.com/chatchat/ch
 
 ---
 
+## Minimum Environment Requirements
+
+To run this project smoothly, please meet the following minimum requirements:
++ Python version: >= 3.8.5, < 3.11
++ CUDA version: >= 11.7, with a working Python installation
+
+To run a local model (int4 version) on the GPU without problems, you need at least the following hardware:
+
++ chatglm2-6b & LLaMA-7B: minimum VRAM 7GB; recommended GPUs: RTX 3060, RTX 2060
++ LLaMA-13B: minimum VRAM 11GB; recommended GPUs: RTX 2060 12GB, RTX 3060 12GB, RTX 3080, RTX A2000
++ Qwen-14B-Chat: minimum VRAM 13GB; recommended GPU: RTX 3090
++ LLaMA-30B: minimum VRAM 22GB; recommended GPUs: RTX A5000, RTX 3090, RTX 4090, RTX 6000, Tesla V100, Tesla P40
++ LLaMA-65B: minimum VRAM 40GB; recommended GPUs: A100, A40, A6000
+
+For int8, multiply the VRAM requirement by 1.5; for fp16, by 2.5.
+For example, running fp16 inference with the Qwen-7B-Chat model requires about 16GB of VRAM.
+
+These figures are estimates only; actual usage as reported by nvidia-smi is authoritative.
+
 ## Changelog
 
 See the [release notes](https://github.com/imClumsyPanda/langchain-ChatGLM/releases)
```
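The 1.5x/2.5x rule above is easy to turn into a quick estimator. The sketch below simply encodes the README's int4 baselines and multipliers; it is a back-of-the-envelope helper, not a measurement, and actual usage per `nvidia-smi` still governs.

```python
# Back-of-the-envelope VRAM estimator using the README's rule of thumb.
INT4_BASELINE_GB = {       # minimum VRAM at int4, from the table above
    "chatglm2-6b": 7,
    "LLaMA-7B": 7,
    "LLaMA-13B": 11,
    "Qwen-14B-Chat": 13,
    "LLaMA-30B": 22,
    "LLaMA-65B": 40,
}
PRECISION_FACTOR = {"int4": 1.0, "int8": 1.5, "fp16": 2.5}


def estimate_vram_gb(model: str, precision: str = "int4") -> float:
    """Estimated VRAM in GB for running `model` at `precision`."""
    return INT4_BASELINE_GB[model] * PRECISION_FACTOR[precision]


# A 7B model at fp16: 7 * 2.5 = 17.5 GB, the same ballpark as the
# README's ~16 GB figure for fp16 Qwen-7B-Chat.
print(estimate_vram_gb("LLaMA-7B", "fp16"))
```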
```diff
@@ -112,27 +131,29 @@ docker run -d --gpus all -p 80:8501 registry.cn-beijing.aliyuncs.com/chatchat/ch
 - [WizardLM/WizardCoder-15B-V1.0](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0)
 - [baichuan-inc/baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B)
 - [internlm/internlm-chat-7b](https://huggingface.co/internlm/internlm-chat-7b)
-- [Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat)
+- [Qwen/Qwen-7B-Chat/Qwen-14B-Chat](https://huggingface.co/Qwen/)
 - [HuggingFaceH4/starchat-beta](https://huggingface.co/HuggingFaceH4/starchat-beta)
 - [FlagAlpha/Llama2-Chinese-13b-Chat](https://huggingface.co/FlagAlpha/Llama2-Chinese-13b-Chat) and others
 - [BAAI/AquilaChat-7B](https://huggingface.co/BAAI/AquilaChat-7B)
 - [all models of OpenOrca](https://huggingface.co/Open-Orca)
 - [Spicyboros](https://huggingface.co/jondurbin/spicyboros-7b-2.2?not-for-all-audiences=true) + [airoboros 2.2](https://huggingface.co/jondurbin/airoboros-l2-13b-2.2)
 - [VMware's OpenLLaMa OpenInstruct](https://huggingface.co/VMware/open-llama-7b-open-instruct)
+- [baichuan2-7b/baichuan2-13b](https://huggingface.co/baichuan-inc)
 - Any pythia model from [EleutherAI](https://huggingface.co/EleutherAI), such as [pythia-6.9b](https://huggingface.co/EleutherAI/pythia-6.9b)
 - Any [Peft](https://github.com/huggingface/peft) adapter trained on the models above. To activate it, the model path must contain `peft`. Note: when loading multiple peft models, you can make them share the base model's weights by setting the environment variable `PEFT_SHARE_BASE_WEIGHTS=true` in any model worker.
 
 The supported-model list above may keep growing as [FastChat](https://github.com/lm-sys/FastChat) is updated; see the [FastChat supported models list](https://github.com/lm-sys/FastChat/blob/main/docs/model_support.md)
 
-
 Besides local models, this project also supports connecting directly to online models such as the OpenAI API and Zhipu AI; see the `llm_model_dict` settings in `configs/model_configs.py.example` for details.
 
-Online LLM models currently supported:
+Online LLM models currently supported:
+
 - [ChatGPT](https://api.openai.com)
 - [Zhipu AI](http://open.bigmodel.cn)
 - [MiniMax](https://api.minimax.chat)
 - [iFlytek Spark](https://xinghuo.xfyun.cn)
 - [Baidu Qianfan](https://cloud.baidu.com/product/wenxinworkshop?track=dingbutonglan)
+- [Alibaba Cloud Tongyi Qianwen](https://dashscope.aliyun.com/)
 
 The default LLM type used by the project is `THUDM/chatglm2-6b`; to use another LLM type, modify `llm_model_dict` and `LLM_MODEL` in [configs/model_config.py].
 
```
```diff
@@ -157,9 +178,11 @@ docker run -d --gpus all -p 80:8501 registry.cn-beijing.aliyuncs.com/chatchat/ch
 - [GanymedeNil/text2vec-large-chinese](https://huggingface.co/GanymedeNil/text2vec-large-chinese)
 - [nghuyong/ernie-3.0-nano-zh](https://huggingface.co/nghuyong/ernie-3.0-nano-zh)
 - [nghuyong/ernie-3.0-base-zh](https://huggingface.co/nghuyong/ernie-3.0-base-zh)
+- [sensenova/piccolo-base-zh](https://huggingface.co/sensenova/piccolo-base-zh)
+- [sensenova/piccolo-large-zh](https://huggingface.co/sensenova/piccolo-large-zh)
 - [OpenAI/text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings)
 
-The default Embedding type used by the project is `moka-ai/m3e-base`; to use another Embedding type, modify `embedding_model_dict` and `EMBEDDING_MODEL` in [configs/model_config.py].
+The default Embedding type used by the project is `sensenova/piccolo-base-zh`; to use another Embedding type, modify `embedding_model_dict` and `EMBEDDING_MODEL` in [configs/model_config.py].
 
 ---
 
```
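As a concrete illustration of the configuration change in the hunk above, switching the Embedding model amounts to editing two names in `configs/model_config.py`. The entries below are a hedged sketch; the real `embedding_model_dict` contains more entries and its key names may differ.

```python
# Sketch of configs/model_config.py (illustrative; actual keys may differ).
embedding_model_dict = {
    "moka-ai/m3e-base": "moka-ai/m3e-base",                    # former default
    "sensenova/piccolo-base-zh": "sensenova/piccolo-base-zh",  # new default
    # name -> local path or HuggingFace repo id
}

# Pick the default Embedding model by key:
EMBEDDING_MODEL = "sensenova/piccolo-base-zh"
```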

````diff
@@ -187,15 +210,27 @@ docker run -d --gpus all -p 80:8501 registry.cn-beijing.aliyuncs.com/chatchat/ch
 
 For how to use a custom text splitter or contribute your own, see the [Text Splitter contribution guide](docs/splitter.md)
 
+## Agent Ecosystem
+### Basic Agents
+This version implements a simple ReAct-style Agent built on OpenAI. In our testing so far, only the following two models support it:
++ OpenAI GPT4
++ ChatGLM2-130B
+
+The Agent in the current version still needs extensive prompt tuning; tuning location:
+
+### Build your own Agent tools
+
+See the [custom Agent guide](docs/自定义Agent.md) for details
+
 ## Docker Deployment
 
-🐳 Docker image: `registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.3`
+🐳 Docker image: `registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.5`
 
 ```shell
-docker run -d --gpus all -p 80:8501 registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.3
+docker run -d --gpus all -p 80:8501 registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.5
 ```
 
-- The image for this release is `35.3GB`, built from `v0.2.3` on the `nvidia/cuda:12.1.1-cudnn8-devel-ubuntu22.04` base image
+- The image for this release is `35.3GB`, built from `v0.2.5` on the `nvidia/cuda:12.1.1-cudnn8-devel-ubuntu22.04` base image
 - The image ships two `embedding` models, `m3e-large` and `text2vec-bge-large-chinese` (the latter enabled by default), plus `chatglm2-6b-32k`
 - The image targets convenient one-click deployment; make sure the NVIDIA driver is installed on your Linux distribution
 - Note that you do not need the CUDA toolkit on the host, but you do need the `NVIDIA Driver` and the `NVIDIA Container Toolkit`; see the [installation guide](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
````
```diff
@@ -391,22 +426,26 @@ CUDA_VISIBLE_DEVICES=0,1 python startup.py -a
   - [X] .csv
   - [ ] .xlsx
 - [ ] Tokenization and retrieval
-  - [ ] Support different TextSplitter types
-  - [ ] Improve the ChineseTextSplitter designed around Chinese punctuation
+  - [X] Support different TextSplitter types
+  - [X] Improve the ChineseTextSplitter designed around Chinese punctuation
   - [ ] Re-implement context-concatenation retrieval
 - [ ] Local web page ingestion
 - [ ] SQL ingestion
 - [ ] Knowledge graph / graph database ingestion
 - [X] Search engine integration
   - [X] Bing search
   - [X] DuckDuckGo search
-- [ ] Agent implementation
+- [X] Agent implementation
+  - [X] Basic ReAct-style Agent implementation, including calculator calls, etc.
+  - [X] LangChain's built-in Agent implementations and invocation
+  - [ ] Agent support for more models
+  - [ ] More tools
 - [X] LLM model integration
   - [X] Support calling LLMs through the [FastChat](https://github.com/lm-sys/fastchat) api
-  - [ ] Support ChatGLM API and other LLM APIs
+  - [X] Support ChatGLM API and other LLM APIs
 - [X] Embedding model integration
   - [X] Support the open-source Embedding models on HuggingFace
-  - [ ] Support the OpenAI Embedding API and other Embedding APIs
+  - [X] Support the OpenAI Embedding API and other Embedding APIs
 - [X] FastAPI-based API
 - [X] Web UI
   - [X] Streamlit-based Web UI
```
```diff
@@ -417,4 +456,12 @@ CUDA_VISIBLE_DEVICES=0,1 python startup.py -a
 
 <img src="img/qr_code_64.jpg" alt="QR code" width="300" height="300" />
 
-🎉 The langchain-ChatGLM project WeChat group: if you are also interested in this project, you are welcome to join the group chat.
+🎉 The langchain-Chatchat project WeChat group: if you are also interested in this project, you are welcome to join the group chat.
+
+
+## Follow Us
+
+<img src="img/official_account.png" alt="official account" width="900" height="300" />
+🎉 The official langchain-Chatchat WeChat account; welcome to scan the QR code and follow.
+
+
```

README_en.md (+67, -8)
```diff
@@ -56,6 +56,25 @@ docker run -d --gpus all -p 80:8501 registry.cn-beijing.aliyuncs.com/chatchat/ch
 
 ---
 
+## Minimum Environment Requirements
+
+To run this code smoothly, please meet the following minimum requirements:
++ Python version: >= 3.8.5, < 3.11
++ CUDA version: >= 11.7, with a working Python installation
+
+To run a local model (int4 version) on the GPU without problems, you need at least the following hardware:
+
++ chatglm2-6b & LLaMA-7B: minimum VRAM 7GB; recommended GPUs: RTX 3060, RTX 2060
++ LLaMA-13B: minimum VRAM 11GB; recommended GPUs: RTX 2060 12GB, RTX 3060 12GB, RTX 3080, RTX A2000
++ Qwen-14B-Chat: minimum VRAM 13GB; recommended GPU: RTX 3090
++ LLaMA-30B: minimum VRAM 22GB; recommended GPUs: RTX A5000, RTX 3090, RTX 4090, RTX 6000, Tesla V100, Tesla P40
++ LLaMA-65B: minimum VRAM 40GB; recommended GPUs: A100, A40, A6000
+
+For int8, multiply the VRAM requirement by 1.5; for fp16, by 2.5.
+For example, running fp16 inference with the Qwen-7B-Chat model requires about 16GB of VRAM.
+
+These figures are estimates only; actual usage as reported by nvidia-smi is authoritative.
+
 ## Change Log
 
 Please refer to the [version change log](https://github.com/imClumsyPanda/langchain-ChatGLM/releases)
```
```diff
@@ -105,18 +124,31 @@ The project use [FastChat](https://github.com/lm-sys/FastChat) to provide the AP
 - [WizardLM/WizardCoder-15B-V1.0](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0)
 - [baichuan-inc/baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B)
 - [internlm/internlm-chat-7b](https://huggingface.co/internlm/internlm-chat-7b)
-- [Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat)
+- [Qwen/Qwen-7B-Chat/Qwen-14B-Chat](https://huggingface.co/Qwen/)
 - [HuggingFaceH4/starchat-beta](https://huggingface.co/HuggingFaceH4/starchat-beta)
 - [FlagAlpha/Llama2-Chinese-13b-Chat](https://huggingface.co/FlagAlpha/Llama2-Chinese-13b-Chat) and other models of FlagAlpha
 - [BAAI/AquilaChat-7B](https://huggingface.co/BAAI/AquilaChat-7B)
 - [all models of OpenOrca](https://huggingface.co/Open-Orca)
 - [Spicyboros](https://huggingface.co/jondurbin/spicyboros-7b-2.2?not-for-all-audiences=true) + [airoboros 2.2](https://huggingface.co/jondurbin/airoboros-l2-13b-2.2)
+- [baichuan2-7b/baichuan2-13b](https://huggingface.co/baichuan-inc)
 - [VMware's OpenLLaMa OpenInstruct](https://huggingface.co/VMware/open-llama-7b-open-instruct)
 
 * Any [EleutherAI](https://huggingface.co/EleutherAI) pythia model, such as [pythia-6.9b](https://huggingface.co/EleutherAI/pythia-6.9b)
 * Any [Peft](https://github.com/huggingface/peft) adapter trained on top of a model above. To activate it, the model path must contain `peft`. Note: if loading multiple peft models, you can have them share the base model weights by setting the environment variable `PEFT_SHARE_BASE_WEIGHTS=true` in any model worker.
 
-Please refer to `llm_model_dict` in `configs.model_configs.py.example` to invoke OpenAI API.
+
+The model support list above may be updated continuously as [FastChat](https://github.com/lm-sys/FastChat) is updated; see the [FastChat supported models list](https://github.com/lm-sys/FastChat/blob/main/docs/model_support.md).
+In addition to local models, this project also supports direct access to online models such as the OpenAI API and Zhipu AI; for the settings, please refer to the `llm_model_dict` configuration in `configs/model_configs.py.example`.
+Online LLM models currently supported:
+
+- [ChatGPT](https://api.openai.com)
+- [Zhipu AI](http://open.bigmodel.cn)
+- [MiniMax](https://api.minimax.chat)
+- [iFlytek Spark](https://xinghuo.xfyun.cn)
+- [Baidu Qianfan](https://cloud.baidu.com/product/wenxinworkshop?track=dingbutonglan)
+- [Alibaba Cloud Tongyi Qianwen](https://dashscope.aliyun.com/)
+
+The default LLM type used in the project is `THUDM/chatglm2-6b`; if you need to use another LLM type, please modify `llm_model_dict` and `LLM_MODEL` in [configs/model_config.py].
 
 ### Supported Embedding models
 
```
```diff
@@ -129,6 +161,8 @@ Following models are tested by developers with Embedding class of [HuggingFace](
 - [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh)
 - [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh)
 - [BAAI/bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct)
+- [sensenova/piccolo-base-zh](https://huggingface.co/sensenova/piccolo-base-zh)
+- [sensenova/piccolo-large-zh](https://huggingface.co/sensenova/piccolo-large-zh)
 - [shibing624/text2vec-base-chinese-sentence](https://huggingface.co/shibing624/text2vec-base-chinese-sentence)
 - [shibing624/text2vec-base-chinese-paraphrase](https://huggingface.co/shibing624/text2vec-base-chinese-paraphrase)
 - [shibing624/text2vec-base-multilingual](https://huggingface.co/shibing624/text2vec-base-multilingual)
```
````diff
@@ -137,16 +171,24 @@ Following models are tested by developers with Embedding class of [HuggingFace](
 - [GanymedeNil/text2vec-large-chinese](https://huggingface.co/GanymedeNil/text2vec-large-chinese)
 - [nghuyong/ernie-3.0-nano-zh](https://huggingface.co/nghuyong/ernie-3.0-nano-zh)
 - [nghuyong/ernie-3.0-base-zh](https://huggingface.co/nghuyong/ernie-3.0-base-zh)
+- [sensenova/piccolo-base-zh](https://huggingface.co/sensenova/piccolo-base-zh)
+- [sensenova/piccolo-large-zh](https://huggingface.co/sensenova/piccolo-large-zh)
 - [OpenAI/text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings)
 
+The default Embedding type used in the project is `sensenova/piccolo-base-zh`; if you want to use another Embedding type, please modify `embedding_model_dict` and `EMBEDDING_MODEL` in [configs/model_config.py].
+
+### Build your own Agent tool!
+
+See the [custom Agent guide](docs/自定义Agent.md) for details.
+
 ---
 
 ## Docker Deployment
 
-🐳 Docker image path: `registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.0`
+🐳 Docker image path: `registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.5`
 
 ```shell
-docker run -d --gpus all -p 80:8501 registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.0
+docker run -d --gpus all -p 80:8501 registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.5
 ```
 
 - The image size of this version is `33.9GB`, using `v0.2.0`, with `nvidia/cuda:12.1.1-cudnn8-devel-ubuntu22.04` as the base image
````
```diff
@@ -328,17 +370,21 @@ Please refer to [FAQ](docs/FAQ.md)
 - [ ] Structured documents
   - [X] .csv
   - [ ] .xlsx
-- [ ] TextSplitter and Retriever
-  - [x] multiple TextSplitter
-  - [x] ChineseTextSplitter
+- [ ] TextSplitter and Retriever
+  - [X] multiple TextSplitter
+  - [X] ChineseTextSplitter
   - [ ] Reconstructed Context Retriever
 - [ ] Webpage
 - [ ] SQL
 - [ ] Knowledge Database
 - [X] Search Engines
   - [X] Bing
   - [X] DuckDuckGo
-- [ ] Agent
+- [X] Agent
+  - [X] Basic ReAct-style Agent implementation, including calls to calculators, etc.
+  - [X] LangChain's own Agent implementations and invocation
+  - [ ] Agent support for more models
+  - [ ] More tools
 - [X] LLM Models
   - [X] [FastChat](https://github.com/lm-sys/fastchat)-based LLM Models
   - [ ] Multiple Remote LLM APIs
```
```diff
@@ -348,3 +394,16 @@ Please refer to [FAQ](docs/FAQ.md)
 - [X] FastAPI-based API
 - [X] Web UI
   - [X] Streamlit-based Web UI
+
+---
+
+## WeChat Group
+
+<img src="img/qr_code_64.jpg" alt="QR Code" width="300" height="300" />
+
+🎉 The langchain-Chatchat project WeChat group: if you are also interested in this project, you are welcome to join the group chat.
+
+## Follow us
+
+<img src="img/official_account.png" alt="image" width="900" height="300" />
+🎉 The official langchain-Chatchat WeChat account; welcome to scan the QR code and follow.
```

chains/llmchain_with_history.py (+4, -11)
```diff
@@ -1,19 +1,12 @@
-from langchain.chat_models import ChatOpenAI
-from configs.model_config import llm_model_dict, LLM_MODEL
-from langchain import LLMChain
+from server.utils import get_ChatOpenAI
+from configs.model_config import LLM_MODEL, TEMPERATURE
+from langchain.chains import LLMChain
 from langchain.prompts.chat import (
     ChatPromptTemplate,
     HumanMessagePromptTemplate,
 )
 
-model = ChatOpenAI(
-    streaming=True,
-    verbose=True,
-    # callbacks=[callback],
-    openai_api_key=llm_model_dict[LLM_MODEL]["api_key"],
-    openai_api_base=llm_model_dict[LLM_MODEL]["api_base_url"],
-    model_name=LLM_MODEL
-)
+model = get_ChatOpenAI(model_name=LLM_MODEL, temperature=TEMPERATURE)
 
 
 human_prompt = "{input}"
```
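The diff cuts off right after `human_prompt = "{input}"`. For orientation, here is a hedged sketch of how such a module typically finishes: building a chat prompt with prior turns inlined as history and wiring it to the refactored `model` through `LLMChain`. The history messages are invented for illustration; only `get_ChatOpenAI`, `LLMChain`, and the prompt imports come from the diff itself.

```python
# Hypothetical continuation of chains/llmchain_with_history.py (illustration
# only; the history turns below are invented, not from the diff).
from langchain.chains import LLMChain
from langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate

from configs.model_config import LLM_MODEL, TEMPERATURE
from server.utils import get_ChatOpenAI

model = get_ChatOpenAI(model_name=LLM_MODEL, temperature=TEMPERATURE)

human_prompt = "{input}"
human_message = HumanMessagePromptTemplate.from_template(human_prompt)

# Prior turns are inlined as ("human" / "ai") tuples; "{input}" is the new turn.
chat_prompt = ChatPromptTemplate.from_messages([
    ("human", "Remember the number 42 for me."),  # invented history
    ("ai", "Got it, I will remember 42."),
    human_message,
])

chain = LLMChain(prompt=chat_prompt, llm=model, verbose=True)
# chain({"input": "What number did I ask you to remember?"})
```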

configs/__init__.py (+5, -1)
```diff
@@ -1,4 +1,8 @@
+from .basic_config import *
 from .model_config import *
+from .kb_config import *
 from .server_config import *
+from .prompt_config import *
 
-VERSION = "v0.2.4"
+
+VERSION = "v0.2.5"
```
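With these wildcard re-exports, downstream code can import any setting straight from the `configs` package. A small illustrative example; `LLM_MODEL` and `LOG_PATH` come from `model_config` and `basic_config` respectively, as the other diffs in this commit show.

```python
# Everything re-exported by configs/__init__.py is importable in one place:
from configs import VERSION, LLM_MODEL, LOG_PATH

print(VERSION)    # "v0.2.5"
print(LLM_MODEL)  # default LLM name from model_config
print(LOG_PATH)   # log directory from basic_config
```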

configs/basic_config.py.example (+22)
```diff
@@ -0,0 +1,22 @@
+import logging
+import os
+import langchain
+
+# Whether to show verbose logs
+log_verbose = False
+langchain.verbose = False
+
+
+# You normally do not need to change anything below this line
+
+# Log format
+LOG_FORMAT = "%(asctime)s - %(filename)s[line:%(lineno)d] - %(levelname)s: %(message)s"
+logger = logging.getLogger()
+logger.setLevel(logging.INFO)
+logging.basicConfig(format=LOG_FORMAT)
+
+
+# Log storage path
+LOG_PATH = os.path.join(os.path.dirname(os.path.dirname(__file__)), "logs")
+if not os.path.exists(LOG_PATH):
+    os.mkdir(LOG_PATH)
```
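To show how these settings are meant to be consumed, here is a hedged sketch of a caller that attaches a stack trace only when `log_verbose` is enabled. The `do_work` function is invented, and the snippet assumes the example file has been copied to `configs/basic_config.py`.

```python
# Illustrative consumer of basic_config (assumes basic_config.py.example has
# been copied to configs/basic_config.py).
from configs.basic_config import logger, log_verbose


def do_work():  # invented example function
    try:
        raise RuntimeError("something went wrong")
    except Exception as e:
        # Attach the traceback only when verbose logging is turned on.
        logger.error(f"do_work failed: {e}", exc_info=e if log_verbose else None)


do_work()
```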
