
Deploy a local LLM with RAGFlow


Prerequisites:
CPU >= 4 cores
RAM >= 16 GB
Disk >= 50 GB
Docker >= 24.0.0 & Docker Compose >= v2.26.1
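
A quick way to sanity-check the first three requirements on the server, using standard Linux utilities (output formats vary by distribution):

$ nproc       # number of CPU cores
$ free -h     # total and available RAM
$ df -h /     # free disk space on the root filesystem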

Download Docker:

Official download instructions: https://docs.docker.com/desktop/install/ubuntu/

Note that the DEB package must be downloaded manually and transferred to the server.

Download instructions for users in mainland China:
https://blog.csdn.net/u011278722/article/details/137673353
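
After installation, verify that the installed versions satisfy the prerequisites above:

$ docker --version           # should report 24.0.0 or later
$ docker compose version     # should report v2.26.1 or later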

Ensure vm.max_map_count >= 262144:

check:
$ sysctl vm.max_map_count

Reset vm.max_map_count to a value at least 262144 if it is not:
$ sudo sysctl -w vm.max_map_count=262144

This change is reset after a system reboot. To make it permanent, add or update the vm.max_map_count entry in /etc/sysctl.conf (this is a configuration line, not a shell command):
vm.max_map_count=262144
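
Equivalently, a one-liner that appends the setting and reloads it immediately (assumes the entry is not already present in /etc/sysctl.conf):

$ echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p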

Clone the repo:
$ git clone https://github.com/infiniflow/ragflow.git
GitHub may be unreachable from mainland China, so this step can require cloning the repository on another machine and transferring it to the server, as sketched below.
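
A minimal transfer sketch, assuming a second machine with GitHub access and SSH access to the server (user@server is a placeholder):

$ git clone https://github.com/infiniflow/ragflow.git      # on the machine with GitHub access
$ tar czf ragflow.tar.gz ragflow/
$ scp ragflow.tar.gz user@server:~
$ ssh user@server 'tar xzf ragflow.tar.gz'                 # unpack on the target server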

Pull the pre-built Docker images and start up the server:
$ cd ragflow/docker
$ chmod +x ./entrypoint.sh
$ docker compose up -d
This step can also require transferring the images manually (see the sketch below), or building them directly from source (see the Rebuild section at the end).
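
One way to transfer images manually is docker save/load; a sketch, using the image tag from the Rebuild section at the end (the compose file pulls additional service images, which would need the same treatment):

$ docker pull infiniflow/ragflow:dev               # on a machine with registry access
$ docker save -o ragflow-image.tar infiniflow/ragflow:dev
$ docker load -i ragflow-image.tar                 # after copying the tar file to the server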

Check the server status after having the server up and running:
$ docker logs -f ragflow-server

The following output confirms a successful launch of the system:
(ASCII-art "RAGFlow" banner)

  • Running on all addresses (0.0.0.0)
  • Running on http://127.0.0.1:9380
  • Running on http://x.x.x.x:9380
    INFO:werkzeug:Press CTRL+C to quit

In your web browser, enter the IP address of your server and log in to RAGFlow.

With the default settings, you only need to enter http://IP_OF_YOUR_MACHINE (no port number), since the default HTTP serving port 80 can be omitted.
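
A quick reachability check from another machine (IP_OF_YOUR_MACHINE is the server's address):

$ curl -I http://IP_OF_YOUR_MACHINE    # any HTTP response confirms the web UI is being served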

In service_conf.yaml, select the desired LLM factory in user_default_llm and update the API_KEY field with the corresponding API key.

See llm_api_key_setup for more information.

Rebuild:

To build the Docker images from source:
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/
$ docker build -t infiniflow/ragflow:dev .
$ cd docker
$ chmod +x ./entrypoint.sh
$ docker compose up -d

Uninstall the existing CUDA toolkit and NVIDIA driver:
https://blog.alumik.cn/posts/90/#:~:text=Use%20the%20following%20command%20to%20uninstall%20a%20Toolkit,remove%20–purge%20%27%5Envidia-.%2A%27%20sudo%20apt-get%20remove%20–purge%20%27%5Elibnvidia-.%2A%27
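
Per the text fragment in the link above, the uninstall amounts to purging the NVIDIA packages; double-check against the post before running, since this removes the GPU driver:

$ sudo apt-get remove --purge '^nvidia-.*'
$ sudo apt-get remove --purge '^libnvidia-.*'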

Install CUDA and the NVIDIA driver:
https://blog.hellowood.dev/posts/ubuntu-22-%E5%AE%89%E8%A3%85-nvdia-%E6%98%BE%E5%8D%A1%E9%A9%B1%E5%8A%A8%E5%92%8C-cuda/

Download vLLM:
https://qwen.readthedocs.io/zh-cn/latest/deployment/vllm.html
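
The install itself is a single pip command; the mirror shown is one common option for mainland China, not a requirement:

$ pip install vllm
$ # or, through the TUNA PyPI mirror:
$ pip install vllm -i https://pypi.tuna.tsinghua.edu.cn/simple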

Download the model Qwen2-7B-Instruct from within mainland China via ModelScope:
$ pip install modelscope

Then, in Python:
from modelscope import snapshot_download
model_dir = snapshot_download('qwen/Qwen2-7B-Instruct', cache_dir='/home/llmlocal/qwen/qwen/')

Start the LLM server:
$ python -m vllm.entrypoints.openai.api_server --model /home/llmlocal/qwen/qwen/Qwen2-7B-Instruct --host 0.0.0.0 --port 8000
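
Before running the full chat test below, you can confirm the server is up by listing the served models (a standard endpoint of vLLM's OpenAI-compatible API):

$ curl http://localhost:8000/v1/models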

Test:
$ curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "/home/llmlocal/qwen/qwen/Qwen2-7B-Instruct",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me something about large language models."}
  ],
  "temperature": 0.7,
  "top_p": 0.8,
  "repetition_penalty": 1.05,
  "max_tokens": 512
}'

Finally, point RAGFlow at the local model: change MODEL_NAME to "/home/llmlocal/qwen/qwen/Qwen2-7B-Instruct" in chat_model under the rag module of the RAGFlow source.
