
Building llama.cpp with GPU support

Source: https://blog.csdn.net/weixin_43820352/article/details/142656605

For more details, see https://github.com/abetlen/llama-cpp-python; the official repository is updated as versions iterate.

Download and enter llama.cpp

Repository: https://github.com/ggerganov/llama.cpp
You can also download it to your local machine first and then upload it to the server.

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

Compile the source (make)

This generates binaries such as ./main and ./quantize. For details, see https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md

CPU build:
make

GPU (CUDA) build:
make GGML_CUDA=1
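
Note that upstream llama.cpp has since migrated its build system from Make to CMake. Per the build.md linked above, a roughly equivalent CUDA build is sketched below; flag names may change between versions, so check the docs for your checkout.

cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j

With the CMake build, the binaries end up under build/bin/ rather than the repository root.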
Possible errors and how to fix them

I ccache not found. Consider installing it for faster compilation.

sudo apt-get install ccache

Makefile:1002: *** I ERROR: For CUDA versions < 11.7 a target CUDA architecture must be explicitly provided via environment variable CUDA_DOCKER_ARCH, e.g. by running "export CUDA_DOCKER_ARCH=compute_XX" on Unix-like systems, where XX is the minimum compute capability that the code needs to run on. A list with compute capabilities can be found here: https://developer.nvidia.com/cuda-gpus . Stop.
This means your CUDA version is too low. If the toolkit was not installed by you (for example, nvcc -V reports a different CUDA version than the one actually in use), refer to a guide on resolving the mismatch between the version shown by nvcc -V and the actual installed version, and switch to a newer CUDA.
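
If you cannot upgrade CUDA, the error message itself gives the workaround: export the target architecture before building. The value compute_86 below is only an example (Ampere-class cards); look up your GPU's compute capability at the NVIDIA page cited in the message.

export CUDA_DOCKER_ARCH=compute_86  # example value; use your GPU's compute capability
make GGML_CUDA=1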
NOTICE: The 'server' binary is deprecated. Please use 'llama-server' instead.
Note: as versions iterate, specific command names may stop working.
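
For reference, a typical invocation of the newer binary might look like the sketch below; the model path is a placeholder, and although -m, -c, -ngl, and --port exist in current builds, confirm the flags with ./llama-server --help for your version.

./llama-server -m /path/to/model.gguf -c 8192 -ngl 99 --port 8080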

Expected output

The build log is long; the original post showed only a screenshot excerpt, which is omitted here.

Calling a large model

Install the Python bindings, llama-cpp-python (this build step is slow)

CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python
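
To verify that the wheel was really compiled with CUDA before wiring it into LangChain, here is a minimal sketch; it reuses the GGUF path from the example below, and n_gpu_layers=-1 offloads every layer.

from llama_cpp import Llama

# verbose=True prints the ggml init log; look for "ggml_cuda_init: found N CUDA devices"
llm = Llama(
    model_path="/data/pretrained/gguf/Meta-Llama-3-8B-Instruct-Q5_K_M.gguf",
    n_gpu_layers=-1,  # offload all layers to the GPU
    verbose=True,
)
print(llm("Q: 2+2=? A:", max_tokens=8)["choices"][0]["text"])

If pip previously cached a CPU-only wheel, force a rebuild with pip install llama-cpp-python --upgrade --force-reinstall --no-cache-dir (with the same CMAKE_ARGS).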

Invoke the model

from langchain_community.chat_models import ChatLlamaCpp

local_model = "/data/pretrained/gguf/Meta-Llama-3-8B-Instruct-Q5_K_M.gguf"
llm = ChatLlamaCpp(
    seed=1,
    temperature=0.5,
    model_path=local_model,
    n_ctx=8192,
    n_gpu_layers=64,
    n_batch=12,  # Should be between 1 and n_ctx; consider the amount of VRAM in your GPU.
    max_tokens=8192,
    repeat_penalty=1.5,
    top_p=0.5,
    f16_kv=False,
    verbose=True,
)

messages = [
    (
        "system",
        "You are a helpful assistant that translates English to Chinese. Translate the user sentence.",
    ),
    (
        "human",
        "OpenAI has a tool calling API that lets you describe tools and their arguments, and have the model return a JSON object with a tool to invoke and the inputs to that tool. tool-calling is extremely useful for building tool-using chains and agents, and for getting structured outputs from models more generally.",
    ),
]
ai_msg = llm.invoke(messages)
print(ai_msg.content)
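
ChatLlamaCpp also implements LangChain's standard streaming interface, so the same messages can be streamed token by token. A small sketch reusing the llm object above:

# Print the reply incrementally instead of waiting for the whole message
for chunk in llm.stream(messages):
    print(chunk.content, end="", flush=True)
print()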

If the log printed during loading contains content like the following, the GPU has been found:

ggml_cuda_init: found 2 CUDA devices:
  Device 0: <your GPU model>, compute capability <compute capability>, VMM: yes
  Device 1: <your GPU model>, compute capability <compute capability>, VMM: yes
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors:        CPU buffer size =   344.44 MiB
llm_load_tensors:      CUDA0 buffer size =  2932.34 MiB
llm_load_tensors:      CUDA1 buffer size =  2183.15 MiB
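
You can also confirm the offload from outside the process by watching GPU memory fill while the model loads; this uses standard NVIDIA tooling, nothing specific to llama.cpp:

watch -n 1 nvidia-smi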
