YOLOv9 object detection with NCNN

2025/4/17 16:56:09 Source: https://blog.csdn.net/wanggao_1990/article/details/146007121

The earlier post [opencv dnn module example (25) object detection: yolov9] covered YOLOv9 in detail: re-parameterization, exporting an end-to-end model, and tests with torch, opencv, tensorrt and paddle.

Because we need to deploy inference on mobile, the model has to be accelerated. Building on that YOLOv9 work, this post runs inference tests with Tencent's NCNN library.

1. Model conversion

1.1 Preparing the conversion tools

The ncnn project provides conversion tools, which can be obtained directly from the prebuilt release packages at https://github.com/Tencent/ncnn/releases; for example, the builds offered for Windows.

Take the ncnn-20241226-windows-vs2015-shared package as an example. After downloading, it contains builds for two architectures; include and lib are for developers, while bin holds the dynamic libraries and tools. Here we use the onnx2ncnn.exe tool.


1.2 ONNX model conversion

Taking yolov9s as an example: starting from a pretrained or custom-trained yolov9s.pt, first re-parameterize it to produce the slimmed-down yolov9s-converted.pt, then export yolov9s-converted.onnx, passing the flag that simplifies the model:

python reparameterization_yolov9-s.py yolov9s.pt
python export.py --weights yolov9-s-converted.pt --include onnx --simplify

Here we convert our own trained and exported best-s-c.onnx model, which is also used for the tests that follow.
Enter NCNN's bin directory and run:

$ onnx2ncnn.exe best-s-c.onnx best-s-c.onnx.param best-s-c.onnx.bin

onnx2ncnn may not fully meet your needs. For more accurate and elegant
conversion results, please use PNNX. PyTorch Neural Network eXchange (PNNX) is
an open standard for PyTorch model interoperability. PNNX provides an open model
format for PyTorch. It defines computation graph as well as high level operators
strictly matches PyTorch. You can obtain pnnx through the following ways:
1. Install via python: pip3 install pnnx
2. Get the executable from https://github.com/pnnx/pnnx
For more information, please refer to https://github.com/pnnx/pnnx

Running the command prints the warning above; you can follow its suggestions for extra steps if needed. The conversion nonetheless succeeds here and outputs two files (.param and .bin).
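If you want to follow that suggestion, the PNNX route would look roughly like the sketch below. This is untested here and stated as an assumption: pnnx consumes a TorchScript file rather than a checkpoint, so you would first export one (the yolov9 export.py can emit it via --include torchscript), and the input shape matches this model's 640x640 input.

pip3 install pnnx
pnnx yolov9s-converted.torchscript inputshape=[1,3,640,640]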

My model here is custom-trained, with 6 classes. Inspecting the onnx and ncnn models with netron shows the network structure, inputs and outputs: both share the input images and output output0, but in the ncnn model the input and output dimensions are dynamic, unlike the static fixed shapes in the onnx model.

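Since ncnn resolves shapes at runtime, the actual output dimensions can be verified after extraction. A minimal sketch; the expected values are an assumption for this 6-class model, where each of the 8400 candidates carries 4 box coordinates plus 6 class scores:

ncnn::Mat output;
ex.extract("output0", output);
// expected here: dims=2, w=8400, h=10 (4 box values + 6 class scores)
fprintf(stderr, "output dims=%d w=%d h=%d c=%d\n", output.dims, output.w, output.h, output.c);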

2. Testing

2.1 Test code

We modify the opencv dnn test code from the earlier post directly.

2.1.1 Preprocessing

First pad the image to a square (see formatToSquare in section 2.3), then resize it to (640, 640).

The opencv code:

// Create a 4D blob from a frame.
cv::Mat modelInput = frame;
if(letterBoxForSquare && inpWidth == inpHeight)
    modelInput = formatToSquare(modelInput);

// preprocess
cv::dnn::blobFromImage(modelInput, blob, scale, cv::Size2f(inpWidth, inpHeight), mean, swapRB, false);

The ncnn code:

cv::Mat modelInput = frame;
if(letterBoxForSquare && inpWidth == inpHeight)
    modelInput = formatToSquare(modelInput);

// preprocess
ncnn::Mat in = ncnn::Mat::from_pixels_resize((unsigned char*)modelInput.data, ncnn::Mat::PIXEL_BGR2RGB,
                                             modelInput.cols, modelInput.rows, (int)inpWidth, (int)inpHeight);
float norm_ncnn[] = {1/255.f, 1/255.f, 1/255.f};
in.substract_mean_normalize(0, norm_ncnn);

Compare the two: both first letterbox the frame into a square, then resize to the network input size and normalize to [0, 1]. The ncnn version is slightly more involved.

To compare preprocessing efficiency, three implementations are timed as follows:

// preprocess
cv::TickMeter tk;

tk.reset();
for(int i = 0; i < 100; i++) {
    tk.start();
    cv::dnn::blobFromImage(modelInput, blob, scale, cv::Size2f(inpWidth, inpHeight), mean, swapRB, false);
    ncnn::Mat in2;
    in2.w = inpWidth;
    in2.h = inpHeight;
    in2.d = 1;
    in2.c = 3;
    in2.data = blob.data;
    in2.elemsize = 4;
    in2.elempack = 1;
    in2.dims = 3;
    in2.cstep = inpWidth * inpHeight;
    tk.stop();
}
std::cout << tk.getTimeMilli() << "  " << tk.getAvgTimeMilli() << std::endl;

tk.reset();
for(int i = 0; i < 100; i++) {
    tk.start();
    cv::dnn::blobFromImage(modelInput, blob, scale, cv::Size2f(inpWidth, inpHeight), mean, swapRB, false);
    ncnn::Mat in2(inpWidth, inpHeight, 3, blob.data, 4, 1);
    tk.stop();
}
std::cout << tk.getTimeMilli() << "  " << tk.getAvgTimeMilli() << std::endl;

tk.reset();
for(int i = 0; i < 100; i++) {
    tk.start();
    ncnn::Mat in = ncnn::Mat::from_pixels_resize((unsigned char*)modelInput.data, ncnn::Mat::PIXEL_BGR2RGB,
                                                 modelInput.cols, modelInput.rows, (int)inpWidth, (int)inpHeight);
    float norm_ncnn[] = {1 / 255.f, 1 / 255.f, 1 / 255.f};
    in.substract_mean_normalize(0, norm_ncnn);
    tk.stop();
}
std::cout << tk.getTimeMilli() << "  " << tk.getAvgTimeMilli() << std::endl;

Each variant runs 100 times, measuring the total and average time. The results show ncnn's preprocessing is roughly 13% faster than opencv dnn's:

374.684  3.74684
373.745  3.73745
327.09  3.2709

2.1.2 Inference

  • opencv inference

    // Run a model.
    net.setInput(blob);
    // output
    std::vector<Mat> outs;
    net.forward(outs, outNames);   // a single output also works: Mat out = net.forward(outNames);
    postprocess(frame, modelInput.size(), outs, net);
    

    The post-processing function decodes the network output [1, class_num+4, 8400], then applies NMS and draws the results.

  • ncnn inference

    Because the output format differs, the output is converted so the same post-processing function can be reused (a lifetime note follows the snippet):

    ex.input("images", in);
    ex.extract("output0", output);
    // reuse the opencv dnn post-processing
    std::vector<Mat> outs;
    outs.push_back(cv::Mat({1, output.h, output.w}, CV_32F, output.data));
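    Note that this wrap is zero-copy: the cv::Mat aliases ncnn's buffer, so output must stay alive for as long as outs is used. A sketch of an alternative (not in the original code): clone to decouple the lifetimes, which is cheap for a [1, 10, 8400] tensor.

    outs.push_back(cv::Mat({1, output.h, output.w}, CV_32F, output.data).clone());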
    

2.2 Efficiency comparison

Same image, the custom-trained yolov9-s model, counting inference time only:

opencv dnn (CPU): 300 ms
opencv dnn (GPU): 15 ms

ncnn CPU: 170 ms
ncnn GPU (vulkan): errors out.

Looking at CPU only for now, inference is close to 50% faster… still a considerable gain on mobile.
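One possible cause of the vulkan failure, stated as an assumption: in the code below, use_vulkan_compute is set after load_param/load_model, whereas ncnn's own samples configure opt before loading. A sketch of that order (untested here):

ncnn::Net net;
net.opt.use_vulkan_compute = true;   // configure before loading the model
net.load_param(param_path.c_str());
net.load_model(bin_path.c_str());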

2.3 Main code

Note: when including #include "ncnn/net.h", if you hit strange undefined-symbol errors, move this include before the others.
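For instance, an ordering like the following. The cause is presumably a macro or symbol clash with headers included earlier; that is an assumption, the original only reports the symptom.

#include "ncnn/net.h"          // keep first
#include <opencv2/opencv.hpp>
#include <fstream>
#include <random>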

using namespace cv;
using namespace dnn;

float inpWidth;
float inpHeight;
float confThreshold, scoreThreshold, nmsThreshold;
std::vector<std::string> classes;
std::vector<cv::Scalar> colors;

bool letterBoxForSquare = true;

cv::Mat formatToSquare(const cv::Mat &source);
void postprocess(Mat& frame, cv::Size inputSz, const std::vector<Mat>& out, Net& net);
void drawPred(int classId, float conf, int left, int top, int right, int bottom, Mat& frame);

std::random_device rd;
std::mt19937 gen(rd());
std::uniform_int_distribution<int> dis(100, 255);

int test_ncnn()
{
    // configure according to the chosen detection model
    confThreshold = 0.25;
    scoreThreshold = 0.45;
    nmsThreshold = 0.5;
    float scale = 1 / 255.0;  // 0.00392
    Scalar mean = {0, 0, 0};
    bool swapRB = true;
    inpWidth = 640;
    inpHeight = 640;

    //String modelPath = R"(E:\DeepLearning\yolov9\custom-data\traffic_accident_vehicle_test_0218\best-s-c.onnx)";
    String classesFile = R"(E:\DeepLearning\yolov9\custom-data\traffic_accident_vehicle_test_0218\cls.txt)";

    std::string param_path = R"(E:\1、交通事故\Traffic Accident Processes For IOS\models\20250221\ncnn\best-s-c.onnx.param)";
    std::string bin_path =   R"(E:\1、交通事故\Traffic Accident Processes For IOS\models\20250221\ncnn\best-s-c.onnx.bin)";

    ncnn::Net net;
    net.load_param(param_path.c_str());
    net.load_model(bin_path.c_str());
    net.opt.use_vulkan_compute = true;  // NOTE: ncnn samples set this before load_param (see section 2.2)

    // Open file with classes names.
    if(!classesFile.empty()) {
        const std::string& file = classesFile;
        std::ifstream ifs(file.c_str());
        if(!ifs.is_open())
            CV_Error(Error::StsError, "File " + file + " not found");
        std::string line;
        while(std::getline(ifs, line)) {
            classes.push_back(line);
            colors.push_back(cv::Scalar(dis(gen), dis(gen), dis(gen)));
        }
    }

    // Create a window
    static const std::string kWinName = "Deep learning object detection in OpenCV";
    cv::namedWindow(kWinName, 0);

    // Open a video file or an image file or a camera stream.
    VideoCapture cap;
    cap.open(R"(E:\DeepLearning\yolov9\bus.jpg)");

    cv::TickMeter tk;
    // Process frames.
    Mat frame, blob;
    while(waitKey(1) < 0) {
        cap >> frame;
        if(frame.empty()) {
            waitKey();
            break;
        }

        // Create a 4D blob from a frame.
        cv::Mat modelInput = frame;
        if(letterBoxForSquare && inpWidth == inpHeight)
            modelInput = formatToSquare(modelInput);

        // preprocess
        //cv::dnn::blobFromImage(modelInput, blob, scale, cv::Size2f(inpWidth, inpHeight), mean, swapRB, false);
        ncnn::Mat in = ncnn::Mat::from_pixels_resize((unsigned char*)modelInput.data, ncnn::Mat::PIXEL_BGR2RGB,
                                                     modelInput.cols, modelInput.rows, (int)inpWidth, (int)inpHeight);
        float norm_ncnn[] = {1/255.f, 1/255.f, 1/255.f};
        in.substract_mean_normalize(0, norm_ncnn);

        // Run a model.
        ncnn::Extractor ex = net.create_extractor();
        ex.input("images", in);
        ncnn::Mat output;

        auto tt1 = cv::getTickCount();
        ex.extract("output0", output);
        auto tt2 = cv::getTickCount();

        //for(int i = 0; i < 20; i++) {
        //    auto tt1 = cv::getTickCount();
        //    ex.input("images", in);
        //    ex.extract("output0", output);
        //    auto tt2 = cv::getTickCount();
        //    std::cout << "infer time: " << (tt2 - tt1) / cv::getTickFrequency() * 1000 << std::endl;
        //}

        std::vector<Mat> outs;
        outs.push_back(cv::Mat({1, output.h, output.w}, CV_32F, output.data));

        cv::dnn::Net nullNet;
        postprocess(frame, modelInput.size(), outs, nullNet);
        //tk.stop();

        std::string label = format("Inference time: %.2f ms", (tt2 - tt1) / cv::getTickFrequency() * 1000);
        cv::putText(frame, label, Point(0, 15), FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0, 255, 0));

        cv::imshow(kWinName, frame);
    }
    return 0;
}

cv::Mat formatToSquare(const cv::Mat &source)
{
    int col = source.cols;
    int row = source.rows;
    int _max = MAX(col, row);
    cv::Mat result = cv::Mat::zeros(_max, _max, CV_8UC3);
    source.copyTo(result(cv::Rect(0, 0, col, row)));
    return result;
}

void postprocess(Mat& frame, cv::Size inputSz, const std::vector<Mat>& outs, Net& net)
{
    // yolov9 output has shape (batchSize, class_num+4, 8400): box[x,y,w,h] followed by per-class scores
    auto tt1 = cv::getTickCount();

    float x_factor = inputSz.width / inpWidth;
    float y_factor = inputSz.height / inpHeight;

    std::vector<int> class_ids;
    std::vector<float> confidences;
    std::vector<cv::Rect> boxes;

    //int rows = outs[0].size[1];
    //int dimensions = outs[0].size[2];

    // [1, 84, 8400] -> [8400, 84]
    int rows = outs[0].size[2];
    int dimensions = outs[0].size[1];
    auto tmp = outs[0].reshape(1, dimensions);
    cv::transpose(tmp, tmp);

    float *data = (float *)tmp.data;
    for(int i = 0; i < rows; ++i) {
        //float confidence = data[4];
        //if(confidence >= confThreshold) {
        float *classes_scores = data + 4;
        cv::Mat scores(1, classes.size(), CV_32FC1, classes_scores);
        cv::Point class_id;
        double max_class_score;
        minMaxLoc(scores, 0, &max_class_score, 0, &class_id);
        if(max_class_score > scoreThreshold) {
            confidences.push_back(max_class_score);
            class_ids.push_back(class_id.x);

            float x = data[0];
            float y = data[1];
            float w = data[2];
            float h = data[3];

            int left = int((x - 0.5 * w) * x_factor);
            int top = int((y - 0.5 * h) * y_factor);
            int width = int(w * x_factor);
            int height = int(h * y_factor);

            boxes.push_back(cv::Rect(left, top, width, height));
        }
        //}
        data += dimensions;
    }

    std::vector<int> indices;
    NMSBoxes(boxes, confidences, scoreThreshold, nmsThreshold, indices);

    auto tt2 = cv::getTickCount();
    std::string label = format("NMS time: %.2f ms", (tt2 - tt1) / cv::getTickFrequency() * 1000);
    cv::putText(frame, label, Point(0, 30), FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0, 255, 0));

    for(size_t i = 0; i < indices.size(); ++i) {
        int idx = indices[i];
        Rect box = boxes[idx];
        drawPred(class_ids[idx], confidences[idx], box.x, box.y,
                 box.x + box.width, box.y + box.height, frame);
        //printf("cls = %d, prob = %.2f\n", class_ids[idx], confidences[idx]);
        std::cout << "cls " << class_ids[idx] << ", prob = " << confidences[idx] << ", " << box << "\n";
    }
}

void drawPred(int classId, float conf, int left, int top, int right, int bottom, Mat& frame)
{
    rectangle(frame, Point(left, top), Point(right, bottom), Scalar(0, 255, 0));

    std::string label = format("%.2f", conf);
    Scalar color = Scalar::all(255);
    if(!classes.empty()) {
        CV_Assert(classId < (int)classes.size());
        label = classes[classId] + ": " + label;
        color = colors[classId];
    }

    int baseLine;
    Size labelSize = getTextSize(label, FONT_HERSHEY_SIMPLEX, 0.5, 1, &baseLine);

    top = max(top, labelSize.height);
    rectangle(frame, Point(left, top - labelSize.height),
              Point(left + labelSize.width, top + baseLine), color, FILLED);
    cv::putText(frame, label, Point(left, top), FONT_HERSHEY_SIMPLEX, 0.5, Scalar());
}

3. Other optimizations

The tools bundled with ncnn.
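For example, ncnnoptimize, shipped in the same bin directory as onnx2ncnn, fuses layers and can optionally store weights as fp16. A sketch of the usual invocation; the last argument selects the storage format (0 = fp32, 1 = fp16):

ncnnoptimize.exe best-s-c.onnx.param best-s-c.onnx.bin best-s-c-opt.param best-s-c-opt.bin 1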
