1. Introduction

This article documents an improvement to YOLOv10 object detection based on the Mobile MQA module. Mobile MQA, introduced in MobileNetV4, is designed to speed up the model and reduce memory accesses. Compared with other global self-attention mechanisms, it not only strengthens the model's attention to global information but also significantly improves model efficiency.
Table of Contents
- 1. Introduction
- 2. How Mobile MQA Attention Works
- 3. Implementation of Mobile MQA
- 4. Improvement Point
  - 4.1 The improvement ⭐
- 5. Adding Steps
  - 5.1 Modify ultralytics/nn/modules/block.py
  - 5.2 Modify ultralytics/nn/modules/__init__.py
  - 5.3 Modify ultralytics/nn/tasks.py
- 6. YAML Model File
  - 6.1 Model modification ⭐
- 7. Successful Run Results
2. How Mobile MQA Attention Works

Mobile MQA was proposed in the paper "MobileNetV4 - Universal Models for the Mobile Ecosystem".
Principle

- Built on MQA with asymmetric spatial downsampling: MQA (Multi-Query Attention) simplifies the traditional multi-head attention mechanism by sharing keys and values across all heads, which reduces memory-access requirements. In mobile hybrid models, where batch sizes are typically small, this effectively raises operational intensity.
- Borrowing MQA's asymmetric treatment of queries, keys, and values, Mobile MQA introduces spatial-reduction attention (SRA): keys and values are downsampled while queries are kept at high resolution. This works because in hybrid models the spatial-mixing convolution filters of the early layers make spatially adjacent tokens correlated.

Mobile MQA is computed as

$$Mobile\_MQA(X) = Concat(attention_1, \ldots, attention_n)\,W^{O}$$

where

$$attention_j = softmax\left(\frac{(XW^{Q_j})(SR(X)W^{K})^{T}}{\sqrt{d_k}}\right)(SR(X)W^{V}).$$

Here SR is either a spatial-reduction operation (in this design, a 3x3 depthwise convolution with stride 2) or the identity function (when no spatial reduction is applied).
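The asymmetric downsampling can be sketched in a few lines of PyTorch. This is a minimal illustration with made-up sizes (64 channels, a 32x32 feature map), not the full module shown later in this article: queries keep all H*W tokens, while a stride-2 3x3 depthwise convolution quarters the number of key/value tokens.

```python
import torch
import torch.nn as nn

C, H, W = 64, 32, 32                # example channel count and spatial size
x = torch.randn(1, C, H, W)

# SR: spatial reduction as a stride-2 3x3 depthwise convolution
sr = nn.Conv2d(C, C, kernel_size=3, stride=2, padding=1, groups=C)

q_tokens = x.flatten(2).transpose(1, 2)       # queries at full resolution: [1, 1024, 64]
kv_tokens = sr(x).flatten(2).transpose(1, 2)  # downsampled keys/values:    [1, 256, 64]
```

Because the attention-score matrix has q_len x kv_len entries, halving the key/value resolution in both directions shrinks it to a quarter while the queries stay at full resolution.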
Characteristics

- Optimized for accelerators: designed specifically for mobile accelerators, taking their compute and memory characteristics into account.
- Asymmetric spatial downsampling: by downsampling keys and values while keeping queries at high resolution, efficiency improves significantly with little loss of accuracy.
- Simple and efficient: compared with traditional attention mechanisms, Mobile MQA is simpler in design and more efficient to run, making it better suited to mobile devices.
Paper: http://arxiv.org/abs/2404.10518
Source code: https://github.com/tensorflow/models/blob/master/official/vision/modeling/backbones/mobilenet.py
3. Implementation of Mobile MQA

The Mobile MQA module can be implemented as follows:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv2d(in_channels, out_channels, kernel_size=3, stride=1, groups=1, bias=False, norm=True, act=True):
    conv = nn.Sequential()
    padding = (kernel_size - 1) // 2
    conv.append(nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding, bias=bias, groups=groups))
    if norm:
        conv.append(nn.BatchNorm2d(out_channels))
    if act:
        conv.append(nn.ReLU6())
    return conv


class MultiQueryAttentionLayerWithDownSampling(nn.Module):
    def __init__(self, in_channels, num_heads, key_dim, value_dim, query_h_strides, query_w_strides, kv_strides,
                 dw_kernel_size=3, dropout=0.0):
        """Multi-Query Attention with spatial downsampling.

        Referenced from https://github.com/tensorflow/models/blob/master/official/vision/modeling/layers/nn_blocks.py

        Three parameters are introduced for the spatial downsampling:
        1. kv_strides: downsampling factor on keys and values only.
        2. query_h_strides: vertical stride on queries only.
        3. query_w_strides: horizontal stride on queries only.

        This is an optimized version:
        1. Projections in attention are explicitly written out as 1x1 Conv2d.
        2. Additional reshapes are introduced to bring up to a 3x speed-up.
        """
        super().__init__()
        self.num_heads = num_heads
        self.key_dim = key_dim
        self.value_dim = value_dim
        self.query_h_strides = query_h_strides
        self.query_w_strides = query_w_strides
        self.kv_strides = kv_strides
        self.dw_kernel_size = dw_kernel_size
        self.head_dim = self.key_dim // num_heads

        if self.query_h_strides > 1 or self.query_w_strides > 1:
            self._query_downsampling_norm = nn.BatchNorm2d(in_channels)
        self._query_proj = conv2d(in_channels, self.num_heads * self.key_dim, 1, 1, norm=False, act=False)

        if self.kv_strides > 1:
            self._key_dw_conv = conv2d(in_channels, in_channels, dw_kernel_size, kv_strides, groups=in_channels,
                                       norm=True, act=False)
            self._value_dw_conv = conv2d(in_channels, in_channels, dw_kernel_size, kv_strides, groups=in_channels,
                                         norm=True, act=False)
        self._key_proj = conv2d(in_channels, key_dim, 1, 1, norm=False, act=False)
        # note: the value projection also uses key_dim so that the reshape of `context` below lines up
        self._value_proj = conv2d(in_channels, key_dim, 1, 1, norm=False, act=False)
        self._output_proj = conv2d(num_heads * key_dim, in_channels, 1, 1, norm=False, act=False)
        self.dropout = nn.Dropout(p=dropout)

    def forward(self, x):
        bs = x.size(0)
        if self.query_h_strides > 1 or self.query_w_strides > 1:
            q = F.avg_pool2d(x, (self.query_h_strides, self.query_w_strides))  # fixed: pool x, not the strides
            q = self._query_downsampling_norm(q)
            q = self._query_proj(q)
        else:
            q = self._query_proj(x)
        px = q.size(2)  # assumes square feature maps
        q = q.view(bs, self.num_heads, -1, self.key_dim)  # [batch_size, num_heads, q_seq_len, key_dim]

        if self.kv_strides > 1:
            k = self._key_dw_conv(x)
            k = self._key_proj(k)
            v = self._value_dw_conv(x)
            v = self._value_proj(v)
        else:
            k = self._key_proj(x)
            v = self._value_proj(x)
        k = k.view(bs, 1, self.key_dim, -1)  # [batch_size, 1, key_dim, kv_seq_len]
        v = v.view(bs, 1, -1, self.key_dim)  # [batch_size, 1, kv_seq_len, key_dim]

        # attention scores: softmax first, then dropout (fixed order)
        attn_score = torch.matmul(q, k) / (self.head_dim ** 0.5)
        attn_score = F.softmax(attn_score, dim=-1)
        attn_score = self.dropout(attn_score)

        context = torch.matmul(attn_score, v)
        context = context.view(bs, self.num_heads * self.key_dim, px, px)
        return self._output_proj(context)
```
Parameter | Description
---|---
in_channels | number of input channels
num_heads | number of attention heads
key_dim | dimension of the keys
value_dim | dimension of the values
query_h_strides | stride along H, applied to queries only
query_w_strides | stride along W, applied to queries only
kv_strides | downsampling factor for keys and values only (1 = no downsampling, 2 = downsample)
dw_kernel_size=3 | kernel size of the depthwise convolution
dropout=0.0 | dropout rate
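The memory-access saving behind kv_strides and the shared key/value head can be checked with simple counting. The sketch below compares standard multi-head self-attention (MHSA), plain MQA, and MQA with stride-2 key/value downsampling; the head count and dimensions are illustrative values, not numbers from the paper.

```python
# Back-of-envelope comparison for a 32x32 feature map (example values).
H = W = 32
num_heads, d = 8, 64
q_len = H * W  # queries stay at full resolution: 1024 tokens

def attention_cost(kv_len, kv_heads):
    scores = num_heads * q_len * kv_len  # elements in the attention-score matrices
    kv_mem = 2 * kv_heads * kv_len * d   # elements in the stored keys + values
    return scores, kv_mem

mhsa = attention_cost(q_len, num_heads)              # MHSA: per-head keys/values
mqa = attention_cost(q_len, 1)                       # MQA: one shared key/value head
mobile_mqa = attention_cost((H // 2) * (W // 2), 1)  # MQA + stride-2 K/V downsampling

print(mhsa)        # (8388608, 1048576)
print(mqa)         # (8388608, 131072)
print(mobile_mqa)  # (2097152, 32768)
```

Sharing keys and values cuts the stored K/V by the head count, and downsampling them by 2 in each direction additionally quarters the score matrices, which is where the efficiency gain comes from.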
4. Improvement Point

4.1 The improvement ⭐

Module improvement: a C2f block built on the Mobile MQA module.

The improvement modifies the C2f module in YOLOv10. The Mobile MQA module from MobileNetV4 speeds up the model and reduces memory accesses. Compared with other global self-attention mechanisms, improving the C2f module with Mobile MQA not only strengthens the model's attention to global information but also significantly improves model efficiency.
The improved code is as follows:
```python
class C2f_MQA(nn.Module):
    """Faster implementation of CSP Bottleneck with 2 convolutions, followed by Mobile MQA attention."""

    def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5):
        """Initialize CSP bottleneck layer with two convolutions with arguments ch_in, ch_out, number, shortcut,
        groups, expansion."""
        super().__init__()
        self.c = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, 2 * self.c, 1, 1)
        self.cv2 = Conv((2 + n) * self.c, c2, 1)  # optional act=FReLU(c2)
        self.m = nn.ModuleList(Bottleneck(self.c, self.c, shortcut, g, k=((3, 3), (3, 3)), e=1.0) for _ in range(n))
        self.att = MultiQueryAttentionLayerWithDownSampling(c2, 2, 48, 48, 1, 1, 1)

    def forward(self, x):
        """Forward pass through C2f layer."""
        y = list(self.cv1(x).chunk(2, 1))
        y.extend(m(y[-1]) for m in self.m)
        return self.att(self.cv2(torch.cat(y, 1)))

    def forward_split(self, x):
        """Forward pass using split() instead of chunk()."""
        y = list(self.cv1(x).split((self.c, self.c), 1))
        y.extend(m(y[-1]) for m in self.m)
        return self.att(self.cv2(torch.cat(y, 1)))
```
Note ❗: the module names that need to be declared in the files covered in sections 5.2 and 5.3 are MultiQueryAttentionLayerWithDownSampling and C2f_MQA.
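The channel bookkeeping in C2f_MQA's forward (chunk into two c-channel branches, append n bottleneck outputs, concatenate) can be verified with a toy sketch. Identity mappings stand in for the Bottleneck blocks, and the sizes are made up for illustration:

```python
import torch

c, n = 48, 2                       # hidden channels and number of bottlenecks (example values)
x = torch.randn(1, 2 * c, 20, 20)  # stand-in for the output of cv1 (2*c channels)

y = list(x.chunk(2, 1))            # split into two c-channel branches
for _ in range(n):
    y.append(y[-1].clone())        # identity stand-in for Bottleneck(self.c, self.c)

out = torch.cat(y, 1)              # (2 + n) * c channels -> the input cv2 expects
print(tuple(out.shape))            # (1, 192, 20, 20)
```

This is exactly why cv2 is declared with (2 + n) * self.c input channels.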
5. Adding Steps

5.1 Modify ultralytics/nn/modules/block.py

The file to modify here is ultralytics/nn/modules/block.py.

block.py defines the generic modules of the network architecture; to add a new module, we only need to place its code in this file.

Add the code for the MultiQueryAttentionLayerWithDownSampling and C2f_MQA modules to this file.
5.2 Modify ultralytics/nn/modules/__init__.py

The file to modify here is ultralytics/nn/modules/__init__.py.

__init__.py defines the initialization of all modules; we only need to add the names of the new modules from block.py to the corresponding import.

MultiQueryAttentionLayerWithDownSampling and C2f_MQA are implemented in block.py, so they must be added to the from .block import statement:
```python
from .block import (
    C1,
    C2,
    ...
    MultiQueryAttentionLayerWithDownSampling,
    C2f_MQA,
)
```
5.3 Modify ultralytics/nn/tasks.py

In tasks.py, the module class names need to be added in two places.

First, import MultiQueryAttentionLayerWithDownSampling and C2f_MQA in the import statement at the top of the file.

Second, register MultiQueryAttentionLayerWithDownSampling and C2f_MQA in the parse_model function.
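For orientation, the two edits typically look like the fragment below. This is a sketch based on the usual structure of tasks.py in recent Ultralytics releases; the exact list of existing modules surrounding the new names will differ between versions:

```python
# (1) At the top of tasks.py, extend the module import with the new names:
#     from ultralytics.nn.modules import (..., MultiQueryAttentionLayerWithDownSampling, C2f_MQA)
#
# (2) In parse_model, add C2f_MQA next to the other C2f-style modules, i.e. in
#     the branch that resolves in/out channels and inserts the repeat count:
#     if m in (C1, C2, C2f, C2fCIB, ..., C2f_MQA):
#         ...
#         args.insert(2, n)  # number of repeats
```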
6. YAML Model File

6.1 Model modification ⭐

Once the code is in place, configure the model's YAML file.

Taking ultralytics/cfg/models/v10/yolov10m.yaml as an example, create a model file for training on your own dataset, yolov10m-C2f_MQA.yaml, in the same directory.

Copy the contents of yolov10m.yaml into yolov10m-C2f_MQA.yaml and change nc to the number of target classes in your own dataset.

📌 The modification replaces all C2f modules in the backbone with C2f_MQA modules, optimizing the network as a whole and improving efficiency.

The structure is as follows:
```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 1 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  m: [0.67, 0.75, 768] # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 3, C2f_MQA, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 6, C2f_MQA, [256, True]]
  - [-1, 1, SCDown, [512, 3, 2]] # 5-P4/16
  - [-1, 6, C2f_MQA, [512, True]]
  - [-1, 1, SCDown, [1024, 3, 2]] # 7-P5/32
  - [-1, 3, C2fCIB, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 1, PSA, [1024]] # 10

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 3, C2f, [512]] # 13
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 3, C2f, [256]] # 16 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 3, C2fCIB, [512, True]] # 19 (P4/16-medium)
  - [-1, 1, SCDown, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 3, C2fCIB, [1024, True]] # 22 (P5/32-large)
  - [[16, 19, 22], 1, v10Detect, [nc]] # Detect(P3, P4, P5)
```
7. Successful Run Results

Printing the network model shows that C2f_MQA has been added to the model and that training can proceed.

YOLOv10m-C2f_MQA:
```
       from  n    params  module                                       arguments
  0      -1  1      1392  ultralytics.nn.modules.conv.Conv             [3, 48, 3, 2]
  1      -1  1     41664  ultralytics.nn.modules.conv.Conv             [48, 96, 3, 2]
  2      -1  2    185472  ultralytics.nn.modules.block.C2f_MQA         [96, 96, True]
  3      -1  1    166272  ultralytics.nn.modules.conv.Conv             [96, 192, 3, 2]
  4      -1  4   1257984  ultralytics.nn.modules.block.C2f_MQA         [192, 192, True]
  5      -1  1     78720  ultralytics.nn.modules.block.SCDown          [192, 384, 3, 2]
  6      -1  4   4580352  ultralytics.nn.modules.block.C2f_MQA         [384, 384, True]
  7      -1  1    228672  ultralytics.nn.modules.block.SCDown          [384, 576, 3, 2]
  8      -1  2   1689984  ultralytics.nn.modules.block.C2fCIB          [576, 576, 2, True]
  9      -1  1    831168  ultralytics.nn.modules.block.SPPF            [576, 576, 5]
 10      -1  1   1253088  ultralytics.nn.modules.block.PSA             [576, 576]
 11      -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 12 [-1, 6]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 13      -1  2   1993728  ultralytics.nn.modules.block.C2f             [960, 384, 2]
 14      -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 15 [-1, 4]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 16      -1  2    517632  ultralytics.nn.modules.block.C2f             [576, 192, 2]
 17      -1  1    332160  ultralytics.nn.modules.conv.Conv             [192, 192, 3, 2]
 18 [-1, 13] 1         0  ultralytics.nn.modules.conv.Concat           [1]
 19      -1  2    831744  ultralytics.nn.modules.block.C2fCIB          [576, 384, 2, True]
 20      -1  1    152448  ultralytics.nn.modules.block.SCDown          [384, 384, 3, 2]
 21 [-1, 10] 1         0  ultralytics.nn.modules.conv.Concat           [1]
 22 [16, 19, 22] 1 2282134 ultralytics.nn.modules.head.v10Detect       [1, [192, 384, 576]]
YOLOv10m-C2f_MQA summary: 657 layers, 18335782 parameters, 18335766 gradients, 77.8 GFLOPs
```