The network is divided into Transition modules and Stage modules. A Transition module performs downsampling and places operations with different downsampling rates in parallel, while a Stage module performs feature fusion, merging the feature maps from the low and high downsampling rates together.

The corresponding code in the `__init__` method is as follows:
```python
self.stage2_cfg = extra['STAGE2']
num_channels = self.stage2_cfg['NUM_CHANNELS']   # num_channels = [32, 64]
block = blocks_dict[self.stage2_cfg['BLOCK']]    # BasicBlock
num_channels = [
    num_channels[i] * block.expansion for i in range(len(num_channels))
]                                                # [32 * 1, 64 * 1]
self.transition1 = self._make_transition_layer([256], num_channels)
self.stage2, pre_stage_channels = self._make_stage(self.stage2_cfg, num_channels)
```
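As a quick check of the channel arithmetic above, here is a standalone sketch (the helper `expand_channels` is illustrative, not part of the HRNet source; the `expansion` values are those of HRNet's BasicBlock and Bottleneck):

```python
def expand_channels(num_channels, expansion):
    # Multiply each branch's channel count by the block's expansion factor,
    # exactly as the list comprehension in __init__ does.
    return [c * expansion for c in num_channels]

# BasicBlock has expansion = 1, Bottleneck has expansion = 4
print(expand_channels([32, 64], 1))  # -> [32, 64]
print(expand_channels([32, 64], 4))  # -> [128, 256]
```

This is why stage2 keeps `[32, 64]` unchanged: its block type is BasicBlock, whose expansion factor is 1.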
Transition function

Defined by the `_make_transition_layer` function:
```python
def _make_transition_layer(self, num_channels_pre_layer, num_channels_cur_layer):
    # num_channels_pre_layer: channel counts of the previous stage's branches
    #   (before stage2 this is [256])
    # num_channels_cur_layer: channel counts of the current stage's branches
    #   (for stage2 this is [32, 64])
    num_branches_cur = len(num_channels_cur_layer)  # 2
    num_branches_pre = len(num_channels_pre_layer)  # 1

    transition_layers = []
    # The two 3x3 convolutions of Transition1 in the figure
    for i in range(num_branches_cur):  # i = 0, 1
        if i < num_branches_pre:
            if num_channels_cur_layer[i] != num_channels_pre_layer[i]:
                # Channel counts differ, so change them with a conv layer.
                # If they were equal, no convolution would be needed and the
                # branch could be used directly (the first else below).
                transition_layers.append(nn.Sequential(
                    nn.Conv2d(num_channels_pre_layer[i],
                              num_channels_cur_layer[i],
                              3, 1, 1, bias=False),
                    nn.BatchNorm2d(num_channels_cur_layer[i]),
                    nn.ReLU(inplace=True)))
            else:
                # Branches in Transition2/3 that need no convolution
                transition_layers.append(None)
        else:
            # The extra branch created by a Transition module:
            # downsample again with stride = 2
            conv3x3s = []
            for j in range(i + 1 - num_branches_pre):
                # Generate the new branch from the previous stage's feature
                # map with the smallest shape (num_channels_pre_layer[-1])
                inchannels = num_channels_pre_layer[-1]
                outchannels = num_channels_cur_layer[i] \
                    if j == i - num_branches_pre else inchannels
                conv3x3s.append(nn.Sequential(
                    nn.Conv2d(inchannels, outchannels, 3, 2, 1, bias=False),
                    # this convolution downsamples: stride = 2
                    nn.BatchNorm2d(outchannels),
                    nn.ReLU(inplace=True)))
            transition_layers.append(nn.Sequential(*conv3x3s))

    return nn.ModuleList(transition_layers)
```
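To make the branching above concrete, the following standalone sketch mirrors only the control flow of `_make_transition_layer`, recording what would be built for each branch instead of constructing `nn` layers (the helper `plan_transition` is illustrative, not part of the HRNet source):

```python
def plan_transition(pre, cur):
    # Mirror _make_transition_layer: one entry per current branch.
    # None means the branch is reused as-is; a list means a chain of
    # stride-2 convs builds a new, lower-resolution branch.
    plan = []
    for i in range(len(cur)):
        if i < len(pre):
            if cur[i] != pre[i]:
                plan.append(f"3x3 conv {pre[i]}->{cur[i]}")  # channel change
            else:
                plan.append(None)                            # identity branch
        else:
            steps = []
            for j in range(i + 1 - len(pre)):
                inch = pre[-1]  # smallest previous feature map
                outch = cur[i] if j == i - len(pre) else inch
                steps.append(f"3x3/s2 conv {inch}->{outch}")
            plan.append(steps)
    return plan

# Transition1: previous stage outputs 256 channels, stage2 wants [32, 64]
print(plan_transition([256], [32, 64]))
# -> ['3x3 conv 256->32', ['3x3/s2 conv 256->64']]

# Transition2: the first two branches match, only the third is new
print(plan_transition([32, 64], [32, 64, 128]))
# -> [None, None, ['3x3/s2 conv 64->128']]
```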
Stage function

Defined by the `_make_stage` function:
```python
def _make_stage(self, layer_config, num_inchannels, multi_scale_output=True):
    num_modules = layer_config['NUM_MODULES']    # 1
    num_branches = layer_config['NUM_BRANCHES']  # 2
    num_blocks = layer_config['NUM_BLOCKS']      # [4, 4]
    num_channels = layer_config['NUM_CHANNELS']  # [32, 64]
    block = blocks_dict[layer_config['BLOCK']]   # BasicBlock
    fuse_method = layer_config['FUSE_METHOD']    # SUM

    modules = []
    # num_modules is how many fusions are performed within one stage.
    # The final fusion merges the other branches into the highest-resolution
    # feature map and outputs only that map (multi_scale_output = False).
    # Earlier fusions merge all branches into every feature map and output
    # maps at all scales (multi_scale_output = True).
    for i in range(num_modules):
        # multi_scale_output is only used by the last module
        if not multi_scale_output and i == num_modules - 1:
            reset_multi_scale_output = False
        else:
            reset_multi_scale_output = True
        modules.append(
            HighResolutionModule(num_branches,
                                 block,
                                 num_blocks,
                                 num_inchannels,
                                 num_channels,
                                 fuse_method,
                                 reset_multi_scale_output))
        num_inchannels = modules[-1].get_num_inchannels()

    return nn.Sequential(*modules), num_inchannels
```
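The `multi_scale_output` handling can be isolated into a small sketch that reproduces the per-module flag computed inside the loop (the helper `multi_scale_flags` is illustrative, not part of the HRNet source):

```python
def multi_scale_flags(num_modules, multi_scale_output):
    # One flag per HighResolutionModule in the stage, matching the
    # reset_multi_scale_output logic in _make_stage.
    flags = []
    for i in range(num_modules):
        if not multi_scale_output and i == num_modules - 1:
            flags.append(False)  # last module: only highest-resolution output
        else:
            flags.append(True)   # all other modules output every scale
    return flags

print(multi_scale_flags(3, True))   # -> [True, True, True]
print(multi_scale_flags(3, False))  # -> [True, True, False]
```

Only the final module of a stage is ever switched to single-scale output; every earlier module must keep all branches alive so the next module can fuse them.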
forward function
```python
def forward(self, x):
    if self.num_branches == 1:
        # With a single branch, just feed its feature map through self.branches
        return [self.branches[0](x[0])]

    # With multiple branches, feed each branch's feature map through its own
    # self.branches[i] to get the updated x[i]
    for i in range(self.num_branches):
        x[i] = self.branches[i](x[i])

    x_fuse = []
    # Upsample or downsample the branches to each target scale, then fuse
    for i in range(len(self.fuse_layers)):
        y = x[0] if i == 0 else self.fuse_layers[i][0](x[0])
        for j in range(1, self.num_branches):
            if i == j:
                y = y + x[j]
            else:
                y = y + self.fuse_layers[i][j](x[j])
        # Apply a ReLU activation to each fused sum
        x_fuse.append(self.relu(y))

    return x_fuse
```
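The fusion loop above applies a different operation depending on how output scale `i` relates to input branch `j`. This standalone sketch enumerates those cases (the helper `fuse_plan` is illustrative, not part of the HRNet source, which builds the actual layers in `_make_fuse_layers`):

```python
def fuse_plan(num_branches):
    # For output branch i fusing input branch j:
    #   j == i : same scale, add the feature map directly
    #   j >  i : input is lower resolution, upsample before adding
    #   j <  i : input is higher resolution, downsample (strided convs)
    plan = {}
    for i in range(num_branches):
        for j in range(num_branches):
            if j == i:
                plan[(i, j)] = "identity"
            elif j > i:
                plan[(i, j)] = "upsample"
            else:
                plan[(i, j)] = "downsample"
    return plan

print(fuse_plan(2))
# -> {(0, 0): 'identity', (0, 1): 'upsample',
#     (1, 0): 'downsample', (1, 1): 'identity'}
```

This matches the code path in `forward`: the `i == j` case adds `x[j]` directly, while every other pair goes through `self.fuse_layers[i][j]`.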