VisualPytorch is published at the following domain, backed by two servers:
http://nag.visualpytorch.top/static/ (server 114.115.148.27)
http://visualpytorch.top/static/ (server 39.97.209.22)
| torch.nn component | Role |
|---|---|
| nn.Parameter | Tensor subclass representing learnable parameters such as weight and bias |
| nn.Module | Base class of all network layers; manages the network's attributes |
| nn.functional | Concrete function implementations: convolution, pooling, activation functions, etc. |
| nn.init | Parameter initialization methods |
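As a quick illustration of the first two rows of the table, the `TinyLayer` class below is a hypothetical minimal sketch showing how assigning an `nn.Parameter` inside a `Module` registers it as a learnable parameter automatically:

```python
import torch
import torch.nn as nn

class TinyLayer(nn.Module):
    def __init__(self):
        super(TinyLayer, self).__init__()
        # nn.Parameter is a Tensor subclass; the Module records it in _parameters
        self.weight = nn.Parameter(torch.randn(3, 3))

layer = TinyLayer()
print(isinstance(layer.weight, torch.Tensor))  # True: Parameter is a Tensor subclass
print(list(layer._parameters.keys()))          # ['weight']
```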
nn.Module attributes

Call sequence:

Use step-into debugging, starting from building the network model (`net = LeNet(classes=2)`), to enter each called function; observe when `net`'s `_modules` field is constructed and assigned, and record every class and function entered along the way.
1. `net = LeNet(classes=2)`

2. The `LeNet` class's `__init__()` calls `super(LeNet, self).__init__()`:

```python
def __init__(self, classes):
    super(LeNet, self).__init__()
    self.conv1 = nn.Conv2d(3, 6, 5)
    self.conv2 = nn.Conv2d(6, 16, 5)
    self.fc1 = nn.Linear(16*5*5, 120)
    self.fc2 = nn.Linear(120, 84)
    self.fc3 = nn.Linear(84, classes)
```
3. The `Module` class's `__init__()` calls `self._construct()`, which builds eight ordered dictionaries:

```python
def _construct(self):
    """
    Initializes internal Module state, shared by both nn.Module and ScriptModule.
    """
    torch._C._log_api_usage_once("python.nn_module")
    self._backend = thnn_backend
    self._parameters = OrderedDict()
    self._buffers = OrderedDict()
    self._backward_hooks = OrderedDict()
    self._forward_hooks = OrderedDict()
    self._forward_pre_hooks = OrderedDict()
    self._state_dict_hooks = OrderedDict()
    self._load_state_dict_pre_hooks = OrderedDict()
    self._modules = OrderedDict()
```
4. Back in the `LeNet` class, the convolution layer `nn.Conv2d(3, 6, 5)` is constructed.

5. `Conv2d.__init__()`, which inherits from the `_ConvNd` class, calls the parent constructor:

```python
def __init__(self, in_channels, out_channels, kernel_size, stride=1,
             padding=0, dilation=1, groups=1,
             bias=True, padding_mode='zeros'):
    kernel_size = _pair(kernel_size)
    stride = _pair(stride)
    padding = _pair(padding)
    dilation = _pair(dilation)
    super(Conv2d, self).__init__(
        in_channels, out_channels, kernel_size, stride, padding, dilation,
        False, _pair(0), groups, bias, padding_mode)
```
6. `_ConvNd.__init__()`: inherits from `Module` and calls the parent constructor (same as steps 2 and 3), then initializes its own variables.

7. Back in `LeNet`, the assignment `self.conv1 = nn.Conv2d(3, 6, 5)` is intercepted by the parent class's (`nn.Module`) `__setattr__()` method:

```python
# name = 'conv1'
# value = Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
modules = self.__dict__.get('_modules')
if isinstance(value, Module):
    if modules is None:
        raise AttributeError(
            "cannot assign module before Module.__init__() call")
    remove_from(self.__dict__, self._parameters, self._buffers)
    modules[name] = value
```
The `Conv2d` is thus recorded in the `LeNet` instance's `_modules`. The remaining layers are built the same way, and the finished `net` looks as follows:
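As a quick check (a minimal sketch; the printed structure follows directly from the layer definitions above):

```python
net = LeNet(classes=2)
print(net._modules.keys())
# odict_keys(['conv1', 'conv2', 'fc1', 'fc2', 'fc3'])
print(net)
# LeNet(
#   (conv1): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
#   (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
#   (fc1): Linear(in_features=400, out_features=120, bias=True)
#   (fc2): Linear(in_features=120, out_features=84, bias=True)
#   (fc3): Linear(in_features=84, out_features=2, bias=True)
# )
```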
Summary

The registered layers are then chained together in `forward()`:

```python
def forward(self, x):
    out = F.relu(self.conv1(x))
    out = F.max_pool2d(out, 2)
    out = F.relu(self.conv2(out))
    out = F.max_pool2d(out, 2)
    out = out.view(out.size(0), -1)
    out = F.relu(self.fc1(out))
    out = F.relu(self.fc2(out))
    out = self.fc3(out)
    return out
```
- nn.Sequential: sequential. Layers are executed strictly in order; commonly used to build blocks.
- nn.ModuleList: iterable. Commonly used to build many repeated layers with a for loop.
- nn.ModuleDict: indexable. Commonly used for selectable network layers.
nn.Sequential is an nn.Module container that wraps a group of network layers so that they are executed in order.
```python
class LeNetSequential(nn.Module):
    def __init__(self, classes):
        super(LeNetSequential, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 6, 5),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(6, 16, 5),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),)
        # Alternatively, give each layer a name (by default layers are named
        # by their index):
        # self.features = nn.Sequential(OrderedDict({
        #     'conv1': nn.Conv2d(3, 6, 5),
        #     'relu1': nn.ReLU(inplace=True),
        #     'pool1': nn.MaxPool2d(kernel_size=2, stride=2),
        #     'conv2': nn.Conv2d(6, 16, 5),
        #     'relu2': nn.ReLU(inplace=True),
        #     'pool2': nn.MaxPool2d(kernel_size=2, stride=2),
        # }))
        self.classifier = nn.Sequential(
            nn.Linear(16*5*5, 120),
            nn.ReLU(),
            nn.Linear(120, 84),
            nn.ReLU(),
            nn.Linear(84, classes),)

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size()[0], -1)
        x = self.classifier(x)
        return x
```
Call sequence

1. `LeNetSequential.__init__()`

2. `Sequential.__init__()`:

```python
def __init__(self, *args):
    super(Sequential, self).__init__()
    if len(args) == 1 and isinstance(args[0], OrderedDict):
        for key, module in args[0].items():
            self.add_module(key, module)
    else:
        for idx, module in enumerate(args):
            self.add_module(str(idx), module)
```

3. `Module.__init__()`

4. `Sequential.add_module()`: `self._modules[name] = module`
In `LeNetSequential`, the assignment of the `Sequential` is likewise intercepted by `__setattr__()`; since `Sequential` is itself a `Module`, it is stored as part of `_modules`.
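A minimal sketch verifying this: the two `Sequential` blocks end up in the outer model's `_modules`, and the unnamed sublayers are keyed by their index strings (which also enables integer indexing):

```python
net = LeNetSequential(classes=2)
print(net._modules.keys())           # odict_keys(['features', 'classifier'])
print(net.features._modules.keys())  # odict_keys(['0', '1', '2', '3', '4', '5'])
print(net.features[0])               # Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
```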
nn.ModuleList is an nn.Module container that wraps a group of network layers and calls them by iteration.

Main methods: `append()` (add a layer at the end), `extend()` (concatenate two ModuleLists), `insert()` (insert a layer at a given position).
```python
class ModuleList(nn.Module):
    def __init__(self):
        super(ModuleList, self).__init__()
        # a single line builds 20 fully connected layers of 10 units each
        self.linears = nn.ModuleList([nn.Linear(10, 10) for i in range(20)])

    def forward(self, x):
        for i, linear in enumerate(self.linears):
            x = linear(x)
        return x
```
Internally, `ModuleList.__init__()` is:

```python
def __init__(self, modules=None):
    super(ModuleList, self).__init__()
    if modules is not None:
        self += modules
```
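A usage sketch (the input width must match the 10-unit layers; the batch size of 4 here is arbitrary):

```python
net = ModuleList()              # the example class above, not nn.ModuleList itself
fake_data = torch.randn(4, 10)  # batch of 4 ten-dimensional inputs
output = net(fake_data)
print(output.shape)             # torch.Size([4, 10]) after 20 stacked layers
```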
nn.ModuleDict is an nn.Module container that wraps a group of network layers and calls them by index.

Main methods: `clear()`, `items()`, `keys()`, `values()`, `pop()`.
```python
class ModuleDict(nn.Module):
    def __init__(self):
        super(ModuleDict, self).__init__()
        self.choices = nn.ModuleDict({
            'conv': nn.Conv2d(10, 10, 3),
            'pool': nn.MaxPool2d(3)
        })
        self.activations = nn.ModuleDict({
            'relu': nn.ReLU(),
            'prelu': nn.PReLU()
        })

    def forward(self, x, choice, act):
        x = self.choices[choice](x)
        x = self.activations[act](x)
        return x
```
Each `ModuleDict` thus acts like a multiplexer: the path has to be specified along with the input:

```python
net = ModuleDict()
fake_img = torch.randn((4, 10, 32, 32))
output = net(fake_img, 'conv', 'relu')
```
AlexNet: won the 2012 ImageNet classification task with an accuracy more than 10 percentage points above the runner-up, opening a new era for convolutional neural networks.

AlexNet's distinctive features:

- ReLU in place of saturating activation functions, easing vanishing gradients
- Local Response Normalization (LRN)
- Dropout in the fully connected layers to improve generalization
- Data augmentation (random crops, horizontal flips, etc.)
Construction: uses Sequential together with the built-in `forward()` that Sequential provides:
```python
class AlexNet(nn.Module):
    def __init__(self, num_classes=1000):
        super(AlexNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        # Named variant (requires an OrderedDict):
        # self.features = nn.Sequential(OrderedDict({
        #     'conv1': nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
        #     'relu1': nn.ReLU(inplace=True),
        #     'pool1': nn.MaxPool2d(kernel_size=3, stride=2),
        #     'conv2': nn.Conv2d(64, 192, kernel_size=5, padding=2),
        #     'relu2': nn.ReLU(inplace=True),
        #     'pool2': nn.MaxPool2d(kernel_size=3, stride=2),
        #     'conv3': nn.Conv2d(192, 384, kernel_size=3, padding=1),
        #     'relu3': nn.ReLU(inplace=True),
        #     'conv4': nn.Conv2d(384, 256, kernel_size=3, padding=1),
        #     'relu4': nn.ReLU(inplace=True),
        #     'conv5': nn.Conv2d(256, 256, kernel_size=3, padding=1),
        #     'relu5': nn.ReLU(inplace=True),
        #     'pool5': nn.MaxPool2d(kernel_size=3, stride=2),
        # }))
        self.avgpool = nn.AdaptiveAvgPool2d((6, 6))
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(256 * 6 * 6, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):
        x = self.features(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.classifier(x)
        return x
```
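A quick usage sketch (the 224×224 input size is an assumption matching standard ImageNet preprocessing):

```python
net = AlexNet(num_classes=1000)
fake_img = torch.randn(1, 3, 224, 224)
output = net(fake_img)
print(output.shape)  # torch.Size([1, 1000])
```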
Likewise, torchvision/models also provides constructions of classic networks such as googlenet and resnet.
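For instance, these can be instantiated directly (a sketch; `pretrained=True` downloads ImageNet weights in torchvision versions of this era):

```python
import torchvision.models as models

resnet18 = models.resnet18()               # randomly initialized
alexnet = models.alexnet(pretrained=True)  # with pretrained ImageNet weights
```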
Convolution operation: the convolution kernel slides over the input signal (image), multiplying and accumulating at each position.

Convolution kernel: also called a filter; it can be regarded as a certain pattern or feature.

The convolution process is like taking a template and searching the image for regions that resemble it: the more similar a region is to the kernel's pattern, the higher the activation. This is how features (detail patterns such as edges, stripes, and colors) are extracted.

Convolution dimensionality: in general, the number of dimensions along which the kernel slides is the dimensionality of the convolution.
```python
nn.Conv2d(in_channels,          # number of input channels
          out_channels,         # number of output channels, i.e. number of kernels
          kernel_size,          # kernel size
          stride=1,             # stride
          padding=0,            # number of padded pixels
          dilation=1,           # dilation for dilated (atrous) convolution
          groups=1,             # grouped-convolution setting
          bias=True,            # bias
          padding_mode='zeros')
```
Function: applies a 2-D convolution over a multi-channel 2-D signal.

Main parameters: see the comments above.

Output size calculation:

\(H_{out} = \left\lfloor \frac{H_{in} + 2 \times padding - dilation \times (kernel\_size - 1) - 1}{stride} \right\rfloor + 1\)

which, without padding and dilation, reduces to \(H_{out} = \frac{H_{in} - kernel\_size}{stride} + 1\).
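The formula is easy to check in code (a minimal sketch; `conv_out_size` is a hypothetical helper, not a PyTorch function):

```python
def conv_out_size(in_size, kernel_size, stride=1, padding=0, dilation=1):
    # standard Conv2d output-size formula with floor division
    return (in_size + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1

print(conv_out_size(512, 3))             # 510, matching the example below
print(conv_out_size(512, 3, padding=1))  # 512, "same"-size output at stride 1
```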
```python
set_seed(3)  # set the random seed

# =================== load img ===================
path_img = os.path.join("lena.png")
img = Image.open(path_img).convert('RGB')  # 0~255

# convert to tensor
img_transform = transforms.Compose([transforms.ToTensor()])
img_tensor = img_transform(img)
img_tensor.unsqueeze_(dim=0)  # C*H*W to B*C*H*W

# =================== create convolution layer ===================
conv_layer = nn.Conv2d(3, 1, 3)  # input: (i, o, size)  weights: (o, i, h, w)
nn.init.xavier_normal_(conv_layer.weight.data)

# calculation
img_conv = conv_layer(img_tensor)
```
Different kernels give different results, and the spatial size changes during convolution:

```
Size before convolution: torch.Size([1, 3, 512, 512])
Size after convolution:  torch.Size([1, 1, 510, 510])
```
The `Conv2d` layer's `Parameter` is a four-dimensional tensor that performs the 2-D convolution. Its size is [1, 3, 3, 3]: 1 output channel (the number of kernels), 3 input channels, and a 3×3 kernel.
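This can be confirmed directly on the layer created above:

```python
print(conv_layer.weight.shape)  # torch.Size([1, 3, 3, 3])
print(conv_layer.bias.shape)    # torch.Size([1]): one bias per output channel
```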
The convolution process is as follows:
```python
nn.ConvTranspose2d(in_channels,
                   out_channels,
                   kernel_size,
                   stride=1,
                   padding=0,
                   output_padding=0,
                   groups=1,
                   bias=True,
                   dilation=1,
                   padding_mode='zeros')
```
Transposed convolution, also called fractionally strided convolution, is used to upsample images.

Note: although the matrix corresponding to a transposed-convolution kernel is, in shape, the transpose of the matrix corresponding to a convolution kernel, their values are completely unrelated; transposed convolution is not the inverse of convolution.
```python
conv_layer = nn.ConvTranspose2d(3, 1, 3, stride=2)  # input: (i, o, size)
# Size before: torch.Size([1, 3, 512, 512])
# Size after:  torch.Size([1, 1, 1025, 1025])
```
The image becomes larger and many gaps appear, known as the transposed convolution's [checkerboard effect](https://www.jianshu.com/p/36ff39344de5).
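The 512 → 1025 size above follows from the transposed-convolution size formula, sketched here (`deconv_out_size` is a hypothetical helper):

```python
def deconv_out_size(in_size, kernel_size, stride=1, padding=0,
                    output_padding=0, dilation=1):
    # standard ConvTranspose2d output-size formula
    return ((in_size - 1) * stride - 2 * padding
            + dilation * (kernel_size - 1) + output_padding + 1)

print(deconv_out_size(512, 3, stride=2))  # 1025, matching the example above
```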
Max / average pooling

```python
nn.MaxPool2d(kernel_size,
             stride=None,
             padding=0,
             dilation=1,            # spacing between pooling-kernel elements
             return_indices=False,  # record the indices of the pooled pixels
             ceil_mode=False)       # round the output size up instead of down

nn.AvgPool2d(kernel_size,
             stride=None,
             padding=0,
             ceil_mode=False,
             count_include_pad=True,  # include padded values in the average
             divisor_override=None)   # divisor to use instead of the kernel size
```
Pooling operation: "collects" (many values become fewer) and "summarizes" a signal, much like a pool collecting water, hence the name pooling layer.
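A minimal usage sketch, halving the spatial size of a fake image:

```python
pool_layer = nn.MaxPool2d(kernel_size=2, stride=2)
img_pool = pool_layer(torch.randn(1, 3, 512, 512))
print(img_pool.shape)  # torch.Size([1, 3, 256, 256])
```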
Unpooling

```python
nn.MaxUnpool2d(kernel_size,
               stride=None,
               padding=0)

forward(self, input, indices, output_size=None)
```

Function: upsamples a 2-D signal (image) by max-unpooling; the `indices` recorded during max pooling determine where the values are placed.
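A minimal round-trip sketch: pool with `return_indices=True`, then restore the maxima (every other position becomes zero):

```python
img = torch.randn(1, 1, 4, 4)
pool = nn.MaxPool2d(2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(2, stride=2)

pooled, indices = pool(img)         # (1, 1, 2, 2) plus the index map
restored = unpool(pooled, indices)  # (1, 1, 4, 4): maxima restored, rest zeros
print(restored.shape)
```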
```python
nn.Linear(in_features,   # number of input nodes
          out_features,  # number of output nodes
          bias=True)
```
A linear layer, also called a fully connected layer: each of its neurons connects to all neurons of the previous layer, computing a linear combination (linear transformation) of the previous layer's outputs.
\(Input = [1, 2, 3]\), shape (1, 3)

\(W_0 = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 1 & 2 & 3 & 4 \\ 1 & 2 & 3 & 4 \end{bmatrix}\), shape (3, 4)

\(Hidden = Input \times W_0 = [6, 12, 18, 24]\), shape (1, 4)
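The same computation with nn.Linear (a sketch; note that nn.Linear stores its weight as (out_features, in_features), i.e. the transpose of \(W_0\)):

```python
linear_layer = nn.Linear(3, 4, bias=False)
linear_layer.weight.data = torch.tensor([[1., 1., 1.],
                                         [2., 2., 2.],
                                         [3., 3., 3.],
                                         [4., 4., 4.]])  # W_0 transposed
x = torch.tensor([[1., 2., 3.]])
print(linear_layer(x))  # tensor([[ 6., 12., 18., 24.]], ...)
```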
Activation functions apply a nonlinear transformation to features; this nonlinearity is what gives a multi-layer neural network the expressive power of depth.
nn.Sigmoid
Formula: \(y = \frac{1}{1+e^{-x}}\)
Gradient: \(y' = y \cdot (1 - y)\)
Properties:
nn.Tanh
Formula: \(y = \frac{\sinh x}{\cosh x} = \frac{e^x - e^{-x}}{e^x + e^{-x}} = \frac{2}{1+e^{-2x}} - 1\)
Gradient: \(y' = 1 - y^2\)
Properties:
nn.ReLU
Formula: \(y = \max(0, x)\)
Gradient: \(y' = \begin{cases} 1, & x > 0 \\ \text{undefined}, & x = 0 \\ 0, & x < 0 \end{cases}\)
Properties:
- nn.LeakyReLU: a small fixed slope on the negative half-axis (`negative_slope`)
- nn.PReLU: a learnable slope
- nn.RReLU: a slope drawn randomly from a range during training
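A minimal sketch of the three variants (the parameter values shown are their PyTorch defaults):

```python
leaky = nn.LeakyReLU(negative_slope=0.01)  # fixed negative-half slope
prelu = nn.PReLU(num_parameters=1)         # slope is a learnable parameter
rrelu = nn.RReLU(lower=1./8, upper=1./3)   # slope sampled uniformly per pass

x = torch.tensor([-1.0, 0.0, 1.0])
print(leaky(x))  # tensor([-0.0100, 0.0000, 1.0000])
```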