Pytorch_Part4_Loss_Functions

VisualPytorch beta has been released!

Overview: build models by visually dragging and dropping network layers, choose among different datasets, loss functions and optimizers, and generate runnable PyTorch code.

Extended features: 1. nested modules when building models; 2. sharing and cloning models in the model market; 3. model inference that lets you see directly what neural networks can do on semantic segmentation and object detection; 4. auxiliary features such as image augmentation, quick start, and parameter pop-ups.

Bug fixes: 1. major UI improvements for a better user experience; 2. fixed known issues such as logout not redirecting and missing images; 3. dual-server access to relieve load.

URL: http://sunie.top:9000

Release announcement: https://www.cnblogs.com/NAG2020/p/13030602.html

1. Gradient Vanishing and Exploding

For independent, zero-mean random variables X and Y:

\(E(XY)=E(X)E(Y)\)

\(D(X)=E(X^2)-E(X)^2\)

\(D(X+Y)=D(X)+D(Y)\)

\(\Longrightarrow D(XY) = D(X)D(Y)+D(X)E(Y)^2+D(Y)E(X)^2=D(X)D(Y)\) (the last equality uses \(E(X)=E(Y)=0\))

\(H_{11}=\sum_{i=0}^n X_i*W_{1i}\)

\(\Longrightarrow D(H_{11})=\sum_{i=0}^n D(X_i)*D(W_{1i})=n*1*1=n\)

\(std(H_{11})=\sqrt n\)

If we instead want \(D(H_1)=nD(X)D(W)=1\) for every layer:

\(\Longrightarrow D(W)=\frac{1}{n}\)


import os
import torch
import random
import numpy as np
import torch.nn as nn
from common_tools import set_seed

set_seed(3)  # set the random seed (set_seed is a helper from the course's common_tools module)

class MLP(nn.Module):
    def __init__(self, neural_num, layers):
        super(MLP, self).__init__()
        self.linears = nn.ModuleList([nn.Linear(neural_num, neural_num, bias=False) for i in range(layers)])
        self.neural_num = neural_num

    def forward(self, x):
        for (i, linear) in enumerate(self.linears):
            x = linear(x)
            # x = torch.relu(x)

            print("layer:{}, std:{}".format(i, x.std()))
            if torch.isnan(x.std()):
                print("output is nan in {} layers".format(i))
                break

        return x

    def initialize(self):
        for m in self.modules():
            if isinstance(m, nn.Linear):
                nn.init.normal_(m.weight.data, std=1)
                # the gradient explosion shown below disappears with:
                # nn.init.normal_(m.weight.data, std=np.sqrt(1/self.neural_num))


layer_nums = 100
neural_nums = 256
batch_size = 16

net = MLP(neural_nums, layer_nums)
net.initialize()

inputs = torch.randn((batch_size, neural_nums))  # normal: mean=0, std=1

output = net(inputs)
print(output)

With W initialized from a standard normal distribution (mean 0, std 1), gradient explosion appears as shown below. As expected, the std grows by roughly a factor of \(\sqrt{256}=16\) per layer; setting the std of W to np.sqrt(1/self.neural_num) keeps it stable.

layer:0, std:16.0981502532959
layer:1, std:253.29345703125
layer:2, std:3982.99951171875
...
layer:30, std:2.2885405881461517e+37
layer:31, std:nan
output is nan in 31 layers
tensor([[ 4.9907e+37,        -inf,         inf,  ...,         inf,
                -inf,         inf],
        [       -inf,         inf,  2.1733e+38,  ...,  9.1766e+37,
         -4.5777e+37,  3.3680e+37],
        [ 1.4215e+38,        -inf,         inf,  ...,        -inf,
                 inf,         inf],
        ...,
        [-9.2355e+37, -9.9121e+37, -3.7809e+37,  ...,  4.6074e+37,
          2.2305e+36,  1.2982e+38],
        [       -inf,         inf,        -inf,  ...,        -inf,
         -2.2394e+38,  2.0295e+36],
        [       -inf,         inf,  2.1518e+38,  ...,        -inf,
          1.6132e+38,        -inf]], grad_fn=<MmBackward>)

2. Initialization for Different Activation Functions

Different activation functions call for different initialization of the std of W, in order to keep the data scale in a suitable range, usually with variance around 1:

  1. Sigmoid, tanh ------ Xavier: \(D(W)=\frac{2}{n_i+n_{i+1}}\)

  2. ReLU ------ Kaiming:

    \(D(W)=\frac{2}{n_i}\)

    \(D(W)=\frac{2}{(1+a^2)\, n_i}\) for LeakyReLU with negative slope \(a\)

    a = np.sqrt(6 / (self.neural_num + self.neural_num))
    tanh_gain = nn.init.calculate_gain('tanh')
    a *= tanh_gain
    nn.init.uniform_(m.weight.data, -a, a)
    # equivalent to nn.init.xavier_uniform_(m.weight.data, gain=tanh_gain)
    
    nn.init.normal_(m.weight.data, std=np.sqrt(2 / self.neural_num))
    # equivalent to nn.init.kaiming_normal_(m.weight.data)
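As a quick sanity check of the Kaiming rule above, the sketch below reuses the MLP class from section 1 and assumes the x = torch.relu(x) line in forward() is uncommented; with Kaiming initialization the printed per-layer std stays roughly constant instead of exploding.

    def kaiming_initialize(model):
        for m in model.modules():
            if isinstance(m, nn.Linear):
                nn.init.kaiming_normal_(m.weight.data)
                # equivalent to: nn.init.normal_(m.weight.data, std=np.sqrt(2 / m.in_features))

    net = MLP(256, 100)                   # same sizes as before: 256 neurons, 100 layers
    kaiming_initialize(net)
    output = net(torch.randn(16, 256))    # the printed std no longer blows up layer by layer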

3. Ten Initialization Methods

  1. Xavier uniform distribution

  2. Xavier normal distribution

  3. Kaiming uniform distribution

  4. Kaiming normal distribution

  5. Uniform distribution

  6. Normal distribution

  7. Constant

  8. Orthogonal matrix initialization

  9. Identity matrix initialization

  10. Sparse matrix initialization
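A minimal sketch mapping the ten methods above to their nn.init calls, applied to a throwaway 3×5 weight tensor (the orthogonal/identity/sparse variants require 2-D tensors):

    w = torch.empty(3, 5)

    nn.init.xavier_uniform_(w)              # 1. Xavier uniform
    nn.init.xavier_normal_(w)               # 2. Xavier normal
    nn.init.kaiming_uniform_(w)             # 3. Kaiming uniform
    nn.init.kaiming_normal_(w)              # 4. Kaiming normal
    nn.init.uniform_(w, a=0., b=1.)         # 5. uniform
    nn.init.normal_(w, mean=0., std=1.)     # 6. normal
    nn.init.constant_(w, 0.1)               # 7. constant
    nn.init.orthogonal_(w)                  # 8. orthogonal matrix
    nn.init.eye_(w)                         # 9. identity matrix
    nn.init.sparse_(w, sparsity=0.9)        # 10. sparse matrix (90% of each column set to zero)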

    nn.init.calculate_gain(nonlinearity, param=None)

    x = torch.randn(10000)
    out = torch.tanh(x)
    gain = x.std() / out.std() # 1.5909514427185059

    tanh_gain = nn.init.calculate_gain('tanh') # 1.6666666666666667

    That is, each pass through a tanh layer shrinks the std of x by a factor of roughly 1.6.

Main function: computes the variance scaling factor of an activation function

Main parameters

  • nonlinearity: name of the activation function
  • param: parameter of the activation function, e.g. the negative_slope of Leaky ReLU
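For activations that take a parameter, param is passed as the second argument. A small check (the 0.1 slope is just an illustrative value):

    gain = nn.init.calculate_gain('leaky_relu', param=0.1)
    print(gain)   # about 1.407, i.e. sqrt(2 / (1 + 0.1**2))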

0. Loss Function, Entropy

Loss Function: measures the discrepancy between the model output and the ground-truth label for a single sample

\(Loss=f(\hat y , y)\)

Cost Function: the average loss over the dataset

\(Cost = \frac{1}{N}\sum_i^N f(\hat y_i, y_i)\)

Objective Function:

\(Obj=Cost+Regularization\)

1. nn.CrossEntropyLoss

How nn.CrossEntropyLoss is constructed:

Step into the statement loss_function = nn.CrossEntropyLoss() with the debugger and record every class and function visited on the way from nn.CrossEntropyLoss() down to class Module(object):

  1. CrossEntropyLoss.__init__: super(CrossEntropyLoss, self).__init__

  2. _WeightedLoss.__init__: super(_WeightedLoss, self).__init__

  3. _Loss.__init__: super(_Loss, self).__init__()

    def __init__(self, size_average=None, reduce=None, reduction='mean'):  # size_average and reduce are deprecated
        super(_Loss, self).__init__()
        if size_average is not None or reduce is not None:
            self.reduction = _Reduction.legacy_get_string(size_average, reduce)
        else:
            self.reduction = reduction

  4. Module.__init__: _construct

Function: combines nn.LogSoftmax() and nn.NLLLoss() to compute cross entropy

Main parameters:

  • weight: per-class weight applied to the loss

  • ignore_index: a class index to ignore

  • reduction: reduction mode, one of none/sum/mean

    • none: compute the loss element by element
    • sum: sum over all elements, returns a scalar
    • mean: weighted average, returns a scalar

    inputs = torch.tensor([[1, 2], [1, 3], [1, 3]], dtype=torch.float)
    target = torch.tensor([0, 1, 1], dtype=torch.long)

    # --------- CrossEntropy loss: reduction ------------

    # define the loss functions
    loss_f_none = nn.CrossEntropyLoss(weight=None, reduction='none')
    loss_f_sum = nn.CrossEntropyLoss(weight=None, reduction='sum')
    loss_f_mean = nn.CrossEntropyLoss(weight=None, reduction='mean')

    # forward

    loss_none = loss_f_none(inputs, target) # tensor([1.3133, 0.1269, 0.1269])
    loss_sum = loss_f_sum(inputs, target) # tensor(1.5671)
    loss_mean = loss_f_mean(inputs, target) # tensor(0.5224)

    '''
    If instead we set:
    weights = torch.tensor([1, 2], dtype=torch.float)
    the results become:
    tensor([1.3133, 0.2539, 0.2539]) tensor(1.8210) tensor(0.3642)
    The last two samples belong to class 1, so they carry weight 2; note that the
    weighted mean divides by the total weight 1 + 2 + 2 = 5, not by the sample count 3.
    '''
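The first element above can be reproduced by hand from the LogSoftmax + NLLLoss definition (a quick check on sample 0, logits [1, 2], target class 0):

    x = torch.tensor([1., 2.])
    ce = -x[0] + torch.log(torch.exp(x).sum())   # -x[class] + log(sum_j exp(x[j]))
    print(ce)   # tensor(1.3133)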

2. nn.NLLLoss

Function: implements the negative sign in the negative log-likelihood (it simply picks out \(-x_{n,y_n}\); it does not apply log or softmax itself)
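A minimal sketch of this relationship: applying nn.LogSoftmax followed by nn.NLLLoss reproduces the CrossEntropyLoss values from the previous section (same inputs and target as above):

    inputs = torch.tensor([[1, 2], [1, 3], [1, 3]], dtype=torch.float)
    target = torch.tensor([0, 1, 1], dtype=torch.long)

    log_prob = nn.LogSoftmax(dim=1)(inputs)
    loss_nll = nn.NLLLoss(reduction='none')(log_prob, target)
    print(loss_nll)   # tensor([1.3133, 0.1269, 0.1269]) -- matches nn.CrossEntropyLoss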

3. nn.BCELoss (loss computed per output neuron)

Function: binary cross entropy

Note: the inputs must lie in [0, 1] (apply sigmoid first)

inputs = torch.tensor([[1, 2], [2, 2], [3, 4], [4, 5]], dtype=torch.float)
target = torch.tensor([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=torch.float)
# note: the loss is computed separately for each of the two output neurons

target_bce = target

# BCELoss expects probabilities in [0, 1], so squash the raw scores with sigmoid first
inputs = torch.sigmoid(inputs)

weights = torch.tensor([1, 1], dtype=torch.float)

loss_f_none_w = nn.BCELoss(weight=weights, reduction='none')
loss_f_sum = nn.BCELoss(weight=weights, reduction='sum')
loss_f_mean = nn.BCELoss(weight=weights, reduction='mean')

# forward
loss_none_w = loss_f_none_w(inputs, target_bce)
loss_sum = loss_f_sum(inputs, target_bce)
loss_mean = loss_f_mean(inputs, target_bce)

'''
BCE Loss tensor([[0.3133, 2.1269],
        [0.1269, 2.1269],
        [3.0486, 0.0181],
        [4.0181, 0.0067]]) tensor(11.7856) tensor(1.4732)
'''
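Again the first element follows directly from the formula (a quick check on the first sample's first neuron, x = sigmoid(1), y = 1):

    x = torch.sigmoid(torch.tensor(1.))
    y = 1.
    bce = -(y * torch.log(x) + (1 - y) * torch.log(1 - x))
    print(bce)   # tensor(0.3133)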

4. nn.BCEWithLogitsLoss (loss computed per output neuron)

Function: combines sigmoid with binary cross entropy

Note: do not add a sigmoid at the end of the network (the loss applies it internally)

Additional parameter: pos_weight, the weight of positive samples
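A small sketch of both points above: raw logits go in directly (no sigmoid), and pos_weight scales the loss wherever the target is 1 (the value 3 is just illustrative):

    inputs = torch.tensor([[1, 2], [2, 2], [3, 4], [4, 5]], dtype=torch.float)   # raw logits, no sigmoid applied
    target = torch.tensor([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=torch.float)

    loss_f = nn.BCEWithLogitsLoss(reduction='none')
    print(loss_f(inputs, target))     # same values as the nn.BCELoss example above

    # pos_weight multiplies the loss wherever target == 1
    loss_f_pw = nn.BCEWithLogitsLoss(reduction='none', pos_weight=torch.tensor([3.]))
    print(loss_f_pw(inputs, target))  # positive-label entries become 3x larger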

5. Summary of 18 Loss Functions

For parameters and usage, see:

Below, only the purpose and expression of each loss function are listed.

Note: a subscript n in the formulas below means the function operates element-wise over all output neurons.

| Loss function | Purpose | Expression |
| --- | --- | --- |
| CrossEntropyLoss | combines LogSoftmax and NLLLoss to compute cross entropy | \(weight[class]\left(-x[class]+\log\sum_j e^{x[j]}\right)\) |
| NLLLoss | the negative sign of the negative log-likelihood | \(-\omega_{y_n} x_{n,y_n}\) |
| BCELoss | binary cross entropy | \(-\omega_n [y_n\log x_n+(1-y_n)\log(1-x_n)]\) |
| BCEWithLogitsLoss | sigmoid combined with binary cross entropy | \(-\omega_n [y_n\log\sigma(x_n)+(1-y_n)\log(1-\sigma(x_n))]\) |
| L1Loss | absolute difference | \(\lvert x_n-y_n\rvert\) |
| MSELoss | squared difference | \((x_n-y_n)^2\) |
| SmoothL1Loss | L1 smoothed around zero | \(\begin{cases}0.5(x_n-y_n)^2 & \lvert x_n-y_n\rvert<1\\ \lvert x_n-y_n\rvert-0.5 & \text{otherwise}\end{cases}\) |
| PoissonNLLLoss | negative log-likelihood of a Poisson distribution | \(\begin{cases}e^{x_n}-y_n x_n & \text{log\_input=True}\\ x_n-y_n\log(x_n+eps) & \text{otherwise}\end{cases}\) |
| KLDivLoss | KL divergence (relative entropy); x must already be log-probabilities | \(y_n(\log y_n-x_n)\) |
| MarginRankingLoss | similarity of two vectors, used for ranking tasks | \(\max(0, -y(x^{(1)}-x^{(2)})+margin)\) |
| MultiLabelMarginLoss | multi-label margin loss | \(\sum_j^{len(y)}\sum_{i\neq y_j}^{len(x)}\frac{\max(0, 1-(x_{y_j}-x_i))}{len(x)}\) |
| SoftMarginLoss | two-class logistic loss | \(\frac{1}{len(x)}\sum_i \log(1+e^{-y_i x_i})\) |
| MultiLabelSoftMarginLoss | multi-label version of the above | \(-\frac{1}{C}\sum_i\left(y_i\log\frac{1}{1+e^{-x_i}}+(1-y_i)\log\frac{e^{-x_i}}{1+e^{-x_i}}\right)\) |
| MultiMarginLoss | multi-class hinge loss | \(\frac{1}{len(x)}\sum_i \max(0, margin-x_y+x_i)^p\) |
| TripletMarginLoss | triplet loss, commonly used in face verification | \(\max(d(a_i, p_i)-d(a_i,n_i)+margin, 0)\) |
| HingeEmbeddingLoss | similarity of two inputs, often used for non-linear embeddings and semi-supervised learning | \(\begin{cases}x_n & y_n=1\\ \max(0, margin-x_n) & y_n=-1\end{cases}\) |
| CosineEmbeddingLoss | cosine similarity | \(\begin{cases}1-\cos(x_1,x_2) & y=1\\ \max(0,\cos(x_1,x_2)-margin) & y=-1\end{cases}\) |
| CTCLoss | CTC loss, for classifying sequential (time-series) data | see the article "CTC loss 理解" |
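To make a few of the rows above concrete, a quick sketch comparing the element-wise regression losses on one pair of tensors (illustrative values, every element differing by 2):

    x = torch.full((2, 2), 3.)   # predictions
    y = torch.full((2, 2), 1.)   # targets, so |x - y| = 2 everywhere

    print(nn.L1Loss(reduction='none')(x, y))        # 2.0 everywhere  -> |x - y|
    print(nn.MSELoss(reduction='none')(x, y))       # 4.0 everywhere  -> (x - y)^2
    print(nn.SmoothL1Loss(reduction='none')(x, y))  # 1.5 everywhere  -> |x - y| - 0.5 since |x - y| >= 1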

PyTorch optimizers manage and update the learnable parameters of a model so that the model output gets closer to the ground-truth labels.

Derivative: the rate of change of a function along a given coordinate axis

Directional derivative: the rate of change along a specified direction

Gradient: a vector whose direction is the direction in which the directional derivative is largest

1. Basic Attributes of Optimizer

defaults: the optimizer's hyperparameters

state: cached values for each parameter, e.g. the momentum buffers

param_groups: the managed parameter groups, a list of dicts

_step_count: number of updates performed, used by learning-rate schedulers

# ========================= step 4/5: optimizer ==========================
optimizer = optim.SGD(net.parameters(), lr=LR, momentum=0.9)                        # choose the optimizer
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)     # learning-rate decay policy

# ============================ step 5/5: training ========================
for epoch in range(MAX_EPOCH):
    ...
    for i, data in enumerate(train_loader):
        ...
        # backward
        optimizer.zero_grad()
        loss = criterion(outputs, labels)
        loss.backward()

        # update weights
        optimizer.step()
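A minimal sketch inspecting the four attributes listed above, on a throwaway 2×2 parameter with a StepLR scheduler attached (the scheduler is what initializes _step_count):

    w = torch.randn(2, 2, requires_grad=True)
    optimizer = optim.SGD([w], lr=0.1, momentum=0.9)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

    print(optimizer.defaults)       # hyperparameters: lr, momentum, dampening, weight_decay, nesterov, ...
    print(optimizer.param_groups)   # list of dicts; the 'params' entry holds the managed tensors
    print(optimizer.state)          # empty until step() runs, then holds e.g. each parameter's momentum_buffer
    print(optimizer._step_count)    # update counter used by learning-rate schedulers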

2. Basic Methods

zero_grad(): clears the gradients of all managed parameters; note that PyTorch does not clear tensor gradients automatically, it accumulates them

step(): performs one update step

add_param_group(): adds a parameter group

state_dict(): returns a dict with the optimizer's current state

load_state_dict(): loads a state dict

weight = torch.randn((2, 2), requires_grad=True)
weight.grad = torch.ones((2, 2))

optimizer = optim.SGD([weight], lr=0.1)

# --------------------------- step -----------------------------------
optimizer.step()        # compare the results with lr=1 vs lr=0.1
'''
weight before step:tensor([[0.6614, 0.2669],
        [0.0617, 0.6213]])
weight after step:tensor([[ 0.5614,  0.1669],
        [-0.0383,  0.5213]])
'''

# ------------------------- zero_grad --------------------------------
print("weight in optimizer:{}\nweight in weight:{}\n".format(id(optimizer.param_groups[0]['params'][0]), id(weight)))
print("weight.grad is {}\n".format(weight.grad))    

optimizer.zero_grad()
'''
weight in optimizer:1598931466208
weight in weight:1598931466208   (same id as the line above: params stores references to the same tensors, not copies)

weight.grad is tensor([[1., 1.],
        [1., 1.]])

after optimizer.zero_grad(), weight.grad is
tensor([[0., 0.],
        [0., 0.]])
'''

# --------------------- add_param_group --------------------------
print("optimizer.param_groups is\n{}".format(optimizer.param_groups))
w2 = torch.randn((3, 3), requires_grad=True)
optimizer.add_param_group({"params": w2, 'lr': 0.0001})
print("optimizer.param_groups is\n{}".format(optimizer.param_groups))

# ------------------- state_dict ----------------------
optimizer = optim.SGD([weight], lr=0.1, momentum=0.9)
opt_state_dict = optimizer.state_dict()

for i in range(10):
    optimizer.step()

BASE_DIR = os.path.dirname(os.path.abspath(__file__))  # save next to the script
torch.save(optimizer.state_dict(), os.path.join(BASE_DIR, "optimizer_state_dict.pkl"))

# -----------------------load state_dict ---------------------------
optimizer = optim.SGD([weight], lr=0.1, momentum=0.9)
state_dict = torch.load(os.path.join(BASE_DIR, "optimizer_state_dict.pkl"))

optimizer.load_state_dict(state_dict)

3. Learning Rate

Gradient descent:

\(w_{i+1} = w_i - g(w_i)\)

\(w_{i+1} = w_i - LR \times g(w_i)\)

The learning rate (LR) controls the size of the update step.

Training with different learning rates; note that gradient explosion appears once lr > 0.3:

import torch
import numpy as np
import matplotlib.pyplot as plt

def func(x):
    return torch.pow(2*x, 2)    # y = (2x)^2, the toy loss used in these examples

iteration = 100
num_lr = 10
lr_min, lr_max = 0.01, 0.2  # try 0.5 / 0.3 / 0.2

lr_list = np.linspace(lr_min, lr_max, num=num_lr).tolist()
loss_rec = [[] for l in range(len(lr_list))]
iter_rec = list()

for i, lr in enumerate(lr_list):
    x = torch.tensor([2.], requires_grad=True)
    for iter in range(iteration):
        y = func(x)
        y.backward()
        x.data.sub_(lr * x.grad)  # x.data -= lr * x.grad
        x.grad.zero_()

        loss_rec[i].append(y.item())

for i, loss_r in enumerate(loss_rec):
    plt.plot(range(len(loss_r)), loss_r, label="LR: {}".format(lr_list[i]))
plt.legend()
plt.xlabel('Iterations')
plt.ylabel('Loss value')
plt.show()

4. Momentum

Momentum: combine the current gradient with information from previous updates to perform the current update

Exponentially weighted average (assuming \(v_0=0\)): $$v_N=\beta v_{N-1}+(1-\beta)\theta_N = \sum_{i=0}^{N-1}(1-\beta)\beta^{i}\theta_{N-i}$$

Update rule used in PyTorch:

\(v_i=mv_{i-1}+g(w_i)\)

\(w_{i+1}=w_i-lr*v_i\)
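A quick numeric check of the two update formulas above, in plain Python (lr = 0.1, m = 0.9 and a constant gradient of 1 are illustrative values only):

    lr, m = 0.1, 0.9
    v, w = 0.0, 0.0
    for step in range(3):
        g = 1.0                # pretend the gradient is constant
        v = m * v + g          # v_i = m * v_{i-1} + g(w_i)
        w = w - lr * v         # w_{i+1} = w_i - lr * v_i
        print(step, v, w)
    # v grows 1.0 -> 1.9 -> 2.71: earlier gradients keep contributing, which is the momentum effect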

1. optim.SGD

Main parameters:

• params: the parameter groups to manage

• lr: initial learning rate

• momentum: momentum coefficient, the \(\beta\) above

• weight_decay: L2 regularization coefficient

• nesterov: whether to use NAG (Nesterov accelerated gradient)

def func(x):
    return torch.pow(2*x, 2)    # y = (2x)^2 = 4*x^2        dy/dx = 8x

iteration = 100
m = 0.9     # try 0.9 or 0.63

lr_list = [0.01, 0.03]

momentum_list = list()
loss_rec = [[] for l in range(len(lr_list))]
iter_rec = list()

for i, lr in enumerate(lr_list):
    x = torch.tensor([2.], requires_grad=True)

    momentum = 0. if lr == 0.03 else m   # compare lr=0.01 with momentum vs. lr=0.03 without momentum
    momentum_list.append(momentum)

    optimizer = optim.SGD([x], lr=lr, momentum=momentum)

    for iter in range(iteration):

        y = func(x)
        y.backward()

        optimizer.step()
        optimizer.zero_grad()

        loss_rec[i].append(y.item())

The curves above show a spring-like oscillation: near loss = 0 the momentum term is still very large, so the momentum coefficient should be reduced appropriately (e.g. the 0.63 suggested in the comment above).

The other nine optimizers are covered in PyTorch 学习笔记(七):PyTorch的十个优化器 (PyTorch Learning Notes (7): The Ten Optimizers in PyTorch).