4. Optimizers

To make the training process converge better and faster, people have designed many more sophisticated optimizers (also called solvers):

  • For example: SGD, L-BFGS, Rprop, RMSprop, Adam, AdamW, AdaGrad, AdaDelta, and so on
  • Fortunately, the most commonly used ones are simply Adam and AdamW (see the sketch below)
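
Switching between these optimizers in PyTorch is a one-line change. Here is a minimal sketch (the model is a placeholder and the learning rates are illustrative, not tuned):

import torch.nn as nn
import torch.optim as optim

model = nn.Linear(784, 10)  # any nn.Module works here

# SGD: the classic baseline
opt_sgd = optim.SGD(model.parameters(), lr=0.01)
# Adam: per-parameter adaptive learning rates
opt_adam = optim.Adam(model.parameters(), lr=1e-3)
# AdamW: Adam with decoupled weight decay, a common default today
opt_adamw = optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.01)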

5. Some Common Loss Functions

  • The gap between two numeric values, Mean Squared Error: $\ell_{\mathrm{MSE}}=\frac{1}{N}\sum_{i=1}^N(y_i-\hat{y}_i)^2$ (the squared Euclidean distance below, divided by $N$)
  • The (Euclidean) distance between two vectors: $\ell(\mathbf{y},\mathbf{\hat{y}})=\|\mathbf{y}-\mathbf{\hat{y}}\|$
  • The angle between two vectors (cosine distance):
    <img src="https://maynor.oss-cn-shenzhen.aliyuncs.com/img/20231208153619.png" style="margin-left: 0px" width="400px">
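    Cosine similarity is $\cos\theta=\frac{\mathbf{y}\cdot\hat{\mathbf{y}}}{\|\mathbf{y}\|\,\|\hat{\mathbf{y}}\|}$; the cosine distance is commonly taken as $1-\cos\theta$.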
  • The difference between two probability distributions, cross-entropy: $\ell_{\mathrm{CE}}(p,q)=-\sum_i p_i\log q_i$, where $p$ and $q$ are assumed to be discrete probability distributions
  • These loss functions can also be combined (common in model distillation), e.g. $L=L_1+\lambda L_2$, where $\lambda$ is a predefined weight, also called a "hyperparameter" (see the sketch after this list)
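
As a minimal sketch, each of these losses can be computed directly in PyTorch (the tensors y, y_hat, p, q below are made-up examples):

import torch
import torch.nn.functional as F

y = torch.tensor([1.0, 2.0, 3.0])        # target values
y_hat = torch.tensor([1.1, 1.9, 3.2])    # predicted values

mse = F.mse_loss(y_hat, y)                           # mean squared error
euclid = torch.dist(y_hat, y)                        # Euclidean distance
cos_dist = 1 - F.cosine_similarity(y_hat, y, dim=0)  # cosine distance

p = torch.tensor([0.7, 0.2, 0.1])  # target distribution
q = torch.tensor([0.6, 0.3, 0.1])  # predicted distribution
ce = -(p * q.log()).sum()          # cross-entropy, per the definition above

lam = 0.5                 # a predefined weight, i.e. a "hyperparameter"
L = mse + lam * cos_dist  # a combined loss, as in distillation setups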

<div class="alert alert-warning">
Food for thought: can you figure out how these loss functions relate to classification, clustering, and regression problems?
</div>

6. Training a Minimal Neural Network with PyTorch

A sample from the dataset (MNIST):

<img src="https://maynor.oss-cn-shenzhen.aliyuncs.com/img/20231108201615.jpg" style="margin-left: 0px" width="800px">

The input is a 28×28 image, and the output is a label from 0 to 9.

from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.optim.lr_scheduler import StepLR

BATCH_SIZE = 64
TEST_BATCH_SIZE = 1000
EPOCHS = 5
LR = 0.01
GAMMA = 0.9
WEIGHT_DECAY = 1e-6
SEED = 42
LOG_INTERVAL = 100

# Define a fully connected (feed-forward) network


class FeedForwardNet(nn.Module):
    def __init__(self):
        super().__init__()
        # first layer: 784 inputs, 256 outputs -- image size is 28×28 = 784
        self.fc1 = nn.Linear(784, 256)
        # second layer: 256 inputs, 128 outputs
        self.fc2 = nn.Linear(256, 128)
        # third layer: 128 inputs, 64 outputs
        self.fc3 = nn.Linear(128, 64)
        # fourth layer: 64 inputs, 10 outputs -- one per class (0, 1, ..., 9)
        self.fc4 = nn.Linear(64, 10)

        # Dropout module with 0.2 drop probability
        self.dropout = nn.Dropout(p=0.2)

    def forward(self, x):
        # flatten the input into a 1D vector of length 784
        x = x.view(x.shape[0], -1)

        # each hidden layer: ReLU activation followed by dropout
        x = self.dropout(F.relu(self.fc1(x)))
        x = self.dropout(F.relu(self.fc2(x)))
        x = self.dropout(F.relu(self.fc3(x)))

        # output log-probabilities over the 10 classes (log_softmax)
        x = F.log_softmax(self.fc4(x), dim=1)

        return x

# Training loop


def train(model, loss_fn, device, train_loader, optimizer, epoch):
    # switch to training mode (enables dropout)
    model.train()
    for batch_idx, (data_input, true_label) in enumerate(train_loader):
        # read one batch from the data loader and
        # move it to the GPU (if available)
        data_input, true_label = data_input.to(device), true_label.to(device)
        # clear the accumulated gradients (once per batch)
        optimizer.zero_grad()
        # forward pass: the model predicts outputs from inputs
        output = model(data_input)
        # compute the loss
        loss = loss_fn(output, true_label)
        # backward pass: compute the gradients of the current batch's loss
        loss.backward()
        # the optimizer updates the model parameters based on the gradients
        optimizer.step()

        # periodically print the current batch's training loss
        if batch_idx % LOG_INTERVAL == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data_input), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))


# Evaluate accuracy and loss on the test set
def test(model, loss_fn, device, test_loader):
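    # switch to evaluation mode (disables dropout)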
    model.eval()
    test_loss = 0
    correct = 0
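    # disable gradient computation during evaluation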
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            # sum up batch loss
            test_loss += loss_fn(output, target, reduction='sum').item()
            # get the index of the max log-probability
            pred = output.argmax(dim=1, keepdim=True)
            correct += pred.eq(target.view_as(pred)).sum().item()

    test_loss /= len(test_loader.dataset)

    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))


def main():
    # check whether a GPU is available
    use_cuda = torch.cuda.is_available()

    # set the random seed (for reproducibility)
    torch.manual_seed(SEED)

    # training device (GPU or CPU)
    device = torch.device("cuda" if use_cuda else "cpu")

    # set the batch sizes; shuffle the training data every epoch
    train_kwargs = {'batch_size': BATCH_SIZE, 'shuffle': True}
    test_kwargs = {'batch_size': TEST_BATCH_SIZE}

    if use_cuda:
        cuda_kwargs = {'num_workers': 1,
                       'pin_memory': True}
        train_kwargs.update(cuda_kwargs)
        test_kwargs.update(cuda_kwargs)

    # data preprocessing (convert to tensor, normalize values)
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])

    # download the MNIST dataset automatically
    dataset_train = datasets.MNIST('data', train=True, download=True,
                                   transform=transform)
    dataset_test = datasets.MNIST('data', train=False,
                                  transform=transform)

    # define data loaders (handle loading, multithreading, shuffling, batching, etc.)
    train_loader = torch.utils.data.DataLoader(dataset_train, **train_kwargs)
    test_loader = torch.utils.data.DataLoader(dataset_test, **test_kwargs)

    # create the neural network model
    model = FeedForwardNet().to(device)

    # specify the optimizer
    optimizer = optim.SGD(model.parameters(), lr=LR, weight_decay=WEIGHT_DECAY)
    # scheduler = StepLR(optimizer, step_size=1, gamma=GAMMA)

    # define the loss function; the model's forward pass already applies
    # log_softmax, so use the negative log-likelihood loss here
    # (F.cross_entropy would apply log_softmax a second time)
    loss_fn = F.nll_loss

    # train for EPOCHS epochs
    for epoch in range(1, EPOCHS + 1):
        train(model, loss_fn, device, train_loader, optimizer, epoch)
        test(model, loss_fn, device, test_loader)
        # scheduler.step()


if __name__ == '__main__':
    main()
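
Once trained, the model can be used for inference on individual images. Here is a minimal, hypothetical sketch of what could be appended at the end of main() above:

    # predict the label of the first test image (illustrative only)
    model.eval()
    with torch.no_grad():
        image, label = dataset_test[0]  # one (image, label) pair
        log_probs = model(image.unsqueeze(0).to(device))  # add batch dimension
        print('predicted:', log_probs.argmax(dim=1).item(), 'actual:', label)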

<div class="alert alert-warning">
How to run this code:
<ol>

<li>Do not run it directly in a Jupyter notebook</li>
<li>Download the exp.py file on the left to your local machine</li>
<li>Install the dependencies: pip install torch torchvision</li>
<li>Run: python3 exp.py</li>

</ol>
</div>

