【Competition 01】 Garbage Classification Baseline: Competition Problem

Author: Xinyu OU
Current version: Release v1.1
Development platform: Paddle 2.3.2
Runtime environment: Intel Core i7-7700K CPU 4.2GHz, NVIDIA GeForce GTX 1080 Ti
The datasets used in this handout are for teaching and exchange only; do not use them commercially.

Last updated: October 16, 2022


All assignments are submitted on AIStudio; submissions must include both source code and run results

【Task Description】

In recent years, artificial intelligence has achieved great success in speech recognition, natural language processing, image and video analysis, and many other fields. With growing government calls for environmental protection, garbage sorting has become a pressing problem. This competition focuses on classifying images of household garbage: using AI techniques, detect which garbage category appears in each image. Participants must build an algorithm or model based on Paddle that, for a given image, detects its garbage category. Given the image data, contestants train a model and predict the most likely category for every test image.

【Data Description】

All training and test images in this competition come from everyday scenes. There are forty classes in total; the mapping between classes and labels is given in the dict file in the training set. Each garbage class in an image is written as "primary category/secondary category", where the secondary category is the concrete object class annotated in the training data, e.g. disposable lunch box, fruit peel and pulp, old clothes. There are four primary categories: recyclables, kitchen waste, hazardous waste, and other waste.

The data comprise a labeled training set and an unlabeled test set. The training images are stored in folders 0-39 under the train directory, where the folder name is the class label; the test set contains 400 garbage images to be classified, stored in the test folder.

(figure: GarbageClasses — sample images from the dataset)

【Scoring】

  1. All bonus points are added to the final course grade
  2. Every participant receives 1 point
  3. A score higher than the Baseline earns 2 points
  4. Extra rewards: 10 points for 1st place, 7 points for 2nd, 5 points for 3rd
  5. If the competition result is applied to a discipline ("Internet+") contest, every member of an award-winning team receives 20 points (awards above the school level, school level excluded)

【Submitting Answers】

  1. Everything is submitted on AIStudio; submissions must include source code and run results
  2. Dataset download: https://aistudio.baidu.com/aistudio/datasetdetail/71361
  3. Submit the complete program code, including the run output
  4. Submit a result file. The result file is a .txt file named model_result.txt, and its fields must be written in the specified format.
    • The rows must correspond one-to-one, in order, with the rows of the original test data.
    • Check that the output contains exactly 400 rows; otherwise the score is invalid.
    • The output file is named model_result.txt, with one class label (a number) per line
      Example:

35
3
2
37
10
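A minimal plain-Python sketch for producing and sanity-checking a submission file in the format above. `write_result` and `predictions` are illustrative names, not part of the provided code; the only assumptions are those stated in the rules (one integer label per line, exactly 400 rows):

```python
def write_result(predictions, path="model_result.txt", expected_rows=400):
    """Write one integer label per line and verify the row count on re-read."""
    if len(predictions) != expected_rows:
        raise ValueError(f"expected {expected_rows} predictions, got {len(predictions)}")
    with open(path, "w") as f:
        f.write("\n".join(str(int(p)) for p in predictions) + "\n")
    # Re-read and confirm the file really has the required number of rows.
    with open(path) as f:
        rows = [line.strip() for line in f if line.strip()]
    assert len(rows) == expected_rows, "result file row count mismatch"
    return rows
```

Running this check before submitting guards against the "not 400 rows" disqualification mentioned above.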

【Reference Code】

一、Dataset Preparation

1.1 Data Cleaning

1.2 Generating the Image Path and Label Lists

!python "D:\WorkSpace\ExpDatasets\Garbage\generate_annotation.py"

Image lists generated: 14402 trainval samples (11504 train, 2898 val), 400 test samples, 14802 in total.
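The internals of generate_annotation.py are not shown here. A hedged sketch of what such a script plausibly does, assuming the folder layout described above (train/0-39, folder name = label) and an assumed `path<TAB>label` output format:

```python
import os

def generate_annotation(dataset_root, list_name="train_list.txt"):
    """Walk train/<label> folders and write 'image_path\tlabel' lines."""
    train_dir = os.path.join(dataset_root, "train")
    lines = []
    for label in sorted(os.listdir(train_dir), key=int):   # folders are named 0..39
        class_dir = os.path.join(train_dir, label)
        for fname in sorted(os.listdir(class_dir)):
            lines.append(f"{os.path.join(class_dir, fname)}\t{label}")
    with open(os.path.join(dataset_root, list_name), "w") as f:
        f.write("\n".join(lines) + "\n")
    return len(lines)
```

The real script additionally splits the list into trainval/train/val and writes dataset_info.json; those details are omitted here.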

二、Global Parameter Settings, Data Preprocessing, and Utility Function Definitions

2.1 Importing Dependencies and Global Parameter Configuration

################# Import dependencies ######################################
import os
import sys
import json
import codecs
import numpy as np
import time                        # time library, used to measure training time
import matplotlib.pyplot as plt    # third-party plotting library
from pprint import pprint          # pretty-printing tool
import paddle
from paddle.static import InputSpec
from paddle.io import DataLoader

sys.path.append(r'D:\WorkSpace\DeepLearning\WebsiteV2') # location of the custom utility modules
from utils.getSystemInfo import getSystemInfo                             # custom system-info function
from utils.getVisualization import draw_process                           # custom visualization function
from utils.getOptimizer import learning_rate_setting, optimizer_setting   # custom optimizer functions
from utils.getLogging import init_log_config                              # custom logging function
from utils.datasets.Garbage import GarbageDataset                         # Garbage dataset class
from utils.evaluate import evaluate                                       # evaluation function
from utils.train import train                                             # training function

################ Global parameter configuration ##############################
#### 1. Training hyperparameters
args = {
    'project_name': 'Comp01GarbageClassification',
    'dataset_name': 'Garbage',
    'architecture': 'resnet50',          # alexnet, mobilenet_v2, vgg16, resnet18, resnet50
    'training_data': 'train',                # 'train' for tuning | 'trainval' for the final model
    'input_size': [3, 227, 227],             # input sample dimensions
    'num_trainval': -1,
    'num_train': -1,
    'num_val': -1,
    'num_test': -1,
    'class_dim': -1,
    'label_dict': {},
    'starting_time': time.strftime("%Y%m%d%H%M", time.localtime()),          # global start time
    'total_epoch': 20,                # total number of epochs; increase once the code is debugged
    'batch_size': 32,                 # batch size for both the training and test loaders
    'log_interval': 10,               # print a log line every this many batches during training
    'eval_interval': 1,               # evaluate every this many epochs
    'checkpointed': False,            # whether to save checkpoint models
    'checkpoint_train': False,        # whether to resume from the last checkpoint; takes priority over the pretrained model
    'checkpoint_model': 'Butterfly_Mobilenetv2',   # model parameters to load when resuming training
    'checkpoint_time': '202102182058',             # timestamp of the checkpoint to resume from
    'pretrained': True,               # whether to use a pretrained model
    'pretrained_model': 'API',        # pretrained model source: API | Butterflies_AlexNet_final
    'dataset_root_path': 'D:\\Workspace\\ExpDatasets\\',
    'result_root_path': 'D:\\Workspace\\ExpResults\\',
    'deployment_root_path': 'D:\\Workspace\\ExpDeployments\\',
    'useGPU': True,                   # True | False
    'learning_strategy': {            # learning-rate and optimizer parameters
        'optimizer_strategy': 'Momentum',                   # optimizer: Momentum, RMS, SGD, Adam
        'learning_rate_strategy': 'CosineAnnealingDecay',   # LR schedule: fixed, PiecewiseDecay, CosineAnnealingDecay, ExponentialDecay, PolynomialDecay
        'learning_rate': 0.01,                              # fixed learning rate
        'momentum': 0.9,                                    # momentum
        'Piecewise_boundaries': [60, 80, 90],               # piecewise decay: epoch boundaries at which the LR changes
        'Piecewise_values': [0.01, 0.001, 0.0001, 0.00001], # piecewise decay: LR value for each stage
        'Exponential_gamma': 0.9,                           # exponential decay: decay rate
        'Polynomial_decay_steps': 10,                       # polynomial decay: decay period in epochs
        'verbose': False
    },
    'results_path': {
        'checkpoint_models_path': None,     # checkpoint models saved during training
        'final_figures_path': None,         # training-process curves
        'final_models_path': None,          # final model for deployment and inference
        'logs_path': None,                  # training logs
    },
    'deployments_path': {
        'deployment_root_path': None,
        'deployment_checkpoint_model': None,
        'deployment_final_model': None,
        'deployment_final_figures_path': None,
        'deployment_logs_path': None,
        'deployment_pretrained_model': None,
    }
}

#### 2. Build the shorthand model name

if not args['pretrained']:
    args['model_name'] = args['dataset_name'] + '_' + args['architecture'] + '_withoutPretrained'
else:
    args['model_name'] = args['dataset_name'] + '_' + args['architecture']


#### 3. Set the device working mode [GPU|CPU]
# Choose between CPU and GPU, controlled by args['useGPU']
def init_device(useGPU=args['useGPU']):
    paddle.set_device('gpu:0') if useGPU else paddle.set_device('cpu')
init_device()


#### 4. Define the paths: models, training results, logs, and figures
# 4.1 Dataset path
args['dataset_root_path'] = os.path.join(args['dataset_root_path'], args['dataset_name'])

# 4.2 Paths used during training
result_root_path = os.path.join(args['result_root_path'], args['project_name'])
args['results_path']['checkpoint_models_path'] = os.path.join(result_root_path, 'checkpoint_models')               # checkpoint models saved during training
args['results_path']['final_figures_path'] = os.path.join(result_root_path, 'final_figures')                       # training-process curves
args['results_path']['final_models_path'] = os.path.join(result_root_path, 'final_models')                         # final model for deployment and inference
args['results_path']['logs_path'] = os.path.join(result_root_path, 'logs')                                         # training logs


# 4.3 Paths (files) used for validation and testing
deployment_root_path = os.path.join(args['deployment_root_path'], args['project_name'])
args['deployments_path']['deployment_checkpoint_model'] = os.path.join(deployment_root_path, 'checkpoint_models', args['model_name'] + '_final')
args['deployments_path']['deployment_final_model'] = os.path.join(deployment_root_path, 'final_models', args['model_name'] + '_final')
args['deployments_path']['deployment_final_figures_path'] = os.path.join(deployment_root_path, 'final_figures')
args['deployments_path']['deployment_logs_path'] = os.path.join(deployment_root_path, 'logs')
args['deployments_path']['deployment_pretrained_model'] = os.path.join(deployment_root_path, 'pretrained_dir', args['pretrained_model'])

# 4.4 Initialize the result directories
def init_result_path():
    if not os.path.exists(args['results_path']['final_models_path']):
        os.makedirs(args['results_path']['final_models_path'])
    if not os.path.exists(args['results_path']['final_figures_path']):
        os.makedirs(args['results_path']['final_figures_path'])
    if not os.path.exists(args['results_path']['logs_path']):
        os.makedirs(args['results_path']['logs_path'])
    if not os.path.exists(args['results_path']['checkpoint_models_path']):
        os.makedirs(args['results_path']['checkpoint_models_path'])
init_result_path()


#### 5. Initialize the dataset parameters
def init_train_parameters():
    dataset_info = json.loads(open(os.path.join(args['dataset_root_path'] , 'dataset_info.json'), 'r', encoding='utf-8').read())    
    args['num_trainval'] = dataset_info['num_trainval']
    args['num_train'] = dataset_info['num_train']
    args['num_val'] = dataset_info['num_val']
    args['num_test'] = dataset_info['num_test']
    args['class_dim'] = dataset_info['class_dim']
    args['label_dict'] = dataset_info['label_dict']
init_train_parameters()

#### 6. Initialize the logger
logger = init_log_config(logs_path=args['results_path']['logs_path'], model_name=args['model_name'])

# Print the training parameters train_parameters
# if __name__ == '__main__':
#     pprint(args)
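The CosineAnnealingDecay strategy configured above anneals the learning rate from its initial value toward a minimum over half a cosine period. A framework-free sketch of the standard formula (the function name and an eta_min of 0 are assumptions; Paddle's scheduler wraps the same curve):

```python
import math

def cosine_annealing_lr(base_lr, epoch, total_epochs, eta_min=0.0):
    """Learning rate after `epoch` epochs under cosine annealing over `total_epochs`."""
    return eta_min + 0.5 * (base_lr - eta_min) * (1 + math.cos(math.pi * epoch / total_epochs))
```

With base_lr=0.01 and total_epochs=20 as in the configuration above, the rate starts at 0.01, reaches 0.005 at the halfway point, and decays to eta_min at the end.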

2.2 Dataset Class Definition and Data Preprocessing

In Paddle 2.0+, obtaining data involves two steps: creating a dataset class, then creating an iterating data loader over it.

  1. Build the dataset class with paddle.io. The dataset class fetches the data list from disk and preprocesses each sample. The paddle.vision.transforms module implements many common preprocessing operations out of the box, so there is no need to hand-write preprocessing functions, which greatly simplifies the code. For the training data (train or trainval), we can apply data augmentation with some probability in addition to the standard preprocessing; the validation and test sets only need the standard preprocessing.
  2. With paddle.io.DataLoader, the loaded data can be split into batches, optionally shuffled, and the last incomplete batch optionally dropped.
    • In general, training samples (train and trainval) should be shuffled so that each pass feeds the network different sample combinations, which helps prevent overfitting. For validation and test data, every evaluation traverses all samples once and averages the results, so shuffling does not affect the outcome.
    • Because loss and accuracy are first averaged within each batch before the overall average is reported, a final batch with fewer samples than batch_size biases the computed metrics slightly. When there are plenty of samples, it is therefore reasonable to drop the last incomplete batch during training. Note, however, that the validation and test sets must not drop samples, or part of the data would never be evaluated.
  1. Source code: Garbage.py

  2. Usage:

    sys.path.append(r'D:\WorkSpace\DeepLearning\WebsiteV2')   # location of the custom modules
    from datasets.Garbage import GarbageDataset               # import the dataset class
    
# 1. Instantiate the datasets
dataset_trainval = GarbageDataset(args['dataset_root_path'], mode='trainval')
dataset_train = GarbageDataset(args['dataset_root_path'], mode='train')
dataset_val = GarbageDataset(args['dataset_root_path'], mode='val')
dataset_test = GarbageDataset(args['dataset_root_path'], mode='test')

# 2. Create the data loaders
trainval_reader = DataLoader(dataset_trainval, batch_size=args['batch_size'], shuffle=True, drop_last=True)
train_reader = DataLoader(dataset_train, batch_size=args['batch_size'], shuffle=True, drop_last=True)
val_reader = DataLoader(dataset_val, batch_size=args['batch_size'], shuffle=False, drop_last=False)
test_reader = DataLoader(dataset_test, batch_size=args['batch_size'], shuffle=False, drop_last=False)

####################################################################################################################
# 3. Smoke-test a loader
if __name__ == "__main__":
    for batch_id, (image, label) in enumerate(val_reader()):
        if batch_id == 2:
            break
        print('Batch {}: image shape {}, label shape {}'.format(batch_id, image.shape, label.shape))

Batch 0: image shape [32, 3, 227, 227], label shape [32]
Batch 1: image shape [32, 3, 227, 227], label shape [32]
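The batch-averaging bias that motivates drop_last=True for training can be demonstrated with plain Python: averaging per-batch means weights a short final batch as heavily as a full one (the numbers below are illustrative, not from the dataset):

```python
def mean_of_batch_means(values, batch_size):
    """Average per-batch means, as a loop that logs batch averages would compute."""
    batches = [values[i:i + batch_size] for i in range(0, len(values), batch_size)]
    batch_means = [sum(b) / len(b) for b in batches]
    return sum(batch_means) / len(batch_means)

losses = [1.0] * 64 + [5.0] * 4          # two full batches plus a 4-sample final batch
true_mean = sum(losses) / len(losses)    # about 1.24
biased = mean_of_batch_means(losses, 32) # about 2.33: the short batch counts as much as a full one
```

Dropping the last batch removes this bias during training; for validation and test, every sample must still be seen, so the small bias is accepted instead.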

2.3 Defining the Visualization Function (no need to run in advance)

This defines the visualization used during training, covering training loss, training batch accuracy, validation loss, and validation accuracy. Depending on your needs, these quantities can be plotted against the iteration count after training. Note that data points can be recorded once per epoch, once per batch, or once every n batches or epochs. Besides plotting automatically after training, the visualization code also saves the figures and their data to the folder given by final_figures_path.

  1. Source code: getVisualization.py

  2. Usage:

    sys.path.append(r'D:\WorkSpace\DeepLearning\WebsiteV2')   # location of the custom modules
    from utils.getVisualization import draw_process           # import the visualization module

    draw_process('Training', 'loss', 'accuracy', iters=train_log[0], losses=train_log[1], accuracies=train_log[2], final_figures_path=final_figures_path, figurename='train', isShow=True)      # example call
    
# The code below is for demonstrating and testing the visualization function only; it is not required for running this project.
if __name__ == '__main__': 
    try:
        train_log = np.load(os.path.join(args['results_path']['final_figures_path'], 'train.npy'))
        print('Visualizing the training data:')
        draw_process('Training', 'loss', 'accuracy', iters=train_log[0], losses=train_log[1], accuracies=train_log[2], final_figures_path=final_figures_path, figurename='train', isShow=True)   
    except:
        print('The figure below uses test data.')
        draw_process('Training', 'loss', 'accuracy', figurename='default', isShow=True)          

The figure below uses test data.

(figure: output_12_1 — sample training-process curves)

2.4 Defining the Logging Function (no need to run in advance)

logging is a mature logging library: it can print run results to the screen (just like print()) and also save them to a specified folder for later study.

  1. Source code: getLogging.py

  2. Usage:

    sys.path.append(r'D:\WorkSpace\DeepLearning\WebsiteV2')   # location of the custom modules
    from utils.getLogging import init_log_config                                  # import the logging module
    logger = init_log_config(logs_path=logs_path, model_name=model_name)          # initialize the logger with the log path and model name

    logger.info('System information:')                                            # example call
    
# The code below is for demonstrating and testing the logger only; it is not required for running this project.
if __name__ == "__main__":
    logger.info('Logger test, model name: {}'.format(args['model_name']))
    SystemInfo = json.dumps(getSystemInfo(), indent=4, ensure_ascii=False, sort_keys=False, separators=(',', ':'))
    logger.info('System information:')
    logger.info(SystemInfo)
2022-10-19 10:10:53,177 - INFO: Logger test, model name: Garbage_resnet50
2022-10-19 10:10:54,260 - INFO: System information:
2022-10-19 10:10:54,260 - INFO: {
    "OS":"Windows-10-10.0.22000-SP0",
    "CPU":"Intel(R) Core(TM) i7-9700 CPU @ 3.00GHz",
    "Memory":"10.66G/15.88G (67.10%)",
    "GPU":"b'GeForce RTX 2080' 1.40G/8.00G (0.17%)",
    "CUDA/cuDNN":"10.2 / 7.6.5",
    "Paddle":"2.3.2"
}
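The implementation of init_log_config is in getLogging.py and not shown here. A plausible stdlib-only equivalent of such a dual-output logger (console + file; the log-file naming scheme is an assumption):

```python
import logging
import os

def init_log_config(logs_path, model_name, level=logging.INFO):
    """Logger that writes both to the console and to <logs_path>/<model_name>.log."""
    os.makedirs(logs_path, exist_ok=True)
    logger = logging.getLogger(model_name)
    logger.setLevel(level)
    if not logger.handlers:  # avoid attaching duplicate handlers on re-initialization
        fmt = logging.Formatter('%(asctime)s - %(levelname)s: %(message)s')
        for handler in (logging.StreamHandler(),
                        logging.FileHandler(os.path.join(logs_path, model_name + '.log'))):
            handler.setFormatter(fmt)
            logger.addHandler(handler)
    return logger
```

The format string above reproduces the `2022-10-19 10:10:53,177 - INFO: ...` style seen in the output.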

三、Model Training and Evaluation

3.1 Configuring the Network Model (no need to run in advance)

From AlexNet onward, models such as VGG, GoogLeNet, and ResNet are quite deep; designing them layer by layer makes the code very tedious. Instead, repeated structures can be grouped into reusable blocks: in AlexNet, for example, convolution + activation + pooling forms one complete building block.

When using a standard architecture, we can load it directly from Paddle's model zoo and enable the pretrained option to load ImageNet-pretrained parameters. In practice, pretraining (transfer learning) is a very effective way to improve performance and shorten training time. When loading from the model zoo there is no need to design the model by hand; simply call it as shown below. A custom architecture, of course, still requires writing your own model class.

In PaddlePaddle, paddle.vision.models ships many standard architectures, including LeNet, alexnet, mobilenet_v1, mobilenet_v2, resnet18 (34, 50, 101, 152), vgg16, googlenet, and more; see paddle.vision for the full list.

# The code below is for demonstrating and testing the model only; it is not required for running this project.
import paddle

network = getattr(paddle.vision.models, args['architecture'])(num_classes=args['class_dim'], pretrained=args['pretrained'])
print(paddle.summary(network, (1,3,227,227)))
    2022-10-19 10:10:54,401 - INFO: unique_endpoints {''}
    2022-10-19 10:10:54,402 - INFO: File C:\Users\Administrator/.cache/paddle/hapi/weights\resnet50.pdparams md5 checking...
    2022-10-19 10:10:54,677 - INFO: Found C:\Users\Administrator/.cache/paddle/hapi/weights\resnet50.pdparams
    c:\Users\Administrator\anaconda3\lib\site-packages\paddle\fluid\dygraph\layers.py:1492: UserWarning: Skip loading for fc.weight. fc.weight receives a shape [2048, 1000], but the expected shape is [2048, 40].
      warnings.warn(("Skip loading for {}. ".format(key) + str(err)))
    c:\Users\Administrator\anaconda3\lib\site-packages\paddle\fluid\dygraph\layers.py:1492: UserWarning: Skip loading for fc.bias. fc.bias receives a shape [1000], but the expected shape is [40].
      warnings.warn(("Skip loading for {}. ".format(key) + str(err)))
    

    -------------------------------------------------------------------------------
       Layer (type)         Input Shape          Output Shape         Param #    
    ===============================================================================
         Conv2D-1        [[1, 3, 227, 227]]   [1, 64, 114, 114]        9,408     
       BatchNorm2D-1    [[1, 64, 114, 114]]   [1, 64, 114, 114]         256      
          ReLU-1        [[1, 64, 114, 114]]   [1, 64, 114, 114]          0       
        MaxPool2D-1     [[1, 64, 114, 114]]    [1, 64, 57, 57]           0       
         Conv2D-3        [[1, 64, 57, 57]]     [1, 64, 57, 57]         4,096     
       BatchNorm2D-3     [[1, 64, 57, 57]]     [1, 64, 57, 57]          256      
          ReLU-2         [[1, 256, 57, 57]]    [1, 256, 57, 57]          0       
         Conv2D-4        [[1, 64, 57, 57]]     [1, 64, 57, 57]        36,864     
       BatchNorm2D-4     [[1, 64, 57, 57]]     [1, 64, 57, 57]          256      
         Conv2D-5        [[1, 64, 57, 57]]     [1, 256, 57, 57]       16,384     
       BatchNorm2D-5     [[1, 256, 57, 57]]    [1, 256, 57, 57]        1,024     
         Conv2D-2        [[1, 64, 57, 57]]     [1, 256, 57, 57]       16,384     
       BatchNorm2D-2     [[1, 256, 57, 57]]    [1, 256, 57, 57]        1,024     
     BottleneckBlock-1   [[1, 64, 57, 57]]     [1, 256, 57, 57]          0       
         Conv2D-6        [[1, 256, 57, 57]]    [1, 64, 57, 57]        16,384     
       BatchNorm2D-6     [[1, 64, 57, 57]]     [1, 64, 57, 57]          256      
          ReLU-3         [[1, 256, 57, 57]]    [1, 256, 57, 57]          0       
         Conv2D-7        [[1, 64, 57, 57]]     [1, 64, 57, 57]        36,864     
       BatchNorm2D-7     [[1, 64, 57, 57]]     [1, 64, 57, 57]          256      
         Conv2D-8        [[1, 64, 57, 57]]     [1, 256, 57, 57]       16,384     
       BatchNorm2D-8     [[1, 256, 57, 57]]    [1, 256, 57, 57]        1,024     
     BottleneckBlock-2   [[1, 256, 57, 57]]    [1, 256, 57, 57]          0       
         Conv2D-9        [[1, 256, 57, 57]]    [1, 64, 57, 57]        16,384     
       BatchNorm2D-9     [[1, 64, 57, 57]]     [1, 64, 57, 57]          256      
          ReLU-4         [[1, 256, 57, 57]]    [1, 256, 57, 57]          0       
         Conv2D-10       [[1, 64, 57, 57]]     [1, 64, 57, 57]        36,864     
      BatchNorm2D-10     [[1, 64, 57, 57]]     [1, 64, 57, 57]          256      
         Conv2D-11       [[1, 64, 57, 57]]     [1, 256, 57, 57]       16,384     
      BatchNorm2D-11     [[1, 256, 57, 57]]    [1, 256, 57, 57]        1,024     
     BottleneckBlock-3   [[1, 256, 57, 57]]    [1, 256, 57, 57]          0       
         Conv2D-13       [[1, 256, 57, 57]]    [1, 128, 57, 57]       32,768     
      BatchNorm2D-13     [[1, 128, 57, 57]]    [1, 128, 57, 57]         512      
          ReLU-5         [[1, 512, 29, 29]]    [1, 512, 29, 29]          0       
         Conv2D-14       [[1, 128, 57, 57]]    [1, 128, 29, 29]       147,456    
      BatchNorm2D-14     [[1, 128, 29, 29]]    [1, 128, 29, 29]         512      
         Conv2D-15       [[1, 128, 29, 29]]    [1, 512, 29, 29]       65,536     
      BatchNorm2D-15     [[1, 512, 29, 29]]    [1, 512, 29, 29]        2,048     
         Conv2D-12       [[1, 256, 57, 57]]    [1, 512, 29, 29]       131,072    
      BatchNorm2D-12     [[1, 512, 29, 29]]    [1, 512, 29, 29]        2,048     
     BottleneckBlock-4   [[1, 256, 57, 57]]    [1, 512, 29, 29]          0       
         Conv2D-16       [[1, 512, 29, 29]]    [1, 128, 29, 29]       65,536     
      BatchNorm2D-16     [[1, 128, 29, 29]]    [1, 128, 29, 29]         512      
          ReLU-6         [[1, 512, 29, 29]]    [1, 512, 29, 29]          0       
         Conv2D-17       [[1, 128, 29, 29]]    [1, 128, 29, 29]       147,456    
      BatchNorm2D-17     [[1, 128, 29, 29]]    [1, 128, 29, 29]         512      
         Conv2D-18       [[1, 128, 29, 29]]    [1, 512, 29, 29]       65,536     
      BatchNorm2D-18     [[1, 512, 29, 29]]    [1, 512, 29, 29]        2,048     
     BottleneckBlock-5   [[1, 512, 29, 29]]    [1, 512, 29, 29]          0       
         Conv2D-19       [[1, 512, 29, 29]]    [1, 128, 29, 29]       65,536     
      BatchNorm2D-19     [[1, 128, 29, 29]]    [1, 128, 29, 29]         512      
          ReLU-7         [[1, 512, 29, 29]]    [1, 512, 29, 29]          0       
         Conv2D-20       [[1, 128, 29, 29]]    [1, 128, 29, 29]       147,456    
      BatchNorm2D-20     [[1, 128, 29, 29]]    [1, 128, 29, 29]         512      
         Conv2D-21       [[1, 128, 29, 29]]    [1, 512, 29, 29]       65,536     
      BatchNorm2D-21     [[1, 512, 29, 29]]    [1, 512, 29, 29]        2,048     
     BottleneckBlock-6   [[1, 512, 29, 29]]    [1, 512, 29, 29]          0       
         Conv2D-22       [[1, 512, 29, 29]]    [1, 128, 29, 29]       65,536     
      BatchNorm2D-22     [[1, 128, 29, 29]]    [1, 128, 29, 29]         512      
          ReLU-8         [[1, 512, 29, 29]]    [1, 512, 29, 29]          0       
         Conv2D-23       [[1, 128, 29, 29]]    [1, 128, 29, 29]       147,456    
      BatchNorm2D-23     [[1, 128, 29, 29]]    [1, 128, 29, 29]         512      
         Conv2D-24       [[1, 128, 29, 29]]    [1, 512, 29, 29]       65,536     
      BatchNorm2D-24     [[1, 512, 29, 29]]    [1, 512, 29, 29]        2,048     
     BottleneckBlock-7   [[1, 512, 29, 29]]    [1, 512, 29, 29]          0       
         Conv2D-26       [[1, 512, 29, 29]]    [1, 256, 29, 29]       131,072    
      BatchNorm2D-26     [[1, 256, 29, 29]]    [1, 256, 29, 29]        1,024     
          ReLU-9        [[1, 1024, 15, 15]]   [1, 1024, 15, 15]          0       
         Conv2D-27       [[1, 256, 29, 29]]    [1, 256, 15, 15]       589,824    
      BatchNorm2D-27     [[1, 256, 15, 15]]    [1, 256, 15, 15]        1,024     
         Conv2D-28       [[1, 256, 15, 15]]   [1, 1024, 15, 15]       262,144    
      BatchNorm2D-28    [[1, 1024, 15, 15]]   [1, 1024, 15, 15]        4,096     
         Conv2D-25       [[1, 512, 29, 29]]   [1, 1024, 15, 15]       524,288    
      BatchNorm2D-25    [[1, 1024, 15, 15]]   [1, 1024, 15, 15]        4,096     
     BottleneckBlock-8   [[1, 512, 29, 29]]   [1, 1024, 15, 15]          0       
         Conv2D-29      [[1, 1024, 15, 15]]    [1, 256, 15, 15]       262,144    
      BatchNorm2D-29     [[1, 256, 15, 15]]    [1, 256, 15, 15]        1,024     
          ReLU-10       [[1, 1024, 15, 15]]   [1, 1024, 15, 15]          0       
         Conv2D-30       [[1, 256, 15, 15]]    [1, 256, 15, 15]       589,824    
      BatchNorm2D-30     [[1, 256, 15, 15]]    [1, 256, 15, 15]        1,024     
         Conv2D-31       [[1, 256, 15, 15]]   [1, 1024, 15, 15]       262,144    
      BatchNorm2D-31    [[1, 1024, 15, 15]]   [1, 1024, 15, 15]        4,096     
     BottleneckBlock-9  [[1, 1024, 15, 15]]   [1, 1024, 15, 15]          0       
         Conv2D-32      [[1, 1024, 15, 15]]    [1, 256, 15, 15]       262,144    
      BatchNorm2D-32     [[1, 256, 15, 15]]    [1, 256, 15, 15]        1,024     
          ReLU-11       [[1, 1024, 15, 15]]   [1, 1024, 15, 15]          0       
         Conv2D-33       [[1, 256, 15, 15]]    [1, 256, 15, 15]       589,824    
      BatchNorm2D-33     [[1, 256, 15, 15]]    [1, 256, 15, 15]        1,024     
         Conv2D-34       [[1, 256, 15, 15]]   [1, 1024, 15, 15]       262,144    
      BatchNorm2D-34    [[1, 1024, 15, 15]]   [1, 1024, 15, 15]        4,096     
    BottleneckBlock-10  [[1, 1024, 15, 15]]   [1, 1024, 15, 15]          0       
         Conv2D-35      [[1, 1024, 15, 15]]    [1, 256, 15, 15]       262,144    
      BatchNorm2D-35     [[1, 256, 15, 15]]    [1, 256, 15, 15]        1,024     
          ReLU-12       [[1, 1024, 15, 15]]   [1, 1024, 15, 15]          0       
         Conv2D-36       [[1, 256, 15, 15]]    [1, 256, 15, 15]       589,824    
      BatchNorm2D-36     [[1, 256, 15, 15]]    [1, 256, 15, 15]        1,024     
         Conv2D-37       [[1, 256, 15, 15]]   [1, 1024, 15, 15]       262,144    
      BatchNorm2D-37    [[1, 1024, 15, 15]]   [1, 1024, 15, 15]        4,096     
    BottleneckBlock-11  [[1, 1024, 15, 15]]   [1, 1024, 15, 15]          0       
         Conv2D-38      [[1, 1024, 15, 15]]    [1, 256, 15, 15]       262,144    
      BatchNorm2D-38     [[1, 256, 15, 15]]    [1, 256, 15, 15]        1,024     
          ReLU-13       [[1, 1024, 15, 15]]   [1, 1024, 15, 15]          0       
         Conv2D-39       [[1, 256, 15, 15]]    [1, 256, 15, 15]       589,824    
      BatchNorm2D-39     [[1, 256, 15, 15]]    [1, 256, 15, 15]        1,024     
         Conv2D-40       [[1, 256, 15, 15]]   [1, 1024, 15, 15]       262,144    
      BatchNorm2D-40    [[1, 1024, 15, 15]]   [1, 1024, 15, 15]        4,096     
    BottleneckBlock-12  [[1, 1024, 15, 15]]   [1, 1024, 15, 15]          0       
         Conv2D-41      [[1, 1024, 15, 15]]    [1, 256, 15, 15]       262,144    
      BatchNorm2D-41     [[1, 256, 15, 15]]    [1, 256, 15, 15]        1,024     
          ReLU-14       [[1, 1024, 15, 15]]   [1, 1024, 15, 15]          0       
         Conv2D-42       [[1, 256, 15, 15]]    [1, 256, 15, 15]       589,824    
      BatchNorm2D-42     [[1, 256, 15, 15]]    [1, 256, 15, 15]        1,024     
         Conv2D-43       [[1, 256, 15, 15]]   [1, 1024, 15, 15]       262,144    
      BatchNorm2D-43    [[1, 1024, 15, 15]]   [1, 1024, 15, 15]        4,096     
    BottleneckBlock-13  [[1, 1024, 15, 15]]   [1, 1024, 15, 15]          0       
         Conv2D-45      [[1, 1024, 15, 15]]    [1, 512, 15, 15]       524,288    
      BatchNorm2D-45     [[1, 512, 15, 15]]    [1, 512, 15, 15]        2,048     
          ReLU-15        [[1, 2048, 8, 8]]     [1, 2048, 8, 8]           0       
         Conv2D-46       [[1, 512, 15, 15]]     [1, 512, 8, 8]       2,359,296   
      BatchNorm2D-46      [[1, 512, 8, 8]]      [1, 512, 8, 8]         2,048     
         Conv2D-47        [[1, 512, 8, 8]]     [1, 2048, 8, 8]       1,048,576   
      BatchNorm2D-47     [[1, 2048, 8, 8]]     [1, 2048, 8, 8]         8,192     
         Conv2D-44      [[1, 1024, 15, 15]]    [1, 2048, 8, 8]       2,097,152   
      BatchNorm2D-44     [[1, 2048, 8, 8]]     [1, 2048, 8, 8]         8,192     
    BottleneckBlock-14  [[1, 1024, 15, 15]]    [1, 2048, 8, 8]           0       
         Conv2D-48       [[1, 2048, 8, 8]]      [1, 512, 8, 8]       1,048,576   
      BatchNorm2D-48      [[1, 512, 8, 8]]      [1, 512, 8, 8]         2,048     
          ReLU-16        [[1, 2048, 8, 8]]     [1, 2048, 8, 8]           0       
         Conv2D-49        [[1, 512, 8, 8]]      [1, 512, 8, 8]       2,359,296   
      BatchNorm2D-49      [[1, 512, 8, 8]]      [1, 512, 8, 8]         2,048     
         Conv2D-50        [[1, 512, 8, 8]]     [1, 2048, 8, 8]       1,048,576   
      BatchNorm2D-50     [[1, 2048, 8, 8]]     [1, 2048, 8, 8]         8,192     
    BottleneckBlock-15   [[1, 2048, 8, 8]]     [1, 2048, 8, 8]           0       
         Conv2D-51       [[1, 2048, 8, 8]]      [1, 512, 8, 8]       1,048,576   
      BatchNorm2D-51      [[1, 512, 8, 8]]      [1, 512, 8, 8]         2,048     
          ReLU-17        [[1, 2048, 8, 8]]     [1, 2048, 8, 8]           0       
         Conv2D-52        [[1, 512, 8, 8]]      [1, 512, 8, 8]       2,359,296   
      BatchNorm2D-52      [[1, 512, 8, 8]]      [1, 512, 8, 8]         2,048     
         Conv2D-53        [[1, 512, 8, 8]]     [1, 2048, 8, 8]       1,048,576   
      BatchNorm2D-53     [[1, 2048, 8, 8]]     [1, 2048, 8, 8]         8,192     
    BottleneckBlock-16   [[1, 2048, 8, 8]]     [1, 2048, 8, 8]           0       
    AdaptiveAvgPool2D-1  [[1, 2048, 8, 8]]     [1, 2048, 1, 1]           0       
         Linear-1           [[1, 2048]]            [1, 40]            81,960     
    ===============================================================================
    Total params: 23,643,112
    Trainable params: 23,536,872
    Non-trainable params: 106,240
    -------------------------------------------------------------------------------
    Input size (MB): 0.59
    Forward/backward pass size (MB): 282.41
    Params size (MB): 90.19
    Estimated Total Size (MB): 373.19
    -------------------------------------------------------------------------------
    
    {'total_params': 23643112, 'trainable_params': 23536872}
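The "Skip loading for fc.weight" warnings in the output above arise because the ImageNet checkpoint has a 1000-class head while this model expects 40 classes: parameters whose shapes match are loaded, the rest are skipped and retrained. That logic can be sketched framework-free, with plain dicts of shapes standing in for parameter tensors (function and variable names are illustrative):

```python
def split_loadable(model_shapes, pretrained_shapes):
    """Keys whose pretrained shape matches the model are loaded; the rest are skipped."""
    loaded = [k for k, s in model_shapes.items()
              if pretrained_shapes.get(k) == s]
    skipped = [k for k in model_shapes if k not in loaded]
    return loaded, skipped

# ImageNet head has 1000 classes; our model expects 40, so the fc.* entries are skipped.
model = {"conv1.weight": (64, 3, 7, 7), "fc.weight": (2048, 40), "fc.bias": (40,)}
imagenet = {"conv1.weight": (64, 3, 7, 7), "fc.weight": (2048, 1000), "fc.bias": (1000,)}
```

This is why the warnings are expected and harmless when fine-tuning with a different num_classes.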

3.3 Defining the Evaluation Function (no need to run in advance)

The evaluation function serves two purposes: testing the validation set during training (online evaluation) and testing the test set after training (offline evaluation).

The evaluation function proceeds as follows:

  1. Initialize the output variables: top-1 accuracy, top-5 accuracy, and loss
  2. Loop over the data batch by batch:
    1). Define the input layer (image, label): image dimensions [batch, channel, Width, Height] (-1,imgChannel,imgSize,imgSize), label dimensions [batch, 1] (-1,1)
    2). Define the output layer: in Paddle 2.0+ we use model.eval_batch([image],[label]) for evaluation. It returns the accuracy and loss directly, but model.prepare() must be called before running it. Note that the overall accuracy/loss on the test data is obtained by averaging the per-batch values.

When defining the eval() function, two required parameters must be supplied: model, the model under test, and data_reader, the data iterator, either val_reader() or test_reader() for the validation and test sets respectively. The evaluation procedure is identical for both; only the data differ. The optional parameter verbose controls whether progress is printed during evaluation.
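The per-batch averaging described above, together with top-k accuracy, can be sketched without any framework. This is an illustration with plain lists of scores, not the actual evaluate.py:

```python
def topk_accuracy(logits, labels, k=1):
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    hits = 0
    for scores, label in zip(logits, labels):
        topk = sorted(range(len(scores)), key=lambda c: scores[c], reverse=True)[:k]
        hits += label in topk
    return hits / len(labels)

def evaluate_batches(batches):
    """Average per-batch top-1/top-5 accuracy, as the online evaluation does."""
    accs1 = [topk_accuracy(lg, lb, k=1) for lg, lb in batches]
    accs5 = [topk_accuracy(lg, lb, k=5) for lg, lb in batches]
    return sum(accs1) / len(accs1), sum(accs5) / len(accs5)
```

Top-5 accuracy is always at least top-1 accuracy, which is why the output below reports 0.85 top-5 against 0.56 top-1.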

  1. Source code: evaluate.py

  2. Usage:

    sys.path.append(r'D:\WorkSpace\DeepLearning\WebsiteV2')   # location of the custom modules
    from utils.evaluate import evaluate                       # import the evaluation module

    avg_loss, avg_acc_top1, avg_acc_top5 = evaluate(model, val_reader(), verbose=0) # example call
    
# The code below is for demonstrating and testing the evaluation function only; it is not required for running this project.
# To run it, first complete one training run and copy the resulting model into the ExpDeployments directory, or test with the run results of Project011 below.
if __name__ == '__main__':
    # deployment_checkpoint_model = r'D:\Workspace\ExpDeployments\Project000TestProject\checkpoint_models\Butterfly_Alexnet_final'
    # from utils.models.AlexNet import AlexNet                                  # custom AlexNet model

    # try:
        # Set the input sample dimensions
        input_spec = InputSpec(shape=[None] + args['input_size'], dtype='float32', name='image')
        label_spec = InputSpec(shape=[None, 1], dtype='int64', name='label')

        # Load the model
        network = getattr(paddle.vision.models, args['architecture'])(num_classes=args['class_dim'])
        # network = AlexNet(num_classes=7)
        # network = paddle.vision.models.mobilenet_v2(num_classes=args['class_dim'])
        model = paddle.Model(network, input_spec, label_spec)                   # instantiate the model
        model.load(args['deployments_path']['deployment_checkpoint_model'])     # load the tuned model parameters
        model.prepare(loss=paddle.nn.CrossEntropyLoss(),                        # set the loss
                        metrics=paddle.metric.Accuracy(topk=(1,5)))             # set the evaluation metrics

        # Run the evaluation and print the validation loss and accuracy
        print('Starting evaluation...')
        avg_loss, avg_acc_top1, avg_acc_top5 = evaluate(model, val_reader, verbose=1)
        print('[Validation] loss: {:.5f}, top-1 accuracy: {:.5f}, top-5 accuracy: {:.5f}'.format(avg_loss, avg_acc_top1, avg_acc_top5), end='')
    # except:
    #     print('Data not found, skipping the test')
Starting evaluation...
(100.00%)[Validation] loss: 0.05025, top-1 accuracy: 0.56211, top-5 accuracy: 0.85059

3.4 Defining the Training Function (no need to run in advance)

In Paddle 2.0+, dynamic-graph mode is the default, and the dynamic-graph guard fluid.dygraph.guard(PLACE) has been removed.

The training procedure is as follows:

  1. Define the input layer (image, label): image dimensions [batch, channel, Width, Height] (-1,imgChannel,imgSize,imgSize), label dimensions [batch, 1] (-1,1)
  2. Instantiate the network model: model = Alexnet()
  3. Define the learning-rate schedule and the optimization algorithm
  4. Define the output layer, i.e. the model preparation call model.prepare()
  5. Train with a two-level "epoch-batch" loop
  6. Record the training results and save models periodically: a checkpoint_model for tuning and resuming training, and a final_model for deployment and prediction
  1. Source code: train.py

  2. Usage:

    sys.path.append(r'D:\WorkSpace\DeepLearning\WebsiteV2')   # location of the custom modules
    from utils.train import train                             # import the training module

    # see the source code for details of each parameter
    visualization_log =  train(model, args=args, train_reader=train_reader, val_reader=val_reader, logger=logger) # example call
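The epoch-batch double loop with log_interval/eval_interval bookkeeping can be sketched framework-free. `train_batch` and `eval_fn` are hypothetical stand-ins for the real Paddle calls inside train.py:

```python
def train_loop(train_batches, total_epoch, log_interval, eval_interval,
               train_batch, eval_fn=None):
    """Two-level epoch/batch loop; returns (iteration, loss) pairs for plotting."""
    log, iteration = [], 0
    for epoch in range(1, total_epoch + 1):
        for batch in train_batches:
            loss = train_batch(batch)        # one forward/backward/update step
            iteration += 1
            if iteration % log_interval == 0:
                log.append((iteration, loss))
        if eval_fn is not None and epoch % eval_interval == 0:
            eval_fn(epoch)                   # online evaluation on the validation set
    return log
```

Passing eval_fn=None corresponds to training on trainval, where the validation step is skipped.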
    

3.5 Training Main Routine

The main routine calls the training function to train the model. Training covers two main activities: updating the model parameters on the training set and online evaluation on the validation set. Once the hyperparameters are fixed, the model is retrained with them on the trainval set; in that case no validation output is produced.

#### Training main routine ##########################################################
if __name__ == '__main__':
    from paddle.static import InputSpec
    ##### 1. Print status messages and prepare to start training ########
    # 1.1 Print system hardware information
    logger.info('System information:')
    SystemInfo = json.dumps(getSystemInfo(), indent=4, ensure_ascii=False, sort_keys=False, separators=(',', ':'))
    logger.info(SystemInfo)
    # 1.2 Print the training hyperparameters
    data = json.dumps(args, indent=4, ensure_ascii=False, sort_keys=False, separators=(',', ':'))   # format the parameter dictionary
    logger.info(data)
    
    # 1.3 Load an official standard model (downloaded automatically if absent); pretrained=True|False controls whether ImageNet-pretrained parameters are used
    if args['pretrained'] == True:
        logger.info('Loaded ImageNet-pretrained {} model; starting fine-tuning.'.format(args['architecture']))
    elif args['pretrained'] == False:
        logger.info('Loaded the {} model; training from scratch.'.format(args['architecture']))

    # 1.4 Announce the start of training
    logger.info('Parameters saved. Using the {} model on the {} dataset with training data {}; starting training...'.format(args['architecture'],args['dataset_name'],args['training_data']))

    ##### 2. Start the training process #################
    # 2.1 Choose the training data source from the configuration; with trainval, train only and skip validation output
    if args['training_data'] == 'trainval':
        train_reader = trainval_reader
    elif args['training_data'] == 'train':
        train_reader = train_reader

    # 2.2 Set the input sample dimensions
    input_spec = InputSpec(shape=[None, 3, 227, 227], dtype='float32', name='image')
    label_spec = InputSpec(shape=[None, 1], dtype='int64', name='label')

    # 2.3 初始化模型
    network = eval("paddle.vision.models." + args['architecture'] + "(num_classes=args['class_dim'], pretrained=args['pretrained'])")
    model = paddle.Model(network, input_spec, label_spec) 
    logger.info('模型参数信息:')
    logger.info(model.summary()) # print the layer-by-layer details of the network

    # 2.4 Configure the learning rate, optimizer, loss function and evaluation metrics
    lr = learning_rate_setting(args=args)
    optimizer = optimizer_setting(model, lr)
    model.prepare(optimizer,
                    paddle.nn.CrossEntropyLoss(),
                    paddle.metric.Accuracy(topk=(1,5)))

    # 2.5 Start the training process
    visualization_log =  train(model, args=args, train_reader=train_reader, val_reader=val_reader, logger=logger)
    print('训练完毕,结果路径{}.'.format(args['results_path']['final_models_path']))

    ##### 3. Report the training results #########################
    # Plot the training curves
    draw_process('Training Process', 'Train Loss', 'Train Accuracy', iters=visualization_log['all_train_iters'], losses=visualization_log['all_train_losses'], accuracies=visualization_log['all_train_accs_top1'], final_figures_path=args['results_path']['final_figures_path'], figurename=args['model_name'] + '_train', isShow=True)   
    if args['training_data'] != 'trainval':
        draw_process('Validation Results', 'Validation Loss', 'Validation Accuracy', iters=visualization_log['all_test_iters'], losses=visualization_log['all_test_losses'], accuracies=visualization_log['all_test_accs_top1'], final_figures_path=args['results_path']['final_figures_path'], figurename=args['model_name'] + '_test', isShow=True)   
    logger.info('Done.')
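As a quick sanity check on the `model.summary()` output that follows, individual layer parameter counts can be reproduced by hand. For example, ResNet50's final fully connected layer maps 2048 pooled features to the 40 garbage classes, and its first convolution applies 64 bias-free 7×7 filters over 3 input channels (Conv2D layers in ResNet carry no bias because each is followed by a BatchNorm layer):

```python
# Final fully connected layer (Linear-5 in the summary): weights + biases
fc_params = 2048 * 40 + 40
print(fc_params)      # 81960

# Stem convolution (Conv2D-59): 64 filters of shape 3x7x7, no bias
conv1_params = 64 * 3 * 7 * 7
print(conv1_params)   # 9408
```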
    2022-10-19 10:11:01,784 - INFO: 系统基本信息:
    2022-10-19 10:11:02,854 - INFO: {
        "操作系统":"Windows-10-10.0.22000-SP0",
        "CPU":"Intel(R) Core(TM) i7-9700 CPU @ 3.00GHz",
        "内存":"11.62G/15.88G (73.20%)",
        "GPU":"b'GeForce RTX 2080' 2.33G/8.00G (0.29%)",
        "CUDA/cuDNN":"10.2 / 7.6.5",
        "Paddle":"2.3.2"
    }
    2022-10-19 10:11:02,855 - INFO: {
        "project_name":"Comp01GarbageClassification",
        "dataset_name":"Garbage",
        "architecture":"resnet50",
        "training_data":"train",
        "input_size":[
            3,
            227,
            227
        ],
        "num_trainval":14402,
        "num_train":11504,
        "num_val":2898,
        "num_test":400,
        "class_dim":40,
        "label_dict":{
            "0":"其他垃圾/一次性快餐盒",
            "1":"其他垃圾/污损塑料",
            "2":"其他垃圾/烟蒂",
            "3":"其他垃圾/牙签",
            "4":"其他垃圾/破碎花盆及碟碗",
            "5":"其他垃圾/竹筷",
            "6":"厨余垃圾/剩饭剩菜",
            "7":"厨余垃圾/大骨头",
            "8":"厨余垃圾/水果果皮",
            "9":"厨余垃圾/水果果肉",
            "10":"厨余垃圾/茶叶渣",
            "11":"厨余垃圾/菜叶菜根",
            "12":"厨余垃圾/蛋壳",
            "13":"厨余垃圾/鱼骨",
            "14":"可回收物/充电宝",
            "15":"可回收物/包",
            "16":"可回收物/化妆品瓶",
            "17":"可回收物/塑料玩具",
            "18":"可回收物/塑料碗盆",
            "19":"可回收物/塑料衣架",
            "20":"可回收物/快递纸袋",
            "21":"可回收物/插头电线",
            "22":"可回收物/旧衣服",
            "23":"可回收物/易拉罐",
            "24":"可回收物/枕头",
            "25":"可回收物/毛绒玩具",
            "26":"可回收物/洗发水瓶",
            "27":"可回收物/玻璃杯",
            "28":"可回收物/皮鞋",
            "29":"可回收物/砧板",
            "30":"可回收物/纸板箱",
            "31":"可回收物/调料瓶",
            "32":"可回收物/酒瓶",
            "33":"可回收物/金属食品罐",
            "34":"可回收物/锅",
            "35":"可回收物/食用油桶",
            "36":"可回收物/饮料瓶",
            "37":"有害垃圾/干电池",
            "38":"有害垃圾/软膏",
            "39":"有害垃圾/过期药物"
        },
        "starting_time":"202210191010",
        "total_epoch":20,
        "batch_size":32,
        "log_interval":10,
        "eval_interval":1,
        "checkpointed":false,
        "checkpoint_train":false,
        "checkpoint_model":"Butterfly_Mobilenetv2",
        "checkpoint_time":"202102182058",
        "pretrained":true,
        "pretrained_model":"API",
        "dataset_root_path":"D:\\Workspace\\ExpDatasets\\Garbage",
        "result_root_path":"D:\\Workspace\\ExpResults\\",
        "deployment_root_path":"D:\\Workspace\\ExpDeployments\\",
        "useGPU":true,
        "learning_strategy":{
            "optimizer_strategy":"Momentum",
            "learning_rate_strategy":"CosineAnnealingDecay",
            "learning_rate":0.01,
            "momentum":0.9,
            "Piecewise_boundaries":[
                60,
                80,
                90
            ],
            "Piecewise_values":[
                0.01,
                0.001,
                0.0001,
                1e-05
            ],
            "Exponential_gamma":0.9,
            "Polynomial_decay_steps":10,
            "verbose":false
        },
        "results_path":{
            "checkpoint_models_path":"D:\\Workspace\\ExpResults\\Comp01GarbageClassification\\checkpoint_models",
            "final_figures_path":"D:\\Workspace\\ExpResults\\Comp01GarbageClassification\\final_figures",
            "final_models_path":"D:\\Workspace\\ExpResults\\Comp01GarbageClassification\\final_models",
            "logs_path":"D:\\Workspace\\ExpResults\\Comp01GarbageClassification\\logs"
        },
        "deployments_path":{
            "deployment_root_path":null,
            "deployment_checkpoint_model":"D:\\Workspace\\ExpDeployments\\Comp01GarbageClassification\\checkpoint_models\\Garbage_resnet50_final",
            "deployment_final_model":"D:\\Workspace\\ExpDeployments\\Comp01GarbageClassification\\final_models\\Garbage_resnet50_final",
            "deployment_final_figures_path ":null,
            "deployment_logs_path":"D:\\Workspace\\ExpDeployments\\Comp01GarbageClassification\\logs",
            "deployment_pretrained_model":"D:\\Workspace\\ExpDeployments\\Comp01GarbageClassification\\pretrained_dir\\API",
            "deployment_final_figures_path":"D:\\Workspace\\ExpDeployments\\Comp01GarbageClassification\\final_figures"
        },
        "model_name":"Garbage_resnet50"
    }
    2022-10-19 10:11:02,856 - INFO: 载入Imagenet-resnet50预训练模型完毕,开始微调训练(fine-tune)。
    2022-10-19 10:11:02,857 - INFO: 训练参数保存完毕,使用resnet50模型, 训练Garbage数据, 训练集train, 启动训练...
    2022-10-19 10:11:02,914 - INFO: unique_endpoints {''}
    2022-10-19 10:11:02,915 - INFO: File C:\Users\Administrator/.cache/paddle/hapi/weights\resnet50.pdparams md5 checking...
    2022-10-19 10:11:03,191 - INFO: Found C:\Users\Administrator/.cache/paddle/hapi/weights\resnet50.pdparams
    2022-10-19 10:11:03,948 - INFO: 模型参数信息:
    2022-10-19 10:11:03,981 - INFO: {'total_params': 23643112, 'trainable_params': 23536872}
    

    -------------------------------------------------------------------------------
       Layer (type)         Input Shape          Output Shape         Param #    
    ===============================================================================
         Conv2D-59       [[1, 3, 227, 227]]   [1, 64, 114, 114]        9,408     
      BatchNorm2D-54    [[1, 64, 114, 114]]   [1, 64, 114, 114]         256      
          ReLU-25       [[1, 64, 114, 114]]   [1, 64, 114, 114]          0       
        MaxPool2D-5     [[1, 64, 114, 114]]    [1, 64, 57, 57]           0       
         Conv2D-61       [[1, 64, 57, 57]]     [1, 64, 57, 57]         4,096     
      BatchNorm2D-56     [[1, 64, 57, 57]]     [1, 64, 57, 57]          256      
          ReLU-26        [[1, 256, 57, 57]]    [1, 256, 57, 57]          0       
         Conv2D-62       [[1, 64, 57, 57]]     [1, 64, 57, 57]        36,864     
      BatchNorm2D-57     [[1, 64, 57, 57]]     [1, 64, 57, 57]          256      
         Conv2D-63       [[1, 64, 57, 57]]     [1, 256, 57, 57]       16,384     
      BatchNorm2D-58     [[1, 256, 57, 57]]    [1, 256, 57, 57]        1,024     
         Conv2D-60       [[1, 64, 57, 57]]     [1, 256, 57, 57]       16,384     
      BatchNorm2D-55     [[1, 256, 57, 57]]    [1, 256, 57, 57]        1,024     
    BottleneckBlock-17   [[1, 64, 57, 57]]     [1, 256, 57, 57]          0       
         Conv2D-64       [[1, 256, 57, 57]]    [1, 64, 57, 57]        16,384     
      BatchNorm2D-59     [[1, 64, 57, 57]]     [1, 64, 57, 57]          256      
          ReLU-27        [[1, 256, 57, 57]]    [1, 256, 57, 57]          0       
         Conv2D-65       [[1, 64, 57, 57]]     [1, 64, 57, 57]        36,864     
      BatchNorm2D-60     [[1, 64, 57, 57]]     [1, 64, 57, 57]          256      
         Conv2D-66       [[1, 64, 57, 57]]     [1, 256, 57, 57]       16,384     
      BatchNorm2D-61     [[1, 256, 57, 57]]    [1, 256, 57, 57]        1,024     
    BottleneckBlock-18   [[1, 256, 57, 57]]    [1, 256, 57, 57]          0       
         Conv2D-67       [[1, 256, 57, 57]]    [1, 64, 57, 57]        16,384     
      BatchNorm2D-62     [[1, 64, 57, 57]]     [1, 64, 57, 57]          256      
          ReLU-28        [[1, 256, 57, 57]]    [1, 256, 57, 57]          0       
         Conv2D-68       [[1, 64, 57, 57]]     [1, 64, 57, 57]        36,864     
      BatchNorm2D-63     [[1, 64, 57, 57]]     [1, 64, 57, 57]          256      
         Conv2D-69       [[1, 64, 57, 57]]     [1, 256, 57, 57]       16,384     
      BatchNorm2D-64     [[1, 256, 57, 57]]    [1, 256, 57, 57]        1,024     
    BottleneckBlock-19   [[1, 256, 57, 57]]    [1, 256, 57, 57]          0       
         Conv2D-71       [[1, 256, 57, 57]]    [1, 128, 57, 57]       32,768     
      BatchNorm2D-66     [[1, 128, 57, 57]]    [1, 128, 57, 57]         512      
          ReLU-29        [[1, 512, 29, 29]]    [1, 512, 29, 29]          0       
         Conv2D-72       [[1, 128, 57, 57]]    [1, 128, 29, 29]       147,456    
      BatchNorm2D-67     [[1, 128, 29, 29]]    [1, 128, 29, 29]         512      
         Conv2D-73       [[1, 128, 29, 29]]    [1, 512, 29, 29]       65,536     
      BatchNorm2D-68     [[1, 512, 29, 29]]    [1, 512, 29, 29]        2,048     
         Conv2D-70       [[1, 256, 57, 57]]    [1, 512, 29, 29]       131,072    
      BatchNorm2D-65     [[1, 512, 29, 29]]    [1, 512, 29, 29]        2,048     
    BottleneckBlock-20   [[1, 256, 57, 57]]    [1, 512, 29, 29]          0       
         Conv2D-74       [[1, 512, 29, 29]]    [1, 128, 29, 29]       65,536     
      BatchNorm2D-69     [[1, 128, 29, 29]]    [1, 128, 29, 29]         512      
          ReLU-30        [[1, 512, 29, 29]]    [1, 512, 29, 29]          0       
         Conv2D-75       [[1, 128, 29, 29]]    [1, 128, 29, 29]       147,456    
      BatchNorm2D-70     [[1, 128, 29, 29]]    [1, 128, 29, 29]         512      
         Conv2D-76       [[1, 128, 29, 29]]    [1, 512, 29, 29]       65,536     
      BatchNorm2D-71     [[1, 512, 29, 29]]    [1, 512, 29, 29]        2,048     
    BottleneckBlock-21   [[1, 512, 29, 29]]    [1, 512, 29, 29]          0       
         Conv2D-77       [[1, 512, 29, 29]]    [1, 128, 29, 29]       65,536     
      BatchNorm2D-72     [[1, 128, 29, 29]]    [1, 128, 29, 29]         512      
          ReLU-31        [[1, 512, 29, 29]]    [1, 512, 29, 29]          0       
         Conv2D-78       [[1, 128, 29, 29]]    [1, 128, 29, 29]       147,456    
      BatchNorm2D-73     [[1, 128, 29, 29]]    [1, 128, 29, 29]         512      
         Conv2D-79       [[1, 128, 29, 29]]    [1, 512, 29, 29]       65,536     
      BatchNorm2D-74     [[1, 512, 29, 29]]    [1, 512, 29, 29]        2,048     
    BottleneckBlock-22   [[1, 512, 29, 29]]    [1, 512, 29, 29]          0       
         Conv2D-80       [[1, 512, 29, 29]]    [1, 128, 29, 29]       65,536     
      BatchNorm2D-75     [[1, 128, 29, 29]]    [1, 128, 29, 29]         512      
          ReLU-32        [[1, 512, 29, 29]]    [1, 512, 29, 29]          0       
         Conv2D-81       [[1, 128, 29, 29]]    [1, 128, 29, 29]       147,456    
      BatchNorm2D-76     [[1, 128, 29, 29]]    [1, 128, 29, 29]         512      
         Conv2D-82       [[1, 128, 29, 29]]    [1, 512, 29, 29]       65,536     
      BatchNorm2D-77     [[1, 512, 29, 29]]    [1, 512, 29, 29]        2,048     
    BottleneckBlock-23   [[1, 512, 29, 29]]    [1, 512, 29, 29]          0       
         Conv2D-84       [[1, 512, 29, 29]]    [1, 256, 29, 29]       131,072    
      BatchNorm2D-79     [[1, 256, 29, 29]]    [1, 256, 29, 29]        1,024     
          ReLU-33       [[1, 1024, 15, 15]]   [1, 1024, 15, 15]          0       
         Conv2D-85       [[1, 256, 29, 29]]    [1, 256, 15, 15]       589,824    
      BatchNorm2D-80     [[1, 256, 15, 15]]    [1, 256, 15, 15]        1,024     
         Conv2D-86       [[1, 256, 15, 15]]   [1, 1024, 15, 15]       262,144    
      BatchNorm2D-81    [[1, 1024, 15, 15]]   [1, 1024, 15, 15]        4,096     
         Conv2D-83       [[1, 512, 29, 29]]   [1, 1024, 15, 15]       524,288    
      BatchNorm2D-78    [[1, 1024, 15, 15]]   [1, 1024, 15, 15]        4,096     
    BottleneckBlock-24   [[1, 512, 29, 29]]   [1, 1024, 15, 15]          0       
         Conv2D-87      [[1, 1024, 15, 15]]    [1, 256, 15, 15]       262,144    
      BatchNorm2D-82     [[1, 256, 15, 15]]    [1, 256, 15, 15]        1,024     
          ReLU-34       [[1, 1024, 15, 15]]   [1, 1024, 15, 15]          0       
         Conv2D-88       [[1, 256, 15, 15]]    [1, 256, 15, 15]       589,824    
      BatchNorm2D-83     [[1, 256, 15, 15]]    [1, 256, 15, 15]        1,024     
         Conv2D-89       [[1, 256, 15, 15]]   [1, 1024, 15, 15]       262,144    
      BatchNorm2D-84    [[1, 1024, 15, 15]]   [1, 1024, 15, 15]        4,096     
    BottleneckBlock-25  [[1, 1024, 15, 15]]   [1, 1024, 15, 15]          0       
         Conv2D-90      [[1, 1024, 15, 15]]    [1, 256, 15, 15]       262,144    
      BatchNorm2D-85     [[1, 256, 15, 15]]    [1, 256, 15, 15]        1,024     
          ReLU-35       [[1, 1024, 15, 15]]   [1, 1024, 15, 15]          0       
         Conv2D-91       [[1, 256, 15, 15]]    [1, 256, 15, 15]       589,824    
      BatchNorm2D-86     [[1, 256, 15, 15]]    [1, 256, 15, 15]        1,024     
         Conv2D-92       [[1, 256, 15, 15]]   [1, 1024, 15, 15]       262,144    
      BatchNorm2D-87    [[1, 1024, 15, 15]]   [1, 1024, 15, 15]        4,096     
    BottleneckBlock-26  [[1, 1024, 15, 15]]   [1, 1024, 15, 15]          0       
         Conv2D-93      [[1, 1024, 15, 15]]    [1, 256, 15, 15]       262,144    
      BatchNorm2D-88     [[1, 256, 15, 15]]    [1, 256, 15, 15]        1,024     
          ReLU-36       [[1, 1024, 15, 15]]   [1, 1024, 15, 15]          0       
         Conv2D-94       [[1, 256, 15, 15]]    [1, 256, 15, 15]       589,824    
      BatchNorm2D-89     [[1, 256, 15, 15]]    [1, 256, 15, 15]        1,024     
         Conv2D-95       [[1, 256, 15, 15]]   [1, 1024, 15, 15]       262,144    
      BatchNorm2D-90    [[1, 1024, 15, 15]]   [1, 1024, 15, 15]        4,096     
    BottleneckBlock-27  [[1, 1024, 15, 15]]   [1, 1024, 15, 15]          0       
         Conv2D-96      [[1, 1024, 15, 15]]    [1, 256, 15, 15]       262,144    
      BatchNorm2D-91     [[1, 256, 15, 15]]    [1, 256, 15, 15]        1,024     
          ReLU-37       [[1, 1024, 15, 15]]   [1, 1024, 15, 15]          0       
         Conv2D-97       [[1, 256, 15, 15]]    [1, 256, 15, 15]       589,824    
      BatchNorm2D-92     [[1, 256, 15, 15]]    [1, 256, 15, 15]        1,024     
         Conv2D-98       [[1, 256, 15, 15]]   [1, 1024, 15, 15]       262,144    
      BatchNorm2D-93    [[1, 1024, 15, 15]]   [1, 1024, 15, 15]        4,096     
    BottleneckBlock-28  [[1, 1024, 15, 15]]   [1, 1024, 15, 15]          0       
         Conv2D-99      [[1, 1024, 15, 15]]    [1, 256, 15, 15]       262,144    
      BatchNorm2D-94     [[1, 256, 15, 15]]    [1, 256, 15, 15]        1,024     
          ReLU-38       [[1, 1024, 15, 15]]   [1, 1024, 15, 15]          0       
        Conv2D-100       [[1, 256, 15, 15]]    [1, 256, 15, 15]       589,824    
      BatchNorm2D-95     [[1, 256, 15, 15]]    [1, 256, 15, 15]        1,024     
        Conv2D-101       [[1, 256, 15, 15]]   [1, 1024, 15, 15]       262,144    
      BatchNorm2D-96    [[1, 1024, 15, 15]]   [1, 1024, 15, 15]        4,096     
    BottleneckBlock-29  [[1, 1024, 15, 15]]   [1, 1024, 15, 15]          0       
        Conv2D-103      [[1, 1024, 15, 15]]    [1, 512, 15, 15]       524,288    
      BatchNorm2D-98     [[1, 512, 15, 15]]    [1, 512, 15, 15]        2,048     
          ReLU-39        [[1, 2048, 8, 8]]     [1, 2048, 8, 8]           0       
        Conv2D-104       [[1, 512, 15, 15]]     [1, 512, 8, 8]       2,359,296   
      BatchNorm2D-99      [[1, 512, 8, 8]]      [1, 512, 8, 8]         2,048     
        Conv2D-105        [[1, 512, 8, 8]]     [1, 2048, 8, 8]       1,048,576   
      BatchNorm2D-100    [[1, 2048, 8, 8]]     [1, 2048, 8, 8]         8,192     
        Conv2D-102      [[1, 1024, 15, 15]]    [1, 2048, 8, 8]       2,097,152   
      BatchNorm2D-97     [[1, 2048, 8, 8]]     [1, 2048, 8, 8]         8,192     
    BottleneckBlock-30  [[1, 1024, 15, 15]]    [1, 2048, 8, 8]           0       
        Conv2D-106       [[1, 2048, 8, 8]]      [1, 512, 8, 8]       1,048,576   
      BatchNorm2D-101     [[1, 512, 8, 8]]      [1, 512, 8, 8]         2,048     
          ReLU-40        [[1, 2048, 8, 8]]     [1, 2048, 8, 8]           0       
        Conv2D-107        [[1, 512, 8, 8]]      [1, 512, 8, 8]       2,359,296   
      BatchNorm2D-102     [[1, 512, 8, 8]]      [1, 512, 8, 8]         2,048     
        Conv2D-108        [[1, 512, 8, 8]]     [1, 2048, 8, 8]       1,048,576   
      BatchNorm2D-103    [[1, 2048, 8, 8]]     [1, 2048, 8, 8]         8,192     
    BottleneckBlock-31   [[1, 2048, 8, 8]]     [1, 2048, 8, 8]           0       
        Conv2D-109       [[1, 2048, 8, 8]]      [1, 512, 8, 8]       1,048,576   
      BatchNorm2D-104     [[1, 512, 8, 8]]      [1, 512, 8, 8]         2,048     
          ReLU-41        [[1, 2048, 8, 8]]     [1, 2048, 8, 8]           0       
        Conv2D-110        [[1, 512, 8, 8]]      [1, 512, 8, 8]       2,359,296   
      BatchNorm2D-105     [[1, 512, 8, 8]]      [1, 512, 8, 8]         2,048     
        Conv2D-111        [[1, 512, 8, 8]]     [1, 2048, 8, 8]       1,048,576   
      BatchNorm2D-106    [[1, 2048, 8, 8]]     [1, 2048, 8, 8]         8,192     
    BottleneckBlock-32   [[1, 2048, 8, 8]]     [1, 2048, 8, 8]           0       
    AdaptiveAvgPool2D-2  [[1, 2048, 8, 8]]     [1, 2048, 1, 1]           0       
         Linear-5           [[1, 2048]]            [1, 40]            81,960     
    ===============================================================================
    Total params: 23,643,112
    Trainable params: 23,536,872
    Non-trainable params: 106,240
    -------------------------------------------------------------------------------
    Input size (MB): 0.59
    Forward/backward pass size (MB): 282.41
    Params size (MB): 90.19
    Estimated Total Size (MB): 373.19
    -------------------------------------------------------------------------------
    
    当前学习率策略为: Adam + CosineAnnealingDecay, 初始学习率为:0.001.
    Epoch 0: CosineAnnealingDecay set learning rate to 0.001.
    

    c:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
      warnings.warn(
    2022-10-19 10:11:13,470 - INFO: Epoch:1/20, batch:10, train_loss:[4.32524], acc_top1:[0.06250], acc_top5:[0.18750](9.49s)
    2022-10-19 10:11:22,868 - INFO: Epoch:1/20, batch:20, train_loss:[3.65459], acc_top1:[0.06250], acc_top5:[0.15625](9.40s)
    2022-10-19 10:11:32,037 - INFO: Epoch:1/20, batch:30, train_loss:[3.68086], acc_top1:[0.03125], acc_top5:[0.09375](9.17s)
    2022-10-19 10:11:41,167 - INFO: Epoch:1/20, batch:40, train_loss:[3.48308], acc_top1:[0.03125], acc_top5:[0.25000](9.13s)
    2022-10-19 10:11:50,260 - INFO: Epoch:1/20, batch:50, train_loss:[3.38390], acc_top1:[0.12500], acc_top5:[0.34375](9.09s)
    2022-10-19 10:11:59,637 - INFO: Epoch:1/20, batch:60, train_loss:[3.33362], acc_top1:[0.12500], acc_top5:[0.43750](9.38s)
    2022-10-19 10:12:08,975 - INFO: Epoch:1/20, batch:70, train_loss:[3.45425], acc_top1:[0.09375], acc_top5:[0.34375](9.34s)
    2022-10-19 10:12:17,918 - INFO: Epoch:1/20, batch:80, train_loss:[3.55635], acc_top1:[0.06250], acc_top5:[0.21875](8.94s)
    2022-10-19 10:12:27,071 - INFO: Epoch:1/20, batch:90, train_loss:[3.61749], acc_top1:[0.03125], acc_top5:[0.25000](9.15s)
    2022-10-19 10:12:36,231 - INFO: Epoch:1/20, batch:100, train_loss:[3.32273], acc_top1:[0.09375], acc_top5:[0.31250](9.16s)
    2022-10-19 10:12:45,156 - INFO: Epoch:1/20, batch:110, train_loss:[3.58473], acc_top1:[0.06250], acc_top5:[0.18750](8.92s)
    2022-10-19 10:12:54,864 - INFO: Epoch:1/20, batch:120, train_loss:[3.20981], acc_top1:[0.15625], acc_top5:[0.34375](9.71s)
    2022-10-19 10:13:04,006 - INFO: Epoch:1/20, batch:130, train_loss:[3.33912], acc_top1:[0.18750], acc_top5:[0.31250](9.14s)
    2022-10-19 10:13:13,191 - INFO: Epoch:1/20, batch:140, train_loss:[3.31733], acc_top1:[0.06250], acc_top5:[0.37500](9.18s)
    2022-10-19 10:13:22,345 - INFO: Epoch:1/20, batch:150, train_loss:[3.50489], acc_top1:[0.09375], acc_top5:[0.31250](9.15s)
    2022-10-19 10:13:31,590 - INFO: Epoch:1/20, batch:160, train_loss:[3.24787], acc_top1:[0.15625], acc_top5:[0.31250](9.24s)
    2022-10-19 10:13:40,754 - INFO: Epoch:1/20, batch:170, train_loss:[3.24426], acc_top1:[0.18750], acc_top5:[0.43750](9.16s)
    2022-10-19 10:13:49,878 - INFO: Epoch:1/20, batch:180, train_loss:[3.40649], acc_top1:[0.12500], acc_top5:[0.40625](9.12s)
    2022-10-19 10:13:59,519 - INFO: Epoch:1/20, batch:190, train_loss:[3.19681], acc_top1:[0.15625], acc_top5:[0.43750](9.64s)
    2022-10-19 10:14:08,694 - INFO: Epoch:1/20, batch:200, train_loss:[3.58215], acc_top1:[0.06250], acc_top5:[0.18750](9.17s)
    2022-10-19 10:14:17,968 - INFO: Epoch:1/20, batch:210, train_loss:[3.48783], acc_top1:[0.09375], acc_top5:[0.31250](9.27s)
    2022-10-19 10:14:27,019 - INFO: Epoch:1/20, batch:220, train_loss:[3.28741], acc_top1:[0.09375], acc_top5:[0.40625](9.05s)
    2022-10-19 10:14:36,166 - INFO: Epoch:1/20, batch:230, train_loss:[2.84480], acc_top1:[0.21875], acc_top5:[0.56250](9.15s)
    2022-10-19 10:14:45,483 - INFO: Epoch:1/20, batch:240, train_loss:[3.40731], acc_top1:[0.06250], acc_top5:[0.31250](9.32s)
    2022-10-19 10:14:54,717 - INFO: Epoch:1/20, batch:250, train_loss:[3.38678], acc_top1:[0.00000], acc_top5:[0.40625](9.23s)
    2022-10-19 10:15:03,715 - INFO: Epoch:1/20, batch:260, train_loss:[3.02394], acc_top1:[0.15625], acc_top5:[0.40625](9.00s)
    2022-10-19 10:15:12,932 - INFO: Epoch:1/20, batch:270, train_loss:[3.26420], acc_top1:[0.09375], acc_top5:[0.40625](9.22s)
    2022-10-19 10:15:21,998 - INFO: Epoch:1/20, batch:280, train_loss:[3.09105], acc_top1:[0.15625], acc_top5:[0.50000](9.07s)
    2022-10-19 10:15:31,165 - INFO: Epoch:1/20, batch:290, train_loss:[3.05492], acc_top1:[0.28125], acc_top5:[0.53125](9.17s)
    2022-10-19 10:15:40,444 - INFO: Epoch:1/20, batch:300, train_loss:[3.28476], acc_top1:[0.09375], acc_top5:[0.46875](9.28s)
    2022-10-19 10:15:49,207 - INFO: Epoch:1/20, batch:310, train_loss:[2.75382], acc_top1:[0.25000], acc_top5:[0.62500](8.76s)
    2022-10-19 10:15:58,600 - INFO: Epoch:1/20, batch:320, train_loss:[3.44824], acc_top1:[0.12500], acc_top5:[0.31250](9.39s)
    2022-10-19 10:16:07,804 - INFO: Epoch:1/20, batch:330, train_loss:[3.25381], acc_top1:[0.09375], acc_top5:[0.40625](9.20s)
    2022-10-19 10:16:17,325 - INFO: Epoch:1/20, batch:340, train_loss:[3.07573], acc_top1:[0.09375], acc_top5:[0.43750](9.52s)
    2022-10-19 10:16:26,336 - INFO: Epoch:1/20, batch:350, train_loss:[3.33010], acc_top1:[0.09375], acc_top5:[0.31250](9.01s)
    2022-10-19 10:17:00,987 - INFO: [validation] Epoch:1/20, val_loss:[0.11378], val_top1:[0.09144], val_top5:[0.32126]
    c:\Users\Administrator\anaconda3\lib\site-packages\paddle\fluid\layers\math_op_patch.py:336: UserWarning: c:\Users\Administrator\anaconda3\lib\site-packages\paddle\vision\models\resnet.py:167
    The behavior of expression A + B has been unified with elementwise_add(X, Y, axis=-1) from Paddle 2.0. If your code works well in the older versions but crashes in this version, try to use elementwise_add(X, Y, axis=0) instead of A + B. This transitional warning will be dropped in the future.
      warnings.warn(
    2022-10-19 10:17:05,303 - INFO: 已保存当前测试模型(epoch=1)为最优模型:Garbage_resnet50_final
    2022-10-19 10:17:05,303 - INFO: 最优top1测试精度:0.09144 (epoch=1)
    2022-10-19 10:17:05,303 - INFO: 训练完成,最终性能accuracy=0.09144(epoch=1), 总耗时361.32s, 已将其保存为:Garbage_resnet50_final
    2022-10-19 10:17:05,823 - INFO: Epoch:2/20, batch:360, train_loss:[3.38710], acc_top1:[0.09375], acc_top5:[0.37500](39.49s)
    2022-10-19 10:17:10,118 - INFO: Epoch:2/20, batch:370, train_loss:[3.30094], acc_top1:[0.15625], acc_top5:[0.28125](4.29s)
    2022-10-19 10:17:14,471 - INFO: Epoch:2/20, batch:380, train_loss:[3.26537], acc_top1:[0.12500], acc_top5:[0.43750](4.35s)
    2022-10-19 10:17:18,785 - INFO: Epoch:2/20, batch:390, train_loss:[3.25564], acc_top1:[0.12500], acc_top5:[0.34375](4.31s)
    2022-10-19 10:17:22,972 - INFO: Epoch:2/20, batch:400, train_loss:[2.93063], acc_top1:[0.25000], acc_top5:[0.50000](4.19s)
    2022-10-19 10:17:27,240 - INFO: Epoch:2/20, batch:410, train_loss:[2.95027], acc_top1:[0.15625], acc_top5:[0.53125](4.27s)
    2022-10-19 10:17:31,484 - INFO: Epoch:2/20, batch:420, train_loss:[3.11741], acc_top1:[0.12500], acc_top5:[0.40625](4.24s)
    2022-10-19 10:17:35,742 - INFO: Epoch:2/20, batch:430, train_loss:[2.92595], acc_top1:[0.15625], acc_top5:[0.59375](4.26s)
    2022-10-19 10:17:40,008 - INFO: Epoch:2/20, batch:440, train_loss:[2.90801], acc_top1:[0.21875], acc_top5:[0.46875](4.27s)
    2022-10-19 10:17:44,290 - INFO: Epoch:2/20, batch:450, train_loss:[3.03444], acc_top1:[0.15625], acc_top5:[0.50000](4.28s)
    2022-10-19 10:17:48,559 - INFO: Epoch:2/20, batch:460, train_loss:[3.65807], acc_top1:[0.09375], acc_top5:[0.25000](4.27s)
    2022-10-19 10:17:52,987 - INFO: Epoch:2/20, batch:470, train_loss:[2.87464], acc_top1:[0.21875], acc_top5:[0.59375](4.43s)
    2022-10-19 10:17:57,367 - INFO: Epoch:2/20, batch:480, train_loss:[2.99807], acc_top1:[0.09375], acc_top5:[0.46875](4.38s)
    2022-10-19 10:18:01,631 - INFO: Epoch:2/20, batch:490, train_loss:[2.93229], acc_top1:[0.18750], acc_top5:[0.46875](4.26s)
    2022-10-19 10:18:05,936 - INFO: Epoch:2/20, batch:500, train_loss:[3.11235], acc_top1:[0.12500], acc_top5:[0.43750](4.31s)
    2022-10-19 10:18:10,159 - INFO: Epoch:2/20, batch:510, train_loss:[2.91266], acc_top1:[0.25000], acc_top5:[0.53125](4.22s)
    2022-10-19 10:18:14,512 - INFO: Epoch:2/20, batch:520, train_loss:[3.22461], acc_top1:[0.06250], acc_top5:[0.40625](4.35s)
    2022-10-19 10:18:18,852 - INFO: Epoch:2/20, batch:530, train_loss:[2.90637], acc_top1:[0.21875], acc_top5:[0.59375](4.34s)
    2022-10-19 10:18:23,102 - INFO: Epoch:2/20, batch:540, train_loss:[2.66261], acc_top1:[0.18750], acc_top5:[0.59375](4.25s)
    2022-10-19 10:18:27,319 - INFO: Epoch:2/20, batch:550, train_loss:[3.10240], acc_top1:[0.15625], acc_top5:[0.43750](4.22s)
    2022-10-19 10:18:31,655 - INFO: Epoch:2/20, batch:560, train_loss:[3.03120], acc_top1:[0.28125], acc_top5:[0.50000](4.34s)
    2022-10-19 10:18:36,147 - INFO: Epoch:2/20, batch:570, train_loss:[2.68679], acc_top1:[0.21875], acc_top5:[0.56250](4.49s)
    2022-10-19 10:18:40,476 - INFO: Epoch:2/20, batch:580, train_loss:[2.85910], acc_top1:[0.12500], acc_top5:[0.56250](4.33s)
    2022-10-19 10:18:44,787 - INFO: Epoch:2/20, batch:590, train_loss:[2.84482], acc_top1:[0.15625], acc_top5:[0.53125](4.31s)
    2022-10-19 10:18:49,040 - INFO: Epoch:2/20, batch:600, train_loss:[3.21743], acc_top1:[0.28125], acc_top5:[0.43750](4.25s)
    2022-10-19 10:18:53,374 - INFO: Epoch:2/20, batch:610, train_loss:[2.66001], acc_top1:[0.21875], acc_top5:[0.68750](4.33s)
    2022-10-19 10:18:57,699 - INFO: Epoch:2/20, batch:620, train_loss:[2.80245], acc_top1:[0.28125], acc_top5:[0.53125](4.32s)
    2022-10-19 10:19:01,994 - INFO: Epoch:2/20, batch:630, train_loss:[2.77213], acc_top1:[0.25000], acc_top5:[0.62500](4.30s)
    2022-10-19 10:19:06,136 - INFO: Epoch:2/20, batch:640, train_loss:[2.88724], acc_top1:[0.18750], acc_top5:[0.50000](4.14s)
    2022-10-19 10:19:10,322 - INFO: Epoch:2/20, batch:650, train_loss:[2.74397], acc_top1:[0.15625], acc_top5:[0.56250](4.19s)
    2022-10-19 10:19:14,583 - INFO: Epoch:2/20, batch:660, train_loss:[2.72848], acc_top1:[0.18750], acc_top5:[0.62500](4.26s)
    2022-10-19 10:19:18,778 - INFO: Epoch:2/20, batch:670, train_loss:[2.92237], acc_top1:[0.18750], acc_top5:[0.53125](4.20s)
    2022-10-19 10:19:23,035 - INFO: Epoch:2/20, batch:680, train_loss:[2.86723], acc_top1:[0.25000], acc_top5:[0.50000](4.26s)
    2022-10-19 10:19:27,438 - INFO: Epoch:2/20, batch:690, train_loss:[2.68118], acc_top1:[0.18750], acc_top5:[0.78125](4.40s)
    2022-10-19 10:19:31,740 - INFO: Epoch:2/20, batch:700, train_loss:[2.85263], acc_top1:[0.25000], acc_top5:[0.53125](4.30s)
    2022-10-19 10:19:35,980 - INFO: Epoch:2/20, batch:710, train_loss:[3.06940], acc_top1:[0.09375], acc_top5:[0.31250](4.24s)
    2022-10-19 10:19:55,026 - INFO: [validation] Epoch:2/20, val_loss:[0.09223], val_top1:[0.21981], val_top5:[0.53382]
    2022-10-19 10:19:59,464 - INFO: 已保存当前测试模型(epoch=2)为最优模型:Garbage_resnet50_final
    2022-10-19 10:19:59,464 - INFO: 最优top1测试精度:0.21981 (epoch=2)
    2022-10-19 10:19:59,465 - INFO: 训练完成,最终性能accuracy=0.21981(epoch=2), 总耗时535.48s, 已将其保存为:Garbage_resnet50_final
    2022-10-19 10:20:00,375 - INFO: Epoch:3/20, batch:720, train_loss:[2.71197], acc_top1:[0.28125], acc_top5:[0.59375](24.40s)
    2022-10-19 10:20:04,610 - INFO: Epoch:3/20, batch:730, train_loss:[2.64692], acc_top1:[0.18750], acc_top5:[0.68750](4.23s)
    2022-10-19 10:20:08,807 - INFO: Epoch:3/20, batch:740, train_loss:[2.97643], acc_top1:[0.25000], acc_top5:[0.56250](4.20s)
    2022-10-19 10:20:13,041 - INFO: Epoch:3/20, batch:750, train_loss:[2.62380], acc_top1:[0.25000], acc_top5:[0.65625](4.23s)
    2022-10-19 10:20:17,236 - INFO: Epoch:3/20, batch:760, train_loss:[2.56195], acc_top1:[0.37500], acc_top5:[0.65625](4.19s)
    2022-10-19 10:20:21,425 - INFO: Epoch:3/20, batch:770, train_loss:[2.81913], acc_top1:[0.21875], acc_top5:[0.59375](4.19s)
    2022-10-19 10:20:25,679 - INFO: Epoch:3/20, batch:780, train_loss:[2.73061], acc_top1:[0.28125], acc_top5:[0.62500](4.25s)
    2022-10-19 10:20:29,963 - INFO: Epoch:3/20, batch:790, train_loss:[2.46469], acc_top1:[0.34375], acc_top5:[0.65625](4.28s)
    2022-10-19 10:20:34,257 - INFO: Epoch:3/20, batch:800, train_loss:[3.01556], acc_top1:[0.15625], acc_top5:[0.46875](4.29s)
    2022-10-19 10:20:38,528 - INFO: Epoch:3/20, batch:810, train_loss:[2.32271], acc_top1:[0.37500], acc_top5:[0.75000](4.27s)
    2022-10-19 10:20:42,809 - INFO: Epoch:3/20, batch:820, train_loss:[2.59420], acc_top1:[0.28125], acc_top5:[0.68750](4.28s)
    2022-10-19 10:20:47,106 - INFO: Epoch:3/20, batch:830, train_loss:[2.60235], acc_top1:[0.31250], acc_top5:[0.59375](4.30s)
    2022-10-19 10:20:51,372 - INFO: Epoch:3/20, batch:840, train_loss:[3.01538], acc_top1:[0.28125], acc_top5:[0.50000](4.27s)
    2022-10-19 10:20:55,635 - INFO: Epoch:3/20, batch:850, train_loss:[3.10497], acc_top1:[0.12500], acc_top5:[0.43750](4.26s)
    2022-10-19 10:20:59,897 - INFO: Epoch:3/20, batch:860, train_loss:[3.05400], acc_top1:[0.15625], acc_top5:[0.53125](4.26s)
    2022-10-19 10:21:04,155 - INFO: Epoch:3/20, batch:870, train_loss:[2.69267], acc_top1:[0.28125], acc_top5:[0.65625](4.26s)
    2022-10-19 10:21:08,359 - INFO: Epoch:3/20, batch:880, train_loss:[2.53891], acc_top1:[0.18750], acc_top5:[0.59375](4.20s)
    2022-10-19 10:21:12,614 - INFO: Epoch:3/20, batch:890, train_loss:[2.79345], acc_top1:[0.18750], acc_top5:[0.59375](4.26s)
    2022-10-19 10:21:16,838 - INFO: Epoch:3/20, batch:900, train_loss:[2.82543], acc_top1:[0.15625], acc_top5:[0.53125](4.22s)
    2022-10-19 10:21:21,083 - INFO: Epoch:3/20, batch:910, train_loss:[2.66411], acc_top1:[0.18750], acc_top5:[0.68750](4.25s)
    2022-10-19 10:21:25,381 - INFO: Epoch:3/20, batch:920, train_loss:[2.75353], acc_top1:[0.25000], acc_top5:[0.53125](4.30s)
    2022-10-19 10:21:29,681 - INFO: Epoch:3/20, batch:930, train_loss:[2.88351], acc_top1:[0.18750], acc_top5:[0.59375](4.30s)
    2022-10-19 10:21:33,883 - INFO: Epoch:3/20, batch:940, train_loss:[2.91400], acc_top1:[0.28125], acc_top5:[0.59375](4.20s)
    2022-10-19 10:21:38,117 - INFO: Epoch:3/20, batch:950, train_loss:[2.70042], acc_top1:[0.18750], acc_top5:[0.71875](4.23s)
    2022-10-19 10:21:42,355 - INFO: Epoch:3/20, batch:960, train_loss:[2.49651], acc_top1:[0.28125], acc_top5:[0.56250](4.24s)
    2022-10-19 10:21:46,608 - INFO: Epoch:3/20, batch:970, train_loss:[2.76543], acc_top1:[0.15625], acc_top5:[0.59375](4.25s)
    2022-10-19 10:21:50,819 - INFO: Epoch:3/20, batch:980, train_loss:[2.28192], acc_top1:[0.28125], acc_top5:[0.68750](4.21s)
    2022-10-19 10:21:55,084 - INFO: Epoch:3/20, batch:990, train_loss:[2.57460], acc_top1:[0.25000], acc_top5:[0.50000](4.27s)
    2022-10-19 10:21:59,294 - INFO: Epoch:3/20, batch:1000, train_loss:[2.64524], acc_top1:[0.21875], acc_top5:[0.50000](4.21s)
    2022-10-19 10:22:03,440 - INFO: Epoch:3/20, batch:1010, train_loss:[2.64692], acc_top1:[0.18750], acc_top5:[0.62500](4.15s)
    2022-10-19 10:22:07,691 - INFO: Epoch:3/20, batch:1020, train_loss:[2.70774], acc_top1:[0.21875], acc_top5:[0.56250](4.25s)
    2022-10-19 10:22:11,963 - INFO: Epoch:3/20, batch:1030, train_loss:[2.49191], acc_top1:[0.34375], acc_top5:[0.65625](4.27s)
    2022-10-19 10:22:16,193 - INFO: Epoch:3/20, batch:1040, train_loss:[2.30810], acc_top1:[0.34375], acc_top5:[0.71875](4.23s)
    2022-10-19 10:22:20,401 - INFO: Epoch:3/20, batch:1050, train_loss:[2.96469], acc_top1:[0.21875], acc_top5:[0.59375](4.21s)
    2022-10-19 10:22:24,619 - INFO: Epoch:3/20, batch:1060, train_loss:[2.52110], acc_top1:[0.25000], acc_top5:[0.68750](4.22s)
    2022-10-19 10:22:28,815 - INFO: Epoch:3/20, batch:1070, train_loss:[3.20195], acc_top1:[0.21875], acc_top5:[0.46875](4.20s)
    2022-10-19 10:22:47,332 - INFO: [validation] Epoch:3/20, val_loss:[0.07581], val_top1:[0.30607], val_top5:[0.68288]
    2022-10-19 10:22:50,771 - INFO: 已保存当前测试模型(epoch=3)为最优模型:Garbage_resnet50_final
    2022-10-19 10:22:50,772 - INFO: 最优top1测试精度:0.30607 (epoch=3)
    2022-10-19 10:22:50,772 - INFO: 训练完成,最终性能accuracy=0.30607(epoch=3), 总耗时706.79s, 已将其保存为:Garbage_resnet50_final
    2022-10-19 10:22:52,135 - INFO: Epoch:4/20, batch:1080, train_loss:[2.15923], acc_top1:[0.37500], acc_top5:[0.71875](23.32s)
    2022-10-19 10:22:56,300 - INFO: Epoch:4/20, batch:1090, train_loss:[2.69533], acc_top1:[0.28125], acc_top5:[0.59375](4.17s)
    2022-10-19 10:23:00,551 - INFO: Epoch:4/20, batch:1100, train_loss:[2.39181], acc_top1:[0.31250], acc_top5:[0.65625](4.25s)
    2022-10-19 10:23:04,818 - INFO: Epoch:4/20, batch:1110, train_loss:[2.61913], acc_top1:[0.25000], acc_top5:[0.56250](4.27s)
    2022-10-19 10:23:08,998 - INFO: Epoch:4/20, batch:1120, train_loss:[2.49215], acc_top1:[0.34375], acc_top5:[0.65625](4.18s)
    2022-10-19 10:23:13,296 - INFO: Epoch:4/20, batch:1130, train_loss:[2.62620], acc_top1:[0.21875], acc_top5:[0.75000](4.30s)
    2022-10-19 10:23:17,546 - INFO: Epoch:4/20, batch:1140, train_loss:[2.77998], acc_top1:[0.28125], acc_top5:[0.62500](4.25s)
    2022-10-19 10:23:21,804 - INFO: Epoch:4/20, batch:1150, train_loss:[2.61589], acc_top1:[0.28125], acc_top5:[0.59375](4.26s)
    2022-10-19 10:23:26,050 - INFO: Epoch:4/20, batch:1160, train_loss:[2.62030], acc_top1:[0.31250], acc_top5:[0.53125](4.25s)
    2022-10-19 10:23:30,372 - INFO: Epoch:4/20, batch:1170, train_loss:[2.77338], acc_top1:[0.18750], acc_top5:[0.62500](4.32s)
    2022-10-19 10:23:34,696 - INFO: Epoch:4/20, batch:1180, train_loss:[2.31885], acc_top1:[0.34375], acc_top5:[0.81250](4.32s)
    2022-10-19 10:23:38,915 - INFO: Epoch:4/20, batch:1190, train_loss:[2.38648], acc_top1:[0.37500], acc_top5:[0.62500](4.22s)
    2022-10-19 10:23:43,172 - INFO: Epoch:4/20, batch:1200, train_loss:[2.62541], acc_top1:[0.34375], acc_top5:[0.68750](4.26s)
    2022-10-19 10:23:47,341 - INFO: Epoch:4/20, batch:1210, train_loss:[2.96897], acc_top1:[0.21875], acc_top5:[0.46875](4.17s)
    2022-10-19 10:23:51,656 - INFO: Epoch:4/20, batch:1220, train_loss:[2.80954], acc_top1:[0.15625], acc_top5:[0.53125](4.31s)
    2022-10-19 10:23:55,983 - INFO: Epoch:4/20, batch:1230, train_loss:[2.75519], acc_top1:[0.21875], acc_top5:[0.56250](4.33s)
    2022-10-19 10:24:00,284 - INFO: Epoch:4/20, batch:1240, train_loss:[2.81566], acc_top1:[0.25000], acc_top5:[0.59375](4.30s)
    2022-10-19 10:24:04,642 - INFO: Epoch:4/20, batch:1250, train_loss:[2.67497], acc_top1:[0.18750], acc_top5:[0.56250](4.36s)
    2022-10-19 10:24:08,838 - INFO: Epoch:4/20, batch:1260, train_loss:[2.68656], acc_top1:[0.21875], acc_top5:[0.56250](4.20s)
    2022-10-19 10:24:13,144 - INFO: Epoch:4/20, batch:1270, train_loss:[2.62300], acc_top1:[0.28125], acc_top5:[0.59375](4.31s)
    2022-10-19 10:24:17,382 - INFO: Epoch:4/20, batch:1280, train_loss:[2.25913], acc_top1:[0.46875], acc_top5:[0.75000](4.24s)
    2022-10-19 10:24:21,531 - INFO: Epoch:4/20, batch:1290, train_loss:[1.99958], acc_top1:[0.46875], acc_top5:[0.65625](4.15s)
    2022-10-19 10:24:25,744 - INFO: Epoch:4/20, batch:1300, train_loss:[2.88125], acc_top1:[0.12500], acc_top5:[0.59375](4.21s)
    2022-10-19 10:24:29,954 - INFO: Epoch:4/20, batch:1310, train_loss:[2.33258], acc_top1:[0.25000], acc_top5:[0.75000](4.21s)
    2022-10-19 10:24:34,170 - INFO: Epoch:4/20, batch:1320, train_loss:[2.26912], acc_top1:[0.37500], acc_top5:[0.68750](4.22s)
    2022-10-19 10:24:38,477 - INFO: Epoch:4/20, batch:1330, train_loss:[2.68317], acc_top1:[0.28125], acc_top5:[0.65625](4.31s)
    2022-10-19 10:24:42,698 - INFO: Epoch:4/20, batch:1340, train_loss:[2.33929], acc_top1:[0.28125], acc_top5:[0.62500](4.22s)
    2022-10-19 10:24:46,967 - INFO: Epoch:4/20, batch:1350, train_loss:[2.59938], acc_top1:[0.25000], acc_top5:[0.62500](4.27s)
    2022-10-19 10:24:51,203 - INFO: Epoch:4/20, batch:1360, train_loss:[2.64852], acc_top1:[0.28125], acc_top5:[0.65625](4.24s)
    2022-10-19 10:24:55,444 - INFO: Epoch:4/20, batch:1370, train_loss:[2.11958], acc_top1:[0.37500], acc_top5:[0.71875](4.24s)
    2022-10-19 10:24:59,780 - INFO: Epoch:4/20, batch:1380, train_loss:[2.75718], acc_top1:[0.21875], acc_top5:[0.62500](4.34s)
    2022-10-19 10:25:04,065 - INFO: Epoch:4/20, batch:1390, train_loss:[2.61351], acc_top1:[0.28125], acc_top5:[0.59375](4.28s)
    2022-10-19 10:25:08,281 - INFO: Epoch:4/20, batch:1400, train_loss:[2.51717], acc_top1:[0.28125], acc_top5:[0.59375](4.22s)
    2022-10-19 10:25:12,488 - INFO: Epoch:4/20, batch:1410, train_loss:[2.28647], acc_top1:[0.34375], acc_top5:[0.71875](4.21s)
    2022-10-19 10:25:16,729 - INFO: Epoch:4/20, batch:1420, train_loss:[2.78554], acc_top1:[0.18750], acc_top5:[0.59375](4.24s)
    2022-10-19 10:25:21,044 - INFO: Epoch:4/20, batch:1430, train_loss:[2.07224], acc_top1:[0.37500], acc_top5:[0.68750](4.31s)
    2022-10-19 10:25:39,417 - INFO: [validation] Epoch:4/20, val_loss:[0.08336], val_top1:[0.29124], val_top5:[0.65252]
    2022-10-19 10:25:39,418 - INFO: 最优top1测试精度:0.30607 (epoch=3)
    2022-10-19 10:25:39,419 - INFO: 训练完成,最终性能accuracy=0.30607(epoch=3), 总耗时875.44s, 已将其保存为:Garbage_resnet50_final
    2022-10-19 10:25:41,180 - INFO: Epoch:5/20, batch:1440, train_loss:[2.24610], acc_top1:[0.34375], acc_top5:[0.75000](20.14s)
    2022-10-19 10:25:45,478 - INFO: Epoch:5/20, batch:1450, train_loss:[2.40609], acc_top1:[0.31250], acc_top5:[0.81250](4.30s)
    2022-10-19 10:25:49,751 - INFO: Epoch:5/20, batch:1460, train_loss:[2.39544], acc_top1:[0.37500], acc_top5:[0.62500](4.27s)
    2022-10-19 10:25:54,137 - INFO: Epoch:5/20, batch:1470, train_loss:[2.57881], acc_top1:[0.37500], acc_top5:[0.56250](4.38s)
    2022-10-19 10:25:58,349 - INFO: Epoch:5/20, batch:1480, train_loss:[2.38220], acc_top1:[0.31250], acc_top5:[0.75000](4.21s)
    2022-10-19 10:26:02,629 - INFO: Epoch:5/20, batch:1490, train_loss:[2.71802], acc_top1:[0.28125], acc_top5:[0.68750](4.28s)
    2022-10-19 10:26:06,893 - INFO: Epoch:5/20, batch:1500, train_loss:[2.81119], acc_top1:[0.21875], acc_top5:[0.62500](4.26s)
    2022-10-19 10:26:11,192 - INFO: Epoch:5/20, batch:1510, train_loss:[2.30538], acc_top1:[0.28125], acc_top5:[0.75000](4.30s)
    2022-10-19 10:26:15,443 - INFO: Epoch:5/20, batch:1520, train_loss:[2.93200], acc_top1:[0.21875], acc_top5:[0.40625](4.25s)
    2022-10-19 10:26:19,706 - INFO: Epoch:5/20, batch:1530, train_loss:[2.29114], acc_top1:[0.34375], acc_top5:[0.75000](4.26s)
    2022-10-19 10:26:24,083 - INFO: Epoch:5/20, batch:1540, train_loss:[2.41357], acc_top1:[0.31250], acc_top5:[0.75000](4.38s)
    2022-10-19 10:26:28,440 - INFO: Epoch:5/20, batch:1550, train_loss:[2.79615], acc_top1:[0.25000], acc_top5:[0.59375](4.36s)
    2022-10-19 10:26:32,678 - INFO: Epoch:5/20, batch:1560, train_loss:[2.22544], acc_top1:[0.34375], acc_top5:[0.75000](4.24s)
    2022-10-19 10:26:36,933 - INFO: Epoch:5/20, batch:1570, train_loss:[2.43990], acc_top1:[0.25000], acc_top5:[0.65625](4.25s)
    2022-10-19 10:26:41,210 - INFO: Epoch:5/20, batch:1580, train_loss:[2.13865], acc_top1:[0.40625], acc_top5:[0.68750](4.28s)
    2022-10-19 10:26:45,521 - INFO: Epoch:5/20, batch:1590, train_loss:[2.50211], acc_top1:[0.21875], acc_top5:[0.68750](4.31s)
    2022-10-19 10:26:49,753 - INFO: Epoch:5/20, batch:1600, train_loss:[2.33968], acc_top1:[0.34375], acc_top5:[0.75000](4.23s)
    2022-10-19 10:26:54,026 - INFO: Epoch:5/20, batch:1610, train_loss:[2.56131], acc_top1:[0.40625], acc_top5:[0.62500](4.27s)
    2022-10-19 10:26:58,307 - INFO: Epoch:5/20, batch:1620, train_loss:[1.89060], acc_top1:[0.50000], acc_top5:[0.84375](4.28s)
    2022-10-19 10:27:02,609 - INFO: Epoch:5/20, batch:1630, train_loss:[2.08904], acc_top1:[0.40625], acc_top5:[0.75000](4.30s)
    2022-10-19 10:27:06,830 - INFO: Epoch:5/20, batch:1640, train_loss:[2.04870], acc_top1:[0.43750], acc_top5:[0.71875](4.22s)
    2022-10-19 10:27:11,090 - INFO: Epoch:5/20, batch:1650, train_loss:[2.64743], acc_top1:[0.28125], acc_top5:[0.62500](4.26s)
    2022-10-19 10:27:15,378 - INFO: Epoch:5/20, batch:1660, train_loss:[2.37162], acc_top1:[0.43750], acc_top5:[0.62500](4.29s)
    2022-10-19 10:27:19,633 - INFO: Epoch:5/20, batch:1670, train_loss:[2.76848], acc_top1:[0.28125], acc_top5:[0.59375](4.25s)
    2022-10-19 10:27:23,886 - INFO: Epoch:5/20, batch:1680, train_loss:[2.84738], acc_top1:[0.28125], acc_top5:[0.43750](4.25s)
    2022-10-19 10:27:28,117 - INFO: Epoch:5/20, batch:1690, train_loss:[2.25190], acc_top1:[0.40625], acc_top5:[0.68750](4.23s)
    2022-10-19 10:27:32,401 - INFO: Epoch:5/20, batch:1700, train_loss:[2.42829], acc_top1:[0.25000], acc_top5:[0.75000](4.28s)
    2022-10-19 10:27:36,628 - INFO: Epoch:5/20, batch:1710, train_loss:[2.32327], acc_top1:[0.28125], acc_top5:[0.75000](4.23s)
    2022-10-19 10:27:40,857 - INFO: Epoch:5/20, batch:1720, train_loss:[2.30071], acc_top1:[0.40625], acc_top5:[0.71875](4.23s)
    2022-10-19 10:27:45,227 - INFO: Epoch:5/20, batch:1730, train_loss:[2.44939], acc_top1:[0.31250], acc_top5:[0.68750](4.37s)
    2022-10-19 10:27:49,494 - INFO: Epoch:5/20, batch:1740, train_loss:[2.44004], acc_top1:[0.21875], acc_top5:[0.65625](4.27s)
    2022-10-19 10:27:53,802 - INFO: Epoch:5/20, batch:1750, train_loss:[2.14856], acc_top1:[0.37500], acc_top5:[0.68750](4.31s)
    2022-10-19 10:27:58,102 - INFO: Epoch:5/20, batch:1760, train_loss:[2.38856], acc_top1:[0.31250], acc_top5:[0.71875](4.30s)
    2022-10-19 10:28:02,355 - INFO: Epoch:5/20, batch:1770, train_loss:[2.57989], acc_top1:[0.25000], acc_top5:[0.71875](4.25s)
    2022-10-19 10:28:06,639 - INFO: Epoch:5/20, batch:1780, train_loss:[2.58243], acc_top1:[0.31250], acc_top5:[0.68750](4.28s)
    2022-10-19 10:28:10,902 - INFO: Epoch:5/20, batch:1790, train_loss:[2.26634], acc_top1:[0.40625], acc_top5:[0.68750](4.26s)
    2022-10-19 10:28:28,785 - INFO: [validation] Epoch:5/20, val_loss:[0.07670], val_top1:[0.31090], val_top5:[0.69600]
    2022-10-19 10:28:32,522 - INFO: 已保存当前测试模型(epoch=5)为最优模型:Garbage_resnet50_final
    2022-10-19 10:28:32,523 - INFO: 最优top1测试精度:0.31090 (epoch=5)
    2022-10-19 10:28:32,523 - INFO: 训练完成,最终性能accuracy=0.31090(epoch=5), 总耗时1048.54s, 已将其保存为:Garbage_resnet50_final
    2022-10-19 10:28:34,686 - INFO: Epoch:6/20, batch:1800, train_loss:[2.90457], acc_top1:[0.25000], acc_top5:[0.53125](23.78s)
    2022-10-19 10:28:39,045 - INFO: Epoch:6/20, batch:1810, train_loss:[2.09320], acc_top1:[0.34375], acc_top5:[0.71875](4.36s)
    2022-10-19 10:28:43,297 - INFO: Epoch:6/20, batch:1820, train_loss:[2.23452], acc_top1:[0.31250], acc_top5:[0.71875](4.25s)
    2022-10-19 10:28:47,622 - INFO: Epoch:6/20, batch:1830, train_loss:[2.89177], acc_top1:[0.31250], acc_top5:[0.59375](4.33s)
    2022-10-19 10:28:51,903 - INFO: Epoch:6/20, batch:1840, train_loss:[2.69877], acc_top1:[0.15625], acc_top5:[0.59375](4.28s)
    2022-10-19 10:28:56,117 - INFO: Epoch:6/20, batch:1850, train_loss:[2.42274], acc_top1:[0.21875], acc_top5:[0.68750](4.21s)
    2022-10-19 10:29:00,339 - INFO: Epoch:6/20, batch:1860, train_loss:[2.06705], acc_top1:[0.34375], acc_top5:[0.84375](4.22s)
    2022-10-19 10:29:04,580 - INFO: Epoch:6/20, batch:1870, train_loss:[2.77649], acc_top1:[0.12500], acc_top5:[0.53125](4.24s)
    2022-10-19 10:29:08,806 - INFO: Epoch:6/20, batch:1880, train_loss:[2.37722], acc_top1:[0.31250], acc_top5:[0.65625](4.23s)
    2022-10-19 10:29:13,014 - INFO: Epoch:6/20, batch:1890, train_loss:[2.21261], acc_top1:[0.28125], acc_top5:[0.68750](4.21s)
    2022-10-19 10:29:17,234 - INFO: Epoch:6/20, batch:1900, train_loss:[2.40164], acc_top1:[0.31250], acc_top5:[0.71875](4.22s)
    2022-10-19 10:29:21,493 - INFO: Epoch:6/20, batch:1910, train_loss:[2.28378], acc_top1:[0.28125], acc_top5:[0.75000](4.26s)
    2022-10-19 10:29:25,801 - INFO: Epoch:6/20, batch:1920, train_loss:[2.22809], acc_top1:[0.46875], acc_top5:[0.68750](4.31s)
    2022-10-19 10:29:30,113 - INFO: Epoch:6/20, batch:1930, train_loss:[2.24167], acc_top1:[0.34375], acc_top5:[0.65625](4.31s)
    2022-10-19 10:29:34,350 - INFO: Epoch:6/20, batch:1940, train_loss:[2.12272], acc_top1:[0.34375], acc_top5:[0.71875](4.24s)
    2022-10-19 10:29:38,539 - INFO: Epoch:6/20, batch:1950, train_loss:[2.05044], acc_top1:[0.34375], acc_top5:[0.81250](4.19s)
    2022-10-19 10:29:42,884 - INFO: Epoch:6/20, batch:1960, train_loss:[2.71655], acc_top1:[0.25000], acc_top5:[0.65625](4.35s)
    2022-10-19 10:29:47,150 - INFO: Epoch:6/20, batch:1970, train_loss:[2.19162], acc_top1:[0.40625], acc_top5:[0.71875](4.26s)
    2022-10-19 10:29:51,444 - INFO: Epoch:6/20, batch:1980, train_loss:[1.81106], acc_top1:[0.50000], acc_top5:[0.90625](4.29s)
    2022-10-19 10:29:55,787 - INFO: Epoch:6/20, batch:1990, train_loss:[2.24923], acc_top1:[0.31250], acc_top5:[0.75000](4.34s)
    2022-10-19 10:30:00,033 - INFO: Epoch:6/20, batch:2000, train_loss:[2.31223], acc_top1:[0.40625], acc_top5:[0.71875](4.25s)
    2022-10-19 10:30:04,421 - INFO: Epoch:6/20, batch:2010, train_loss:[2.46211], acc_top1:[0.40625], acc_top5:[0.62500](4.39s)
    2022-10-19 10:30:08,722 - INFO: Epoch:6/20, batch:2020, train_loss:[2.23371], acc_top1:[0.34375], acc_top5:[0.68750](4.30s)
    2022-10-19 10:30:13,022 - INFO: Epoch:6/20, batch:2030, train_loss:[1.94631], acc_top1:[0.46875], acc_top5:[0.78125](4.30s)
    2022-10-19 10:30:17,323 - INFO: Epoch:6/20, batch:2040, train_loss:[1.73826], acc_top1:[0.46875], acc_top5:[0.84375](4.30s)
    2022-10-19 10:30:21,592 - INFO: Epoch:6/20, batch:2050, train_loss:[2.25856], acc_top1:[0.40625], acc_top5:[0.68750](4.27s)
    2022-10-19 10:30:25,960 - INFO: Epoch:6/20, batch:2060, train_loss:[2.65176], acc_top1:[0.31250], acc_top5:[0.68750](4.37s)
    2022-10-19 10:30:30,303 - INFO: Epoch:6/20, batch:2070, train_loss:[2.29189], acc_top1:[0.34375], acc_top5:[0.71875](4.34s)
    2022-10-19 10:30:34,596 - INFO: Epoch:6/20, batch:2080, train_loss:[2.27966], acc_top1:[0.37500], acc_top5:[0.75000](4.29s)
    2022-10-19 10:30:38,783 - INFO: Epoch:6/20, batch:2090, train_loss:[2.56072], acc_top1:[0.28125], acc_top5:[0.65625](4.19s)
    2022-10-19 10:30:42,991 - INFO: Epoch:6/20, batch:2100, train_loss:[2.23143], acc_top1:[0.37500], acc_top5:[0.75000](4.21s)
    2022-10-19 10:30:47,233 - INFO: Epoch:6/20, batch:2110, train_loss:[2.61089], acc_top1:[0.25000], acc_top5:[0.62500](4.24s)
    2022-10-19 10:30:51,490 - INFO: Epoch:6/20, batch:2120, train_loss:[2.40583], acc_top1:[0.21875], acc_top5:[0.65625](4.26s)
    2022-10-19 10:30:55,801 - INFO: Epoch:6/20, batch:2130, train_loss:[2.22645], acc_top1:[0.37500], acc_top5:[0.71875](4.31s)
    2022-10-19 10:31:00,040 - INFO: Epoch:6/20, batch:2140, train_loss:[2.58042], acc_top1:[0.25000], acc_top5:[0.59375](4.24s)
    2022-10-19 10:31:04,298 - INFO: Epoch:6/20, batch:2150, train_loss:[2.07912], acc_top1:[0.34375], acc_top5:[0.68750](4.26s)
    2022-10-19 10:31:21,886 - INFO: [validation] Epoch:6/20, val_loss:[0.07077], val_top1:[0.35542], val_top5:[0.71532]
    2022-10-19 10:31:25,671 - INFO: 已保存当前测试模型(epoch=6)为最优模型:Garbage_resnet50_final
    2022-10-19 10:31:25,672 - INFO: 最优top1测试精度:0.35542 (epoch=6)
    2022-10-19 10:31:25,672 - INFO: 训练完成,最终性能accuracy=0.35542(epoch=6), 总耗时1221.69s, 已将其保存为:Garbage_resnet50_final
    2022-10-19 10:31:28,323 - INFO: Epoch:7/20, batch:2160, train_loss:[2.13461], acc_top1:[0.43750], acc_top5:[0.71875](24.02s)
    2022-10-19 10:31:32,610 - INFO: Epoch:7/20, batch:2170, train_loss:[1.65555], acc_top1:[0.53125], acc_top5:[0.81250](4.29s)
    2022-10-19 10:31:36,834 - INFO: Epoch:7/20, batch:2180, train_loss:[2.47861], acc_top1:[0.31250], acc_top5:[0.68750](4.22s)
    2022-10-19 10:31:41,147 - INFO: Epoch:7/20, batch:2190, train_loss:[2.03420], acc_top1:[0.37500], acc_top5:[0.75000](4.31s)
    2022-10-19 10:31:45,397 - INFO: Epoch:7/20, batch:2200, train_loss:[2.28761], acc_top1:[0.40625], acc_top5:[0.62500](4.25s)
    2022-10-19 10:31:49,705 - INFO: Epoch:7/20, batch:2210, train_loss:[2.37601], acc_top1:[0.18750], acc_top5:[0.81250](4.31s)
    2022-10-19 10:31:53,996 - INFO: Epoch:7/20, batch:2220, train_loss:[2.66091], acc_top1:[0.18750], acc_top5:[0.53125](4.29s)
    2022-10-19 10:31:58,547 - INFO: Epoch:7/20, batch:2230, train_loss:[2.40703], acc_top1:[0.21875], acc_top5:[0.71875](4.55s)
    2022-10-19 10:32:02,777 - INFO: Epoch:7/20, batch:2240, train_loss:[1.96577], acc_top1:[0.46875], acc_top5:[0.81250](4.23s)
    2022-10-19 10:32:07,094 - INFO: Epoch:7/20, batch:2250, train_loss:[2.02257], acc_top1:[0.43750], acc_top5:[0.71875](4.32s)
    2022-10-19 10:32:11,384 - INFO: Epoch:7/20, batch:2260, train_loss:[2.18261], acc_top1:[0.40625], acc_top5:[0.62500](4.29s)
    2022-10-19 10:32:15,724 - INFO: Epoch:7/20, batch:2270, train_loss:[2.08435], acc_top1:[0.40625], acc_top5:[0.75000](4.34s)
    2022-10-19 10:32:20,052 - INFO: Epoch:7/20, batch:2280, train_loss:[2.59611], acc_top1:[0.25000], acc_top5:[0.68750](4.33s)
    2022-10-19 10:32:24,388 - INFO: Epoch:7/20, batch:2290, train_loss:[2.48529], acc_top1:[0.28125], acc_top5:[0.65625](4.34s)
    2022-10-19 10:32:28,759 - INFO: Epoch:7/20, batch:2300, train_loss:[1.88581], acc_top1:[0.43750], acc_top5:[0.81250](4.37s)
    2022-10-19 10:32:33,068 - INFO: Epoch:7/20, batch:2310, train_loss:[2.22690], acc_top1:[0.43750], acc_top5:[0.75000](4.31s)
    2022-10-19 10:32:37,337 - INFO: Epoch:7/20, batch:2320, train_loss:[2.44716], acc_top1:[0.34375], acc_top5:[0.75000](4.27s)
    2022-10-19 10:32:41,779 - INFO: Epoch:7/20, batch:2330, train_loss:[2.40096], acc_top1:[0.28125], acc_top5:[0.71875](4.44s)
    2022-10-19 10:32:46,048 - INFO: Epoch:7/20, batch:2340, train_loss:[2.45770], acc_top1:[0.50000], acc_top5:[0.71875](4.27s)
    2022-10-19 10:32:50,353 - INFO: Epoch:7/20, batch:2350, train_loss:[2.25782], acc_top1:[0.31250], acc_top5:[0.68750](4.30s)
    2022-10-19 10:32:54,769 - INFO: Epoch:7/20, batch:2360, train_loss:[2.13085], acc_top1:[0.34375], acc_top5:[0.78125](4.42s)
    2022-10-19 10:32:59,073 - INFO: Epoch:7/20, batch:2370, train_loss:[1.93324], acc_top1:[0.43750], acc_top5:[0.75000](4.30s)
    2022-10-19 10:33:03,368 - INFO: Epoch:7/20, batch:2380, train_loss:[2.26601], acc_top1:[0.28125], acc_top5:[0.71875](4.30s)
    2022-10-19 10:33:07,639 - INFO: Epoch:7/20, batch:2390, train_loss:[2.40099], acc_top1:[0.31250], acc_top5:[0.56250](4.27s)
    2022-10-19 10:33:11,949 - INFO: Epoch:7/20, batch:2400, train_loss:[2.33961], acc_top1:[0.28125], acc_top5:[0.71875](4.31s)
    2022-10-19 10:33:16,325 - INFO: Epoch:7/20, batch:2410, train_loss:[1.85235], acc_top1:[0.53125], acc_top5:[0.81250](4.38s)
    2022-10-19 10:33:20,573 - INFO: Epoch:7/20, batch:2420, train_loss:[1.81752], acc_top1:[0.40625], acc_top5:[0.81250](4.25s)
    2022-10-19 10:33:24,911 - INFO: Epoch:7/20, batch:2430, train_loss:[2.61251], acc_top1:[0.34375], acc_top5:[0.59375](4.34s)
    2022-10-19 10:33:29,200 - INFO: Epoch:7/20, batch:2440, train_loss:[2.60869], acc_top1:[0.37500], acc_top5:[0.71875](4.29s)
    2022-10-19 10:33:33,486 - INFO: Epoch:7/20, batch:2450, train_loss:[2.76270], acc_top1:[0.34375], acc_top5:[0.59375](4.29s)
    2022-10-19 10:33:37,754 - INFO: Epoch:7/20, batch:2460, train_loss:[2.11021], acc_top1:[0.37500], acc_top5:[0.75000](4.27s)
    2022-10-19 10:33:42,030 - INFO: Epoch:7/20, batch:2470, train_loss:[2.18996], acc_top1:[0.40625], acc_top5:[0.75000](4.28s)
    2022-10-19 10:33:46,280 - INFO: Epoch:7/20, batch:2480, train_loss:[2.24885], acc_top1:[0.40625], acc_top5:[0.75000](4.25s)
    2022-10-19 10:33:50,578 - INFO: Epoch:7/20, batch:2490, train_loss:[2.05664], acc_top1:[0.46875], acc_top5:[0.71875](4.30s)
    2022-10-19 10:33:54,884 - INFO: Epoch:7/20, batch:2500, train_loss:[2.38581], acc_top1:[0.31250], acc_top5:[0.68750](4.31s)
    2022-10-19 10:33:59,206 - INFO: Epoch:7/20, batch:2510, train_loss:[1.95020], acc_top1:[0.40625], acc_top5:[0.81250](4.32s)
    2022-10-19 10:34:16,149 - INFO: [validation] Epoch:7/20, val_loss:[0.06575], val_top1:[0.41960], val_top5:[0.77191]
    2022-10-19 10:34:19,886 - INFO: 已保存当前测试模型(epoch=7)为最优模型:Garbage_resnet50_final
    2022-10-19 10:34:19,886 - INFO: 最优top1测试精度:0.41960 (epoch=7)
    2022-10-19 10:34:19,887 - INFO: 训练完成,最终性能accuracy=0.41960(epoch=7), 总耗时1395.91s, 已将其保存为:Garbage_resnet50_final
    2022-10-19 10:34:22,953 - INFO: Epoch:8/20, batch:2520, train_loss:[2.20661], acc_top1:[0.37500], acc_top5:[0.75000](23.75s)
    2022-10-19 10:34:27,246 - INFO: Epoch:8/20, batch:2530, train_loss:[1.63713], acc_top1:[0.62500], acc_top5:[0.78125](4.29s)
    2022-10-19 10:34:31,550 - INFO: Epoch:8/20, batch:2540, train_loss:[1.71182], acc_top1:[0.59375], acc_top5:[0.81250](4.30s)
    2022-10-19 10:34:35,776 - INFO: Epoch:8/20, batch:2550, train_loss:[2.03994], acc_top1:[0.46875], acc_top5:[0.75000](4.23s)
    2022-10-19 10:34:40,022 - INFO: Epoch:8/20, batch:2560, train_loss:[2.39635], acc_top1:[0.34375], acc_top5:[0.56250](4.25s)
    2022-10-19 10:34:44,340 - INFO: Epoch:8/20, batch:2570, train_loss:[2.03170], acc_top1:[0.31250], acc_top5:[0.78125](4.32s)
    2022-10-19 10:34:48,629 - INFO: Epoch:8/20, batch:2580, train_loss:[1.96327], acc_top1:[0.34375], acc_top5:[0.78125](4.29s)
    2022-10-19 10:34:52,925 - INFO: Epoch:8/20, batch:2590, train_loss:[2.64053], acc_top1:[0.34375], acc_top5:[0.71875](4.30s)
    2022-10-19 10:34:57,125 - INFO: Epoch:8/20, batch:2600, train_loss:[2.23217], acc_top1:[0.37500], acc_top5:[0.71875](4.20s)
    2022-10-19 10:35:01,420 - INFO: Epoch:8/20, batch:2610, train_loss:[2.72176], acc_top1:[0.28125], acc_top5:[0.56250](4.30s)
    2022-10-19 10:35:05,726 - INFO: Epoch:8/20, batch:2620, train_loss:[1.57850], acc_top1:[0.46875], acc_top5:[0.87500](4.31s)
    2022-10-19 10:35:10,001 - INFO: Epoch:8/20, batch:2630, train_loss:[1.87596], acc_top1:[0.56250], acc_top5:[0.71875](4.28s)
    2022-10-19 10:35:14,287 - INFO: Epoch:8/20, batch:2640, train_loss:[2.14026], acc_top1:[0.37500], acc_top5:[0.81250](4.29s)
    2022-10-19 10:35:18,529 - INFO: Epoch:8/20, batch:2650, train_loss:[2.32277], acc_top1:[0.31250], acc_top5:[0.59375](4.24s)
    2022-10-19 10:35:22,756 - INFO: Epoch:8/20, batch:2660, train_loss:[2.23833], acc_top1:[0.37500], acc_top5:[0.68750](4.23s)
    2022-10-19 10:35:27,011 - INFO: Epoch:8/20, batch:2670, train_loss:[2.06293], acc_top1:[0.31250], acc_top5:[0.78125](4.25s)
    2022-10-19 10:35:31,270 - INFO: Epoch:8/20, batch:2680, train_loss:[2.19186], acc_top1:[0.31250], acc_top5:[0.71875](4.26s)
    2022-10-19 10:35:35,548 - INFO: Epoch:8/20, batch:2690, train_loss:[1.79504], acc_top1:[0.40625], acc_top5:[0.87500](4.28s)
    2022-10-19 10:35:39,776 - INFO: Epoch:8/20, batch:2700, train_loss:[1.95714], acc_top1:[0.37500], acc_top5:[0.84375](4.23s)
    2022-10-19 10:35:44,068 - INFO: Epoch:8/20, batch:2710, train_loss:[2.30321], acc_top1:[0.43750], acc_top5:[0.71875](4.29s)
    2022-10-19 10:35:48,379 - INFO: Epoch:8/20, batch:2720, train_loss:[1.99608], acc_top1:[0.50000], acc_top5:[0.81250](4.31s)
    2022-10-19 10:35:52,701 - INFO: Epoch:8/20, batch:2730, train_loss:[2.18222], acc_top1:[0.31250], acc_top5:[0.75000](4.32s)
    2022-10-19 10:35:57,041 - INFO: Epoch:8/20, batch:2740, train_loss:[2.26335], acc_top1:[0.31250], acc_top5:[0.65625](4.34s)
    2022-10-19 10:36:01,315 - INFO: Epoch:8/20, batch:2750, train_loss:[2.18703], acc_top1:[0.43750], acc_top5:[0.71875](4.27s)
    2022-10-19 10:36:05,636 - INFO: Epoch:8/20, batch:2760, train_loss:[2.07426], acc_top1:[0.43750], acc_top5:[0.68750](4.32s)
    2022-10-19 10:36:09,792 - INFO: Epoch:8/20, batch:2770, train_loss:[2.35451], acc_top1:[0.43750], acc_top5:[0.62500](4.16s)
    2022-10-19 10:36:13,941 - INFO: Epoch:8/20, batch:2780, train_loss:[2.22203], acc_top1:[0.46875], acc_top5:[0.75000](4.15s)
    2022-10-19 10:36:18,268 - INFO: Epoch:8/20, batch:2790, train_loss:[2.18155], acc_top1:[0.43750], acc_top5:[0.71875](4.33s)
    2022-10-19 10:36:22,481 - INFO: Epoch:8/20, batch:2800, train_loss:[1.91512], acc_top1:[0.37500], acc_top5:[0.90625](4.21s)
    2022-10-19 10:36:26,736 - INFO: Epoch:8/20, batch:2810, train_loss:[2.07014], acc_top1:[0.50000], acc_top5:[0.78125](4.25s)
    2022-10-19 10:36:30,941 - INFO: Epoch:8/20, batch:2820, train_loss:[1.89065], acc_top1:[0.46875], acc_top5:[0.81250](4.21s)
    2022-10-19 10:36:35,259 - INFO: Epoch:8/20, batch:2830, train_loss:[1.91022], acc_top1:[0.34375], acc_top5:[0.78125](4.32s)
    2022-10-19 10:36:39,575 - INFO: Epoch:8/20, batch:2840, train_loss:[2.19712], acc_top1:[0.31250], acc_top5:[0.81250](4.32s)
    2022-10-19 10:36:43,806 - INFO: Epoch:8/20, batch:2850, train_loss:[2.03915], acc_top1:[0.50000], acc_top5:[0.81250](4.23s)
    2022-10-19 10:36:48,060 - INFO: Epoch:8/20, batch:2860, train_loss:[2.31442], acc_top1:[0.31250], acc_top5:[0.65625](4.26s)
    2022-10-19 10:36:52,372 - INFO: Epoch:8/20, batch:2870, train_loss:[2.49450], acc_top1:[0.31250], acc_top5:[0.56250](4.31s)
    2022-10-19 10:37:09,016 - INFO: [validation] Epoch:8/20, val_loss:[0.06588], val_top1:[0.42029], val_top5:[0.75466]
    2022-10-19 10:37:12,391 - INFO: 已保存当前测试模型(epoch=8)为最优模型:Garbage_resnet50_final
    2022-10-19 10:37:12,392 - INFO: 最优top1测试精度:0.42029 (epoch=8)
    2022-10-19 10:37:12,392 - INFO: 训练完成,最终性能accuracy=0.42029(epoch=8), 总耗时1568.41s, 已将其保存为:Garbage_resnet50_final
    2022-10-19 10:37:15,871 - INFO: Epoch:9/20, batch:2880, train_loss:[2.35830], acc_top1:[0.37500], acc_top5:[0.65625](23.50s)
    2022-10-19 10:37:20,089 - INFO: Epoch:9/20, batch:2890, train_loss:[2.12999], acc_top1:[0.40625], acc_top5:[0.75000](4.22s)
    2022-10-19 10:37:24,414 - INFO: Epoch:9/20, batch:2900, train_loss:[2.06918], acc_top1:[0.37500], acc_top5:[0.71875](4.33s)
    2022-10-19 10:37:28,679 - INFO: Epoch:9/20, batch:2910, train_loss:[2.03201], acc_top1:[0.43750], acc_top5:[0.81250](4.26s)
    2022-10-19 10:37:32,973 - INFO: Epoch:9/20, batch:2920, train_loss:[2.23768], acc_top1:[0.40625], acc_top5:[0.81250](4.29s)
    2022-10-19 10:37:37,229 - INFO: Epoch:9/20, batch:2930, train_loss:[2.16037], acc_top1:[0.31250], acc_top5:[0.65625](4.26s)
    2022-10-19 10:37:41,520 - INFO: Epoch:9/20, batch:2940, train_loss:[2.15404], acc_top1:[0.37500], acc_top5:[0.71875](4.29s)
    2022-10-19 10:37:45,813 - INFO: Epoch:9/20, batch:2950, train_loss:[2.12646], acc_top1:[0.34375], acc_top5:[0.78125](4.29s)
    2022-10-19 10:37:50,040 - INFO: Epoch:9/20, batch:2960, train_loss:[2.64777], acc_top1:[0.25000], acc_top5:[0.65625](4.23s)
    2022-10-19 10:37:54,291 - INFO: Epoch:9/20, batch:2970, train_loss:[1.94711], acc_top1:[0.40625], acc_top5:[0.78125](4.25s)
    2022-10-19 10:37:58,529 - INFO: Epoch:9/20, batch:2980, train_loss:[1.98098], acc_top1:[0.40625], acc_top5:[0.78125](4.24s)
    2022-10-19 10:38:02,797 - INFO: Epoch:9/20, batch:2990, train_loss:[1.90895], acc_top1:[0.43750], acc_top5:[0.81250](4.27s)
    2022-10-19 10:38:07,134 - INFO: Epoch:9/20, batch:3000, train_loss:[2.22470], acc_top1:[0.40625], acc_top5:[0.71875](4.34s)
    2022-10-19 10:38:11,461 - INFO: Epoch:9/20, batch:3010, train_loss:[2.37069], acc_top1:[0.28125], acc_top5:[0.59375](4.33s)
    2022-10-19 10:38:15,705 - INFO: Epoch:9/20, batch:3020, train_loss:[1.96731], acc_top1:[0.50000], acc_top5:[0.81250](4.24s)
    2022-10-19 10:38:19,917 - INFO: Epoch:9/20, batch:3030, train_loss:[1.33685], acc_top1:[0.65625], acc_top5:[0.90625](4.21s)
    2022-10-19 10:38:24,108 - INFO: Epoch:9/20, batch:3040, train_loss:[1.80902], acc_top1:[0.56250], acc_top5:[0.78125](4.19s)
    2022-10-19 10:38:28,419 - INFO: Epoch:9/20, batch:3050, train_loss:[2.42102], acc_top1:[0.34375], acc_top5:[0.65625](4.31s)
    2022-10-19 10:38:32,734 - INFO: Epoch:9/20, batch:3060, train_loss:[1.99071], acc_top1:[0.46875], acc_top5:[0.71875](4.32s)
    2022-10-19 10:38:36,958 - INFO: Epoch:9/20, batch:3070, train_loss:[2.01021], acc_top1:[0.43750], acc_top5:[0.78125](4.22s)
    2022-10-19 10:38:41,253 - INFO: Epoch:9/20, batch:3080, train_loss:[1.98351], acc_top1:[0.43750], acc_top5:[0.71875](4.29s)
    2022-10-19 10:38:45,561 - INFO: Epoch:9/20, batch:3090, train_loss:[2.55133], acc_top1:[0.28125], acc_top5:[0.62500](4.31s)
    2022-10-19 10:38:49,784 - INFO: Epoch:9/20, batch:3100, train_loss:[1.79998], acc_top1:[0.46875], acc_top5:[0.84375](4.22s)
    2022-10-19 10:38:54,020 - INFO: Epoch:9/20, batch:3110, train_loss:[1.84389], acc_top1:[0.43750], acc_top5:[0.87500](4.24s)
    2022-10-19 10:38:58,272 - INFO: Epoch:9/20, batch:3120, train_loss:[2.39888], acc_top1:[0.28125], acc_top5:[0.71875](4.25s)
    2022-10-19 10:39:02,585 - INFO: Epoch:9/20, batch:3130, train_loss:[1.70708], acc_top1:[0.43750], acc_top5:[0.84375](4.31s)
    2022-10-19 10:39:06,811 - INFO: Epoch:9/20, batch:3140, train_loss:[2.34346], acc_top1:[0.21875], acc_top5:[0.71875](4.23s)
    2022-10-19 10:39:11,060 - INFO: Epoch:9/20, batch:3150, train_loss:[1.81172], acc_top1:[0.53125], acc_top5:[0.81250](4.25s)
    2022-10-19 10:39:15,325 - INFO: Epoch:9/20, batch:3160, train_loss:[2.63932], acc_top1:[0.28125], acc_top5:[0.65625](4.26s)
    2022-10-19 10:39:19,631 - INFO: Epoch:9/20, batch:3170, train_loss:[1.81727], acc_top1:[0.50000], acc_top5:[0.78125](4.31s)
    2022-10-19 10:39:23,959 - INFO: Epoch:9/20, batch:3180, train_loss:[1.65926], acc_top1:[0.50000], acc_top5:[0.84375](4.33s)
    2022-10-19 10:39:28,157 - INFO: Epoch:9/20, batch:3190, train_loss:[2.14922], acc_top1:[0.40625], acc_top5:[0.75000](4.20s)
    2022-10-19 10:39:32,381 - INFO: Epoch:9/20, batch:3200, train_loss:[1.67554], acc_top1:[0.59375], acc_top5:[0.84375](4.22s)
    2022-10-19 10:39:36,718 - INFO: Epoch:9/20, batch:3210, train_loss:[1.76406], acc_top1:[0.46875], acc_top5:[0.78125](4.34s)
    2022-10-19 10:39:40,951 - INFO: Epoch:9/20, batch:3220, train_loss:[1.96766], acc_top1:[0.53125], acc_top5:[0.84375](4.23s)
    2022-10-19 10:39:45,210 - INFO: Epoch:9/20, batch:3230, train_loss:[1.90432], acc_top1:[0.43750], acc_top5:[0.75000](4.26s)
    2022-10-19 10:40:01,376 - INFO: [validation] Epoch:9/20, val_loss:[0.07279], val_top1:[0.36680], val_top5:[0.71843]
    2022-10-19 10:40:01,376 - INFO: 最优top1测试精度:0.42029 (epoch=8)
    2022-10-19 10:40:01,377 - INFO: 训练完成,最终性能accuracy=0.42029(epoch=8), 总耗时1737.39s, 已将其保存为:Garbage_resnet50_final
    2022-10-19 10:40:05,275 - INFO: Epoch:10/20, batch:3240, train_loss:[1.97086], acc_top1:[0.50000], acc_top5:[0.81250](20.06s)
    2022-10-19 10:40:09,572 - INFO: Epoch:10/20, batch:3250, train_loss:[2.09813], acc_top1:[0.46875], acc_top5:[0.71875](4.30s)
    2022-10-19 10:40:13,808 - INFO: Epoch:10/20, batch:3260, train_loss:[2.03465], acc_top1:[0.50000], acc_top5:[0.81250](4.24s)
    2022-10-19 10:40:18,051 - INFO: Epoch:10/20, batch:3270, train_loss:[2.38625], acc_top1:[0.31250], acc_top5:[0.68750](4.24s)
    2022-10-19 10:40:22,284 - INFO: Epoch:10/20, batch:3280, train_loss:[2.30643], acc_top1:[0.46875], acc_top5:[0.65625](4.23s)
    2022-10-19 10:40:26,511 - INFO: Epoch:10/20, batch:3290, train_loss:[1.66249], acc_top1:[0.50000], acc_top5:[0.87500](4.23s)
    2022-10-19 10:40:30,783 - INFO: Epoch:10/20, batch:3300, train_loss:[2.09381], acc_top1:[0.43750], acc_top5:[0.78125](4.27s)
    2022-10-19 10:40:35,010 - INFO: Epoch:10/20, batch:3310, train_loss:[1.72490], acc_top1:[0.50000], acc_top5:[0.78125](4.23s)
    2022-10-19 10:40:39,326 - INFO: Epoch:10/20, batch:3320, train_loss:[1.97592], acc_top1:[0.43750], acc_top5:[0.78125](4.32s)
    2022-10-19 10:40:43,623 - INFO: Epoch:10/20, batch:3330, train_loss:[2.18514], acc_top1:[0.31250], acc_top5:[0.68750](4.30s)
    2022-10-19 10:40:47,901 - INFO: Epoch:10/20, batch:3340, train_loss:[1.98816], acc_top1:[0.46875], acc_top5:[0.71875](4.28s)
    2022-10-19 10:40:52,184 - INFO: Epoch:10/20, batch:3350, train_loss:[1.63053], acc_top1:[0.53125], acc_top5:[0.84375](4.28s)
    2022-10-19 10:40:56,475 - INFO: Epoch:10/20, batch:3360, train_loss:[2.37998], acc_top1:[0.40625], acc_top5:[0.68750](4.29s)
    2022-10-19 10:41:00,840 - INFO: Epoch:10/20, batch:3370, train_loss:[2.01636], acc_top1:[0.40625], acc_top5:[0.75000](4.36s)
    2022-10-19 10:41:05,103 - INFO: Epoch:10/20, batch:3380, train_loss:[2.03413], acc_top1:[0.43750], acc_top5:[0.71875](4.26s)
    2022-10-19 10:41:09,446 - INFO: Epoch:10/20, batch:3390, train_loss:[1.87542], acc_top1:[0.50000], acc_top5:[0.87500](4.34s)
    2022-10-19 10:41:13,758 - INFO: Epoch:10/20, batch:3400, train_loss:[1.52968], acc_top1:[0.59375], acc_top5:[0.87500](4.31s)
    2022-10-19 10:41:18,035 - INFO: Epoch:10/20, batch:3410, train_loss:[2.29650], acc_top1:[0.34375], acc_top5:[0.71875](4.28s)
    2022-10-19 10:41:22,342 - INFO: Epoch:10/20, batch:3420, train_loss:[2.21459], acc_top1:[0.46875], acc_top5:[0.65625](4.31s)
    2022-10-19 10:41:26,623 - INFO: Epoch:10/20, batch:3430, train_loss:[2.29504], acc_top1:[0.34375], acc_top5:[0.68750](4.28s)
    2022-10-19 10:41:30,877 - INFO: Epoch:10/20, batch:3440, train_loss:[2.07172], acc_top1:[0.46875], acc_top5:[0.75000](4.25s)
    2022-10-19 10:41:35,232 - INFO: Epoch:10/20, batch:3450, train_loss:[2.13000], acc_top1:[0.34375], acc_top5:[0.65625](4.35s)
    2022-10-19 10:41:39,510 - INFO: Epoch:10/20, batch:3460, train_loss:[2.12486], acc_top1:[0.43750], acc_top5:[0.68750](4.28s)
    2022-10-19 10:41:43,820 - INFO: Epoch:10/20, batch:3470, train_loss:[2.15163], acc_top1:[0.43750], acc_top5:[0.75000](4.31s)
    2022-10-19 10:41:48,036 - INFO: Epoch:10/20, batch:3480, train_loss:[1.90560], acc_top1:[0.37500], acc_top5:[0.78125](4.22s)
    2022-10-19 10:41:52,448 - INFO: Epoch:10/20, batch:3490, train_loss:[1.67848], acc_top1:[0.59375], acc_top5:[0.90625](4.41s)
    2022-10-19 10:41:56,756 - INFO: Epoch:10/20, batch:3500, train_loss:[1.84421], acc_top1:[0.46875], acc_top5:[0.71875](4.31s)
    2022-10-19 10:42:01,026 - INFO: Epoch:10/20, batch:3510, train_loss:[2.06201], acc_top1:[0.50000], acc_top5:[0.71875](4.27s)
    2022-10-19 10:42:05,260 - INFO: Epoch:10/20, batch:3520, train_loss:[1.96983], acc_top1:[0.43750], acc_top5:[0.75000](4.23s)
    2022-10-19 10:42:09,526 - INFO: Epoch:10/20, batch:3530, train_loss:[1.93697], acc_top1:[0.37500], acc_top5:[0.81250](4.27s)
    2022-10-19 10:42:13,834 - INFO: Epoch:10/20, batch:3540, train_loss:[1.99131], acc_top1:[0.46875], acc_top5:[0.71875](4.31s)
    2022-10-19 10:42:18,112 - INFO: Epoch:10/20, batch:3550, train_loss:[1.86204], acc_top1:[0.53125], acc_top5:[0.78125](4.28s)
    2022-10-19 10:42:22,434 - INFO: Epoch:10/20, batch:3560, train_loss:[2.13440], acc_top1:[0.31250], acc_top5:[0.84375](4.32s)
    2022-10-19 10:42:26,694 - INFO: Epoch:10/20, batch:3570, train_loss:[1.97994], acc_top1:[0.50000], acc_top5:[0.71875](4.26s)
    2022-10-19 10:42:31,024 - INFO: Epoch:10/20, batch:3580, train_loss:[2.00687], acc_top1:[0.37500], acc_top5:[0.81250](4.33s)
    2022-10-19 10:42:35,241 - INFO: Epoch:10/20, batch:3590, train_loss:[2.11056], acc_top1:[0.37500], acc_top5:[0.75000](4.22s)
    2022-10-19 10:42:51,093 - INFO: [validation] Epoch:10/20, val_loss:[0.06461], val_top1:[0.42961], val_top5:[0.77364]
    2022-10-19 10:42:55,384 - INFO: 已保存当前测试模型(epoch=10)为最优模型:Garbage_resnet50_final
    2022-10-19 10:42:55,385 - INFO: 最优top1测试精度:0.42961 (epoch=10)
    2022-10-19 10:42:55,386 - INFO: 训练完成,最终性能accuracy=0.42961(epoch=10), 总耗时1911.40s, 已将其保存为:Garbage_resnet50_final
    2022-10-19 10:42:59,792 - INFO: Epoch:11/20, batch:3600, train_loss:[2.19995], acc_top1:[0.37500], acc_top5:[0.78125](24.55s)
    2022-10-19 10:43:04,129 - INFO: Epoch:11/20, batch:3610, train_loss:[1.86201], acc_top1:[0.56250], acc_top5:[0.81250](4.34s)
    2022-10-19 10:43:08,373 - INFO: Epoch:11/20, batch:3620, train_loss:[1.99063], acc_top1:[0.40625], acc_top5:[0.78125](4.24s)
    2022-10-19 10:43:12,716 - INFO: Epoch:11/20, batch:3630, train_loss:[1.79790], acc_top1:[0.46875], acc_top5:[0.81250](4.34s)
    2022-10-19 10:43:17,010 - INFO: Epoch:11/20, batch:3640, train_loss:[1.78138], acc_top1:[0.46875], acc_top5:[0.84375](4.29s)
    2022-10-19 10:43:21,367 - INFO: Epoch:11/20, batch:3650, train_loss:[1.90081], acc_top1:[0.43750], acc_top5:[0.81250](4.36s)
    2022-10-19 10:43:25,726 - INFO: Epoch:11/20, batch:3660, train_loss:[2.13153], acc_top1:[0.50000], acc_top5:[0.75000](4.36s)
    2022-10-19 10:43:30,074 - INFO: Epoch:11/20, batch:3670, train_loss:[1.62108], acc_top1:[0.62500], acc_top5:[0.87500](4.35s)
    2022-10-19 10:43:34,366 - INFO: Epoch:11/20, batch:3680, train_loss:[2.02996], acc_top1:[0.43750], acc_top5:[0.78125](4.29s)
    2022-10-19 10:43:38,693 - INFO: Epoch:11/20, batch:3690, train_loss:[1.44822], acc_top1:[0.56250], acc_top5:[0.87500](4.33s)
    2022-10-19 10:43:42,975 - INFO: Epoch:11/20, batch:3700, train_loss:[1.62213], acc_top1:[0.50000], acc_top5:[0.78125](4.28s)
    2022-10-19 10:43:47,214 - INFO: Epoch:11/20, batch:3710, train_loss:[1.95457], acc_top1:[0.56250], acc_top5:[0.75000](4.24s)
    2022-10-19 10:43:51,561 - INFO: Epoch:11/20, batch:3720, train_loss:[2.20034], acc_top1:[0.34375], acc_top5:[0.75000](4.35s)
    2022-10-19 10:43:55,889 - INFO: Epoch:11/20, batch:3730, train_loss:[1.69224], acc_top1:[0.40625], acc_top5:[0.90625](4.33s)
    2022-10-19 10:44:00,243 - INFO: Epoch:11/20, batch:3740, train_loss:[2.13158], acc_top1:[0.40625], acc_top5:[0.68750](4.35s)
    2022-10-19 10:44:04,638 - INFO: Epoch:11/20, batch:3750, train_loss:[1.91640], acc_top1:[0.50000], acc_top5:[0.78125](4.39s)
    2022-10-19 10:44:08,976 - INFO: Epoch:11/20, batch:3760, train_loss:[1.92370], acc_top1:[0.40625], acc_top5:[0.71875](4.34s)
    2022-10-19 10:44:13,281 - INFO: Epoch:11/20, batch:3770, train_loss:[2.08578], acc_top1:[0.37500], acc_top5:[0.71875](4.30s)
    2022-10-19 10:44:17,597 - INFO: Epoch:11/20, batch:3780, train_loss:[2.34641], acc_top1:[0.46875], acc_top5:[0.65625](4.32s)
    2022-10-19 10:44:21,899 - INFO: Epoch:11/20, batch:3790, train_loss:[2.02382], acc_top1:[0.37500], acc_top5:[0.75000](4.30s)
    2022-10-19 10:44:26,200 - INFO: Epoch:11/20, batch:3800, train_loss:[2.05641], acc_top1:[0.43750], acc_top5:[0.75000](4.30s)
    2022-10-19 10:44:30,416 - INFO: Epoch:11/20, batch:3810, train_loss:[2.02442], acc_top1:[0.43750], acc_top5:[0.78125](4.22s)
    2022-10-19 10:44:34,732 - INFO: Epoch:11/20, batch:3820, train_loss:[1.65038], acc_top1:[0.37500], acc_top5:[0.87500](4.32s)
    2022-10-19 10:44:39,020 - INFO: Epoch:11/20, batch:3830, train_loss:[1.68060], acc_top1:[0.43750], acc_top5:[0.90625](4.29s)
    2022-10-19 10:44:43,410 - INFO: Epoch:11/20, batch:3840, train_loss:[1.58740], acc_top1:[0.50000], acc_top5:[1.00000](4.39s)
    2022-10-19 10:44:47,655 - INFO: Epoch:11/20, batch:3850, train_loss:[1.81508], acc_top1:[0.43750], acc_top5:[0.81250](4.25s)
    2022-10-19 10:44:52,156 - INFO: Epoch:11/20, batch:3860, train_loss:[1.85690], acc_top1:[0.53125], acc_top5:[0.75000](4.50s)
    2022-10-19 10:44:56,481 - INFO: Epoch:11/20, batch:3870, train_loss:[2.03395], acc_top1:[0.50000], acc_top5:[0.81250](4.33s)
    2022-10-19 10:45:00,807 - INFO: Epoch:11/20, batch:3880, train_loss:[2.01333], acc_top1:[0.40625], acc_top5:[0.71875](4.33s)
    2022-10-19 10:45:05,138 - INFO: Epoch:11/20, batch:3890, train_loss:[1.65341], acc_top1:[0.46875], acc_top5:[0.84375](4.33s)
    2022-10-19 10:45:09,573 - INFO: Epoch:11/20, batch:3900, train_loss:[2.01748], acc_top1:[0.43750], acc_top5:[0.81250](4.43s)
    2022-10-19 10:45:13,948 - INFO: Epoch:11/20, batch:3910, train_loss:[1.95654], acc_top1:[0.34375], acc_top5:[0.78125](4.38s)
    2022-10-19 10:45:18,277 - INFO: Epoch:11/20, batch:3920, train_loss:[1.75301], acc_top1:[0.56250], acc_top5:[0.81250](4.33s)
    2022-10-19 10:45:22,552 - INFO: Epoch:11/20, batch:3930, train_loss:[2.12802], acc_top1:[0.40625], acc_top5:[0.78125](4.28s)
    2022-10-19 10:45:26,948 - INFO: Epoch:11/20, batch:3940, train_loss:[2.33049], acc_top1:[0.28125], acc_top5:[0.68750](4.40s)
    2022-10-19 10:45:46,704 - INFO: [validation] Epoch:11/20, val_loss:[0.08433], val_top1:[0.42719], val_top5:[0.76812]
    2022-10-19 10:45:46,704 - INFO: 最优top1测试精度:0.42961 (epoch=10)
    2022-10-19 10:45:46,705 - INFO: 训练完成,最终性能accuracy=0.42961(epoch=10), 总耗时2082.72s, 已将其保存为:Garbage_resnet50_final
    2022-10-19 10:45:47,168 - INFO: Epoch:12/20, batch:3950, train_loss:[1.89370], acc_top1:[0.37500], acc_top5:[0.75000](20.22s)
    2022-10-19 10:45:51,417 - INFO: Epoch:12/20, batch:3960, train_loss:[2.02063], acc_top1:[0.50000], acc_top5:[0.81250](4.25s)
    2022-10-19 10:45:55,784 - INFO: Epoch:12/20, batch:3970, train_loss:[1.53129], acc_top1:[0.53125], acc_top5:[0.81250](4.37s)
    2022-10-19 10:46:00,070 - INFO: Epoch:12/20, batch:3980, train_loss:[1.88866], acc_top1:[0.56250], acc_top5:[0.78125](4.29s)
    2022-10-19 10:46:04,361 - INFO: Epoch:12/20, batch:3990, train_loss:[1.51942], acc_top1:[0.59375], acc_top5:[0.90625](4.29s)
    2022-10-19 10:46:08,684 - INFO: Epoch:12/20, batch:4000, train_loss:[2.18813], acc_top1:[0.34375], acc_top5:[0.75000](4.32s)
    2022-10-19 10:46:13,040 - INFO: Epoch:12/20, batch:4010, train_loss:[1.76709], acc_top1:[0.50000], acc_top5:[0.78125](4.36s)
    2022-10-19 10:46:17,474 - INFO: Epoch:12/20, batch:4020, train_loss:[1.59637], acc_top1:[0.56250], acc_top5:[0.78125](4.43s)
    2022-10-19 10:46:21,793 - INFO: Epoch:12/20, batch:4030, train_loss:[1.43410], acc_top1:[0.59375], acc_top5:[0.84375](4.32s)
    2022-10-19 10:46:26,048 - INFO: Epoch:12/20, batch:4040, train_loss:[1.75072], acc_top1:[0.50000], acc_top5:[0.78125](4.25s)
    2022-10-19 10:46:30,331 - INFO: Epoch:12/20, batch:4050, train_loss:[1.58238], acc_top1:[0.56250], acc_top5:[0.87500](4.28s)
    2022-10-19 10:46:34,563 - INFO: Epoch:12/20, batch:4060, train_loss:[2.09980], acc_top1:[0.43750], acc_top5:[0.75000](4.23s)
    2022-10-19 10:46:38,819 - INFO: Epoch:12/20, batch:4070, train_loss:[2.03352], acc_top1:[0.40625], acc_top5:[0.78125](4.26s)
    2022-10-19 10:46:43,058 - INFO: Epoch:12/20, batch:4080, train_loss:[1.65166], acc_top1:[0.56250], acc_top5:[0.90625](4.24s)
    2022-10-19 10:46:47,300 - INFO: Epoch:12/20, batch:4090, train_loss:[1.47135], acc_top1:[0.53125], acc_top5:[0.90625](4.24s)
    2022-10-19 10:46:51,602 - INFO: Epoch:12/20, batch:4100, train_loss:[2.10103], acc_top1:[0.43750], acc_top5:[0.75000](4.30s)
    2022-10-19 10:46:55,916 - INFO: Epoch:12/20, batch:4110, train_loss:[1.63580], acc_top1:[0.43750], acc_top5:[0.90625](4.31s)
    2022-10-19 10:47:00,181 - INFO: Epoch:12/20, batch:4120, train_loss:[1.96149], acc_top1:[0.46875], acc_top5:[0.78125](4.26s)
    2022-10-19 10:47:04,451 - INFO: Epoch:12/20, batch:4130, train_loss:[1.46688], acc_top1:[0.50000], acc_top5:[0.90625](4.27s)
    2022-10-19 10:47:08,708 - INFO: Epoch:12/20, batch:4140, train_loss:[1.88070], acc_top1:[0.53125], acc_top5:[0.84375](4.26s)
    2022-10-19 10:47:12,987 - INFO: Epoch:12/20, batch:4150, train_loss:[1.85398], acc_top1:[0.40625], acc_top5:[0.87500](4.28s)
    2022-10-19 10:47:17,237 - INFO: Epoch:12/20, batch:4160, train_loss:[2.28596], acc_top1:[0.31250], acc_top5:[0.68750](4.25s)
    2022-10-19 10:47:21,513 - INFO: Epoch:12/20, batch:4170, train_loss:[1.82440], acc_top1:[0.46875], acc_top5:[0.81250](4.28s)
    2022-10-19 10:47:25,808 - INFO: Epoch:12/20, batch:4180, train_loss:[2.07411], acc_top1:[0.46875], acc_top5:[0.78125](4.30s)
    2022-10-19 10:47:30,161 - INFO: Epoch:12/20, batch:4190, train_loss:[1.77815], acc_top1:[0.50000], acc_top5:[0.84375](4.35s)
    2022-10-19 10:47:34,449 - INFO: Epoch:12/20, batch:4200, train_loss:[1.81513], acc_top1:[0.50000], acc_top5:[0.75000](4.29s)
    2022-10-19 10:47:38,894 - INFO: Epoch:12/20, batch:4210, train_loss:[1.76384], acc_top1:[0.46875], acc_top5:[0.84375](4.45s)
    2022-10-19 10:47:43,050 - INFO: Epoch:12/20, batch:4220, train_loss:[1.58087], acc_top1:[0.53125], acc_top5:[0.81250](4.16s)
    2022-10-19 10:47:47,270 - INFO: Epoch:12/20, batch:4230, train_loss:[1.70090], acc_top1:[0.50000], acc_top5:[0.75000](4.22s)
    2022-10-19 10:47:51,471 - INFO: Epoch:12/20, batch:4240, train_loss:[2.06320], acc_top1:[0.40625], acc_top5:[0.84375](4.20s)
    2022-10-19 10:47:55,698 - INFO: Epoch:12/20, batch:4250, train_loss:[2.42510], acc_top1:[0.25000], acc_top5:[0.62500](4.23s)
    2022-10-19 10:47:59,942 - INFO: Epoch:12/20, batch:4260, train_loss:[1.68968], acc_top1:[0.56250], acc_top5:[0.78125](4.24s)
    2022-10-19 10:48:04,134 - INFO: Epoch:12/20, batch:4270, train_loss:[1.86573], acc_top1:[0.34375], acc_top5:[0.84375](4.19s)
    2022-10-19 10:48:08,302 - INFO: Epoch:12/20, batch:4280, train_loss:[1.60063], acc_top1:[0.50000], acc_top5:[0.81250](4.17s)
    2022-10-19 10:48:12,517 - INFO: Epoch:12/20, batch:4290, train_loss:[1.87960], acc_top1:[0.56250], acc_top5:[0.75000](4.22s)
    2022-10-19 10:48:16,706 - INFO: Epoch:12/20, batch:4300, train_loss:[1.82223], acc_top1:[0.56250], acc_top5:[0.78125](4.19s)
    2022-10-19 10:48:35,575 - INFO: [validation] Epoch:12/20, val_loss:[0.05411], val_top1:[0.50242], val_top5:[0.84610]
    2022-10-19 10:48:39,011 - INFO: 已保存当前测试模型(epoch=12)为最优模型:Garbage_resnet50_final
    2022-10-19 10:48:39,012 - INFO: 最优top1测试精度:0.50242 (epoch=12)
    2022-10-19 10:48:39,012 - INFO: 训练完成,最终性能accuracy=0.50242(epoch=12), 总耗时2255.03s, 已将其保存为:Garbage_resnet50_final
    2022-10-19 10:48:39,916 - INFO: Epoch:13/20, batch:4310, train_loss:[1.58223], acc_top1:[0.56250], acc_top5:[0.87500](23.21s)
    2022-10-19 10:48:44,110 - INFO: Epoch:13/20, batch:4320, train_loss:[1.70869], acc_top1:[0.50000], acc_top5:[0.87500](4.19s)
    2022-10-19 10:48:48,324 - INFO: Epoch:13/20, batch:4330, train_loss:[2.01298], acc_top1:[0.37500], acc_top5:[0.84375](4.21s)
    2022-10-19 10:48:52,420 - INFO: Epoch:13/20, batch:4340, train_loss:[1.46319], acc_top1:[0.65625], acc_top5:[0.84375](4.10s)
    2022-10-19 10:48:56,656 - INFO: Epoch:13/20, batch:4350, train_loss:[2.01883], acc_top1:[0.40625], acc_top5:[0.75000](4.24s)
    2022-10-19 10:49:00,800 - INFO: Epoch:13/20, batch:4360, train_loss:[2.02162], acc_top1:[0.40625], acc_top5:[0.75000](4.14s)
    2022-10-19 10:49:05,086 - INFO: Epoch:13/20, batch:4370, train_loss:[1.71565], acc_top1:[0.53125], acc_top5:[0.78125](4.29s)
    2022-10-19 10:49:09,209 - INFO: Epoch:13/20, batch:4380, train_loss:[1.92731], acc_top1:[0.53125], acc_top5:[0.87500](4.12s)
    2022-10-19 10:49:13,409 - INFO: Epoch:13/20, batch:4390, train_loss:[1.51533], acc_top1:[0.50000], acc_top5:[0.84375](4.20s)
    2022-10-19 10:49:17,631 - INFO: Epoch:13/20, batch:4400, train_loss:[1.54969], acc_top1:[0.59375], acc_top5:[0.71875](4.22s)
    2022-10-19 10:49:21,830 - INFO: Epoch:13/20, batch:4410, train_loss:[1.99863], acc_top1:[0.43750], acc_top5:[0.78125](4.20s)
    2022-10-19 10:49:26,023 - INFO: Epoch:13/20, batch:4420, train_loss:[1.92851], acc_top1:[0.34375], acc_top5:[0.78125](4.19s)
    2022-10-19 10:49:30,189 - INFO: Epoch:13/20, batch:4430, train_loss:[1.81922], acc_top1:[0.46875], acc_top5:[0.81250](4.17s)
    2022-10-19 10:49:34,407 - INFO: Epoch:13/20, batch:4440, train_loss:[1.58303], acc_top1:[0.53125], acc_top5:[0.81250](4.22s)
    2022-10-19 10:49:38,615 - INFO: Epoch:13/20, batch:4450, train_loss:[1.50083], acc_top1:[0.56250], acc_top5:[0.90625](4.21s)
    2022-10-19 10:49:42,926 - INFO: Epoch:13/20, batch:4460, train_loss:[2.09902], acc_top1:[0.31250], acc_top5:[0.75000](4.31s)
    2022-10-19 10:49:47,147 - INFO: Epoch:13/20, batch:4470, train_loss:[1.67663], acc_top1:[0.43750], acc_top5:[0.81250](4.22s)
    2022-10-19 10:49:51,329 - INFO: Epoch:13/20, batch:4480, train_loss:[1.81791], acc_top1:[0.50000], acc_top5:[0.87500](4.18s)
    2022-10-19 10:49:55,515 - INFO: Epoch:13/20, batch:4490, train_loss:[1.52643], acc_top1:[0.43750], acc_top5:[0.87500](4.19s)
    2022-10-19 10:49:59,767 - INFO: Epoch:13/20, batch:4500, train_loss:[1.80188], acc_top1:[0.50000], acc_top5:[0.71875](4.25s)
    2022-10-19 10:50:04,047 - INFO: Epoch:13/20, batch:4510, train_loss:[1.92404], acc_top1:[0.40625], acc_top5:[0.81250](4.28s)
    2022-10-19 10:50:08,187 - INFO: Epoch:13/20, batch:4520, train_loss:[1.50691], acc_top1:[0.62500], acc_top5:[0.87500](4.14s)
    2022-10-19 10:50:12,380 - INFO: Epoch:13/20, batch:4530, train_loss:[1.58878], acc_top1:[0.53125], acc_top5:[0.78125](4.19s)
    2022-10-19 10:50:16,513 - INFO: Epoch:13/20, batch:4540, train_loss:[1.36278], acc_top1:[0.53125], acc_top5:[0.90625](4.13s)
    2022-10-19 10:50:20,635 - INFO: Epoch:13/20, batch:4550, train_loss:[1.75355], acc_top1:[0.59375], acc_top5:[0.87500](4.12s)
    2022-10-19 10:50:24,847 - INFO: Epoch:13/20, batch:4560, train_loss:[1.97722], acc_top1:[0.37500], acc_top5:[0.71875](4.21s)
    2022-10-19 10:50:29,103 - INFO: Epoch:13/20, batch:4570, train_loss:[1.96402], acc_top1:[0.43750], acc_top5:[0.81250](4.26s)
    2022-10-19 10:50:33,362 - INFO: Epoch:13/20, batch:4580, train_loss:[2.09749], acc_top1:[0.40625], acc_top5:[0.75000](4.26s)
    2022-10-19 10:50:37,607 - INFO: Epoch:13/20, batch:4590, train_loss:[1.80959], acc_top1:[0.43750], acc_top5:[0.81250](4.24s)
    2022-10-19 10:50:41,807 - INFO: Epoch:13/20, batch:4600, train_loss:[1.65327], acc_top1:[0.46875], acc_top5:[0.84375](4.20s)
    2022-10-19 10:50:46,002 - INFO: Epoch:13/20, batch:4610, train_loss:[2.02264], acc_top1:[0.50000], acc_top5:[0.78125](4.19s)
    2022-10-19 10:50:50,203 - INFO: Epoch:13/20, batch:4620, train_loss:[1.66130], acc_top1:[0.56250], acc_top5:[0.84375](4.20s)
    2022-10-19 10:50:54,423 - INFO: Epoch:13/20, batch:4630, train_loss:[1.87929], acc_top1:[0.46875], acc_top5:[0.84375](4.22s)
    2022-10-19 10:50:58,668 - INFO: Epoch:13/20, batch:4640, train_loss:[2.02307], acc_top1:[0.31250], acc_top5:[0.87500](4.25s)
    2022-10-19 10:51:02,906 - INFO: Epoch:13/20, batch:4650, train_loss:[1.55040], acc_top1:[0.46875], acc_top5:[0.84375](4.24s)
    2022-10-19 10:51:07,126 - INFO: Epoch:13/20, batch:4660, train_loss:[1.72516], acc_top1:[0.46875], acc_top5:[0.81250](4.22s)
    2022-10-19 10:51:25,636 - INFO: [validation] Epoch:13/20, val_loss:[0.05298], val_top1:[0.51001], val_top5:[0.84161]
    2022-10-19 10:51:29,374 - INFO: 已保存当前测试模型(epoch=13)为最优模型:Garbage_resnet50_final
    2022-10-19 10:51:29,375 - INFO: 最优top1测试精度:0.51001 (epoch=13)
    2022-10-19 10:51:29,376 - INFO: 训练完成,最终性能accuracy=0.51001(epoch=13), 总耗时2425.39s, 已将其保存为:Garbage_resnet50_final
    2022-10-19 10:51:30,697 - INFO: Epoch:14/20, batch:4670, train_loss:[2.03042], acc_top1:[0.37500], acc_top5:[0.71875](23.57s)
    2022-10-19 10:51:34,865 - INFO: Epoch:14/20, batch:4680, train_loss:[1.71113], acc_top1:[0.53125], acc_top5:[0.75000](4.17s)
    2022-10-19 10:51:39,046 - INFO: Epoch:14/20, batch:4690, train_loss:[1.92477], acc_top1:[0.53125], acc_top5:[0.75000](4.18s)
    2022-10-19 10:51:43,209 - INFO: Epoch:14/20, batch:4700, train_loss:[1.77112], acc_top1:[0.50000], acc_top5:[0.81250](4.16s)
    2022-10-19 10:51:47,406 - INFO: Epoch:14/20, batch:4710, train_loss:[2.08608], acc_top1:[0.43750], acc_top5:[0.75000](4.20s)
    2022-10-19 10:51:51,705 - INFO: Epoch:14/20, batch:4720, train_loss:[1.53679], acc_top1:[0.53125], acc_top5:[0.84375](4.30s)
    2022-10-19 10:51:55,867 - INFO: Epoch:14/20, batch:4730, train_loss:[1.94849], acc_top1:[0.40625], acc_top5:[0.75000](4.16s)
    2022-10-19 10:52:00,098 - INFO: Epoch:14/20, batch:4740, train_loss:[1.96227], acc_top1:[0.31250], acc_top5:[0.84375](4.23s)
    2022-10-19 10:52:04,287 - INFO: Epoch:14/20, batch:4750, train_loss:[2.04921], acc_top1:[0.40625], acc_top5:[0.84375](4.19s)
    2022-10-19 10:52:08,500 - INFO: Epoch:14/20, batch:4760, train_loss:[1.70275], acc_top1:[0.56250], acc_top5:[0.78125](4.21s)
    2022-10-19 10:52:12,746 - INFO: Epoch:14/20, batch:4770, train_loss:[1.47247], acc_top1:[0.53125], acc_top5:[0.90625](4.25s)
    2022-10-19 10:52:17,014 - INFO: Epoch:14/20, batch:4780, train_loss:[1.46566], acc_top1:[0.65625], acc_top5:[0.84375](4.27s)
    2022-10-19 10:52:21,187 - INFO: Epoch:14/20, batch:4790, train_loss:[2.31183], acc_top1:[0.40625], acc_top5:[0.68750](4.17s)
    2022-10-19 10:52:25,497 - INFO: Epoch:14/20, batch:4800, train_loss:[1.51768], acc_top1:[0.62500], acc_top5:[0.87500](4.31s)
    2022-10-19 10:52:29,693 - INFO: Epoch:14/20, batch:4810, train_loss:[1.58266], acc_top1:[0.59375], acc_top5:[0.81250](4.20s)
    2022-10-19 10:52:33,883 - INFO: Epoch:14/20, batch:4820, train_loss:[1.98756], acc_top1:[0.37500], acc_top5:[0.81250](4.19s)
    2022-10-19 10:52:38,126 - INFO: Epoch:14/20, batch:4830, train_loss:[2.18566], acc_top1:[0.43750], acc_top5:[0.78125](4.24s)
    2022-10-19 10:52:42,418 - INFO: Epoch:14/20, batch:4840, train_loss:[1.58169], acc_top1:[0.50000], acc_top5:[0.84375](4.29s)
    2022-10-19 10:52:46,694 - INFO: Epoch:14/20, batch:4850, train_loss:[1.85846], acc_top1:[0.50000], acc_top5:[0.84375](4.28s)
    2022-10-19 10:52:50,943 - INFO: Epoch:14/20, batch:4860, train_loss:[1.35058], acc_top1:[0.62500], acc_top5:[0.87500](4.25s)
    2022-10-19 10:52:55,176 - INFO: Epoch:14/20, batch:4870, train_loss:[2.16534], acc_top1:[0.37500], acc_top5:[0.68750](4.23s)
    2022-10-19 10:52:59,377 - INFO: Epoch:14/20, batch:4880, train_loss:[2.17427], acc_top1:[0.43750], acc_top5:[0.71875](4.20s)
    2022-10-19 10:53:03,633 - INFO: Epoch:14/20, batch:4890, train_loss:[1.70250], acc_top1:[0.50000], acc_top5:[0.81250](4.26s)
    2022-10-19 10:53:07,815 - INFO: Epoch:14/20, batch:4900, train_loss:[2.11553], acc_top1:[0.50000], acc_top5:[0.78125](4.18s)
    2022-10-19 10:53:12,071 - INFO: Epoch:14/20, batch:4910, train_loss:[2.16082], acc_top1:[0.34375], acc_top5:[0.68750](4.26s)
    2022-10-19 10:53:16,247 - INFO: Epoch:14/20, batch:4920, train_loss:[2.08946], acc_top1:[0.50000], acc_top5:[0.75000](4.18s)
    2022-10-19 10:53:20,475 - INFO: Epoch:14/20, batch:4930, train_loss:[1.69390], acc_top1:[0.56250], acc_top5:[0.81250](4.23s)
    2022-10-19 10:53:24,660 - INFO: Epoch:14/20, batch:4940, train_loss:[1.34181], acc_top1:[0.62500], acc_top5:[0.84375](4.19s)
    2022-10-19 10:53:28,841 - INFO: Epoch:14/20, batch:4950, train_loss:[1.74795], acc_top1:[0.50000], acc_top5:[0.81250](4.18s)
    2022-10-19 10:53:33,052 - INFO: Epoch:14/20, batch:4960, train_loss:[1.75125], acc_top1:[0.56250], acc_top5:[0.71875](4.21s)
    2022-10-19 10:53:37,341 - INFO: Epoch:14/20, batch:4970, train_loss:[2.05291], acc_top1:[0.37500], acc_top5:[0.68750](4.29s)
    2022-10-19 10:53:41,599 - INFO: Epoch:14/20, batch:4980, train_loss:[2.32725], acc_top1:[0.31250], acc_top5:[0.68750](4.26s)
    2022-10-19 10:53:45,799 - INFO: Epoch:14/20, batch:4990, train_loss:[1.76955], acc_top1:[0.43750], acc_top5:[0.81250](4.20s)
    2022-10-19 10:53:49,991 - INFO: Epoch:14/20, batch:5000, train_loss:[1.80047], acc_top1:[0.46875], acc_top5:[0.87500](4.19s)
    2022-10-19 10:53:54,170 - INFO: Epoch:14/20, batch:5010, train_loss:[2.20584], acc_top1:[0.34375], acc_top5:[0.68750](4.18s)
    2022-10-19 10:53:58,346 - INFO: Epoch:14/20, batch:5020, train_loss:[1.77470], acc_top1:[0.50000], acc_top5:[0.84375](4.18s)
    2022-10-19 10:54:16,414 - INFO: [validation] Epoch:14/20, val_loss:[0.05226], val_top1:[0.52622], val_top5:[0.84231]
    2022-10-19 10:54:19,868 - INFO: 已保存当前测试模型(epoch=14)为最优模型:Garbage_resnet50_final
    2022-10-19 10:54:19,868 - INFO: 最优top1测试精度:0.52622 (epoch=14)
    2022-10-19 10:54:19,869 - INFO: 训练完成,最终性能accuracy=0.52622(epoch=14), 总耗时2595.89s, 已将其保存为:Garbage_resnet50_final
    2022-10-19 10:54:21,622 - INFO: Epoch:15/20, batch:5030, train_loss:[1.96617], acc_top1:[0.56250], acc_top5:[0.71875](23.28s)
    2022-10-19 10:54:25,938 - INFO: Epoch:15/20, batch:5040, train_loss:[1.26343], acc_top1:[0.62500], acc_top5:[0.96875](4.32s)
    2022-10-19 10:54:30,168 - INFO: Epoch:15/20, batch:5050, train_loss:[2.03148], acc_top1:[0.53125], acc_top5:[0.81250](4.23s)
    2022-10-19 10:54:34,364 - INFO: Epoch:15/20, batch:5060, train_loss:[1.72363], acc_top1:[0.43750], acc_top5:[0.93750](4.20s)
    2022-10-19 10:54:38,528 - INFO: Epoch:15/20, batch:5070, train_loss:[1.76150], acc_top1:[0.59375], acc_top5:[0.84375](4.17s)
    2022-10-19 10:54:42,764 - INFO: Epoch:15/20, batch:5080, train_loss:[2.15402], acc_top1:[0.40625], acc_top5:[0.81250](4.24s)
    2022-10-19 10:54:46,995 - INFO: Epoch:15/20, batch:5090, train_loss:[1.97373], acc_top1:[0.37500], acc_top5:[0.78125](4.23s)
    2022-10-19 10:54:51,281 - INFO: Epoch:15/20, batch:5100, train_loss:[1.67044], acc_top1:[0.56250], acc_top5:[0.81250](4.29s)
    2022-10-19 10:54:55,513 - INFO: Epoch:15/20, batch:5110, train_loss:[1.81130], acc_top1:[0.43750], acc_top5:[0.81250](4.23s)
    2022-10-19 10:54:59,853 - INFO: Epoch:15/20, batch:5120, train_loss:[1.84582], acc_top1:[0.46875], acc_top5:[0.68750](4.34s)
    2022-10-19 10:55:04,134 - INFO: Epoch:15/20, batch:5130, train_loss:[1.57873], acc_top1:[0.43750], acc_top5:[0.93750](4.28s)
    2022-10-19 10:55:08,449 - INFO: Epoch:15/20, batch:5140, train_loss:[1.81305], acc_top1:[0.53125], acc_top5:[0.71875](4.32s)
    2022-10-19 10:55:12,665 - INFO: Epoch:15/20, batch:5150, train_loss:[1.09434], acc_top1:[0.68750], acc_top5:[0.87500](4.22s)
    2022-10-19 10:55:16,940 - INFO: Epoch:15/20, batch:5160, train_loss:[1.91547], acc_top1:[0.46875], acc_top5:[0.84375](4.27s)
    2022-10-19 10:55:21,188 - INFO: Epoch:15/20, batch:5170, train_loss:[1.68062], acc_top1:[0.53125], acc_top5:[0.90625](4.25s)
    2022-10-19 10:55:25,448 - INFO: Epoch:15/20, batch:5180, train_loss:[1.85875], acc_top1:[0.46875], acc_top5:[0.78125](4.26s)
    2022-10-19 10:55:29,752 - INFO: Epoch:15/20, batch:5190, train_loss:[1.48608], acc_top1:[0.50000], acc_top5:[0.87500](4.30s)
    2022-10-19 10:55:33,931 - INFO: Epoch:15/20, batch:5200, train_loss:[2.14224], acc_top1:[0.40625], acc_top5:[0.78125](4.18s)
    2022-10-19 10:55:38,135 - INFO: Epoch:15/20, batch:5210, train_loss:[1.27923], acc_top1:[0.65625], acc_top5:[0.84375](4.20s)
    2022-10-19 10:55:42,392 - INFO: Epoch:15/20, batch:5220, train_loss:[1.41265], acc_top1:[0.68750], acc_top5:[0.84375](4.26s)
    2022-10-19 10:55:46,651 - INFO: Epoch:15/20, batch:5230, train_loss:[1.91409], acc_top1:[0.43750], acc_top5:[0.81250](4.26s)
    2022-10-19 10:55:50,909 - INFO: Epoch:15/20, batch:5240, train_loss:[1.53470], acc_top1:[0.65625], acc_top5:[0.90625](4.26s)
    2022-10-19 10:55:55,181 - INFO: Epoch:15/20, batch:5250, train_loss:[1.92799], acc_top1:[0.40625], acc_top5:[0.78125](4.27s)
    2022-10-19 10:55:59,461 - INFO: Epoch:15/20, batch:5260, train_loss:[1.75385], acc_top1:[0.43750], acc_top5:[0.81250](4.28s)
    2022-10-19 10:56:03,698 - INFO: Epoch:15/20, batch:5270, train_loss:[1.42717], acc_top1:[0.68750], acc_top5:[0.87500](4.24s)
    2022-10-19 10:56:07,852 - INFO: Epoch:15/20, batch:5280, train_loss:[1.73711], acc_top1:[0.46875], acc_top5:[0.81250](4.15s)
    2022-10-19 10:56:12,098 - INFO: Epoch:15/20, batch:5290, train_loss:[1.80768], acc_top1:[0.46875], acc_top5:[0.78125](4.25s)
    2022-10-19 10:56:16,356 - INFO: Epoch:15/20, batch:5300, train_loss:[1.49477], acc_top1:[0.50000], acc_top5:[0.84375](4.26s)
    2022-10-19 10:56:20,605 - INFO: Epoch:15/20, batch:5310, train_loss:[2.26463], acc_top1:[0.37500], acc_top5:[0.75000](4.25s)
    2022-10-19 10:56:24,889 - INFO: Epoch:15/20, batch:5320, train_loss:[1.56436], acc_top1:[0.65625], acc_top5:[0.84375](4.28s)
    2022-10-19 10:56:29,139 - INFO: Epoch:15/20, batch:5330, train_loss:[1.49267], acc_top1:[0.62500], acc_top5:[0.84375](4.25s)
    2022-10-19 10:56:33,415 - INFO: Epoch:15/20, batch:5340, train_loss:[1.98398], acc_top1:[0.53125], acc_top5:[0.71875](4.28s)
    2022-10-19 10:56:37,601 - INFO: Epoch:15/20, batch:5350, train_loss:[1.91312], acc_top1:[0.43750], acc_top5:[0.87500](4.19s)
    2022-10-19 10:56:41,823 - INFO: Epoch:15/20, batch:5360, train_loss:[1.75752], acc_top1:[0.50000], acc_top5:[0.84375](4.22s)
    2022-10-19 10:56:46,030 - INFO: Epoch:15/20, batch:5370, train_loss:[1.42814], acc_top1:[0.56250], acc_top5:[0.90625](4.21s)
    2022-10-19 10:56:50,298 - INFO: Epoch:15/20, batch:5380, train_loss:[1.38577], acc_top1:[0.56250], acc_top5:[0.90625](4.27s)
    2022-10-19 10:57:07,956 - INFO: [validation] Epoch:15/20, val_loss:[0.05141], val_top1:[0.52657], val_top5:[0.85956]
    2022-10-19 10:57:11,626 - INFO: 已保存当前测试模型(epoch=15)为最优模型:Garbage_resnet50_final
    2022-10-19 10:57:11,626 - INFO: 最优top1测试精度:0.52657 (epoch=15)
    2022-10-19 10:57:11,627 - INFO: 训练完成,最终性能accuracy=0.52657(epoch=15), 总耗时2767.65s, 已将其保存为:Garbage_resnet50_final
    2022-10-19 10:57:13,800 - INFO: Epoch:16/20, batch:5390, train_loss:[1.83667], acc_top1:[0.46875], acc_top5:[0.75000](23.50s)
    2022-10-19 10:57:18,044 - INFO: Epoch:16/20, batch:5400, train_loss:[1.49338], acc_top1:[0.50000], acc_top5:[0.93750](4.24s)
    2022-10-19 10:57:22,264 - INFO: Epoch:16/20, batch:5410, train_loss:[1.60645], acc_top1:[0.59375], acc_top5:[0.90625](4.22s)
    2022-10-19 10:57:26,555 - INFO: Epoch:16/20, batch:5420, train_loss:[1.79298], acc_top1:[0.46875], acc_top5:[0.75000](4.29s)
    2022-10-19 10:57:30,821 - INFO: Epoch:16/20, batch:5430, train_loss:[1.72767], acc_top1:[0.53125], acc_top5:[0.84375](4.27s)
    2022-10-19 10:57:35,068 - INFO: Epoch:16/20, batch:5440, train_loss:[1.85999], acc_top1:[0.43750], acc_top5:[0.81250](4.25s)
    2022-10-19 10:57:39,257 - INFO: Epoch:16/20, batch:5450, train_loss:[1.87797], acc_top1:[0.40625], acc_top5:[0.81250](4.19s)
    2022-10-19 10:57:43,498 - INFO: Epoch:16/20, batch:5460, train_loss:[1.56627], acc_top1:[0.53125], acc_top5:[0.87500](4.24s)
    2022-10-19 10:57:47,748 - INFO: Epoch:16/20, batch:5470, train_loss:[1.34304], acc_top1:[0.65625], acc_top5:[0.87500](4.25s)
    2022-10-19 10:57:51,997 - INFO: Epoch:16/20, batch:5480, train_loss:[1.13153], acc_top1:[0.59375], acc_top5:[0.96875](4.25s)
    2022-10-19 10:57:56,251 - INFO: Epoch:16/20, batch:5490, train_loss:[1.80124], acc_top1:[0.43750], acc_top5:[0.81250](4.25s)
    2022-10-19 10:58:00,497 - INFO: Epoch:16/20, batch:5500, train_loss:[1.77730], acc_top1:[0.50000], acc_top5:[0.75000](4.25s)
    2022-10-19 10:58:04,703 - INFO: Epoch:16/20, batch:5510, train_loss:[1.94497], acc_top1:[0.43750], acc_top5:[0.68750](4.21s)
    2022-10-19 10:58:08,887 - INFO: Epoch:16/20, batch:5520, train_loss:[1.89527], acc_top1:[0.43750], acc_top5:[0.87500](4.18s)
    2022-10-19 10:58:13,094 - INFO: Epoch:16/20, batch:5530, train_loss:[1.55237], acc_top1:[0.59375], acc_top5:[0.87500](4.21s)
    2022-10-19 10:58:17,283 - INFO: Epoch:16/20, batch:5540, train_loss:[2.10716], acc_top1:[0.50000], acc_top5:[0.71875](4.19s)
    2022-10-19 10:58:21,522 - INFO: Epoch:16/20, batch:5550, train_loss:[1.62285], acc_top1:[0.53125], acc_top5:[0.84375](4.24s)
    2022-10-19 10:58:25,870 - INFO: Epoch:16/20, batch:5560, train_loss:[1.31785], acc_top1:[0.68750], acc_top5:[0.87500](4.35s)
    2022-10-19 10:58:30,152 - INFO: Epoch:16/20, batch:5570, train_loss:[1.68810], acc_top1:[0.46875], acc_top5:[0.84375](4.28s)
    2022-10-19 10:58:34,333 - INFO: Epoch:16/20, batch:5580, train_loss:[1.62400], acc_top1:[0.53125], acc_top5:[0.78125](4.18s)
    2022-10-19 10:58:38,478 - INFO: Epoch:16/20, batch:5590, train_loss:[2.02412], acc_top1:[0.46875], acc_top5:[0.75000](4.14s)
    2022-10-19 10:58:42,751 - INFO: Epoch:16/20, batch:5600, train_loss:[1.61153], acc_top1:[0.59375], acc_top5:[0.81250](4.27s)
    2022-10-19 10:58:47,087 - INFO: Epoch:16/20, batch:5610, train_loss:[1.74764], acc_top1:[0.37500], acc_top5:[0.84375](4.34s)
    2022-10-19 10:58:51,389 - INFO: Epoch:16/20, batch:5620, train_loss:[1.60188], acc_top1:[0.59375], acc_top5:[0.81250](4.30s)
    2022-10-19 10:58:55,682 - INFO: Epoch:16/20, batch:5630, train_loss:[1.40139], acc_top1:[0.53125], acc_top5:[0.84375](4.29s)
    2022-10-19 10:58:59,904 - INFO: Epoch:16/20, batch:5640, train_loss:[1.80006], acc_top1:[0.43750], acc_top5:[0.78125](4.22s)
    2022-10-19 10:59:04,080 - INFO: Epoch:16/20, batch:5650, train_loss:[1.63276], acc_top1:[0.50000], acc_top5:[0.87500](4.18s)
    2022-10-19 10:59:08,327 - INFO: Epoch:16/20, batch:5660, train_loss:[1.35068], acc_top1:[0.53125], acc_top5:[0.90625](4.25s)
    2022-10-19 10:59:12,617 - INFO: Epoch:16/20, batch:5670, train_loss:[1.87519], acc_top1:[0.43750], acc_top5:[0.84375](4.29s)
    2022-10-19 10:59:16,931 - INFO: Epoch:16/20, batch:5680, train_loss:[1.79568], acc_top1:[0.43750], acc_top5:[0.90625](4.31s)
    2022-10-19 10:59:21,104 - INFO: Epoch:16/20, batch:5690, train_loss:[1.65080], acc_top1:[0.56250], acc_top5:[0.75000](4.17s)
    2022-10-19 10:59:25,321 - INFO: Epoch:16/20, batch:5700, train_loss:[2.00841], acc_top1:[0.40625], acc_top5:[0.81250](4.22s)
    2022-10-19 10:59:29,564 - INFO: Epoch:16/20, batch:5710, train_loss:[1.42455], acc_top1:[0.46875], acc_top5:[0.87500](4.24s)
    2022-10-19 10:59:33,847 - INFO: Epoch:16/20, batch:5720, train_loss:[1.85413], acc_top1:[0.46875], acc_top5:[0.75000](4.28s)
    2022-10-19 10:59:38,073 - INFO: Epoch:16/20, batch:5730, train_loss:[1.65353], acc_top1:[0.53125], acc_top5:[0.81250](4.23s)
    2022-10-19 10:59:42,248 - INFO: Epoch:16/20, batch:5740, train_loss:[2.35102], acc_top1:[0.34375], acc_top5:[0.71875](4.17s)
    2022-10-19 10:59:59,464 - INFO: [validation] Epoch:16/20, val_loss:[0.05840], val_top1:[0.50587], val_top5:[0.81366]
    2022-10-19 10:59:59,465 - INFO: 最优top1测试精度:0.52657 (epoch=15)
    2022-10-19 10:59:59,465 - INFO: 训练完成,最终性能accuracy=0.52657(epoch=15), 总耗时2935.48s, 已将其保存为:Garbage_resnet50_final
    2022-10-19 11:00:02,034 - INFO: Epoch:17/20, batch:5750, train_loss:[1.65470], acc_top1:[0.53125], acc_top5:[0.84375](19.79s)
    2022-10-19 11:00:06,218 - INFO: Epoch:17/20, batch:5760, train_loss:[1.55988], acc_top1:[0.59375], acc_top5:[0.78125](4.18s)
    2022-10-19 11:00:10,485 - INFO: Epoch:17/20, batch:5770, train_loss:[1.97312], acc_top1:[0.46875], acc_top5:[0.71875](4.27s)
    2022-10-19 11:00:14,725 - INFO: Epoch:17/20, batch:5780, train_loss:[1.57725], acc_top1:[0.62500], acc_top5:[0.81250](4.24s)
    2022-10-19 11:00:18,939 - INFO: Epoch:17/20, batch:5790, train_loss:[1.60437], acc_top1:[0.50000], acc_top5:[0.90625](4.21s)
    2022-10-19 11:00:23,142 - INFO: Epoch:17/20, batch:5800, train_loss:[1.77993], acc_top1:[0.56250], acc_top5:[0.84375](4.20s)
    2022-10-19 11:00:27,346 - INFO: Epoch:17/20, batch:5810, train_loss:[1.30859], acc_top1:[0.62500], acc_top5:[0.87500](4.20s)
    2022-10-19 11:00:31,624 - INFO: Epoch:17/20, batch:5820, train_loss:[1.82988], acc_top1:[0.46875], acc_top5:[0.75000](4.28s)
    2022-10-19 11:00:35,808 - INFO: Epoch:17/20, batch:5830, train_loss:[1.48084], acc_top1:[0.56250], acc_top5:[0.90625](4.18s)
    2022-10-19 11:00:40,046 - INFO: Epoch:17/20, batch:5840, train_loss:[1.48624], acc_top1:[0.50000], acc_top5:[0.93750](4.24s)
    2022-10-19 11:00:44,266 - INFO: Epoch:17/20, batch:5850, train_loss:[1.25124], acc_top1:[0.53125], acc_top5:[0.81250](4.22s)
    2022-10-19 11:00:48,542 - INFO: Epoch:17/20, batch:5860, train_loss:[1.54887], acc_top1:[0.56250], acc_top5:[0.87500](4.28s)
    2022-10-19 11:00:52,697 - INFO: Epoch:17/20, batch:5870, train_loss:[1.91030], acc_top1:[0.46875], acc_top5:[0.84375](4.16s)
    2022-10-19 11:00:56,954 - INFO: Epoch:17/20, batch:5880, train_loss:[1.72050], acc_top1:[0.56250], acc_top5:[0.81250](4.26s)
    2022-10-19 11:01:01,154 - INFO: Epoch:17/20, batch:5890, train_loss:[1.47103], acc_top1:[0.59375], acc_top5:[0.90625](4.20s)
    2022-10-19 11:01:05,396 - INFO: Epoch:17/20, batch:5900, train_loss:[1.91996], acc_top1:[0.40625], acc_top5:[0.78125](4.24s)
    2022-10-19 11:01:09,586 - INFO: Epoch:17/20, batch:5910, train_loss:[1.39142], acc_top1:[0.56250], acc_top5:[0.93750](4.19s)
    2022-10-19 11:01:13,830 - INFO: Epoch:17/20, batch:5920, train_loss:[1.52724], acc_top1:[0.56250], acc_top5:[0.87500](4.24s)
    2022-10-19 11:01:18,027 - INFO: Epoch:17/20, batch:5930, train_loss:[1.85776], acc_top1:[0.46875], acc_top5:[0.78125](4.20s)
    2022-10-19 11:01:22,275 - INFO: Epoch:17/20, batch:5940, train_loss:[2.23637], acc_top1:[0.46875], acc_top5:[0.68750](4.25s)
    2022-10-19 11:01:26,522 - INFO: Epoch:17/20, batch:5950, train_loss:[1.35044], acc_top1:[0.56250], acc_top5:[0.93750](4.25s)
    2022-10-19 11:01:30,715 - INFO: Epoch:17/20, batch:5960, train_loss:[1.92042], acc_top1:[0.46875], acc_top5:[0.78125](4.19s)
    2022-10-19 11:01:35,018 - INFO: Epoch:17/20, batch:5970, train_loss:[1.25842], acc_top1:[0.68750], acc_top5:[0.87500](4.30s)
    2022-10-19 11:01:39,252 - INFO: Epoch:17/20, batch:5980, train_loss:[1.32804], acc_top1:[0.62500], acc_top5:[0.93750](4.23s)
    2022-10-19 11:01:43,550 - INFO: Epoch:17/20, batch:5990, train_loss:[1.71035], acc_top1:[0.40625], acc_top5:[0.87500](4.30s)
    2022-10-19 11:01:47,711 - INFO: Epoch:17/20, batch:6000, train_loss:[1.86269], acc_top1:[0.53125], acc_top5:[0.81250](4.16s)
    2022-10-19 11:01:51,938 - INFO: Epoch:17/20, batch:6010, train_loss:[1.33987], acc_top1:[0.71875], acc_top5:[0.87500](4.23s)
    2022-10-19 11:01:56,149 - INFO: Epoch:17/20, batch:6020, train_loss:[1.38884], acc_top1:[0.59375], acc_top5:[0.84375](4.21s)
    2022-10-19 11:02:00,341 - INFO: Epoch:17/20, batch:6030, train_loss:[1.94551], acc_top1:[0.40625], acc_top5:[0.78125](4.19s)
    2022-10-19 11:02:04,562 - INFO: Epoch:17/20, batch:6040, train_loss:[1.60312], acc_top1:[0.59375], acc_top5:[0.90625](4.22s)
    2022-10-19 11:02:08,957 - INFO: Epoch:17/20, batch:6050, train_loss:[1.27708], acc_top1:[0.59375], acc_top5:[0.87500](4.40s)
    2022-10-19 11:02:13,215 - INFO: Epoch:17/20, batch:6060, train_loss:[1.49812], acc_top1:[0.62500], acc_top5:[0.87500](4.26s)
    2022-10-19 11:02:17,423 - INFO: Epoch:17/20, batch:6070, train_loss:[1.32592], acc_top1:[0.59375], acc_top5:[0.93750](4.21s)
    2022-10-19 11:02:21,661 - INFO: Epoch:17/20, batch:6080, train_loss:[1.53104], acc_top1:[0.56250], acc_top5:[0.78125](4.24s)
    2022-10-19 11:02:25,898 - INFO: Epoch:17/20, batch:6090, train_loss:[1.35367], acc_top1:[0.46875], acc_top5:[0.87500](4.24s)
    2022-10-19 11:02:30,094 - INFO: Epoch:17/20, batch:6100, train_loss:[1.80538], acc_top1:[0.50000], acc_top5:[0.81250](4.20s)
    2022-10-19 11:02:46,939 - INFO: [validation] Epoch:17/20, val_loss:[0.05164], val_top1:[0.53278], val_top5:[0.85438]
    2022-10-19 11:02:50,889 - INFO: 已保存当前测试模型(epoch=17)为最优模型:Garbage_resnet50_final
    2022-10-19 11:02:50,891 - INFO: 最优top1测试精度:0.53278 (epoch=17)
    2022-10-19 11:02:50,891 - INFO: 训练完成,最终性能accuracy=0.53278(epoch=17), 总耗时3106.91s, 已将其保存为:Garbage_resnet50_final
    2022-10-19 11:02:53,894 - INFO: Epoch:18/20, batch:6110, train_loss:[1.36500], acc_top1:[0.56250], acc_top5:[0.93750](23.80s)
    2022-10-19 11:02:58,137 - INFO: Epoch:18/20, batch:6120, train_loss:[1.13447], acc_top1:[0.68750], acc_top5:[0.90625](4.24s)
    2022-10-19 11:03:02,412 - INFO: Epoch:18/20, batch:6130, train_loss:[1.52581], acc_top1:[0.53125], acc_top5:[0.87500](4.27s)
    2022-10-19 11:03:06,668 - INFO: Epoch:18/20, batch:6140, train_loss:[1.47320], acc_top1:[0.59375], acc_top5:[0.84375](4.26s)
    2022-10-19 11:03:10,862 - INFO: Epoch:18/20, batch:6150, train_loss:[1.98311], acc_top1:[0.43750], acc_top5:[0.78125](4.19s)
    2022-10-19 11:03:15,095 - INFO: Epoch:18/20, batch:6160, train_loss:[1.64049], acc_top1:[0.53125], acc_top5:[0.84375](4.23s)
    2022-10-19 11:03:19,246 - INFO: Epoch:18/20, batch:6170, train_loss:[1.88283], acc_top1:[0.46875], acc_top5:[0.84375](4.15s)
    2022-10-19 11:03:23,599 - INFO: Epoch:18/20, batch:6180, train_loss:[1.40621], acc_top1:[0.65625], acc_top5:[0.87500](4.35s)
    2022-10-19 11:03:27,846 - INFO: Epoch:18/20, batch:6190, train_loss:[1.89483], acc_top1:[0.40625], acc_top5:[0.75000](4.25s)
    2022-10-19 11:03:32,087 - INFO: Epoch:18/20, batch:6200, train_loss:[1.31622], acc_top1:[0.59375], acc_top5:[0.93750](4.24s)
    2022-10-19 11:03:36,322 - INFO: Epoch:18/20, batch:6210, train_loss:[1.43485], acc_top1:[0.56250], acc_top5:[0.93750](4.23s)
    2022-10-19 11:03:40,618 - INFO: Epoch:18/20, batch:6220, train_loss:[1.48246], acc_top1:[0.50000], acc_top5:[0.90625](4.30s)
    2022-10-19 11:03:44,918 - INFO: Epoch:18/20, batch:6230, train_loss:[1.68964], acc_top1:[0.53125], acc_top5:[0.81250](4.30s)
    2022-10-19 11:03:49,189 - INFO: Epoch:18/20, batch:6240, train_loss:[1.45430], acc_top1:[0.68750], acc_top5:[0.87500](4.27s)
    2022-10-19 11:03:53,444 - INFO: Epoch:18/20, batch:6250, train_loss:[1.36314], acc_top1:[0.65625], acc_top5:[0.87500](4.26s)
    2022-10-19 11:03:57,636 - INFO: Epoch:18/20, batch:6260, train_loss:[1.56667], acc_top1:[0.53125], acc_top5:[0.87500](4.19s)
    2022-10-19 11:04:02,006 - INFO: Epoch:18/20, batch:6270, train_loss:[1.40110], acc_top1:[0.53125], acc_top5:[0.87500](4.37s)
    2022-10-19 11:04:06,235 - INFO: Epoch:18/20, batch:6280, train_loss:[1.57083], acc_top1:[0.50000], acc_top5:[0.81250](4.23s)
    2022-10-19 11:04:10,495 - INFO: Epoch:18/20, batch:6290, train_loss:[1.44719], acc_top1:[0.56250], acc_top5:[0.84375](4.26s)
    2022-10-19 11:04:14,756 - INFO: Epoch:18/20, batch:6300, train_loss:[1.49134], acc_top1:[0.53125], acc_top5:[0.84375](4.26s)
    2022-10-19 11:04:19,043 - INFO: Epoch:18/20, batch:6310, train_loss:[1.74848], acc_top1:[0.50000], acc_top5:[0.81250](4.29s)
    2022-10-19 11:04:23,270 - INFO: Epoch:18/20, batch:6320, train_loss:[1.81140], acc_top1:[0.59375], acc_top5:[0.75000](4.23s)
    2022-10-19 11:04:27,469 - INFO: Epoch:18/20, batch:6330, train_loss:[1.22542], acc_top1:[0.59375], acc_top5:[0.96875](4.20s)
    2022-10-19 11:04:31,736 - INFO: Epoch:18/20, batch:6340, train_loss:[1.28248], acc_top1:[0.59375], acc_top5:[0.87500](4.27s)
    2022-10-19 11:04:35,940 - INFO: Epoch:18/20, batch:6350, train_loss:[1.68951], acc_top1:[0.46875], acc_top5:[0.87500](4.20s)
    2022-10-19 11:04:40,178 - INFO: Epoch:18/20, batch:6360, train_loss:[1.50537], acc_top1:[0.53125], acc_top5:[0.87500](4.24s)
    2022-10-19 11:04:44,348 - INFO: Epoch:18/20, batch:6370, train_loss:[1.38784], acc_top1:[0.53125], acc_top5:[0.87500](4.17s)
    2022-10-19 11:04:48,595 - INFO: Epoch:18/20, batch:6380, train_loss:[1.55862], acc_top1:[0.50000], acc_top5:[0.90625](4.25s)
    2022-10-19 11:04:52,842 - INFO: Epoch:18/20, batch:6390, train_loss:[1.62530], acc_top1:[0.56250], acc_top5:[0.81250](4.25s)
    2022-10-19 11:04:57,046 - INFO: Epoch:18/20, batch:6400, train_loss:[1.67288], acc_top1:[0.50000], acc_top5:[0.81250](4.20s)
    2022-10-19 11:05:01,253 - INFO: Epoch:18/20, batch:6410, train_loss:[1.98411], acc_top1:[0.40625], acc_top5:[0.75000](4.21s)
    2022-10-19 11:05:05,462 - INFO: Epoch:18/20, batch:6420, train_loss:[1.68639], acc_top1:[0.53125], acc_top5:[0.84375](4.21s)
    2022-10-19 11:05:09,653 - INFO: Epoch:18/20, batch:6430, train_loss:[1.69290], acc_top1:[0.46875], acc_top5:[0.90625](4.19s)
    2022-10-19 11:05:13,890 - INFO: Epoch:18/20, batch:6440, train_loss:[1.07645], acc_top1:[0.65625], acc_top5:[0.93750](4.24s)
    2022-10-19 11:05:18,099 - INFO: Epoch:18/20, batch:6450, train_loss:[1.86956], acc_top1:[0.53125], acc_top5:[0.90625](4.21s)
    2022-10-19 11:05:22,322 - INFO: Epoch:18/20, batch:6460, train_loss:[1.47730], acc_top1:[0.56250], acc_top5:[0.81250](4.22s)
    2022-10-19 11:05:38,758 - INFO: [validation] Epoch:18/20, val_loss:[0.05025], val_top1:[0.56211], val_top5:[0.85059]
    2022-10-19 11:05:43,474 - INFO: 已保存当前测试模型(epoch=18)为最优模型:Garbage_resnet50_final
    2022-10-19 11:05:43,475 - INFO: 最优top1测试精度:0.56211 (epoch=18)
    2022-10-19 11:05:43,476 - INFO: 训练完成,最终性能accuracy=0.56211(epoch=18), 总耗时3279.49s, 已将其保存为:Garbage_resnet50_final
    2022-10-19 11:05:46,930 - INFO: Epoch:19/20, batch:6470, train_loss:[1.30090], acc_top1:[0.59375], acc_top5:[0.84375](24.61s)
    2022-10-19 11:05:51,195 - INFO: Epoch:19/20, batch:6480, train_loss:[1.73645], acc_top1:[0.62500], acc_top5:[0.84375](4.27s)
    2022-10-19 11:05:55,424 - INFO: Epoch:19/20, batch:6490, train_loss:[1.67441], acc_top1:[0.43750], acc_top5:[0.78125](4.23s)
    2022-10-19 11:05:59,642 - INFO: Epoch:19/20, batch:6500, train_loss:[1.37024], acc_top1:[0.65625], acc_top5:[0.84375](4.22s)
    2022-10-19 11:06:03,927 - INFO: Epoch:19/20, batch:6510, train_loss:[1.19058], acc_top1:[0.65625], acc_top5:[0.93750](4.29s)
    2022-10-19 11:06:08,138 - INFO: Epoch:19/20, batch:6520, train_loss:[2.13067], acc_top1:[0.34375], acc_top5:[0.71875](4.21s)
    2022-10-19 11:06:12,347 - INFO: Epoch:19/20, batch:6530, train_loss:[1.42944], acc_top1:[0.62500], acc_top5:[0.84375](4.21s)
    2022-10-19 11:06:16,535 - INFO: Epoch:19/20, batch:6540, train_loss:[1.62072], acc_top1:[0.56250], acc_top5:[0.87500](4.19s)
    2022-10-19 11:06:20,771 - INFO: Epoch:19/20, batch:6550, train_loss:[1.49834], acc_top1:[0.56250], acc_top5:[0.84375](4.24s)
    2022-10-19 11:06:25,032 - INFO: Epoch:19/20, batch:6560, train_loss:[1.52614], acc_top1:[0.62500], acc_top5:[0.87500](4.26s)
    2022-10-19 11:06:29,303 - INFO: Epoch:19/20, batch:6570, train_loss:[1.58255], acc_top1:[0.56250], acc_top5:[0.87500](4.27s)
    2022-10-19 11:06:33,488 - INFO: Epoch:19/20, batch:6580, train_loss:[1.57009], acc_top1:[0.65625], acc_top5:[0.84375](4.19s)
    2022-10-19 11:06:37,696 - INFO: Epoch:19/20, batch:6590, train_loss:[1.37292], acc_top1:[0.62500], acc_top5:[0.87500](4.21s)
    2022-10-19 11:06:41,906 - INFO: Epoch:19/20, batch:6600, train_loss:[1.63097], acc_top1:[0.53125], acc_top5:[0.87500](4.21s)
    2022-10-19 11:06:46,104 - INFO: Epoch:19/20, batch:6610, train_loss:[1.87442], acc_top1:[0.43750], acc_top5:[0.84375](4.20s)
    2022-10-19 11:06:50,300 - INFO: Epoch:19/20, batch:6620, train_loss:[1.77253], acc_top1:[0.50000], acc_top5:[0.84375](4.20s)
    2022-10-19 11:06:54,562 - INFO: Epoch:19/20, batch:6630, train_loss:[1.53967], acc_top1:[0.50000], acc_top5:[0.87500](4.26s)
    2022-10-19 11:06:58,888 - INFO: Epoch:19/20, batch:6640, train_loss:[1.70158], acc_top1:[0.46875], acc_top5:[0.81250](4.33s)
    2022-10-19 11:07:03,159 - INFO: Epoch:19/20, batch:6650, train_loss:[1.87491], acc_top1:[0.46875], acc_top5:[0.81250](4.27s)
    2022-10-19 11:07:07,381 - INFO: Epoch:19/20, batch:6660, train_loss:[1.10603], acc_top1:[0.65625], acc_top5:[0.90625](4.22s)
    2022-10-19 11:07:11,619 - INFO: Epoch:19/20, batch:6670, train_loss:[1.52498], acc_top1:[0.50000], acc_top5:[0.90625](4.24s)
    2022-10-19 11:07:15,883 - INFO: Epoch:19/20, batch:6680, train_loss:[1.43919], acc_top1:[0.59375], acc_top5:[0.90625](4.26s)
    2022-10-19 11:07:20,189 - INFO: Epoch:19/20, batch:6690, train_loss:[2.16794], acc_top1:[0.56250], acc_top5:[0.75000](4.31s)
    2022-10-19 11:07:24,440 - INFO: Epoch:19/20, batch:6700, train_loss:[1.76873], acc_top1:[0.34375], acc_top5:[0.81250](4.25s)
    2022-10-19 11:07:28,688 - INFO: Epoch:19/20, batch:6710, train_loss:[1.35580], acc_top1:[0.56250], acc_top5:[0.84375](4.25s)
    2022-10-19 11:07:32,915 - INFO: Epoch:19/20, batch:6720, train_loss:[1.51215], acc_top1:[0.59375], acc_top5:[0.87500](4.23s)
    2022-10-19 11:07:37,177 - INFO: Epoch:19/20, batch:6730, train_loss:[1.84701], acc_top1:[0.50000], acc_top5:[0.87500](4.26s)
    2022-10-19 11:07:41,444 - INFO: Epoch:19/20, batch:6740, train_loss:[1.98370], acc_top1:[0.40625], acc_top5:[0.81250](4.27s)
    2022-10-19 11:07:45,663 - INFO: Epoch:19/20, batch:6750, train_loss:[1.23107], acc_top1:[0.59375], acc_top5:[0.90625](4.22s)
    2022-10-19 11:07:49,899 - INFO: Epoch:19/20, batch:6760, train_loss:[2.29076], acc_top1:[0.37500], acc_top5:[0.71875](4.24s)
    2022-10-19 11:07:54,192 - INFO: Epoch:19/20, batch:6770, train_loss:[1.34356], acc_top1:[0.65625], acc_top5:[0.93750](4.29s)
    2022-10-19 11:07:58,399 - INFO: Epoch:19/20, batch:6780, train_loss:[1.86987], acc_top1:[0.46875], acc_top5:[0.81250](4.21s)
    2022-10-19 11:08:02,646 - INFO: Epoch:19/20, batch:6790, train_loss:[1.50193], acc_top1:[0.46875], acc_top5:[0.90625](4.25s)
    2022-10-19 11:08:06,898 - INFO: Epoch:19/20, batch:6800, train_loss:[1.65148], acc_top1:[0.53125], acc_top5:[0.87500](4.25s)
    2022-10-19 11:08:11,139 - INFO: Epoch:19/20, batch:6810, train_loss:[2.00102], acc_top1:[0.40625], acc_top5:[0.75000](4.24s)
    2022-10-19 11:08:15,377 - INFO: Epoch:19/20, batch:6820, train_loss:[1.34111], acc_top1:[0.68750], acc_top5:[0.81250](4.24s)
    2022-10-19 11:08:31,350 - INFO: [validation] Epoch:19/20, val_loss:[0.05239], val_top1:[0.54969], val_top5:[0.86646]
    2022-10-19 11:08:31,351 - INFO: 最优top1测试精度:0.56211 (epoch=18)
    2022-10-19 11:08:31,352 - INFO: 训练完成,最终性能accuracy=0.56211(epoch=18), 总耗时3447.37s, 已将其保存为:Garbage_resnet50_final
    2022-10-19 11:08:35,150 - INFO: Epoch:20/20, batch:6830, train_loss:[1.40682], acc_top1:[0.56250], acc_top5:[0.90625](19.77s)
    2022-10-19 11:08:39,326 - INFO: Epoch:20/20, batch:6840, train_loss:[1.64376], acc_top1:[0.53125], acc_top5:[0.87500](4.18s)
    2022-10-19 11:08:43,566 - INFO: Epoch:20/20, batch:6850, train_loss:[1.20418], acc_top1:[0.68750], acc_top5:[0.93750](4.24s)
    2022-10-19 11:08:47,759 - INFO: Epoch:20/20, batch:6860, train_loss:[1.80906], acc_top1:[0.40625], acc_top5:[0.84375](4.19s)
    2022-10-19 11:08:51,970 - INFO: Epoch:20/20, batch:6870, train_loss:[1.11572], acc_top1:[0.62500], acc_top5:[0.93750](4.21s)
    2022-10-19 11:08:56,188 - INFO: Epoch:20/20, batch:6880, train_loss:[1.05961], acc_top1:[0.68750], acc_top5:[0.81250](4.22s)
    2022-10-19 11:09:00,414 - INFO: Epoch:20/20, batch:6890, train_loss:[1.63669], acc_top1:[0.50000], acc_top5:[0.84375](4.23s)
    2022-10-19 11:09:04,681 - INFO: Epoch:20/20, batch:6900, train_loss:[2.06076], acc_top1:[0.37500], acc_top5:[0.87500](4.27s)
    2022-10-19 11:09:08,870 - INFO: Epoch:20/20, batch:6910, train_loss:[1.74586], acc_top1:[0.43750], acc_top5:[0.81250](4.19s)
    2022-10-19 11:09:13,138 - INFO: Epoch:20/20, batch:6920, train_loss:[1.96107], acc_top1:[0.50000], acc_top5:[0.78125](4.27s)
    2022-10-19 11:09:17,278 - INFO: Epoch:20/20, batch:6930, train_loss:[1.44383], acc_top1:[0.56250], acc_top5:[0.90625](4.14s)
    2022-10-19 11:09:21,525 - INFO: Epoch:20/20, batch:6940, train_loss:[1.49134], acc_top1:[0.56250], acc_top5:[0.87500](4.25s)
    2022-10-19 11:09:25,704 - INFO: Epoch:20/20, batch:6950, train_loss:[1.32534], acc_top1:[0.62500], acc_top5:[0.87500](4.18s)
    2022-10-19 11:09:29,946 - INFO: Epoch:20/20, batch:6960, train_loss:[1.59022], acc_top1:[0.50000], acc_top5:[0.87500](4.24s)
    2022-10-19 11:09:34,174 - INFO: Epoch:20/20, batch:6970, train_loss:[1.39769], acc_top1:[0.65625], acc_top5:[0.93750](4.23s)
    2022-10-19 11:09:38,427 - INFO: Epoch:20/20, batch:6980, train_loss:[1.63461], acc_top1:[0.50000], acc_top5:[0.84375](4.25s)
    2022-10-19 11:09:42,671 - INFO: Epoch:20/20, batch:6990, train_loss:[1.27649], acc_top1:[0.62500], acc_top5:[0.84375](4.24s)
    2022-10-19 11:09:46,836 - INFO: Epoch:20/20, batch:7000, train_loss:[1.80139], acc_top1:[0.65625], acc_top5:[0.81250](4.16s)
    2022-10-19 11:09:51,122 - INFO: Epoch:20/20, batch:7010, train_loss:[1.78041], acc_top1:[0.56250], acc_top5:[0.81250](4.29s)
    2022-10-19 11:09:55,410 - INFO: Epoch:20/20, batch:7020, train_loss:[2.01233], acc_top1:[0.50000], acc_top5:[0.71875](4.29s)
    2022-10-19 11:09:59,609 - INFO: Epoch:20/20, batch:7030, train_loss:[1.88425], acc_top1:[0.37500], acc_top5:[0.78125](4.20s)
    2022-10-19 11:10:03,823 - INFO: Epoch:20/20, batch:7040, train_loss:[1.64764], acc_top1:[0.53125], acc_top5:[0.78125](4.21s)
    2022-10-19 11:10:08,004 - INFO: Epoch:20/20, batch:7050, train_loss:[1.59875], acc_top1:[0.43750], acc_top5:[0.87500](4.18s)
    2022-10-19 11:10:12,228 - INFO: Epoch:20/20, batch:7060, train_loss:[1.07798], acc_top1:[0.62500], acc_top5:[0.90625](4.22s)
    2022-10-19 11:10:16,394 - INFO: Epoch:20/20, batch:7070, train_loss:[1.84751], acc_top1:[0.43750], acc_top5:[0.81250](4.17s)
    2022-10-19 11:10:20,570 - INFO: Epoch:20/20, batch:7080, train_loss:[1.43728], acc_top1:[0.62500], acc_top5:[0.78125](4.18s)
    2022-10-19 11:10:24,761 - INFO: Epoch:20/20, batch:7090, train_loss:[1.68076], acc_top1:[0.50000], acc_top5:[0.81250](4.19s)
    2022-10-19 11:10:28,951 - INFO: Epoch:20/20, batch:7100, train_loss:[1.57907], acc_top1:[0.53125], acc_top5:[0.87500](4.19s)
    2022-10-19 11:10:33,168 - INFO: Epoch:20/20, batch:7110, train_loss:[1.78409], acc_top1:[0.40625], acc_top5:[0.84375](4.22s)
    2022-10-19 11:10:37,372 - INFO: Epoch:20/20, batch:7120, train_loss:[1.48713], acc_top1:[0.53125], acc_top5:[0.90625](4.20s)
    2022-10-19 11:10:41,658 - INFO: Epoch:20/20, batch:7130, train_loss:[1.82912], acc_top1:[0.53125], acc_top5:[0.78125](4.29s)
    2022-10-19 11:10:45,808 - INFO: Epoch:20/20, batch:7140, train_loss:[1.80293], acc_top1:[0.43750], acc_top5:[0.87500](4.15s)
    2022-10-19 11:10:50,045 - INFO: Epoch:20/20, batch:7150, train_loss:[1.66981], acc_top1:[0.50000], acc_top5:[0.84375](4.24s)
    2022-10-19 11:10:54,313 - INFO: Epoch:20/20, batch:7160, train_loss:[1.51164], acc_top1:[0.59375], acc_top5:[0.81250](4.27s)
    2022-10-19 11:10:58,519 - INFO: Epoch:20/20, batch:7170, train_loss:[1.68946], acc_top1:[0.56250], acc_top5:[0.81250](4.21s)
    2022-10-19 11:11:02,804 - INFO: Epoch:20/20, batch:7180, train_loss:[1.46470], acc_top1:[0.56250], acc_top5:[0.81250](4.29s)
    2022-10-19 11:11:18,427 - INFO: [validation] Epoch:20/20, val_loss:[0.04960], val_top1:[0.54831], val_top5:[0.86749]
    2022-10-19 11:11:18,427 - INFO: 最优top1测试精度:0.56211 (epoch=18)
    2022-10-19 11:11:18,428 - INFO: 训练完成,最终性能accuracy=0.56211(epoch=18), 总耗时3614.45s, 已将其保存为:Garbage_resnet50_final
    
    训练完毕,结果路径D:\Workspace\ExpResults\Comp01GarbageClassification\final_models.
    2022-10-19 11:11:18,897 - INFO: Done.


训练完成后,建议将 ExpResults 文件夹中的最终模型文件 copy 到 ExpDeployments 文件夹,用于进行部署和应用。
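上述复制操作可以用一个简短的脚本完成。下面是一个示意实现(假设性示例:函数名 `deploy_final_models` 为本文虚构,实际路径请与训练脚本中的配置保持一致):

```python
import os
import shutil

def deploy_final_models(src_dir, dst_dir):
    """将 src_dir(ExpResults)中的最终模型文件复制到 dst_dir(ExpDeployments)。"""
    os.makedirs(dst_dir, exist_ok=True)      # 部署目录不存在时自动创建
    copied = []
    for name in os.listdir(src_dir):
        shutil.copy2(os.path.join(src_dir, name), os.path.join(dst_dir, name))
        copied.append(name)
    return copied

# 示例调用(路径与本教案的训练输出目录一致):
# deploy_final_models(r'D:\Workspace\ExpResults\Comp01GarbageClassification\final_models',
#                     r'D:\Workspace\ExpDeployments\Comp01GarbageClassification\final_models')
```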

3.5 离线测试

注意:Garbage数据集中的 test 样本未提供 label 信息,因此无法对测试集进行准确度评估。

if __name__ == '__main__':
    # 设置输入样本的维度
    input_spec = InputSpec(shape=[None] + args['input_size'], dtype='float32', name='image')
    label_spec = InputSpec(shape=[None, 1], dtype='int64', name='label')

    # 根据配置的名称动态获取网络结构
    network = getattr(paddle.vision.models, args['architecture'])(num_classes=args['class_dim'])
    model = paddle.Model(network, input_spec, label_spec)                # 模型实例化
    model.load(args['deployments_path']['deployment_checkpoint_model'])  # 载入调优模型的参数
    model.prepare(loss=paddle.nn.CrossEntropyLoss(),                     # 设置loss
                  metrics=paddle.metric.Accuracy(topk=(1, 5)))           # 设置评价指标

    # 执行评估函数,并输出验证集样本的损失和精度
    print('开始评估...')
    avg_loss, avg_acc_top1, avg_acc_top5 = evaluate(model, val_reader, verbose=1)
    print('[验证集] 损失: {:.5f}, top1精度:{:.5f}, top5精度:{:.5f}\n'.format(avg_loss, avg_acc_top1, avg_acc_top5))
    # avg_loss, avg_acc_top1, avg_acc_top5 = evaluate(model, test_reader, verbose=1)
    # print('\r[测试集] 损失: {:.5f}, top1精度:{:.5f}, top5精度:{:.5f}'.format(avg_loss, avg_acc_top1, avg_acc_top5), end='')
开始评估...
    (100.00%)[验证集] 损失: 0.05025, top1精度:0.56211, top5精度为:0.85059

四、模型测试与应用

4.1 导入依赖库及全局参数配置

# 导入依赖库
import os
import sys
import random
import cv2
import json
import numpy as np
import matplotlib.pyplot as plt
import paddle
import paddle.nn.functional as F
from paddle.io import DataLoader
sys.path.append(r'D:\WorkSpace\DeepLearning\WebsiteV2')      # 导入自定义函数保存位置
from utils.datasets.Garbage import GarbageDataset
from utils.predict import predict, predict_batch

args={
    'project_name': 'Comp01GarbageClassification',
    'dataset_name': 'Garbage',
    'architecture': 'resnet50',
    'model_name': None,
    'input_size': [3, 227, 227],                             # 输入样本的尺度
    'batch_size': 64,
    'dataset_root_path': 'D:\\Workspace\\ExpDatasets\\',
    'result_root_path': 'D:\\Workspace\\ExpResults\\',
    'deployment_root_path': 'D:\\Workspace\\ExpDeployments\\',
    'pretrained': True,              # 是否使用预训练的模型    
}


if not args['pretrained']:
    model_name = args['dataset_name'] + '_' + args['architecture'] + '_withoutPretrained'
else:
    model_name = args['dataset_name'] + '_' + args['architecture']

prediction_path  = os.path.join(args['deployment_root_path'], args['project_name'], model_name+'_prediction.txt')
deployment_final_models = os.path.join(args['deployment_root_path'], args['project_name'], 'final_models', model_name + '_final')
dataset_root_path = os.path.join(args['dataset_root_path'], args['dataset_name'])
json_dataset_info = os.path.join(dataset_root_path, 'dataset_info.json')

4.2 单样本预测

  1. 源代码:predict.py

  2. 调用方法:

    sys.path.append(r'D:\WorkSpace\DeepLearning\WebsiteV2')    # 定义模块保存位置
    from utils.predict import predict, predict_batch           # 导入预测模块
    
    # 各参数的详细调用请参考源代码
    pred_id = predict(model, image)                            # 单样本预测
    pred = predict_batch(model, test_reader, prediction_path)  # 批量样本预测
    
import os
import sys
import cv2
import json
import random
import matplotlib.pyplot as plt


# 1. 从数据集的dataset_info文件中,获取图像标签信息
with open(json_dataset_info, 'r', encoding='utf-8') as f_info:
    dataset_info = json.load(f_info)

# 2. 获取图像
# 2.1 从测试列表中随机选择一个图像
test_list = os.path.join(dataset_root_path, 'test.txt')
with open(test_list, 'r') as f_test:
    lines = f_test.readlines()
img_path = random.choice(lines).strip()
# 2.2 获取图像的路径和标签
# img_path, label = line.split()
# img_path = os.path.join(dataset_root_path, 'Data', img_path)           
# 2.3 根据路径读取图像
image = cv2.imread(img_path, 1)

# 3. 使用部署模型进行预测
# 3.1 载入模型
model = paddle.jit.load(deployment_final_models)

# 3.2 调用predict方法进行预测
pred_id = predict(model, image)

# 3.3 将预测的label和ground_truth label转换为label name
# label_name_gt = dataset_info['label_dict'][str(label)]
label_name_pred = dataset_info['label_dict'][str(pred_id)]

# 4. 输出结果
# 4.1 输出预测结果
# print('待测样本的类别为:{}, 预测类别为:{}'.format(label_name_gt, label_name_pred))
print('待测样本的类别为:{}'.format(label_name_pred))

# 4.2 显示待预测样本
image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image_rgb)
plt.show()

待测样本的类别为:可回收物/锅


4.3 批量预测

批量预测是指对测试集数据进行批量推理,并将推理结果按照数据列表的顺序输出至文本文件。

# 1. 载入模型
model = paddle.jit.load(deployment_final_models)

dataset_test = GarbageDataset(dataset_root_path, mode='test')
test_reader = DataLoader(dataset_test, batch_size=args['batch_size'], shuffle=False, drop_last=False)

# 2. 对测试集样本进行批量预测(test 集无标注,无法计算准确度)
pred = predict_batch(model, test_reader, prediction_path)

# 3. 显示预测结果
print(pred)
    结果文件保存到 D:\Workspace\ExpDeployments\Comp01GarbageClassification\Garbage_resnet50_prediction.txt 成功.
    [29 33 23 37  5 33  1 31 39 28 16 17 37 22 24 23 20 35 28 30  5 15 36 26
     23 14 10  4  7 30 38 15 19 14 20 15 38 28 37 37 19 27 39 27  6 24 34 18
     23  3 23 19  6 35 31 31 18 35 14 23 36  1 24 36 22  5 35  4  3  1 15 10
     28  7  9 15 18 37  9 20 36 16 21  9 23 32 28 16 12 27  1  6 14 21  1 18
     25 26 24 27 12 38 38 22 11 35  3  5 11 24 23  7 25 14 24 16  6 15  9  3
     14 28 19 33 33 27 39 31 21  9  0 28  6 10  3 11 29 11 38 33 33  7 35 13
     30  1 33 22 18 10 19 20 29  2 25 26 15 22 31 26  3 28 28 26 16 33 26 12
     24 36  8  9  9 36 16 19 15  5 15 20 34 29 33 25 36 16 36 27  9 34 27 28
     36  9 19 16 27 33  5 23 15  2 32  5 25 11 14 18 33 26 27 10 19 23 11 14
     15 18 14 20 34 23 29 26 25 38 24 26 13 38  2 27  4 16 36  8 17 26 10 25
     28 36 28 38 31  9 23 37 29 10  2  5 20 34 17 33 25  0 36  5 24  7  2 29
      6 36 15 26 20 34 37 12  4 18 12 18 16 10 19 26 31 25 27 38 30 35  1 27
     33 12 19 34  2 30 29 28 29 15 34 39 20 27 34 36  7 33 17 27  4 31  6 35
      5 10 18 39 38  2 24 37 18 20 12 24 36 19  6 30 28 22 21 11 33  5 26  6
     10 10 26 28 23 33 11 30 13 35 28 33 12 36 23  6  1 23 37 21 32  7  2 34
     30 16 38 14 15 14 20 10 14 33 30  6  2 27 26 18 23  5 30 13 29 12 24 17
     25 21 32 28  7 21 28 18 11  6 30 21  4 26 33 23]
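按照竞赛要求,结果文件必须恰好为 400 行、一行一个数字标签,否则成绩无效。提交前可以用如下脚本自检(示意代码,函数名 `check_result_file` 为本文虚构):

```python
def check_result_file(path, expected_lines=400, num_classes=40):
    """检查结果文件:共 expected_lines 行,每行一个 [0, num_classes) 内的整数标签。"""
    with open(path, 'r') as f:
        lines = [ln.strip() for ln in f if ln.strip()]
    assert len(lines) == expected_lines, '行数应为 {},实际为 {}'.format(expected_lines, len(lines))
    for ln in lines:
        label = int(ln)                                   # 非数字标签会在此抛出 ValueError
        assert 0 <= label < num_classes, '非法标签: {}'.format(label)
    return True

# 使用示例:
# check_result_file('model_result.txt')
```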

五、实验结果及评价

下面我们使用 ResNet50、ResNet18、Mobilenetv2、VGG16 四个模型对垃圾分类任务进行评估,所有模型设置 batch_size=32,并均采用 ImageNet 预训练模型进行训练,训练集为 train。测试结果需要上传至竞赛平台获取。

1. 实验结果

| 模型名称 | Baseline模型 | ImageNet预训练 | learning_rate | best_epoch | val_top1_acc | val_top5_acc | loss | test_acc | 单batch时间/总训练时间(s) | 可训练参数/总参数 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Garbage_Resnet18 | ResNet18 | 是 | 0.001 | | | | | | | |
| Garbage_Resnet50 | ResNet50 | 是 | 0.001 | | | | | | | |
| Garbage_Resnet50 | ResNet50 | 是 | 0.01 | 18/20 | 0.56211 | 0.85059 | 0.05025 | | 4.35/3614.5 | 23,643,112/23,536,872 |
| Garbage_VGG16 | VGG16 | 是 | 0.001 | | | | | | | |
| Garbage_Mobilenetv2 | Mobilenetv2 | 是 | 0.001 | 9/10 | 0.67529 | 0.92133 | 0.036198 | | 6.6/1507.1 | 2,241,000/2,309,224 |

2. 结果输出

经过初步训练,在XX个epoch下,XXXX模型具有最好的性能。因此,我们固定所有超参数,将训练集改为trainval进行二次训练;训练过程中观察loss曲线,当曲线趋于平缓时停止训练。最终的Baseline结果由test子集在基于trainval训练得到的模型上推理获得。