  • 🍨 This article is a learning-record blog for the 🔗 365天深度学习训练营 (365-day deep learning training camp)
  • 🍖 Original author: K同学啊

Preface

  • This case study is mainly about learning the API: how to schedule the learning rate dynamically in TensorFlow, how to set up early stopping, and so on.
  • Since the focus is on learning and the model structure is quite simple, performance on the validation set is not very good.
  • From now on I will publish at least one deep learning or machine learning case study every week; a machine learning one is coming tomorrow or the day after.
  • Feel free to bookmark and follow; I will keep updating.

Contents

  • 1. Background knowledge
  • 2. Case study
    • 1. Data processing
      • 1. Import libraries
      • 2. Load the class names
      • 3. Load the training and validation data
      • 4. Show some sample images
    • 2. Memory optimization
    • 3. Build the CNN model
    • 4. Train the model
      • 1. Set hyperparameters (including a dynamic learning rate)
      • 2. Training
    • 5. Results

1. Background knowledge

👿 Image loading

Use the image_dataset_from_directory method to load data from disk into a tf.data.Dataset.

  • tf.keras.preprocessing.image_dataset_from_directory(): a function in TensorFlow's Keras module that builds an image dataset from a directory on disk. It is a convenient way to load image data for training and evaluating neural network models.
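As a minimal sketch of a typical call (the validation_split/subset arguments are illustrative and not used in this case study, which loads train and test from separate folders):

import tensorflow as tf

# Hedged example: the paths and the 80/20 split are assumptions for illustration
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    './data/train',          # one sub-folder per class
    validation_split=0.2,    # optionally carve a validation subset from the same folder
    subset='training',       # 'training' or 'validation'
    seed=42,                 # use the same seed for both subsets
    image_size=(224, 224),   # images are resized to this shape on load
    batch_size=32
)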

✌️ How the validation set relates to training

  1. The validation set does not take part in gradient descent; strictly speaking, it never drives updates to the model's trainable parameters.
  2. In a broader sense, though, it does participate in a "manual tuning" loop: based on how the model performs on the validation data after each epoch, we decide whether to stop training early, or adjust hyperparameters such as the learning rate and batch_size.
  3. So the validation set can be said to participate in training, but without making the model overfit it directly.
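To make this "manual tuning" loop concrete, here is a minimal hand-rolled early-stopping sketch (train_one_epoch and evaluate are hypothetical helpers; the case study below uses Keras's built-in EarlyStopping instead):

best_val_acc, patience, wait = 0.0, 20, 0
for epoch in range(50):
    train_one_epoch(model)              # hypothetical: one pass over the training data
    val_acc = evaluate(model, val_ds)   # hypothetical: accuracy on the validation data
    if val_acc > best_val_acc + 0.001:  # improvement beyond min_delta
        best_val_acc, wait = val_acc, 0
        model.save_weights('best_model.h5')  # keep the best checkpoint
    else:
        wait += 1
        if wait >= patience:            # no improvement for `patience` epochs
            break                       # early stop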

Setting up a dynamic learning rate

The ExponentialDecay schedule:

tf.keras.optimizers.schedules.ExponentialDecay is a learning rate decay schedule in TensorFlow that lowers the learning rate dynamically during training. Learning rate decay is a common technique that helps the optimizer converge more effectively, which in turn improves model performance.

🔎 Main parameters:

  • initial_learning_rate: the starting learning rate.
  • decay_steps: how often the learning rate decays, in steps. After every decay_steps steps the rate is decayed exponentially; for example, with decay_steps set to 10, the rate decays once every 10 steps.
  • decay_rate: the multiplicative decay factor, which determines how fast the learning rate shrinks. It is usually between 0 and 1.
  • staircase: a boolean controlling the decay shape. If True, the learning rate drops abruptly every decay_steps steps, forming a staircase; if False, it decays continuously. (See the quick check after this list.)
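With staircase=True, the decayed rate is initial_learning_rate * decay_rate ** (step // decay_steps); with staircase=False the exponent is the real-valued step / decay_steps. A quick check of the schedule used later in this post (the schedule object is callable with a step index):

import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.001, decay_steps=10, decay_rate=0.92, staircase=True)

print(float(lr_schedule(0)))   # 0.001     -> 0.001 * 0.92 ** 0
print(float(lr_schedule(9)))   # 0.001     -> still on the first "stair"
print(float(lr_schedule(10)))  # 0.00092   -> 0.001 * 0.92 ** 1
print(float(lr_schedule(20)))  # 0.0008464 -> 0.001 * 0.92 ** 2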

🍃 Trade-offs between a large and a small learning rate:

Large learning rate

  • Pros:

    • 1. Faster learning.
    • 2. Helps the optimizer jump out of local minima.
  • Cons:

    • 1. Training may fail to converge.
    • 2. On its own, a large learning rate tends to give an imprecise model.

Small learning rate

  • Pros:

    • 1. Helps the model converge and refine its weights.
    • 2. Improves model precision.
  • Cons:

    • 1. Hard to jump out of local minima.
    • 2. Slow convergence.

🚙 Early stopping and saving the best model weights

EarlyStopping() parameters

  • monitor: the quantity to be monitored.

  • min_delta: the minimum change in the monitored quantity that counts as an improvement; absolute changes smaller than min_delta are treated as no improvement.

  • patience: the number of epochs with no improvement after which training is stopped.

  • verbose: verbosity mode.

  • mode: one of {auto, min, max}. In min mode training stops when the monitored quantity stops decreasing; in max mode it stops when the quantity stops increasing; in auto mode the direction is inferred automatically from the name of the monitored quantity.

  • baseline: baseline value for the monitored quantity. Training stops if the model shows no improvement over the baseline.

  • restore_best_weights: whether to restore the model weights from the epoch with the best value of the monitored quantity. If False, the weights from the last step of training are used.

# Set up early stopping
earlystopper = EarlyStopping(monitor='val_accuracy', min_delta=0.001,
                             patience=20, verbose=1)

With this configuration, training runs with patience=20: if val_accuracy fails to improve by at least 0.001 for 20 consecutive epochs, training stops early.
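Pairing this with restore_best_weights=True additionally rolls the model back to its best epoch when training stops; a minimal sketch:

from tensorflow.keras.callbacks import EarlyStopping

earlystopper = EarlyStopping(
    monitor='val_accuracy',
    min_delta=0.001,
    patience=20,
    verbose=1,
    restore_best_weights=True  # on stop, restore the weights from the best epoch
)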

2. Case study

1. Data processing

1. Import libraries

import tensorflow as tf
from tensorflow.keras import datasets, models, layers
import numpy as np

gpus = tf.config.list_physical_devices("GPU")

if gpus:
    gpu0 = gpus[0]  # if there are multiple GPUs, use the first one
    tf.config.experimental.set_memory_growth(gpu0, True)
    tf.config.set_visible_devices([gpu0], "GPU")

gpus

Output:

[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

2. Load the class names

The data folder is split into a training set and a test set; each one contains a sub-folder per class. The classes here are shoe brands, so each sub-folder is one class. The expected layout is sketched below.
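The layout image_dataset_from_directory expects (file names are illustrative; the class folders match what is found below):

data/
├── train/
│   ├── adidas/          # folder name = class label
│   │   ├── img_001.jpg
│   │   └── ...
│   └── nike/
│       └── ...
└── test/
    ├── adidas/
    └── nike/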

# Task: walk the data directory and list the class names
import os, PIL, pathlib

# Data directory
data_dir = './data/train/'
data_dir = pathlib.Path(data_dir)  # convert to a pathlib object

# Each entry under data_dir is one class folder
classnames = [str(path).split('/')[0] for path in os.listdir(data_dir)]
classnames  # show the class names

Output:

['adidas', 'nike']
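A slightly more robust way to build the same list, since str(path).split('/') depends on the path separator, is to go through pathlib directly (a sketch; the output is identical here):

classnames = sorted(p.name for p in data_dir.iterdir() if p.is_dir())
classnames  # ['adidas', 'nike']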

3. Load the training and validation data

# Task: load the training and validation data
batch_size = 32
img_width = 224
img_height = 224

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    './data/train',
    batch_size=batch_size,   # 32 is also the default
    seed=42,
    image_size=(img_height, img_width)
)

val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    './data/test/',
    batch_size=batch_size,
    seed=42,
    image_size=(img_height, img_width)
)

# Inspect the data format
for x, y in train_ds:
    print('Batch shape:', x.shape)
    print('Labels in one batch:', y)
    break
Found 502 files belonging to 2 classes.
Found 76 files belonging to 2 classes.
Batch shape: (32, 224, 224, 3)
Labels in one batch: tf.Tensor([0 1 0 1 1 0 0 0 1 1 0 1 0 1 0 0 0 0 0 1 1 1 1 0 0 1 0 1 1 0 0 0], shape=(32,), dtype=int32)
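As a sanity check, datasets created by image_dataset_from_directory also expose the labels they inferred from the sub-folder names, so the manual listing above can be cross-checked:

print(train_ds.class_names)  # ['adidas', 'nike'], in the same order as the label indices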

4. Show some sample images

# Task: show 20 random training images
import matplotlib.pyplot as plt

plt.figure(figsize=(20, 15))
for images, labels in train_ds.take(1):
    for i in range(20):
        plt.subplot(5, 10, i + 1)
        plt.imshow(images[i].numpy().astype('uint8'))
        plt.title(classnames[labels[i]])
        plt.axis('off')
plt.show()


[Figure: 20 sample training images with their class labels]

2. Memory optimization

cache() keeps the decoded images in memory after the first epoch, shuffle() randomizes the sample order, and prefetch() overlaps data preparation with model execution.

AUTOTUNE = tf.data.experimental.AUTOTUNE

train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)

3. Build the CNN model

Three convolutional layers, three pooling layers, and two fully connected layers.

Two ways to create the model:

  1. Create models.Sequential() first, then call add() to append each layer (see the sketch right after this list);
  2. Pass all the layers at once inside [], which is the approach I usually take and the one used below.
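For comparison, a sketch of the first approach (only the opening layers are shown; the full model built below is equivalent):

model = models.Sequential()
model.add(layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)))
model.add(layers.Conv2D(16, (3, 3), activation='relu'))
model.add(layers.AveragePooling2D((2, 2)))
# ...the remaining layers are appended the same way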
model = models.Sequential([
    # Rescale pixels from [0, 255] to [0, 1]
    layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
    # The first layer carries the input shape
    layers.Conv2D(16, (3, 3), activation='relu', input_shape=(img_height, img_width, 3)),
    layers.AveragePooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation='relu'),
    layers.AveragePooling2D((2, 2)),
    layers.Dropout(0.3),
    layers.Conv2D(32, (3, 3), activation='relu'),
    layers.AveragePooling2D((2, 2)),
    layers.Dropout(0.3),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(len(classnames))
])

model.summary()
Model: "sequential"
_________________________________________________________________Layer (type)                Output Shape              Param #   
=================================================================rescaling (Rescaling)       (None, 224, 224, 3)       0         conv2d (Conv2D)             (None, 222, 222, 16)      448       average_pooling2d (AverageP  (None, 111, 111, 16)     0         ooling2D)                                                       conv2d_1 (Conv2D)           (None, 109, 109, 32)      4640      average_pooling2d_1 (Averag  (None, 54, 54, 32)       0         ePooling2D)                                                     dropout (Dropout)           (None, 54, 54, 32)        0         conv2d_2 (Conv2D)           (None, 52, 52, 32)        9248      average_pooling2d_2 (Averag  (None, 26, 26, 32)       0         ePooling2D)                                                     dropout_1 (Dropout)         (None, 26, 26, 32)        0         flatten (Flatten)           (None, 21632)             0         dense (Dense)               (None, 128)               2769024   dense_1 (Dense)             (None, 2)                 258       =================================================================
Total params: 2,783,618
Trainable params: 2,783,618
Non-trainable params: 0
_________________________________________________________________
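The parameter counts in the summary can be verified by hand (a quick check, not part of the original post):

# Conv2D: (kernel_h * kernel_w * in_channels + 1 bias) * filters
assert (3 * 3 * 3  + 1) * 16 == 448      # conv2d
assert (3 * 3 * 16 + 1) * 32 == 4640     # conv2d_1
assert (3 * 3 * 32 + 1) * 32 == 9248     # conv2d_2
# Dense: inputs * units + units (bias); 26 * 26 * 32 = 21632 flattened inputs
assert 21632 * 128 + 128 == 2769024      # dense
assert 128 * 2 + 2 == 258                # dense_1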

4. Train the model

1. Set hyperparameters (including a dynamic learning rate)

# Initial learning rate
learning_rate = 0.001

# Dynamic learning rate schedule
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    learning_rate,
    decay_steps=10,
    decay_rate=0.92,
    staircase=True
)

# Create the optimizer
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)

# Configure loss and metrics
model.compile(
    optimizer=optimizer,
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy']
)

2. Training

from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping

epochs = 50

# Save the best model weights
checkpointer = ModelCheckpoint('best_model.h5',
                               monitor='val_accuracy',
                               verbose=1,
                               save_best_only=True,
                               save_weights_only=True)

# Set up early stopping
earlystopper = EarlyStopping(monitor='val_accuracy', min_delta=0.001,
                             patience=20, verbose=1)

# Train the model
history = model.fit(train_ds,
                    validation_data=val_ds,
                    epochs=epochs,
                    callbacks=[checkpointer, earlystopper])
Epoch 1/50
14/16 [=========================>....] - ETA: 0s - loss: 0.7837 - accuracy: 0.5023
Epoch 1: val_accuracy improved from -inf to 0.53947, saving model to best_model.h5
16/16 [==============================] - 4s 35ms/step - loss: 0.7714 - accuracy: 0.5000 - val_loss: 0.6803 - val_accuracy: 0.5395
Epoch 2/50
12/16 [=====================>........] - ETA: 0s - loss: 0.6765 - accuracy: 0.5642
Epoch 2: val_accuracy improved from 0.53947 to 0.61842, saving model to best_model.h5
16/16 [==============================] - 0s 15ms/step - loss: 0.6783 - accuracy: 0.5677 - val_loss: 0.6532 - val_accuracy: 0.6184
Epoch 3/50
12/16 [=====================>........] - ETA: 0s - loss: 0.6705 - accuracy: 0.5775
Epoch 3: val_accuracy improved from 0.61842 to 0.65789, saving model to best_model.h5
16/16 [==============================] - 0s 15ms/step - loss: 0.6589 - accuracy: 0.6016 - val_loss: 0.6367 - val_accuracy: 0.6579
Epoch 4/50
12/16 [=====================>........] - ETA: 0s - loss: 0.6296 - accuracy: 0.6283
Epoch 4: val_accuracy did not improve from 0.65789
16/16 [==============================] - 0s 13ms/step - loss: 0.6453 - accuracy: 0.6116 - val_loss: 0.6192 - val_accuracy: 0.6447
Epoch 5/50
16/16 [==============================] - ETA: 0s - loss: 0.6124 - accuracy: 0.6594
Epoch 5: val_accuracy did not improve from 0.65789
16/16 [==============================] - 0s 13ms/step - loss: 0.6124 - accuracy: 0.6594 - val_loss: 0.6184 - val_accuracy: 0.6579
Epoch 6/50
11/16 [===================>..........] - ETA: 0s - loss: 0.5986 - accuracy: 0.6790
Epoch 6: val_accuracy improved from 0.65789 to 0.67105, saving model to best_model.h5
16/16 [==============================] - 0s 15ms/step - loss: 0.5803 - accuracy: 0.7092 - val_loss: 0.5611 - val_accuracy: 0.6711
Epoch 7/50
12/16 [=====================>........] - ETA: 0s - loss: 0.5406 - accuracy: 0.7139
Epoch 7: val_accuracy improved from 0.67105 to 0.73684, saving model to best_model.h5
16/16 [==============================] - 0s 15ms/step - loss: 0.5405 - accuracy: 0.7351 - val_loss: 0.5235 - val_accuracy: 0.7368
Epoch 8/50
11/16 [===================>..........] - ETA: 0s - loss: 0.5141 - accuracy: 0.7727
Epoch 8: val_accuracy did not improve from 0.73684
16/16 [==============================] - 0s 13ms/step - loss: 0.5055 - accuracy: 0.7729 - val_loss: 0.5382 - val_accuracy: 0.7368
Epoch 9/50
12/16 [=====================>........] - ETA: 0s - loss: 0.5257 - accuracy: 0.7166
Epoch 9: val_accuracy improved from 0.73684 to 0.77632, saving model to best_model.h5
16/16 [==============================] - 0s 15ms/step - loss: 0.5087 - accuracy: 0.7470 - val_loss: 0.5095 - val_accuracy: 0.7763
Epoch 10/50
12/16 [=====================>........] - ETA: 0s - loss: 0.4528 - accuracy: 0.8128
Epoch 10: val_accuracy did not improve from 0.77632
16/16 [==============================] - 0s 13ms/step - loss: 0.4459 - accuracy: 0.8187 - val_loss: 0.5135 - val_accuracy: 0.7500
Epoch 11/50
16/16 [==============================] - ETA: 0s - loss: 0.4237 - accuracy: 0.8147
Epoch 11: val_accuracy did not improve from 0.77632
16/16 [==============================] - 0s 13ms/step - loss: 0.4237 - accuracy: 0.8147 - val_loss: 0.4899 - val_accuracy: 0.7632
Epoch 12/50
11/16 [===================>..........] - ETA: 0s - loss: 0.4103 - accuracy: 0.8125
Epoch 12: val_accuracy did not improve from 0.77632
16/16 [==============================] - 0s 13ms/step - loss: 0.4010 - accuracy: 0.8127 - val_loss: 0.4718 - val_accuracy: 0.7500
Epoch 13/50
11/16 [===================>..........] - ETA: 0s - loss: 0.3687 - accuracy: 0.8352
Epoch 13: val_accuracy improved from 0.77632 to 0.78947, saving model to best_model.h5
16/16 [==============================] - 0s 15ms/step - loss: 0.3754 - accuracy: 0.8207 - val_loss: 0.4818 - val_accuracy: 0.7895
Epoch 14/50
12/16 [=====================>........] - ETA: 0s - loss: 0.3643 - accuracy: 0.8503
Epoch 14: val_accuracy did not improve from 0.78947
16/16 [==============================] - 0s 13ms/step - loss: 0.3607 - accuracy: 0.8506 - val_loss: 0.4916 - val_accuracy: 0.7500
Epoch 15/50
12/16 [=====================>........] - ETA: 0s - loss: 0.3609 - accuracy: 0.8369
Epoch 15: val_accuracy did not improve from 0.78947
16/16 [==============================] - 0s 13ms/step - loss: 0.3578 - accuracy: 0.8426 - val_loss: 0.4996 - val_accuracy: 0.7237
Epoch 16/50
12/16 [=====================>........] - ETA: 0s - loss: 0.3303 - accuracy: 0.8583
Epoch 16: val_accuracy did not improve from 0.78947
16/16 [==============================] - 0s 13ms/step - loss: 0.3420 - accuracy: 0.8526 - val_loss: 0.4879 - val_accuracy: 0.7763
Epoch 17/50
12/16 [=====================>........] - ETA: 0s - loss: 0.3240 - accuracy: 0.8690
Epoch 17: val_accuracy did not improve from 0.78947
16/16 [==============================] - 0s 13ms/step - loss: 0.3270 - accuracy: 0.8705 - val_loss: 0.4793 - val_accuracy: 0.7763
Epoch 18/50
12/16 [=====================>........] - ETA: 0s - loss: 0.2970 - accuracy: 0.8743
Epoch 18: val_accuracy did not improve from 0.78947
16/16 [==============================] - 0s 13ms/step - loss: 0.3069 - accuracy: 0.8725 - val_loss: 0.4800 - val_accuracy: 0.7895
Epoch 19/50
12/16 [=====================>........] - ETA: 0s - loss: 0.2982 - accuracy: 0.8717
Epoch 19: val_accuracy did not improve from 0.78947
16/16 [==============================] - 0s 13ms/step - loss: 0.2969 - accuracy: 0.8745 - val_loss: 0.4766 - val_accuracy: 0.7763
Epoch 20/50
16/16 [==============================] - ETA: 0s - loss: 0.2915 - accuracy: 0.8865
Epoch 20: val_accuracy did not improve from 0.78947
16/16 [==============================] - 0s 13ms/step - loss: 0.2915 - accuracy: 0.8865 - val_loss: 0.4894 - val_accuracy: 0.7763
Epoch 21/50
11/16 [===================>..........] - ETA: 0s - loss: 0.2953 - accuracy: 0.8665
Epoch 21: val_accuracy did not improve from 0.78947
16/16 [==============================] - 0s 13ms/step - loss: 0.2901 - accuracy: 0.8725 - val_loss: 0.5096 - val_accuracy: 0.7500
Epoch 22/50
12/16 [=====================>........] - ETA: 0s - loss: 0.2775 - accuracy: 0.8877
Epoch 22: val_accuracy did not improve from 0.78947
16/16 [==============================] - 0s 13ms/step - loss: 0.2808 - accuracy: 0.8865 - val_loss: 0.4826 - val_accuracy: 0.7763
Epoch 23/50
12/16 [=====================>........] - ETA: 0s - loss: 0.2748 - accuracy: 0.8797
Epoch 23: val_accuracy did not improve from 0.78947
16/16 [==============================] - 0s 13ms/step - loss: 0.2785 - accuracy: 0.8805 - val_loss: 0.5012 - val_accuracy: 0.7500
Epoch 24/50
12/16 [=====================>........] - ETA: 0s - loss: 0.2585 - accuracy: 0.9118
Epoch 24: val_accuracy did not improve from 0.78947
16/16 [==============================] - 0s 13ms/step - loss: 0.2636 - accuracy: 0.9084 - val_loss: 0.4887 - val_accuracy: 0.7763
Epoch 25/50
12/16 [=====================>........] - ETA: 0s - loss: 0.2694 - accuracy: 0.9011
Epoch 25: val_accuracy did not improve from 0.78947
16/16 [==============================] - 0s 13ms/step - loss: 0.2685 - accuracy: 0.9044 - val_loss: 0.4841 - val_accuracy: 0.7632
Epoch 26/50
16/16 [==============================] - ETA: 0s - loss: 0.2588 - accuracy: 0.9004
Epoch 26: val_accuracy did not improve from 0.78947
16/16 [==============================] - 0s 14ms/step - loss: 0.2588 - accuracy: 0.9004 - val_loss: 0.4855 - val_accuracy: 0.7632
Epoch 27/50
12/16 [=====================>........] - ETA: 0s - loss: 0.2661 - accuracy: 0.9037
Epoch 27: val_accuracy did not improve from 0.78947
16/16 [==============================] - 0s 13ms/step - loss: 0.2521 - accuracy: 0.9084 - val_loss: 0.4844 - val_accuracy: 0.7632
Epoch 28/50
12/16 [=====================>........] - ETA: 0s - loss: 0.2492 - accuracy: 0.9171
Epoch 28: val_accuracy did not improve from 0.78947
16/16 [==============================] - 0s 13ms/step - loss: 0.2457 - accuracy: 0.9183 - val_loss: 0.4820 - val_accuracy: 0.7632
Epoch 29/50
12/16 [=====================>........] - ETA: 0s - loss: 0.2669 - accuracy: 0.8877
Epoch 29: val_accuracy did not improve from 0.78947
16/16 [==============================] - 0s 13ms/step - loss: 0.2591 - accuracy: 0.8964 - val_loss: 0.4844 - val_accuracy: 0.7632
Epoch 30/50
12/16 [=====================>........] - ETA: 0s - loss: 0.2399 - accuracy: 0.9064
Epoch 30: val_accuracy did not improve from 0.78947
16/16 [==============================] - 0s 13ms/step - loss: 0.2501 - accuracy: 0.9024 - val_loss: 0.4872 - val_accuracy: 0.7632
Epoch 31/50
12/16 [=====================>........] - ETA: 0s - loss: 0.2410 - accuracy: 0.9251
Epoch 31: val_accuracy did not improve from 0.78947
16/16 [==============================] - 0s 13ms/step - loss: 0.2513 - accuracy: 0.9143 - val_loss: 0.4847 - val_accuracy: 0.7632
Epoch 32/50
12/16 [=====================>........] - ETA: 0s - loss: 0.2528 - accuracy: 0.9091
Epoch 32: val_accuracy did not improve from 0.78947
16/16 [==============================] - 0s 13ms/step - loss: 0.2504 - accuracy: 0.9104 - val_loss: 0.4911 - val_accuracy: 0.7632
Epoch 33/50
11/16 [===================>..........] - ETA: 0s - loss: 0.2489 - accuracy: 0.9091
Epoch 33: val_accuracy did not improve from 0.78947
16/16 [==============================] - 0s 13ms/step - loss: 0.2525 - accuracy: 0.9104 - val_loss: 0.4852 - val_accuracy: 0.7632
Epoch 33: early stopping
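Training stopped at epoch 33, with the best val_accuracy (0.78947) reached back at epoch 13. Because ModelCheckpoint was configured with save_weights_only=True, the best weights can be restored before any final evaluation; a minimal sketch:

model.load_weights('best_model.h5')          # restore the best checkpoint
val_loss, val_acc = model.evaluate(val_ds)   # re-evaluate with the best weights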

5. Results

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(len(loss))

plt.figure(figsize=(12, 4))

plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()


[Figure: training and validation accuracy (left) and loss (right) curves]
