Predicting Diabetes with an LSTM (PyTorch)

2024/12/23 9:21:46 Source: https://blog.csdn.net/weixin_43414521/article/details/144202839

>- **🍨 This post is a learning-record blog for the [🔗 365-Day Deep Learning Training Camp]**
>- **🍖 Original author: [K同学啊]**

Previous posts in this series: Deep Learning Summary

Learning objectives:

Learn to use an LSTM to explore and predict diabetes

Try to raise this post's prediction accuracy above 70%

🏡 My environment:

  • Language: Python 3.8
  • Editor: Jupyter Notebook
  • Deep learning framework: PyTorch
    • torch==2.3.1+cu118
    • torchvision==0.18.1+cu118

1. Data Preprocessing

1.1 Setting Up the GPU

import torch.nn as nn
import torch.nn.functional as F
import torchvision, torch

# Use the GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device

Output:

device(type='cuda')

1.2 Importing the Data

import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt

plt.rcParams['savefig.dpi'] = 500   # saved-figure resolution
plt.rcParams['figure.dpi'] = 500    # display resolution
plt.rcParams['font.sans-serif'] = ['SimHei']  # display Chinese labels correctly

import warnings
warnings.filterwarnings("ignore")

# Raw string avoids backslash-escape issues in the Windows path
DataFrame = pd.read_excel(r"D:\THE MNIST DATABASE\RNN\R6\dia.xls")
DataFrame.head()

Output:

(table: first five rows of the dataset)

DataFrame.shape

Output:

(1006, 16)

1.3 Checking the Data

# Check for missing values
print('Missing values ------------------')
print(DataFrame.isnull().sum())

Output:

Missing values ------------------
卡号            0
性别            0
年龄            0
高密度脂蛋白胆固醇     0
低密度脂蛋白胆固醇     0
极低密度脂蛋白胆固醇    0
甘油三酯          0
总胆固醇          0
脉搏            0
舒张压           0
高血压史          0
尿素氮           0
尿酸            0
肌酐            0
体重检查结果        0
是否糖尿病         0
dtype: int64

# Check for duplicate rows
print('Duplicates ------------')
print(f'Number of duplicate rows in the dataset: {DataFrame.duplicated().sum()}')

Output:

Duplicates ------------
Number of duplicate rows in the dataset: 0

1.4 Analyzing the Data Distribution

feature_map = {col: col for col in [
    '年龄', '高密度脂蛋白胆固醇', '低密度脂蛋白胆固醇', '极低密度脂蛋白胆固醇',
    '甘油三酯', '总胆固醇', '脉搏', '舒张压', '高血压史',
    '尿素氮', '尿酸', '肌酐', '体重检查结果',
]}

plt.figure(figsize=(15, 10))
for i, (col, col_name) in enumerate(feature_map.items(), 1):
    plt.subplot(3, 5, i)
    sns.boxenplot(x=DataFrame['是否糖尿病'], y=DataFrame[col])
    plt.title(f'Boxen plot of {col_name}', fontsize=14)
    plt.ylabel('Value', fontsize=12)
    plt.grid(axis='y', linestyle='--', alpha=0.7)
plt.tight_layout()
plt.show()

Output:

(figure: boxen plots of each feature, grouped by diabetes status)

2. The LSTM Model

2.1 Building the Dataset

from sklearn.preprocessing import StandardScaler

# '高密度脂蛋白胆固醇' (HDL cholesterol) is negatively correlated with diabetes,
# so it is dropped from the feature matrix x
x = DataFrame.drop(['卡号', '是否糖尿病', '高密度脂蛋白胆固醇'], axis=1)
y = DataFrame['是否糖尿病']

x = torch.tensor(np.array(x), dtype=torch.float32)
y = torch.tensor(np.array(y), dtype=torch.int64)

train_x, test_x, train_y, test_y = train_test_split(x, y, test_size=0.2, random_state=1)
train_x.shape, train_y.shape

Output:

(torch.Size([804, 13]), torch.Size([804]))
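The 804 training rows follow from the 80/20 split: scikit-learn rounds the test split up, so with 1006 rows the test set gets ceil(1006 × 0.2) = 202 rows. A quick check of that arithmetic in plain Python (no sklearn needed):

```python
import math

n = 1006          # rows in dia.xls
test_size = 0.2
n_test = math.ceil(n * test_size)  # train_test_split rounds the test split up
n_train = n - n_test
print(n_train, n_test)  # 804 202
```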

from torch.utils.data import TensorDataset, DataLoader

train_dl = DataLoader(TensorDataset(train_x, train_y), batch_size=64, shuffle=False)
test_dl = DataLoader(TensorDataset(test_x, test_y), batch_size=64, shuffle=False)
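Note that StandardScaler is imported above but never actually applied; scaling features to zero mean and unit variance before converting them to tensors often helps training. A minimal sketch of the fit-on-train / apply-to-both pattern, implemented here directly with numpy (equivalent to StandardScaler's fit/transform; the toy arrays are made up for illustration):

```python
import numpy as np

def standardize(train, test):
    # Fit mean/std on the training split only, then apply to both splits,
    # mirroring sklearn's scaler.fit(train) followed by scaler.transform(...)
    mean = train.mean(axis=0)
    std = train.std(axis=0)
    std[std == 0] = 1.0  # guard against constant columns
    return (train - mean) / std, (test - mean) / std

train = np.array([[1.0, 10.0], [3.0, 30.0]])
test = np.array([[2.0, 20.0]])
tr_s, te_s = standardize(train, test)
print(tr_s.mean(axis=0))  # each column now has zero mean on the training split
```

Fitting on the full dataset before splitting would leak test-set statistics into training, which is why the mean and std come from the training rows only.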

2.2 Defining the Model

class model_lstm(nn.Module):
    def __init__(self):
        super(model_lstm, self).__init__()
        self.lstm0 = nn.LSTM(input_size=13, hidden_size=200,
                             num_layers=1, batch_first=True)
        self.lstm1 = nn.LSTM(input_size=200, hidden_size=200,
                             num_layers=1, batch_first=True)
        self.fc0 = nn.Linear(200, 2)

    def forward(self, x):
        out, hidden1 = self.lstm0(x)
        out, _ = self.lstm1(out, hidden1)
        out = self.fc0(out)
        return out

model = model_lstm().to(device)
model

Output:

model_lstm(
  (lstm0): LSTM(13, 200, batch_first=True)
  (lstm1): LSTM(200, 200, batch_first=True)
  (fc0): Linear(in_features=200, out_features=2, bias=True)
)
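One subtlety: train_x has shape [804, 13] with no explicit time dimension. When a 2-D tensor is passed to an nn.LSTM, PyTorch treats it as a single unbatched sequence, so the model still produces one 2-class logit pair per row. A standalone shape check of that flow (this re-creates the three layers for tracing only; it is not the author's code):

```python
import torch
import torch.nn as nn

# Re-create the three layers above purely to trace tensor shapes.
lstm0 = nn.LSTM(input_size=13, hidden_size=200, num_layers=1, batch_first=True)
lstm1 = nn.LSTM(input_size=200, hidden_size=200, num_layers=1, batch_first=True)
fc0 = nn.Linear(200, 2)

x = torch.randn(64, 13)       # one batch: 64 samples, 13 features, no time axis
out, hidden1 = lstm0(x)       # 2-D input -> treated as one unbatched sequence
out, _ = lstm1(out, hidden1)  # lstm0's final hidden state seeds lstm1, as in forward()
out = fc0(out)
print(out.shape)  # torch.Size([64, 2])
```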

3. Training the Model

3.1 Defining the Training Function

# Training loop
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)   # size of the training set
    num_batches = len(dataloader)    # number of batches (ceil(size / batch_size))
    train_loss, train_acc = 0, 0     # running loss and accuracy

    for x, y in dataloader:          # fetch a batch of samples and labels
        x, y = x.to(device), y.to(device)

        # Compute the prediction error
        pred = model(x)              # network output
        loss = loss_fn(pred, y)      # loss between prediction and ground truth

        # Backpropagation
        optimizer.zero_grad()        # reset gradients
        loss.backward()              # backpropagate
        optimizer.step()             # update parameters

        # Accumulate accuracy and loss
        train_acc += (pred.argmax(1) == y).type(torch.float).sum().item()
        train_loss += loss.item()

    train_acc /= size
    train_loss /= num_batches
    return train_acc, train_loss

3.2 Defining the Test Function

def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)   # size of the test set
    num_batches = len(dataloader)    # number of batches (ceil(size / batch_size))
    test_loss, test_acc = 0, 0

    # Disable gradient tracking during evaluation to save memory
    with torch.no_grad():
        for imgs, target in dataloader:
            imgs, target = imgs.to(device), target.to(device)

            # Compute the loss
            target_pred = model(imgs)
            loss = loss_fn(target_pred, target)

            test_loss += loss.item()
            test_acc += (target_pred.argmax(1) == target).type(torch.float).sum().item()

    test_acc /= size
    test_loss /= num_batches
    return test_acc, test_loss
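The accuracy bookkeeping in both functions is the same idea: take the argmax over the two class logits, compare with the labels, and divide the number of matches by the dataset size. The same computation with plain numpy and made-up logits:

```python
import numpy as np

# Hypothetical logits for 4 samples over 2 classes, with their true labels.
logits = np.array([[0.2, 0.8], [0.9, 0.1], [0.4, 0.6], [0.7, 0.3]])
labels = np.array([1, 0, 0, 1])

pred = logits.argmax(axis=1)       # predicted class per sample -> [1 0 1 0]
correct = (pred == labels).sum()   # only samples 0 and 1 match
accuracy = correct / len(labels)
print(accuracy)  # 0.5
```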

3.3 Training the Model

loss_fn = nn.CrossEntropyLoss()  # loss function
learn_rate = 1e-4                # learning rate
opt = torch.optim.Adam(model.parameters(), lr=learn_rate)
epochs = 50

train_loss = []
train_acc = []
test_loss = []
test_acc = []

for epoch in range(epochs):
    model.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model, loss_fn, opt)

    model.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_fn)

    train_acc.append(epoch_train_acc)
    train_loss.append(epoch_train_loss)
    test_acc.append(epoch_test_acc)
    test_loss.append(epoch_test_loss)

    # Read the current learning rate from the optimizer
    lr = opt.state_dict()['param_groups'][0]['lr']

    template = ('Epoch:{:2d},Train_acc:{:.1f}%,Train_loss:{:.3f},'
                'Test_acc:{:.1f}%,Test_loss:{:.3f},lr:{:.2E}')
    print(template.format(epoch + 1, epoch_train_acc * 100, epoch_train_loss,
                          epoch_test_acc * 100, epoch_test_loss, lr))

print("=" * 20, 'Done', "=" * 20)

Output:

Epoch: 1,Train_acc:49.6%,Train_loss:0.696,Test_acc:56.4%,Test_loss:0.697,lr:1.00E-04
Epoch: 2,Train_acc:56.1%,Train_loss:0.687,Test_acc:53.5%,Test_loss:0.709,lr:1.00E-04
Epoch: 3,Train_acc:56.2%,Train_loss:0.685,Test_acc:53.0%,Test_loss:0.713,lr:1.00E-04
Epoch: 4,Train_acc:56.2%,Train_loss:0.683,Test_acc:53.0%,Test_loss:0.716,lr:1.00E-04
Epoch: 5,Train_acc:56.2%,Train_loss:0.681,Test_acc:53.0%,Test_loss:0.718,lr:1.00E-04
Epoch: 6,Train_acc:56.2%,Train_loss:0.679,Test_acc:53.0%,Test_loss:0.720,lr:1.00E-04
Epoch: 7,Train_acc:56.2%,Train_loss:0.677,Test_acc:53.0%,Test_loss:0.721,lr:1.00E-04
Epoch: 8,Train_acc:56.2%,Train_loss:0.675,Test_acc:53.0%,Test_loss:0.720,lr:1.00E-04
Epoch: 9,Train_acc:56.2%,Train_loss:0.673,Test_acc:53.0%,Test_loss:0.720,lr:1.00E-04
Epoch:10,Train_acc:56.5%,Train_loss:0.671,Test_acc:52.5%,Test_loss:0.719,lr:1.00E-04
Epoch:11,Train_acc:57.0%,Train_loss:0.667,Test_acc:52.5%,Test_loss:0.718,lr:1.00E-04
Epoch:12,Train_acc:57.0%,Train_loss:0.663,Test_acc:53.0%,Test_loss:0.715,lr:1.00E-04
Epoch:13,Train_acc:57.7%,Train_loss:0.658,Test_acc:55.0%,Test_loss:0.711,lr:1.00E-04
Epoch:14,Train_acc:59.1%,Train_loss:0.652,Test_acc:56.4%,Test_loss:0.707,lr:1.00E-04
Epoch:15,Train_acc:60.6%,Train_loss:0.645,Test_acc:57.9%,Test_loss:0.701,lr:1.00E-04
Epoch:16,Train_acc:64.6%,Train_loss:0.637,Test_acc:59.4%,Test_loss:0.696,lr:1.00E-04
Epoch:17,Train_acc:66.5%,Train_loss:0.627,Test_acc:57.9%,Test_loss:0.689,lr:1.00E-04
Epoch:18,Train_acc:67.7%,Train_loss:0.617,Test_acc:58.9%,Test_loss:0.685,lr:1.00E-04
Epoch:19,Train_acc:68.8%,Train_loss:0.606,Test_acc:58.9%,Test_loss:0.680,lr:1.00E-04
Epoch:20,Train_acc:71.9%,Train_loss:0.593,Test_acc:59.4%,Test_loss:0.674,lr:1.00E-04
Epoch:21,Train_acc:72.3%,Train_loss:0.580,Test_acc:59.9%,Test_loss:0.671,lr:1.00E-04
Epoch:22,Train_acc:72.5%,Train_loss:0.566,Test_acc:60.4%,Test_loss:0.671,lr:1.00E-04
Epoch:23,Train_acc:73.4%,Train_loss:0.552,Test_acc:60.4%,Test_loss:0.673,lr:1.00E-04
Epoch:24,Train_acc:73.6%,Train_loss:0.538,Test_acc:60.9%,Test_loss:0.670,lr:1.00E-04
Epoch:25,Train_acc:75.0%,Train_loss:0.525,Test_acc:60.9%,Test_loss:0.670,lr:1.00E-04
Epoch:26,Train_acc:76.6%,Train_loss:0.512,Test_acc:62.4%,Test_loss:0.666,lr:1.00E-04
Epoch:27,Train_acc:76.9%,Train_loss:0.500,Test_acc:61.9%,Test_loss:0.671,lr:1.00E-04
Epoch:28,Train_acc:78.1%,Train_loss:0.488,Test_acc:62.4%,Test_loss:0.665,lr:1.00E-04
Epoch:29,Train_acc:78.4%,Train_loss:0.477,Test_acc:60.9%,Test_loss:0.665,lr:1.00E-04
Epoch:30,Train_acc:80.5%,Train_loss:0.465,Test_acc:63.9%,Test_loss:0.665,lr:1.00E-04
Epoch:31,Train_acc:79.0%,Train_loss:0.455,Test_acc:61.4%,Test_loss:0.664,lr:1.00E-04
Epoch:32,Train_acc:81.5%,Train_loss:0.442,Test_acc:61.9%,Test_loss:0.671,lr:1.00E-04
Epoch:33,Train_acc:79.7%,Train_loss:0.436,Test_acc:61.9%,Test_loss:0.664,lr:1.00E-04
Epoch:34,Train_acc:81.5%,Train_loss:0.425,Test_acc:62.4%,Test_loss:0.676,lr:1.00E-04
Epoch:35,Train_acc:81.3%,Train_loss:0.414,Test_acc:62.4%,Test_loss:0.673,lr:1.00E-04
Epoch:36,Train_acc:82.0%,Train_loss:0.413,Test_acc:64.9%,Test_loss:0.648,lr:1.00E-04
Epoch:37,Train_acc:81.8%,Train_loss:0.401,Test_acc:61.4%,Test_loss:0.707,lr:1.00E-04
Epoch:38,Train_acc:82.5%,Train_loss:0.395,Test_acc:64.4%,Test_loss:0.675,lr:1.00E-04
Epoch:39,Train_acc:83.2%,Train_loss:0.379,Test_acc:61.9%,Test_loss:0.684,lr:1.00E-04
Epoch:40,Train_acc:82.3%,Train_loss:0.384,Test_acc:63.9%,Test_loss:0.668,lr:1.00E-04
Epoch:41,Train_acc:84.2%,Train_loss:0.367,Test_acc:62.4%,Test_loss:0.678,lr:1.00E-04
Epoch:42,Train_acc:82.7%,Train_loss:0.373,Test_acc:62.4%,Test_loss:0.699,lr:1.00E-04
Epoch:43,Train_acc:83.2%,Train_loss:0.360,Test_acc:66.3%,Test_loss:0.660,lr:1.00E-04
Epoch:44,Train_acc:84.2%,Train_loss:0.366,Test_acc:64.9%,Test_loss:0.707,lr:1.00E-04
Epoch:45,Train_acc:83.6%,Train_loss:0.355,Test_acc:66.8%,Test_loss:0.704,lr:1.00E-04
Epoch:46,Train_acc:83.5%,Train_loss:0.369,Test_acc:66.8%,Test_loss:0.701,lr:1.00E-04
Epoch:47,Train_acc:84.5%,Train_loss:0.367,Test_acc:65.8%,Test_loss:0.698,lr:1.00E-04
Epoch:48,Train_acc:85.6%,Train_loss:0.345,Test_acc:64.4%,Test_loss:0.685,lr:1.00E-04
Epoch:49,Train_acc:87.1%,Train_loss:0.326,Test_acc:62.9%,Test_loss:0.700,lr:1.00E-04
Epoch:50,Train_acc:87.7%,Train_loss:0.315,Test_acc:65.3%,Test_loss:0.709,lr:1.00E-04
==================== Done ====================

4. Evaluating the Model

4.1 Loss and Accuracy Curves

import matplotlib.pyplot as plt
import warnings

warnings.filterwarnings("ignore")              # suppress warnings
plt.rcParams['font.sans-serif'] = ['SimHei']   # display Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False     # display minus signs correctly
plt.rcParams['figure.dpi'] = 300               # figure resolution

epochs_range = range(epochs)

plt.figure(figsize=(12, 3))

plt.subplot(1, 2, 1)
plt.plot(epochs_range, train_acc, label='Training Accuracy')
plt.plot(epochs_range, test_acc, label='Test Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_loss, label='Training Loss')
plt.plot(epochs_range, test_loss, label='Test Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

Output:

(figure: training and test accuracy/loss curves)

4.2 Tuning the Model

As required, the learning rate was raised from 1e-4 to 1e-2. Rerunning the training gives the following results:

Epoch: 1,Train_acc:55.1%,Train_loss:1.044,Test_acc:48.5%,Test_loss:0.723,lr:1.00E-02
Epoch: 2,Train_acc:55.0%,Train_loss:0.695,Test_acc:51.0%,Test_loss:0.710,lr:1.00E-02
Epoch: 3,Train_acc:53.4%,Train_loss:0.687,Test_acc:68.3%,Test_loss:0.674,lr:1.00E-02
Epoch: 4,Train_acc:59.7%,Train_loss:0.664,Test_acc:66.3%,Test_loss:0.647,lr:1.00E-02
Epoch: 5,Train_acc:61.1%,Train_loss:0.653,Test_acc:61.9%,Test_loss:0.680,lr:1.00E-02
Epoch: 6,Train_acc:64.8%,Train_loss:0.598,Test_acc:58.9%,Test_loss:0.649,lr:1.00E-02
Epoch: 7,Train_acc:63.4%,Train_loss:0.618,Test_acc:68.3%,Test_loss:0.599,lr:1.00E-02
Epoch: 8,Train_acc:68.4%,Train_loss:0.585,Test_acc:55.9%,Test_loss:0.678,lr:1.00E-02
Epoch: 9,Train_acc:68.4%,Train_loss:0.547,Test_acc:59.9%,Test_loss:0.620,lr:1.00E-02
Epoch:10,Train_acc:64.8%,Train_loss:0.613,Test_acc:54.0%,Test_loss:0.696,lr:1.00E-02
Epoch:11,Train_acc:69.0%,Train_loss:0.583,Test_acc:62.4%,Test_loss:0.611,lr:1.00E-02
Epoch:12,Train_acc:69.2%,Train_loss:0.556,Test_acc:54.0%,Test_loss:0.690,lr:1.00E-02
Epoch:13,Train_acc:68.7%,Train_loss:0.581,Test_acc:54.0%,Test_loss:0.652,lr:1.00E-02
Epoch:14,Train_acc:66.7%,Train_loss:0.599,Test_acc:70.3%,Test_loss:0.624,lr:1.00E-02
Epoch:15,Train_acc:72.5%,Train_loss:0.562,Test_acc:73.3%,Test_loss:0.553,lr:1.00E-02
Epoch:16,Train_acc:73.4%,Train_loss:0.543,Test_acc:73.8%,Test_loss:0.537,lr:1.00E-02
Epoch:17,Train_acc:75.6%,Train_loss:0.513,Test_acc:74.8%,Test_loss:0.519,lr:1.00E-02
Epoch:18,Train_acc:76.4%,Train_loss:0.493,Test_acc:74.8%,Test_loss:0.559,lr:1.00E-02
Epoch:19,Train_acc:77.6%,Train_loss:0.475,Test_acc:74.3%,Test_loss:0.551,lr:1.00E-02
Epoch:20,Train_acc:75.9%,Train_loss:0.526,Test_acc:57.9%,Test_loss:0.683,lr:1.00E-02
Epoch:21,Train_acc:73.0%,Train_loss:0.520,Test_acc:75.2%,Test_loss:0.527,lr:1.00E-02
Epoch:22,Train_acc:77.7%,Train_loss:0.490,Test_acc:71.8%,Test_loss:0.597,lr:1.00E-02
Epoch:23,Train_acc:76.9%,Train_loss:0.521,Test_acc:61.4%,Test_loss:0.655,lr:1.00E-02
Epoch:24,Train_acc:75.5%,Train_loss:0.522,Test_acc:72.3%,Test_loss:0.574,lr:1.00E-02
Epoch:25,Train_acc:77.4%,Train_loss:0.503,Test_acc:72.8%,Test_loss:0.540,lr:1.00E-02
Epoch:26,Train_acc:77.9%,Train_loss:0.483,Test_acc:73.3%,Test_loss:0.549,lr:1.00E-02
Epoch:27,Train_acc:78.6%,Train_loss:0.482,Test_acc:69.8%,Test_loss:0.592,lr:1.00E-02
Epoch:28,Train_acc:77.9%,Train_loss:0.510,Test_acc:70.8%,Test_loss:0.574,lr:1.00E-02
Epoch:29,Train_acc:78.1%,Train_loss:0.488,Test_acc:75.2%,Test_loss:0.551,lr:1.00E-02
Epoch:30,Train_acc:79.4%,Train_loss:0.471,Test_acc:74.8%,Test_loss:0.537,lr:1.00E-02
Epoch:31,Train_acc:79.0%,Train_loss:0.477,Test_acc:71.3%,Test_loss:0.573,lr:1.00E-02
Epoch:32,Train_acc:78.7%,Train_loss:0.489,Test_acc:74.3%,Test_loss:0.541,lr:1.00E-02
Epoch:33,Train_acc:78.7%,Train_loss:0.477,Test_acc:74.8%,Test_loss:0.546,lr:1.00E-02
Epoch:34,Train_acc:78.2%,Train_loss:0.469,Test_acc:73.3%,Test_loss:0.555,lr:1.00E-02
Epoch:35,Train_acc:79.0%,Train_loss:0.484,Test_acc:74.3%,Test_loss:0.543,lr:1.00E-02
Epoch:36,Train_acc:79.1%,Train_loss:0.458,Test_acc:74.3%,Test_loss:0.561,lr:1.00E-02
Epoch:37,Train_acc:79.0%,Train_loss:0.457,Test_acc:73.8%,Test_loss:0.570,lr:1.00E-02
Epoch:38,Train_acc:79.1%,Train_loss:0.479,Test_acc:73.3%,Test_loss:0.590,lr:1.00E-02
Epoch:39,Train_acc:78.5%,Train_loss:0.479,Test_acc:74.8%,Test_loss:0.542,lr:1.00E-02
Epoch:40,Train_acc:79.9%,Train_loss:0.452,Test_acc:74.3%,Test_loss:0.561,lr:1.00E-02
Epoch:41,Train_acc:79.9%,Train_loss:0.453,Test_acc:72.8%,Test_loss:0.586,lr:1.00E-02
Epoch:42,Train_acc:78.6%,Train_loss:0.483,Test_acc:76.2%,Test_loss:0.620,lr:1.00E-02
Epoch:43,Train_acc:78.7%,Train_loss:0.469,Test_acc:74.8%,Test_loss:0.566,lr:1.00E-02
Epoch:44,Train_acc:79.0%,Train_loss:0.462,Test_acc:73.8%,Test_loss:0.587,lr:1.00E-02
Epoch:45,Train_acc:79.9%,Train_loss:0.449,Test_acc:72.8%,Test_loss:0.577,lr:1.00E-02
Epoch:46,Train_acc:80.1%,Train_loss:0.450,Test_acc:72.8%,Test_loss:0.591,lr:1.00E-02
Epoch:47,Train_acc:79.6%,Train_loss:0.454,Test_acc:74.3%,Test_loss:0.566,lr:1.00E-02
Epoch:48,Train_acc:79.5%,Train_loss:0.448,Test_acc:75.7%,Test_loss:0.595,lr:1.00E-02
Epoch:49,Train_acc:79.5%,Train_loss:0.446,Test_acc:73.8%,Test_loss:0.583,lr:1.00E-02
Epoch:50,Train_acc:80.2%,Train_loss:0.450,Test_acc:75.7%,Test_loss:0.599,lr:1.00E-02
==================== Done ====================

The new Loss and Accuracy curves are shown below:

5. Takeaways

By raising the learning rate, this project lifted the test accuracy above 70%. Although the accuracy fluctuated noticeably mid-training, the final result improved markedly and the loss dropped substantially.
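One caveat when reporting such a noisy test curve: the final epoch above (75.7%) is not the best one (76.2% at epoch 42). Tracking the best test accuracy across epochs, rather than taking the last value, is a common convention; a minimal sketch (the short accuracy list is made up for illustration):

```python
# Hypothetical per-epoch test accuracies from a run like the one above.
test_acc = [0.485, 0.510, 0.683, 0.733, 0.762, 0.757]

best_epoch = max(range(len(test_acc)), key=lambda i: test_acc[i])
print(f"best epoch: {best_epoch + 1}, acc: {test_acc[best_epoch]:.1%}")
# -> best epoch: 5, acc: 76.2%
```

In practice this is usually paired with saving the model weights whenever a new best test accuracy appears, so the reported checkpoint can be reloaded later.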
