
Reference: https://blog.csdn.net/weixin_44966641/article/details/121872773

Single-GPU training code, launched with: python train.py

import torch
import torch.nn as nn
from torch.optim import SGD
from torch.utils.data import Dataset, DataLoader
import os
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--gpu_id', type=str, default='0,2')
parser.add_argument('--batchSize', type=int, default=32)
parser.add_argument('--epochs', type=int, default=500)
parser.add_argument('--dataset-size', type=int, default=128)
parser.add_argument('--num-classes', type=int, default=10)
config = parser.parse_args()

os.environ['CUDA_VISIBLE_DEVICES'] = config.gpu_id

# A random dataset: random images with all-zero labels.
class RandomDataset(Dataset):
    def __init__(self, dataset_size, image_size=32):
        images = torch.randn(dataset_size, 3, image_size, image_size)
        labels = torch.zeros(dataset_size, dtype=torch.long)
        self.data = list(zip(images, labels))

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return len(self.data)

# A simple model: one conv layer plus one fully connected layer.
# The original applied nn.Softmax(dim=1) to the output, but
# nn.CrossEntropyLoss already applies log-softmax internally,
# so the model should return raw logits instead.
class Model(nn.Module):
    def __init__(self, num_classes):
        super(Model, self).__init__()
        self.conv2d = nn.Conv2d(3, 16, 3)
        self.fc = nn.Linear(30 * 30 * 16, num_classes)

    def forward(self, x):
        batch_size = x.shape[0]
        x = self.conv2d(x)
        x = x.reshape(batch_size, -1)
        return self.fc(x)

# Instantiate the model, dataset, loader, and optimizer.
model = Model(config.num_classes)
dataset = RandomDataset(config.dataset_size)
loader = DataLoader(dataset, batch_size=config.batchSize, shuffle=True)
loss_func = nn.CrossEntropyLoss()
if torch.cuda.is_available():
    model.cuda()
optimizer = SGD(model.parameters(), lr=0.1, momentum=0.9)

# With DP, a single line would suffice:
# if torch.cuda.device_count() > 1: model = nn.DataParallel(model)
# We skip DP here and use DDP instead (see below).

# Training loop.
for epoch in range(config.epochs):
    for step, (images, labels) in enumerate(loader):
        if torch.cuda.is_available():
            images = images.cuda()
            labels = labels.cuda()
        preds = model(images)
        loss = loss_func(preds, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print(f'Step: {step}, Loss: {loss.item()}')
    print(f'Epoch {epoch} Finished!')
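
The commented-out DataParallel line above really is all DP requires. For comparison, here is a minimal sketch of what the DP variant would look like, reusing the model, loader, loss_func, optimizer, and config defined in the script above (this sketch is illustrative and not part of the original post):

# Minimal DataParallel sketch (illustrative; assumes the objects
# defined in the single-GPU script above).
if torch.cuda.is_available():
    model.cuda()
    if torch.cuda.device_count() > 1:
        # DataParallel replicates the model on every visible GPU on each
        # forward pass, scatters the batch, and gathers outputs on GPU 0.
        model = nn.DataParallel(model)

for epoch in range(config.epochs):
    for step, (images, labels) in enumerate(loader):
        preds = model(images.cuda())   # the batch is split across the GPUs
        loss = loss_func(preds, labels.cuda())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

DP runs in a single process (launched with a plain python train.py), which is why one line suffices, but it suffers from GIL contention and an unbalanced load on GPU 0; DDP avoids both, which is why it is used below.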

Multi-GPU DDP training code (the launch commands follow the listing):

import torch
import torch.nn as nn
from torch.optim import SGD
from torch.utils.data import Dataset, DataLoader
from torch.utils.data.distributed import DistributedSampler
import os
import argparse

# The same random dataset as in the single-GPU version.
class RandomDataset(Dataset):
    def __init__(self, dataset_size, image_size=32):
        images = torch.randn(dataset_size, 3, image_size, image_size)
        labels = torch.zeros(dataset_size, dtype=torch.long)
        self.data = list(zip(images, labels))

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return len(self.data)

# The same model; it returns raw logits (see the single-GPU version).
class Model(nn.Module):
    def __init__(self, num_classes):
        super(Model, self).__init__()
        self.conv2d = nn.Conv2d(3, 16, 3)
        self.fc = nn.Linear(30 * 30 * 16, num_classes)

    def forward(self, x):
        batch_size = x.shape[0]
        x = self.conv2d(x)
        x = x.reshape(batch_size, -1)
        return self.fc(x)

parser = argparse.ArgumentParser()
parser.add_argument('--gpu_id', type=str, default='0,1,2,3')
parser.add_argument('--batchSize', type=int, default=64)
parser.add_argument('--epochs', type=int, default=500)
parser.add_argument('--dataset-size', type=int, default=1024)
parser.add_argument('--num-classes', type=int, default=10)
config = parser.parse_args()

os.environ['CUDA_VISIBLE_DEVICES'] = config.gpu_id

torch.distributed.init_process_group(backend='nccl', init_method='env://')
# torchrun / torch.distributed.run exports a LOCAL_RANK variable for each
# process. The original used torch.distributed.get_rank(), which is the
# global rank and only coincides with the local rank on a single node.
local_rank = int(os.environ['LOCAL_RANK'])
torch.cuda.set_device(local_rank)
device = torch.device('cuda', local_rank)

# Instantiate the model, dataset, and loader.
model = Model(config.num_classes)
dataset = RandomDataset(config.dataset_size)
sampler = DistributedSampler(dataset)  # automatically partitions the data across the GPUs
loader = DataLoader(dataset, batch_size=config.batchSize, sampler=sampler)
# loader = DataLoader(dataset, batch_size=config.batchSize, shuffle=True)  # shuffle and sampler are mutually exclusive
loss_func = nn.CrossEntropyLoss()
model.to(device)
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])
optimizer = SGD(model.parameters(), lr=0.1, momentum=0.9)

# Training loop.
for epoch in range(config.epochs):
    sampler.set_epoch(epoch)  # reshuffles each rank's partition every epoch
    for step, (images, labels) in enumerate(loader):
        images = images.to(device)
        labels = labels.to(device)
        preds = model(images)
        # print(f"data: {images.device}, model: {next(model.parameters()).device}")
        loss = loss_func(preds, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print(f'Step: {step}, Loss: {loss.item()}')
    print(f'Epoch {epoch} Finished!')
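
To observe the partitioning that DistributedSampler performs (the "automatically partitions the data" comment above), a small standalone script can be launched the same way as the training script. The file name check_shard.py and its contents are a hypothetical sketch, not part of the original post; with 1024 samples and 4 processes, each rank should report 256 samples:

# check_shard.py -- hypothetical helper; launch with:
#   torchrun --nproc_per_node=4 check_shard.py
import os
import torch
import torch.distributed as dist
from torch.utils.data import TensorDataset, DataLoader
from torch.utils.data.distributed import DistributedSampler

dist.init_process_group(backend='nccl', init_method='env://')
torch.cuda.set_device(int(os.environ['LOCAL_RANK']))

dataset = TensorDataset(torch.arange(1024))
sampler = DistributedSampler(dataset)
loader = DataLoader(dataset, batch_size=64, sampler=sampler)

# Each rank receives len(dataset) / world_size samples (padded so the
# shards divide evenly).
print(f'rank {dist.get_rank()}: {len(sampler)} samples, {len(loader)} batches')
dist.destroy_process_group()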

Launch command:

CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.run --nproc_per_node=4 --master_port 12355 train-tmp.py --batchSize 256 --epochs 5000

or

torchrun --nproc_per_node=2 train-tmp.py --batchSize 64 --epochs 500

The --nproc_per_node value must be less than or equal to the number of GPUs visible to the script (set via CUDA_VISIBLE_DEVICES on the command line, or overridden by the --gpu_id argument inside the code).
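
A quick way to check how many GPUs the launcher can actually use is to count the devices PyTorch sees under the current CUDA_VISIBLE_DEVICES; a small illustrative snippet:

import os
import torch

# CUDA_VISIBLE_DEVICES renumbers the visible GPUs from 0, so with
# CUDA_VISIBLE_DEVICES=0,1,2,3 device_count() returns 4 and the valid
# local ranks are 0..3; --nproc_per_node must not exceed this count.
visible = os.environ.get('CUDA_VISIBLE_DEVICES', '<all>')
print(f'CUDA_VISIBLE_DEVICES={visible}, device_count={torch.cuda.device_count()}')

Note that torchrun (and python -m torch.distributed.run) exports RANK, LOCAL_RANK, WORLD_SIZE, MASTER_ADDR, and MASTER_PORT to every worker process; the init_method='env://' in the script reads exactly these variables.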
