Imports
During the imports, add:
from torch.utils.tensorboard import SummaryWriter
Losses
At the point where the losses are computed, add:
tags = ["d_loss_val", "g_loss_val", "l1_loss_val", "mask_loss_val", "vgg_loss_val"]
tb_writer.add_scalar(tags[0], d_loss_val, idx)
tb_writer.add_scalar(tags[1], g_loss_val, idx)
tb_writer.add_scalar(tags[2], l1_loss_val, idx)
tb_writer.add_scalar(tags[3], mask_loss_val, idx)
tb_writer.add_scalar(tags[4], vgg_loss_val, idx)
(Example only) adjust the tags and values to match your own losses.
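The calls above can be sketched as a self-contained loop; the loss values here are dummies standing in for your real losses, and the tag names are only illustrative:

```python
from torch.utils.tensorboard import SummaryWriter

tb_writer = SummaryWriter()  # events land in ./runs/<timestamp> by default

tags = ["d_loss_val", "g_loss_val"]       # illustrative tag names
for idx in range(3):                      # stand-in for the real training loop
    d_loss_val = 1.0 / (idx + 1)          # dummy values; use your real losses
    g_loss_val = 2.0 / (idx + 1)
    tb_writer.add_scalar(tags[0], d_loss_val, idx)
    tb_writer.add_scalar(tags[1], g_loss_val, idx)

tb_writer.close()  # flush the event file to disk
```

Each `add_scalar(tag, value, step)` call appends one point to the curve named `tag`, so passing the iteration index as the step gives you one curve per loss.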
main
In main, add:
print('Start Tensorboard with "tensorboard --logdir=runs", view at http://localhost:6006/')
tb_writer = SummaryWriter()
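`SummaryWriter()` with no arguments writes under `./runs/`; when you train several runs, a `comment` suffix or an explicit `log_dir` keeps them apart. A small sketch (the suffix and directory names below are made up for illustration):

```python
from torch.utils.tensorboard import SummaryWriter

# Default: events go to ./runs/<datetime>_<hostname>
tb_writer = SummaryWriter()

# Optional: append a distinguishing suffix, or pin an exact directory
tb_writer_tag = SummaryWriter(comment="_batch3")     # ./runs/<datetime>_<hostname>_batch3
tb_writer_dir = SummaryWriter(log_dir="runs/exp01")  # exactly runs/exp01

for w in (tb_writer, tb_writer_tag, tb_writer_dir):
    w.close()
```

Pointing `tensorboard --logdir=runs` at the parent directory then shows every run side by side.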
Example screenshot:
Viewing
You can then open TensorBoard at http://localhost:6007/ (the port depends on how tensorboard was launched; the default is 6006, changed with --port).
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Alternatively, you can launch training in the background:
Example:
nohup python -u train_bac.py --batch 3 --ckpt pretrained/lizhen_full_python.pt ./My_train_input/FV_output/ > train03.log 2>&1 &
This saves the output to a log file, which you can also follow with tail -f train03.log.
====================================================================
(Personal study note) the tqdm iterator:
pbar = range(args.iter)
if get_rank() == 0:
    pbar = tqdm(pbar, initial=args.start_iter, dynamic_ncols=True, smoothing=0.01)

# --iter is the total number of training iterations:
parser.add_argument("--iter", type=int, default=800000, help="total training iterations")

# Printing the losses:
pbar.set_description(
    f"d: {d_loss_val:.4f}; g: {g_loss_val:.4f}; l1: {l1_loss_val:.4f}; "
    f"vgg: {vgg_loss_val:.4f}; mask: {mask_loss_val:.4f}"
)
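The same pattern runs on its own as below; `get_rank` here is a stand-in for the torch.distributed rank lookup in the real script, and the loss value is a dummy:

```python
from tqdm import tqdm

def get_rank():
    # Stand-in for torch.distributed.get_rank(); always rank 0 here
    return 0

iters = 5                        # in the real script: args.iter
pbar = range(iters)
if get_rank() == 0:              # only rank 0 draws the progress bar
    pbar = tqdm(pbar, initial=0, dynamic_ncols=True, smoothing=0.01)

for i in pbar:
    d_loss_val = 1.0 / (i + 1)   # placeholder loss
    if get_rank() == 0:
        pbar.set_description(f"d: {d_loss_val:.4f}")
```

Wrapping the bare `range` only on rank 0 means the other distributed workers iterate the plain range and print nothing, so the console shows a single progress bar.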