When training a PyTorch Lightning model in a Jupyter notebook, the console log output is a mess:
Epoch 0: 100%|█████████▉| 2315/2318 [02:05<00:00, 18.41it/s, loss=1.69, v_num=26, acc=0.562]
Validating: 0it [00:00, ?it/s]
Validating: 0%| | 0/1 [00:00<?, ?it/s]
Epoch 0: 100%|██████████| 2318/2318 [02:09<00:00, 17.84it/s, loss=1.72, v_num=26, acc=0.500, val_loss=1.570, val_acc=0.564]
Epoch 1: 100%|█████████▉| 2315/2318 [02:04<00:00, 18.63it/s, loss=1.56, v_num=26, acc=0.594, val_loss=1.570, val_acc=0.564]
Validating: 0it [00:00, ?it/s]
Validating: 0%| | 0/1 [00:00<?, ?it/s]
Epoch 1: 100%|██████████| 2318/2318 [02:08<00:00, 18.07it/s, loss=1.59, v_num=26, acc=0.528, val_loss=1.490, val_acc=0.583]
Epoch 2: 100%|█████████▉| 2315/2318 [02:01<00:00, 19.02it/s, loss=1.53, v_num=26, acc=0.617, val_loss=1.490, val_acc=0.583]
Validating: 0it [00:00, ?it/s]
Validating: 0%| | 0/1 [00:00<?, ?it/s]
Epoch 2: 100%|██████████| 2318/2318 [02:05<00:00, 18.42it/s, loss=1.57, v_num=26, acc=0.500, val_loss=1.460, val_acc=0.589]
The "correct" output for the same training run should look like:
Epoch 0: 100%|██████████| 2318/2318 [02:09<00:00, 17.84it/s, loss=1.72, v_num=26, acc=0.500, val_loss=1.570, val_acc=0.564]
Epoch 1: 100%|██████████| 2318/2318 [02:08<00:00, 18.07it/s, loss=1.59, v_num=26, acc=0.528, val_loss=1.490, val_acc=0.583]
Epoch 2: 100%|██████████| 2318/2318 [02:05<00:00, 18.42it/s, loss=1.57, v_num=26, acc=0.500, val_loss=1.460, val_acc=0.589]
Why are the epoch lines needlessly duplicated and split up like this? Also, I'm not sure what the Validating lines are for, since they don't appear to convey any information.
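For context, the run is launched from the notebook with a plain Trainer call along these lines (the actual call isn't shown above, so treat this as an assumed minimal setup; model, train_loader, and val_loader are placeholders):

import pytorch_lightning as pl

# Assumed setup: three epochs, default progress bar, one validation pass per epoch.
trainer = pl.Trainer(max_epochs=3)
trainer.fit(model, train_loader, val_loader)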
The model's training and validation steps are as follows:
def training_step(self, train_batch, batch_idx):
    x, y = train_batch
    y_hat = self.forward(x)
    loss = torch.nn.NLLLoss()(torch.log(y_hat), y.argmax(dim=1))
    acc = tm.functional.accuracy(y_hat.argmax(dim=1), y.argmax(dim=1))
    self.log("acc", acc, prog_bar=True)
    return loss

def validation_step(self, valid_batch, batch_idx):
    # Body truncated in the original question; presumably it mirrors
    # training_step and logs the val_loss/val_acc shown in the progress bar.
    x, y = valid_batch
    y_hat = self.forward(x)
    loss = torch.nn.NLLLoss()(torch.log(y_hat), y.argmax(dim=1))
    acc = tm.functional.accuracy(y_hat.argmax(dim=1), y.argmax(dim=1))
    self.log("val_loss", loss, prog_bar=True)
    self.log("val_acc", acc, prog_bar=True)
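(As an aside on the code itself: assuming y_hat is a softmax output, which the torch.log plus NLLLoss combination suggests, this loss is cross-entropy written by hand. A quick sketch of the equivalence, with made-up tensor values:)

import torch
import torch.nn.functional as F

logits = torch.randn(4, 5)           # hypothetical batch of 4 samples, 5 classes
target = torch.tensor([0, 2, 1, 4])  # hypothetical class indices
y_hat = torch.softmax(logits, dim=1)

# NLLLoss over log-probabilities equals cross_entropy over the raw logits.
nll = torch.nn.NLLLoss()(torch.log(y_hat), target)
ce = F.cross_entropy(logits, target)
assert torch.allclose(nll, ce)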