duler.CosineAnnealingLR(optimizer,
>>>     T_max=300)
>>> swa_start = 160
>>> swa_scheduler = SWALR(optimizer, swa_lr=0.05)
>>> for i in range(300):
>>>     for input, target in loader:
>>>         optimizer.zero_grad()
>>>         loss_fn(model(input), target).backward()
>>>         optimizer.step()
>>>     if i > swa_start:
>>>         swa_model.update_parameters(model)
>>>         swa_scheduler.step()
>>>     else:
>>>         scheduler.step()
>>>
>>> # Update bn statistics for the swa_model at the end
>>> torch.optim.swa_utils.update_bn(loader, swa_model)

You can also use custom averaging functions with the :attr:`avg_fn` or
:attr:`multi_avg_fn` parameters. If no averaging function is provided, the
default is to compute an equally weighted average of the weights (SWA).

Example:

>>> # xdoctest: +SKIP("undefined variables")
>>> # Compute exponential moving averages of the weights and buffers
>>> ema_model = torch.optim.swa_utils.AveragedModel(model,
>>>     multi_avg_fn=torch.optim.swa_utils.get_ema_multi_avg_fn(0.9),
>>>     use_buffers=True)

.. note::
    When using SWA/EMA with models containing Batch Normalization you may
    need to update the activation statistics for Batch Normalization.
    This can be done either by using :meth:`torch.optim.swa_utils.update_bn`
    or by setting :attr:`use_buffers` to `True`. The first approach updates
    the statistics in a post-training step by passing data through the model.
    The second does it during the parameter update phase by averaging all
    buffers. Empirical evidence has shown that updating the statistics in
    normalization layers increases accuracy, but you may wish to empirically
    test which approach yields the best results for your problem.

.. note::
    :attr:`avg_fn` and :attr:`multi_avg_fn` are not saved in the
    :meth:`state_dict` of the model.

.. note::
    When :meth:`update_parameters` is called for the first time (i.e.
    :attr:`n_averaged` is `0`) the parameters of `model` are copied to the
    parameters of :class:`AveragedModel`. For every subsequent call of
    :meth:`update_parameters` the function :attr:`avg_fn` is used to update
    the parameters.

.. _Averaging Weights Leads to Wider Optima and Better Generalization:
    https://arxiv.org/abs/1803.05407
.. _There Are Many Consistent Explanations of Unlabeled Data: Why You Should Average:
    https://arxiv.org/abs/1806.05594
.. _SWALP: Stochastic Weight Averaging in Low-Precision Training:
    https://arxiv.org/abs/1904.11943
.. _Stochastic Weight Averaging in Parallel: Large-Batch Training That Generalizes Well:
    https://arxiv.org/abs/2001.02312
.. _Polyak averaging:
    https://paperswithcode.com/method/polyak-averaging
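
As a supplement to the :attr:`avg_fn` discussion above, the following is a
minimal sketch of a hand-written averaging function, assuming the
three-argument callback form (current averaged parameter, new model
parameter, number of models averaged so far). The decay value `0.9` and the
surrounding training loop are purely illustrative, not recommended settings.

>>> # xdoctest: +SKIP("undefined variables")
>>> # Custom avg_fn implementing an exponential moving average by hand;
>>> # the 0.9/0.1 split is an illustrative decay choice.
>>> def ema_avg(averaged_model_parameter, model_parameter, num_averaged):
>>>     return 0.9 * averaged_model_parameter + 0.1 * model_parameter
>>> ema_model = torch.optim.swa_utils.AveragedModel(model, avg_fn=ema_avg)
>>> for input, target in loader:
>>>     optimizer.zero_grad()
>>>     loss_fn(model(input), target).backward()
>>>     optimizer.step()
>>>     # Update the running average after each optimizer step
>>>     ema_model.update_parameters(model)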