
MaxUp: A Simple Way to Improve Generalization of Neural Network Training

Uploaded: 2021-01-22 15:22:48 · .PDF file, 214.98 KB · Popularity: 7



We propose \emph{MaxUp}, an embarrassingly simple, highly effective technique for improving the generalization performance of machine learning models, especially deep neural networks. The idea is to generate a set of augmented data with some random perturbations or transforms and minimize the maximum, or worst-case, loss over the augmented data. By doing so, we implicitly introduce a smoothness or robustness regularization against the random perturbations, and hence improve the generalization performance. For example, in the case of Gaussian perturbation, \emph{MaxUp} is asymptotically equivalent to using the gradient norm of the loss as a penalty to encourage smoothness. We test \emph{MaxUp} on a range of tasks, including image classification, language modeling, and adversarial certification, on which \emph{MaxUp} consistently outperforms the existing best baseline methods, without introducing substantial computational overhead. In particular, we improve ImageNet classification from the state-of-the-art top-1 accuracy $85.5\%$ without extra data to $85.8\%$. Code will be released soon.
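The min-max objective described in the abstract is easy to state in code. Below is a minimal PyTorch-style sketch, not the authors' released implementation; the names `maxup_loss`, `augment`, and `m` are illustrative assumptions. For each example we draw `m` augmented copies, evaluate the per-sample loss on each, and minimize only the worst-case copy, averaged over the batch.

```python
import torch
import torch.nn.functional as F

def maxup_loss(model, x, y, augment, m=4):
    """MaxUp objective: average over the batch of the worst-case
    loss across m random augmentations of each example."""
    batch = x.size(0)
    # Stack m independently augmented views along the batch dimension.
    x_aug = torch.cat([augment(x) for _ in range(m)], dim=0)  # (m*batch, ...)
    y_rep = y.repeat(m)                                       # (m*batch,)
    # Per-sample losses, regrouped so row i holds augmentation i.
    losses = F.cross_entropy(model(x_aug), y_rep, reduction="none").view(m, batch)
    # Take the maximum (worst-case) loss over the m copies, then the batch mean.
    return losses.max(dim=0).values.mean()

# Example usage: MaxUp with additive Gaussian perturbations, sigma = 0.1.
# augment = lambda x: x + 0.1 * torch.randn_like(x)
# loss = maxup_loss(model, images, labels, augment, m=4)
# loss.backward()
```

Setting m = 1 recovers ordinary training on augmented data; larger m makes the objective a harder worst-case criterion at the cost of m forward passes per example.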
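For the Gaussian case mentioned in the abstract, the asymptotic equivalence to a gradient-norm penalty can be written out as follows. This is a hedged reconstruction of the claim; the constant $c_m$ and the notation $L(\theta; x)$ are assumptions, not taken verbatim from the paper.

```latex
% With i.i.d. perturbations \xi_1,\dots,\xi_m \sim \mathcal{N}(0,\sigma^2 I),
% the expected MaxUp loss behaves, for small \sigma, like a penalized loss:
\mathbb{E}\Big[\max_{1\le i\le m} L(\theta;\, x+\xi_i)\Big]
  = L(\theta;\, x) + c_m\,\sigma\,\big\lVert \nabla_x L(\theta;\, x) \big\rVert + O(\sigma^2)
```

Minimizing the left-hand side therefore implicitly penalizes the input-gradient norm of the loss, which is the smoothness regularization the abstract refers to.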
