How Does Data Augmentation Affect Differential Privacy in Deep Learning?

Deep learning commonly relies on data augmentation as an efficient technique for generating new training examples from existing data to improve model robustness and generalization. Differential privacy is a technique for preserving the privacy of individual data points while releasing statistical information about a dataset. In this work, we study the relationship between data augmentation and differential privacy in deep learning for image and text classification. We find that although data augmentation degrades the performance of models trained with differential privacy, it improves their robustness against membership inference attacks.
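To make the studied setup concrete, the sketch below shows one way data augmentation and differentially private training can be combined: standard torchvision augmentations feeding a small classifier trained with DP-SGD via Opacus. This is an illustrative sketch only, not the exact configuration used in this work; the dataset (CIFAR-10), model architecture, and hyperparameters such as noise_multiplier and max_grad_norm are assumptions.

```python
# Illustrative sketch: data augmentation + DP-SGD training with Opacus.
# Dataset, architecture, and hyperparameters are assumed, not taken from the paper.
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from opacus import PrivacyEngine

# Data augmentation: random crops and horizontal flips (assumed choices).
augment = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
train_set = datasets.CIFAR10("data", train=True, download=True, transform=augment)
train_loader = DataLoader(train_set, batch_size=256, shuffle=True)

# A small convolutional classifier (illustrative architecture, Opacus-compatible layers only).
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(64 * 8 * 8, 10),
)
optimizer = optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

# Wrap model, optimizer, and loader for DP-SGD
# (per-sample gradient clipping plus Gaussian noise).
privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=1.0,   # assumed noise level
    max_grad_norm=1.0,      # assumed clipping bound
)

for epoch in range(1):  # single epoch for brevity
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    epsilon = privacy_engine.get_epsilon(delta=1e-5)
    print(f"epoch {epoch}: loss {loss.item():.3f}, eps {epsilon:.2f}")
```

A membership inference evaluation would then query this trained model on training and held-out examples and compare confidence scores; that evaluation step is omitted here.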

The code is available on GitHub in the repository "How Does Data Augmentation Affect Differential Privacy in Deep Learning?".
