Adversarial Learning in Face Recognition – Two Sides of the Security Coin


Mayank Vatsa


Advances in deep learning have proliferated across multiple domains, including face recognition. Deep architectures with high expressive power and learning capacity have achieved very high accuracies on challenging face databases. However, they are essentially black-box methods, since it is not easy to mathematically formulate the functions learned within their many layers of representation. Realizing this, many researchers have started to design methods that exploit the drawbacks of deep learning based algorithms, questioning their robustness and exposing their weaknesses. Adversarial attacks on automated classification systems have, however, been an area of interest for a long time: in 2002, Ratha et al. proposed eleven points of attack on a biometric/face recognition system. For instance, an adversary can operate at the input/image level or the decision level and cause incorrect face recognition results. Research on adversarial learning for attacking face recognition systems has three key components: (i) creating adversarial images, (ii) detecting whether an image has been adversarially altered, and (iii) mitigating the effect of the adversarial perturbation process. These adversaries produce different kinds of effects on the input, and detecting them requires a combination of hand-crafted and learned features; for instance, some existing attacks can be detected using principal components, while some hand-crafted attacks can be detected using well-defined image processing operations. It is therefore important to detect adversarial perturbations and to mitigate their effects using an ensemble of defense algorithms.
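A widely used method for creating adversarial images is the fast gradient sign method (FGSM), which nudges every pixel by a small amount in the direction that increases the classifier's loss. The sketch below is a minimal, hypothetical illustration using a logistic-regression stand-in for a face recognizer on flattened images (the weights, image, and epsilon are made up for illustration); in a real attack the input gradient would come from a deep network via backpropagation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """One FGSM step: x_adv = x + eps * sign(d loss / d x).

    For logistic regression with cross-entropy loss, the gradient
    of the loss w.r.t. the input has the closed form (p - y) * w.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    x_adv = x + eps * np.sign(grad_x)
    return np.clip(x_adv, 0.0, 1.0)  # stay in the valid pixel range

rng = np.random.default_rng(0)
w = rng.normal(size=64)           # made-up "recognizer" weights
b = 0.0
x = rng.uniform(0, 1, size=64)    # stand-in face image (flattened)
y = 1.0                           # true identity label

x_adv = fgsm_attack(x, y, w, b, eps=0.1)
print("clean match score:", sigmoid(w @ x + b))
print("adversarial score:", sigmoid(w @ x_adv + b))
```

The perturbation is bounded by eps per pixel, so the adversarial image remains visually close to the original while the match score drops.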

While deep face recognition algorithms offer the advantage of high accuracy, they also pose a threat to the privacy of individuals. Facial attributes such as age, gender, and race can be predicted from one's profile or social media images. In recent research, Wang and Kosinski predicted "secrets of the heart" such as sexual orientation from face images; they reported 81% accuracy in differentiating between gay and heterosexual men from a single image, and 74% accuracy for women. Similarly, targeted advertising based on gender and age predicted from profile and social media photographs has been a topic of research for the last few years. These cases pose a serious question: "Can we anonymize certain attributes for privacy preservation and fool automated attribute prediction methods?" To answer this question, adversarial learning can be used as a privacy enabler for face images.
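The same gradient machinery that attacks a recognizer can anonymize an attribute: perturb the image just enough to flip an attribute classifier's decision while staying visually close to the original. The sketch below is a toy illustration against a hypothetical linear gender classifier (its weights and the image are made up); a real system would perturb against a deep attribute predictor instead.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def anonymize_attribute(x, w, b, step=0.05, max_iters=100):
    """Perturb x in small steps until a (hypothetical) linear
    attribute classifier flips its decision.

    For a linear model the gradient of the score w.r.t. x is just w,
    so we move against sign(w) to lower the score, or along it to
    raise the score, stopping as soon as the decision flips.
    """
    x_adv = x.copy()
    push_down = sigmoid(w @ x + b) >= 0.5   # which side are we on?
    for _ in range(max_iters):
        p = sigmoid(w @ x_adv + b)
        if (p < 0.5) == push_down:          # decision flipped: done
            break
        direction = -np.sign(w) if push_down else np.sign(w)
        x_adv = np.clip(x_adv + step * direction, 0.0, 1.0)
    return x_adv

rng = np.random.default_rng(1)
w = rng.normal(size=64)            # made-up attribute-classifier weights
b = 0.0
x = rng.uniform(0, 1, size=64)     # stand-in profile photo (flattened)

x_priv = anonymize_attribute(x, w, b)
print("original attribute score:  ", sigmoid(w @ x + b))
print("anonymized attribute score:", sigmoid(w @ x_priv + b))
```

Keeping the per-iteration step small trades off perturbation visibility against the number of iterations needed to cross the decision boundary.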

This tutorial will focus on three key ideas related to adversarial learning, namely perturbations, defenses, and the utilization of adversarial learning for improving learning and enabling privacy. It builds from the basics of neural networks, deep learning, and adversarial learning to new algorithms for defense, the use of adversarial learning for privacy preservation, and adversarial learning for data fine-tuning (a recently proposed approach to adapt a black-box deep learning model to new data distributions), and concludes with open research questions in this spectrum.

  1. Deep learning architectures
  2. What is an adversarial perturbation?
  3. How do adversarial perturbations affect deep learning based face recognition algorithms?
  4. A simple algorithm for creating adversarial perturbations
  5. Popular algorithms for adversarial pattern perturbations
  6. How to detect perturbations?
  7. Popular algorithms for adversarial perturbation detection in deep learning frameworks
  8. What is mitigation with respect to adversarial perturbations?
  9. Algorithms for mitigating the effect of perturbations
  10. Using adversarial perturbations for facial privacy preservation and data fine-tuning
  11. Future research ideas
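As noted above, some attacks can be detected using principal components: clean face images concentrate their energy in a low-dimensional subspace, so adversarial noise tends to inflate the reconstruction error after projecting onto the top principal components. The sketch below is a toy, hypothetical illustration with synthetic "face" vectors lying near an 8-dimensional subspace (all data and thresholds are made up); real detectors of this kind fit PCA on a gallery of genuine face images.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "clean face" data: samples near a low-dimensional subspace,
# mimicking the strong pixel correlations of real face images.
basis = rng.normal(size=(8, 64))
clean = rng.normal(size=(500, 8)) @ basis + 0.01 * rng.normal(size=(500, 64))

# Fit PCA on the clean data via SVD and keep the top components.
mean = clean.mean(axis=0)
_, _, vt = np.linalg.svd(clean - mean, full_matrices=False)
components = vt[:8]                      # top-8 principal directions

def residual(x):
    """Reconstruction error after projecting onto the top components."""
    centered = x - mean
    recon = (centered @ components.T) @ components
    return np.linalg.norm(centered - recon)

x_clean = rng.normal(size=8) @ basis                   # lies in the subspace
x_adv = x_clean + 0.3 * np.sign(rng.normal(size=64))   # FGSM-like noise

print("clean residual:      ", residual(x_clean))
print("adversarial residual:", residual(x_adv))
```

A threshold on this residual gives a simple detector; in practice it would be combined with learned detectors, since attacks crafted inside the principal subspace can evade it.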


Mayank Vatsa received the M.S. and Ph.D. degrees in computer science from West Virginia University, USA, in 2005 and 2008, respectively. He is currently the Head of the Infosys Center for Artificial Intelligence and an Associate Professor at IIIT-Delhi, India, and an Adjunct Associate Professor at West Virginia University, USA. He has authored more than 250 publications in refereed journals, book chapters, and conferences. He was recently conferred the prestigious Swarnajayanti Fellowship by the Government of India. He is also a recipient of the FAST Award from the Department of Science and Technology (India) and of several Best Paper and Best Poster Awards at international conferences. His areas of interest are biometrics, image processing, machine learning, and information fusion. He was the Vice President (Publications) of the IEEE Biometrics Council, where he started the IEEE Transactions on Biometrics, Behavior, and Identity Science (IEEE T-BIOM). He is a senior member of the IEEE and the Association for Computing Machinery. He is also an Area Chair of the Information Fusion (Elsevier) journal, and served as the PC Co-Chair of the 2013 International Conference on Biometrics and the 2014 International Joint Conference on Biometrics.