Adversarial Mask: Real-World Adversarial Attack Against Face Recognition Models
Deep learning-based facial recognition (FR) models have demonstrated
state-of-the-art performance in the past few years, even after the wearing of
protective medical face masks became commonplace during the COVID-19 pandemic.
Given the outstanding performance of these models, the machine learning
research community has shown increasing interest in challenging their
robustness. Initially, researchers presented adversarial attacks in the digital
domain, and later the attacks were transferred to the physical domain. However,
in many cases, attacks in the physical domain are conspicuous, requiring, for
example, the placement of a sticker on the face, and thus may raise suspicion
in real-world environments (e.g., airports). In this paper, we propose
Adversarial Mask, a physical universal adversarial perturbation (UAP) against
state-of-the-art FR models that is applied to face masks in the form of a
carefully crafted pattern. In our experiments, we examined the transferability
of our adversarial mask to a wide range of FR model architectures and datasets.
In addition, we validated the effectiveness of our adversarial mask in real-world
experiments by printing the adversarial pattern on a fabric medical face mask,
causing the FR system to identify only 3.34% of the participants wearing the
mask (compared to a minimum of 83.34% with other evaluated masks).
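To illustrate the general idea of optimizing a universal perturbation against an embedding-based FR model, the following is a minimal, hypothetical sketch, not the authors' implementation. It assumes a generic frozen embedding network (`embed_model`), batches of face images, and a fixed binary tensor (`mask_region`) marking the face-mask area, and it simply pushes embeddings of masked faces away from embeddings of the corresponding clean faces; the actual attack additionally handles rendering the pattern onto the mask's 3D geometry, printability, and transferability across architectures.

```python
import torch
import torch.nn.functional as F

def optimize_adversarial_pattern(embed_model, face_batches, mask_region,
                                 steps=500, lr=0.01):
    """Sketch of universal-perturbation optimization for a face-mask pattern.

    embed_model  -- assumed pretrained FR embedding network (kept frozen)
    face_batches -- iterable of image tensors of shape (B, 3, H, W)
    mask_region  -- binary tensor (1, 1, H, W); 1 where the face mask covers the face
    """
    embed_model.eval()
    # One universal pattern shared across all identities and images.
    pattern = torch.zeros(1, 3, mask_region.shape[-2], mask_region.shape[-1],
                          requires_grad=True)
    opt = torch.optim.Adam([pattern], lr=lr)

    for _ in range(steps):
        for faces in face_batches:
            with torch.no_grad():
                clean_emb = F.normalize(embed_model(faces), dim=1)

            # Paste the pattern (squashed into [0, 1]) only inside the mask region.
            adv_faces = faces * (1 - mask_region) + torch.sigmoid(pattern) * mask_region
            adv_emb = F.normalize(embed_model(adv_faces), dim=1)

            # Universal objective: minimize similarity between masked-face
            # embeddings and the clean-face embeddings of the same images.
            loss = F.cosine_similarity(adv_emb, clean_emb, dim=1).mean()

            opt.zero_grad()
            loss.backward()
            opt.step()

    return torch.sigmoid(pattern).detach()
```

In a physical deployment such as the one evaluated above, the optimized pattern would then be printed on a fabric face mask and worn in front of the FR system's camera.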