Rectifying Adversarial Examples Using Their Vulnerabilities


Deep neural network-based classifiers are prone to errors when processing adversarial examples (AEs). AEs are minimally perturbed inputs, imperceptible to humans, that pose significant risks to security-dependent applications. Hence, extensive research has been devoted to defense mechanisms that mitigate their threat. Most existing methods focus on discriminating AEs from benign samples based on input features, emphasizing AE detection without recovering the correct category of the sample before the attack. While some tasks may only require rejecting detected AEs, others, such as traffic sign recognition in autonomous driving, require identifying the correct category of the original input.

The objective of this study is to propose a method for rectifying AEs, that is, estimating the correct labels of their original inputs. Our method is based on re-attacking AEs to move them back across the decision boundary for accurate label prediction, effectively rectifying minimally perceptible AEs created with white-box attack methods. Challenges remain, however, in rectifying AEs produced by black-box attacks that lie at a distance from the boundary, or AEs misclassified into low-confidence categories by targeted attacks. By taking the straightforward approach of using only the AEs themselves as input, the proposed method handles diverse attacks while requiring neither parameter adjustments nor preliminary training. Results demonstrate that the proposed method exhibits consistent performance in rectifying AEs generated by various attack methods, including targeted and black-box attacks.

Moreover, it outperforms conventional rectification and input transformation methods in terms of stability against various attacks.
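To make the core idea concrete, here is a minimal sketch of re-attack-based rectification. It is not the paper's exact algorithm; the model handle `model`, the function name `rectify_by_reattack`, and the attack hyperparameters (`eps`, `step`, `max_iters`) are illustrative assumptions. The sketch applies an untargeted, PGD-style perturbation to a suspected AE and takes the first class it flips into as the estimate of the original label.

```python
# Minimal sketch (assumptions: a PyTorch classifier `model`, an input batch `x_adv`
# already suspected to be adversarial, and illustrative attack hyperparameters).
import torch
import torch.nn.functional as F

def rectify_by_reattack(model, x_adv, eps=8/255, step=1/255, max_iters=50):
    model.eval()
    y_adv = model(x_adv).argmax(dim=1)   # label currently assigned to the suspected AE
    x = x_adv.clone().detach()
    for _ in range(max_iters):
        x.requires_grad_(True)
        logits = model(x)
        # Untargeted step: increase the loss of the AE's current label,
        # pushing the sample back across the nearby decision boundary.
        loss = F.cross_entropy(logits, y_adv)
        grad, = torch.autograd.grad(loss, x)
        with torch.no_grad():
            x = x + step * grad.sign()
            # Keep the re-attack within an L-inf ball around the AE and in valid pixel range.
            x = torch.clamp(x_adv + torch.clamp(x - x_adv, -eps, eps), 0.0, 1.0)
        new_label = model(x).argmax(dim=1)
        if (new_label != y_adv).all():
            return new_label              # rectified estimate of the original label
    return y_adv                          # no label flip found within the budget
```

Because the procedure only re-attacks the input it is given, it needs no auxiliary training and no per-attack parameter tuning beyond the perturbation budget, which is in line with the simplicity the article attributes to the proposed method.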
