Securing deep learning models processing medical images against adversarial attacks
Abstract
Medical image analysis has been revolutionized by deep learning models, which enable highly accurate automated disease detection and diagnosis. Nevertheless, these models, including Convolutional Neural Networks (CNNs), Vision Transformers (ViTs), and Contrastive Learning (CL)-based frameworks, are highly susceptible to adversarial attacks, in which imperceptible modifications to input images cause inaccurate predictions. The objective of this research is to improve robustness while maintaining diagnostic accuracy by developing strong adversarial defense mechanisms specifically designed for medical imaging models.
First, we experimentally demonstrated that adversarial attacks developed for natural images also transfer to medical images, paralyzing the diagnostic process and threatening the robustness of the underlying CNN-based classifiers. We showed the effectiveness of well-known natural image adversarial attacks, namely the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), the Basic Iterative Method (BIM), and the Carlini and Wagner (CW) attack, on malaria cell images.
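Of the attacks listed, FGSM is the simplest: it perturbs each input pixel by a fixed budget in the direction of the loss gradient's sign. A minimal NumPy sketch on a toy binary logistic classifier (the weights, pixel values, and budget below are illustrative, not taken from the thesis experiments) might look like:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """One-step FGSM: move x by eps in the direction of the sign of the
    loss gradient, then clip back to the valid pixel range [0, 1].
    For binary cross-entropy with a logistic model, the input gradient
    has the closed form (p - y) * w."""
    p = sigmoid(np.dot(w, x) + b)        # clean predicted probability
    grad_x = (p - y) * w                 # d(cross-entropy)/dx, analytic
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

# toy 4-pixel "image" and hand-picked classifier weights (illustrative)
w = np.array([2.0, -1.0, 0.5, -2.0])
b = 0.0
x = np.array([0.9, 0.1, 0.8, 0.1])
y = 1.0                                  # true label: positive class

x_adv = fgsm_attack(x, y, w, b, eps=0.1)
clean_p = sigmoid(np.dot(w, x) + b)      # confidence on the clean image
adv_p = sigmoid(np.dot(w, x_adv) + b)    # confidence after the attack
```

The perturbation is bounded by `eps` per pixel (an L-infinity constraint), which is what makes the modification visually imperceptible while still degrading the classifier's confidence.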
We then propose a novel defense methodology, FRNet, which leverages well-established handcrafted features, namely the Histogram of Oriented Gradients (HOG), Local Binary Patterns (LBP), KAZE features, and the Scale-Invariant Feature Transform (SIFT), that capture edges and object structure while remaining robust against imperceptible adversarial perturbations. The method uses a multi-layer perceptron to fuse the concatenated features within FRNet, yielding an architecture-neutral and attack-generic methodology. Our experimental results demonstrate that applying FRNet to different CNN architectures, such as a simple CNN, EfficientNet, and MobileNet, decreases the impact of adversarial attacks by as much as 67% compared to the corresponding base models.
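The core idea of a handcrafted-feature defense is that descriptors such as LBP and HOG summarize local structure and are less sensitive to small per-pixel perturbations than raw CNN inputs. A minimal NumPy sketch of the pipeline, using simplified stand-ins for LBP and HOG and a hypothetical one-hidden-layer MLP (this is an illustration of the concept, not the thesis's actual FRNet implementation), could be:

```python
import numpy as np

def lbp_features(img):
    """Simplified Local Binary Pattern: compare each pixel with its
    8 neighbours, pack the comparisons into an 8-bit code, and return
    a normalized histogram of the codes."""
    codes = np.zeros(img.shape, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        neigh = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        codes |= (neigh >= img).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=16, range=(0, 256))
    return hist / hist.sum()

def hog_features(img, n_bins=9):
    """Simplified HOG: magnitude-weighted histogram of unsigned
    gradient orientations over the whole image (no cells/blocks)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)

def mlp_forward(features, W1, b1, W2, b2):
    """Hypothetical fusion MLP: one ReLU hidden layer over the
    concatenated descriptor vector, producing class logits."""
    h = np.maximum(0.0, features @ W1 + b1)
    return h @ W2 + b2

# toy example: a random "image" stands in for a medical scan
rng = np.random.default_rng(0)
img = rng.random((32, 32))
feats = np.concatenate([lbp_features(img), hog_features(img)])  # 16 + 9 dims
```

In a full system the concatenated descriptor vector would be fused with (or substituted for) the CNN's learned features before classification, which is what makes the approach applicable across backbones such as EfficientNet and MobileNet.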