Securing deep learning models processing medical images against adversarial attacks
| dc.contributor.guide | Chaturvedi, Vivek and Shafique, Muhammad | |
| dc.coverage.spatial | ||
| dc.creator.researcher | Neha, A. S. | |
| dc.date.accessioned | 2026-02-16T11:40:15Z | |
| dc.date.available | 2026-02-16T11:40:15Z | |
| dc.date.awarded | 2025 | |
| dc.date.completed | 2025 | |
| dc.date.registered | 2019 | |
| dc.description.abstract | Medical image analysis has been revolutionized by deep learning models, which enable highly accurate automated disease detection and diagnosis. Nevertheless, these models, including Convolutional Neural Networks (CNNs), Vision Transformers (ViTs), and Contrastive Learning (CL)-based frameworks, are highly susceptible to adversarial attacks, in which imperceptible modifications to input images produce inaccurate predictions. The objective of this research is to improve robustness while maintaining diagnostic accuracy by developing strong adversarial defense mechanisms specifically designed for medical imaging models. First, we experimentally show that adversarial attacks devised for natural images also transfer to medical images, paralyzing the diagnostic process and threatening the robustness of the underlying CNN-based classifiers. We demonstrate the effectiveness of well-known natural-image adversarial attacks such as the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), the Basic Iterative Method (BIM), and the Carlini and Wagner (CW) attack on malaria cell images. We then propose a novel defense methodology, FRNet, that leverages well-established features such as the Histogram of Oriented Gradients (HOG), Local Binary Patterns (LBP), KAZE features, and the Scale-Invariant Feature Transform (SIFT), which detect edges and objects while remaining robust against imperceptible adversarial perturbations. The method uses a multi-layer perceptron to efficiently fuse these features within FRNet, resulting in an architecture-neutral and attack-generic methodology. Our experimental results demonstrate that applying FRNet to different CNN architectures such as a simple CNN, EfficientNet, and MobileNet reduces the impact of adversarial attacks by up to 67% compared to the corresponding base models. | |
| dc.description.note | ||
| dc.format.accompanyingmaterial | None | |
| dc.format.dimensions | ||
| dc.format.extent | xxii, 158p. | |
| dc.identifier.researcherid | 0009-0008-1910-1342 | |
| dc.identifier.uri | http://hdl.handle.net/10603/696016 | |
| dc.language | English | |
| dc.publisher.institution | Department of Computer Science and Engineering | |
| dc.publisher.place | Palakkad | |
| dc.publisher.university | Indian Institute of Technology Palakkad | |
| dc.relation | 171 | |
| dc.rights | university | |
| dc.source.university | University | |
| dc.subject.keyword | Adversarial Attack | |
| dc.subject.keyword | Computer Science | |
| dc.subject.keyword | Contrastive Learning | |
| dc.subject.keyword | Engineering and Technology | |
| dc.subject.keyword | Imaging Science and Photographic Technology | |
| dc.subject.keyword | Medical Imaging | |
| dc.subject.keyword | Neural networks (Computer science) | |
| dc.title | Securing deep learning models processing medical images against adversarial attacks | |
| dc.title.alternative | ||
| dc.type.degree | Ph.D. |
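The FGSM attack named in the abstract perturbs an input image by one step of size epsilon in the direction of the sign of the loss gradient with respect to the input. A minimal sketch, using a toy logistic-regression "classifier" in NumPy rather than the CNNs studied in the thesis (the model, shapes, and epsilon here are illustrative assumptions, not the thesis setup):

```python
import numpy as np

def fgsm_perturb(x, grad_x, eps):
    """FGSM: step of size eps along the sign of the input gradient,
    then clip back to the valid pixel range [0, 1]."""
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

def loss_and_input_grad(x, w, b, y):
    """Binary cross-entropy loss of a logistic model and its gradient w.r.t. x."""
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))               # sigmoid probability
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    grad_x = (p - y) * w                       # dL/dx for logistic loss
    return loss, grad_x

rng = np.random.default_rng(0)
x = rng.uniform(0.2, 0.8, size=16)             # stand-in for a 4x4 "image"
w = rng.normal(size=16)
b, y = 0.0, 1.0

loss_clean, g = loss_and_input_grad(x, w, b, y)
x_adv = fgsm_perturb(x, g, eps=0.03)           # perturbation bounded by 0.03 per pixel
loss_adv, _ = loss_and_input_grad(x_adv, w, b, y)
```

The adversarial input raises the model's loss even though every pixel moves by at most epsilon, which is why such perturbations stay visually imperceptible. PGD and BIM iterate this same step with projection back into the epsilon-ball.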
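The FRNet defense described in the abstract fuses hand-crafted, edge-sensitive descriptors with learned features before classification. As a hedged illustration of that fusion idea only (the descriptor below is a crude HOG-style orientation histogram in plain NumPy, not the thesis's actual HOG/LBP/KAZE/SIFT pipeline, and `cnn_embedding` is a hypothetical placeholder for a learned feature vector):

```python
import numpy as np

def hog_like_descriptor(img, n_bins=8):
    """Crude histogram of gradient orientations, weighted by gradient
    magnitude and normalized to sum to ~1 (a HOG-style feature)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi           # unsigned orientation in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (hist.sum() + 1e-8)

def fused_features(img, cnn_embedding):
    """FRNet-style fusion: concatenate the hand-crafted descriptor with a
    learned embedding; an MLP head would then classify this joint vector."""
    return np.concatenate([hog_like_descriptor(img), cnn_embedding])

img = np.zeros((8, 8))
img[:, 4:] = 1.0                               # synthetic vertical edge
feat = fused_features(img, cnn_embedding=np.ones(4))
```

Because the descriptor aggregates gradient orientations over the whole image, small per-pixel perturbations shift it far less than they shift raw pixels, which is the intuition behind fusing such features for robustness.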