Adversarial Attacks And Defense Mechanisms In Deep Learning

dc.contributor.authorSAIDANI ALA
dc.contributor.authorKHOUDOUR MERIEM ANFEL
dc.date.accessioned2025-11-04T09:47:51Z
dc.date.issued2025
dc.description.abstractThis work explores the adversarial vulnerabilities of deep learning models in image classification, with a focus on evaluating and defending against evasion-based attacks. Using the MNIST dataset and a ResNet18 architecture, we implemented several notable adversarial attacks, including FGSM, PGD, Clean Label, Backdoor (BadNet), and Square Attack. To mitigate these threats, we applied a variety of defense mechanisms across three categories: preprocessing (Gaussian noise, bit-depth reduction, JPEG compression), training-based (adversarial training, label smoothing), and postprocessing (confidence thresholding, randomized smoothing). Evaluation was conducted using standard performance metrics and qualitative visualizations. The results confirm the effectiveness of adversarial training and hybrid approaches in enhancing model robustness. This work provides a reproducible framework and contributes to ongoing efforts toward secure and resilient deep learning systems.
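The abstract names FGSM among the implemented attacks and bit-depth reduction among the preprocessing defenses. As a minimal, hedged illustration of how these two techniques work in general (not the thesis's actual code, which targets ResNet18 on MNIST), the sketch below applies FGSM to a toy logistic-regression model where the input gradient can be computed exactly in NumPy, then quantizes the perturbed input as a bit-depth-reduction defense. All function names, the weight vector `w`, and the toy model are illustrative assumptions, not artifacts from the thesis.

```python
import numpy as np

def fgsm_attack(x, grad, eps):
    """FGSM: step the input by eps in the sign of the loss gradient."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)  # keep pixel values in a valid range

def bit_depth_reduce(x, bits=4):
    """Preprocessing defense: quantize pixels to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def loss_grad_wrt_x(x, w, y):
    """Exact input gradient of binary cross-entropy for sigmoid(w . x).

    Toy stand-in for a network's backward pass, so the example stays
    self-contained; a real attack would use autograd on the full model.
    """
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return (p - y) * w

rng = np.random.default_rng(0)
w = rng.normal(size=784)                  # toy "model" weights
x = rng.uniform(0.0, 1.0, size=784)       # stand-in for a flat 28x28 image

g = loss_grad_wrt_x(x, w, y=1.0)
x_adv = fgsm_attack(x, g, eps=0.1)        # attacked input, ||x_adv - x||_inf <= eps
x_def = bit_depth_reduce(x_adv, bits=4)   # defended input, at most 16 pixel levels
```

Because the input already lies in [0, 1], clipping can only shrink the perturbation, so the L-infinity bound of `eps` is preserved; the quantization step then discards the low-amplitude detail that small FGSM perturbations live in.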
dc.identifier.citationMM/881
dc.identifier.issnMM/881
dc.identifier.urihttps://dspace.univ-bba.dz/handle/123456789/948
dc.language.isoen
dc.publisherUniversity of Bordj Bou Arreridj
dc.subjectDeep learning
dc.subjectadversarial attacks
dc.subjectmodel robustness
dc.subjectimage classification
dc.subjectadversarial training
dc.subjectdefense mechanisms
dc.titleAdversarial Attacks And Defense Mechanisms In Deep Learning
dc.typeThesis

Files

Original bundle

Name:
ADV.DL.pdf
Size:
7.45 MB
Format:
Adobe Portable Document Format

License bundle

Name:
license.txt
Size:
1.71 KB
Format:
Item-specific license agreed to upon submission