Adversarial Attacks And Defense Mechanisms In Deep Learning
| dc.contributor.author | SAIDANI ALA | |
| dc.contributor.author | KHOUDOUR MERIEM ANFEL | |
| dc.date.accessioned | 2025-11-04T09:47:51Z | |
| dc.date.issued | 2025 | |
| dc.description.abstract | This work explores the adversarial vulnerabilities of deep learning models in image classification, with a focus on evaluating and defending against evasion-based attacks. Using the MNIST dataset and a ResNet18 architecture, we implemented several notable adversarial attacks, including FGSM, PGD, Clean Label, Backdoor (BadNet), and Square Attack. To mitigate these threats, we applied a variety of defense mechanisms across three categories: preprocessing (Gaussian noise, bit-depth reduction, JPEG compression), training-based (adversarial training, label smoothing), and postprocessing (confidence thresholding, randomized smoothing). Evaluation was conducted using standard performance metrics and qualitative visualizations. The results confirm the effectiveness of adversarial training and hybrid approaches in enhancing model robustness. This work provides a reproducible framework and contributes to ongoing efforts toward secure and resilient deep learning systems. | |
| dc.identifier.citation | MM/881 | |
| dc.identifier.issn | MM/881 | |
| dc.identifier.uri | https://dspace.univ-bba.dz/handle/123456789/948 | |
| dc.language.iso | en | |
| dc.publisher | University of Bordj Bou Arreridj | |
| dc.subject | Deep learning | |
| dc.subject | adversarial attacks | |
| dc.subject | model robustness | |
| dc.subject | image classification | |
| dc.subject | adversarial training | |
| dc.subject | defense mechanisms | |
| dc.title | Adversarial Attacks And Defense Mechanisms In Deep Learning | |
| dc.type | Thesis |
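
The abstract above lists FGSM among the implemented attacks. As a point of reference, the core of FGSM is a single gradient-sign perturbation, `x_adv = x + eps * sign(dL/dx)`. The sketch below illustrates this on a toy NumPy logistic-regression model with hypothetical weights; it is not the thesis's ResNet18/MNIST implementation, only a minimal illustration of the attack's mechanics.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """FGSM adversarial example for binary logistic regression.

    For cross-entropy loss, the input gradient is
    (sigmoid(w . x) - y) * w, so the attack is
    x + eps * sign(dL/dx).
    """
    grad_x = (sigmoid(w @ x) - y) * w
    return x + eps * np.sign(grad_x)

# Hypothetical weights and an input the model classifies correctly.
w = np.array([2.0, -1.0])
x = np.array([0.5, 0.2])
y = 1.0  # true label

x_adv = fgsm(x, y, w, eps=0.3)

clean_score = sigmoid(w @ x)    # above 0.5: correctly classified as 1
adv_score = sigmoid(w @ x_adv)  # pushed below 0.5: misclassified
```

Each input coordinate moves by exactly `eps`, so the perturbation stays small in the L-infinity sense while the class score crosses the decision boundary; this is the budget-constrained evasion behavior the thesis evaluates defenses against.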