Evaluating the Vulnerabilities in ML systems in terms of adversarial attacks

This paper is a preprint and has not been certified by peer review.


Authors

John Harshith, Mantej Singh Gill, Madhan Jothimani

Abstract

Recent adversarial attacks have become increasingly difficult to detect. These new attack methods may pose challenges to current deep learning cyber defense systems and could shape how cyberattacks are defended against in the future. The authors focus on this domain in this paper. They explore the consequences of vulnerabilities in AI systems, including how such vulnerabilities might arise, the differences between randomized and adversarial examples, and the potential ethical implications of these weaknesses. Moreover, it is important to train AI systems appropriately during the testing phase in order to prepare them for broader use.
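The distinction between randomized and adversarial examples that the abstract mentions can be illustrated with a minimal sketch. The toy linear model, weights, and hinge loss below are illustrative assumptions, not taken from the paper: an adversarial perturbation follows the sign of the loss gradient (as in FGSM-style attacks), while a random perturbation of the same size usually harms the model far less.

```python
import random

# Toy linear "model": score = w . x, with label y in {+1, -1} and a
# hinge-style loss max(0, 1 - y * score). All names and values here
# are illustrative assumptions, not drawn from the paper itself.

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def loss(w, x, y):
    return max(0.0, 1.0 - y * dot(w, x))

def sign(v):
    return 1.0 if v >= 0 else -1.0

w = [0.5, -1.0, 2.0, 0.25]   # fixed model weights
x = [1.0, 0.5, 0.2, 1.0]     # a correctly classified input (score 0.65 > 0)
y = 1.0
eps = 0.3                    # L-infinity perturbation budget

# For an active hinge margin, the gradient of the loss w.r.t. x is -y * w.
grad_x = [-y * wi for wi in w]

# Adversarial (gradient-sign) perturbation: each coordinate moves eps
# in the direction that increases the loss the most.
x_adv = [xi + eps * sign(g) for xi, g in zip(x, grad_x)]

# Random perturbation of the same L-infinity magnitude.
random.seed(0)
x_rand = [xi + eps * random.choice([-1.0, 1.0]) for xi in x]

print("clean loss:       ", loss(w, x, y))
print("random-noise loss:", loss(w, x_rand, y))
print("adversarial loss: ", loss(w, x_adv, y))
```

Because the gradient-sign step maximizes the loss increase within the budget for this linear model, the adversarial loss is always at least as large as any random perturbation's loss, which is the core asymmetry that makes adversarial examples harder to defend against than random noise.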
