Generating Adversarial Examples
- Project
- 18022 IVVES
- Description
Adversarial attacks and defense in Machine Learning applications.
- Contact
- Sima Sinaei and Mehrdad Saadatmand, RISE Research Institutes of Sweden
- sima.sinaei@ri.se
- Technical features
Input(s):
- Image datasets
Main feature(s):
- Enhancing the security and robustness of Neural Networks, especially in the face of an adversary who wishes to fool the model
Output(s):
- A slightly perturbed image that remains easily recognizable to human observers but is crafted to make the model produce an output different from the correct target class (see the illustrative sketch below)
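The entry does not name a specific attack algorithm, so the following is a purely illustrative sketch of one common way to generate such perturbed images, the Fast Gradient Sign Method (FGSM). It assumes a PyTorch image classifier with inputs scaled to [0, 1]; the names `model`, `images`, `labels` and `epsilon` are hypothetical placeholders and not part of the asset.

```python
# Illustrative sketch only (FGSM); not the asset's actual implementation.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=0.03):
    """Return slightly perturbed copies of `images` that a human still
    recognizes but that push `model` away from the correct classes."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    perturbed = images + epsilon * images.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Hypothetical usage with a pretrained classifier and a labelled batch:
# adv_batch = fgsm_perturb(classifier, images, labels, epsilon=0.03)
```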
- Integration constraints
A labeled dataset and a primary machine learning model for classification are needed. The security and robustness of this Neural Network can be improved by generating an adversarial dataset (a sketch of how such a dataset can be used follows below).
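As an assumption-laden illustration of how the generated adversarial dataset could improve robustness, the sketch below runs one epoch of adversarial training that mixes clean and perturbed examples; it reuses the hypothetical `fgsm_perturb` helper from the previous sketch, and `model`, `train_loader` and `optimizer` are placeholders, not part of the asset.

```python
# Illustrative sketch only: one epoch of adversarial training that mixes
# clean and FGSM-perturbed examples. All names are hypothetical placeholders.
import torch.nn.functional as F

def adversarial_training_epoch(model, train_loader, optimizer, epsilon=0.03):
    model.train()
    for images, labels in train_loader:
        # Generate the adversarial counterparts of this batch.
        adv_images = fgsm_perturb(model, images, labels, epsilon)
        optimizer.zero_grad()
        # Fit the model on both the clean and the perturbed images.
        loss = (F.cross_entropy(model(images), labels)
                + F.cross_entropy(model(adv_images), labels))
        loss.backward()
        optimizer.step()
```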
- Targeted customer(s)
Developers of AI-based systems.
- Conditions for reuse
Licensing and permission required.
- Confidentiality
- Public
- Publication date
- 29-11-2022
- Involved partners
- RISE - Research Institutes of Sweden (SWE)