Model Fooling Attacks Against Medical Imaging: A Short Survey
Source: Information & Security: An International Journal, Volume 46, Issue 2, pp. 215–224 (2020)
Keywords: adversarial images, artificial neural networks, deep learning, machine learning, medical imaging
Abstract:
This study aims to compile a list of methods used to fool artificial neural networks in medical imaging. We collected a short list of publications on machine learning model fooling to see whether these methods have been applied in the medical imaging domain, focusing in particular on pathological whole-slide images used to study human tissues. While useful, machine learning models such as deep neural networks can be fooled by quite simple attacks involving purposefully engineered images. Such attacks pose a threat to many domains, including the one we focus on, as several studies have already described such threats.
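The "purposefully engineered images" mentioned above are typically built by gradient-based perturbation methods such as the Fast Gradient Sign Method (FGSM). As a minimal sketch only, the toy logistic-regression "classifier" below stands in for a medical-imaging neural network (the survey does not prescribe a specific model); all names and parameters here are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """FGSM-style attack: nudge each input feature by eps in the
    direction that increases the cross-entropy loss for label y."""
    p = sigmoid(w @ x + b)       # model's predicted probability
    grad_x = (p - y) * w         # d(cross-entropy)/dx for a logistic model
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=64)          # toy weights over a 64-"pixel" image
b = 0.0
x = rng.normal(size=64)          # clean input
y = 1.0 if sigmoid(w @ x + b) > 0.5 else 0.0  # treat model output as label

x_adv = fgsm_perturb(x, w, b, y, eps=0.1)
p_clean = sigmoid(w @ x + b)
p_adv = sigmoid(w @ x_adv + b)
# The perturbed copy moves the model's confidence away from the label,
# even though each feature changed by at most eps.
```

The same sign-of-gradient perturbation applied to a deep network yields images that look unchanged to a pathologist but flip the model's prediction, which is the threat scenario the survey addresses.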