Model Fooling Attacks Against Medical Imaging: A Short Survey
Tuomo Sipola, Samir Puuska, Tero Kokkonen
2020, Volume 46, pp. 215-224
Keywords: adversarial images, artificial neural networks, deep learning, machine learning, medical imaging

Abstract:

This study aims to identify methods for fooling artificial neural networks used in medical imaging. We collected a short list of publications on machine learning model fooling to determine whether these methods have been applied in the medical imaging domain. Specifically, we focused on pathological whole slide images used to study human tissues. While useful, machine learning models such as deep neural networks can be fooled by relatively simple attacks that use purposefully engineered images. Such attacks pose a threat to many domains, including medical imaging, where several studies have already described such threats.
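To illustrate the kind of "purposefully engineered image" the abstract refers to, the sketch below shows the fast gradient sign method (FGSM), a standard and simple adversarial attack. It is not taken from the surveyed paper; the PyTorch framework, the model interface, and the epsilon value are assumptions chosen only for the example.

```python
# Minimal FGSM sketch: perturb an input image so that a classifier's loss
# increases, while keeping the perturbation small (bounded by epsilon).
# Illustrative only; `model`, `image`, `label`, and `epsilon` are assumed inputs.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`.

    model:   a classifier returning logits, in eval mode
    image:   input tensor of shape (1, C, H, W), values in [0, 1]
    label:   ground-truth class index tensor of shape (1,)
    epsilon: maximum per-pixel perturbation
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    loss = F.cross_entropy(logits, label)
    model.zero_grad()
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid pixel range.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

With a small epsilon, the perturbation is often imperceptible to a human observer yet can change the model's prediction, which is why such attacks are a plausible concern for classifiers applied to medical images such as whole slide scans.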