Russian Federation
Modern computer vision systems based on deep neural networks achieve state-of-the-art accuracy in object classification and recognition tasks. However, they have proven vulnerable to specially crafted perturbations, known as adversarial attacks, which remain imperceptible to human observers yet can drastically change a model's prediction. This study provides a systematic analysis of security threats to neural network models for computer vision. The paper classifies attacks by the level of information available about the target model, examines in detail the mechanisms for crafting adversarial examples with gradient-based methods, and reviews modern defenses, including adversarial training and the detection of anomalous inputs. Particular attention is paid to practical aspects: the paper reports robustness test results for popular architectures and quantitative measures of the effectiveness of various defense methods. The study confirms that adversarial attacks remain a critical obstacle to deploying reliable computer vision systems in real-world conditions.
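To illustrate the gradient-based mechanism mentioned above, the following is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the best-known gradient attacks, written in PyTorch. The model, the inputs, and the epsilon value are illustrative assumptions, not artifacts of this paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """One-step FGSM: perturb x along the sign of the loss gradient,
    bounded by epsilon in the L-infinity norm. Model, x, y, and
    epsilon are placeholder assumptions for illustration."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that maximizes the classification loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep pixel values in the valid [0, 1] range.
    return x_adv.clamp(0.0, 1.0).detach()
```

Even with a small epsilon (e.g., 0.03 for inputs normalized to [0, 1]), a single gradient step of this kind is often enough to flip the prediction of an undefended classifier while the perturbation remains invisible to a human.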
adversarial attacks, adversarial examples, computer vision, AI security, model robustness, gradient methods, adversarial training, white-box attacks, black-box attacks



