
Author: Kang, Xu
Author: Song, Bin
Author: Du, Xiaojiang
Author: Guizani, Mohsen
Available date: 2022-12-25T06:23:22Z
Publication Date: 2020-01-01
Publication Name: IEEE Access
Identifier: http://dx.doi.org/10.1109/ACCESS.2020.2973069
Citation: Kang, X., Song, B., Du, X., & Guizani, M. (2020). Adversarial attacks for image segmentation on multiple lightweight models. IEEE Access, 8, 31359-31370.
URI: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85081060202&origin=inward
URI: http://hdl.handle.net/10576/37554
Abstract: Owing to their powerful data-fitting ability, deep neural networks have been applied across a wide range of applications in many key areas. In recent years, however, it was found that adversarial samples can easily fool deep neural networks. These input samples are generated by adding a few small perturbations to the original sample, and they significantly influence the decision of the target model without being perceptible. Image segmentation is one of the most important technologies in medical imaging and autonomous driving. This paper explores the security of deep neural network models on image segmentation tasks. Two lightweight image segmentation models on an embedded device are subjected to white-box attacks using local perturbations and universal perturbations. The perturbations are generated indirectly through a noise function and an intermediate variable, so that the gradient with respect to the pixels can be propagated without restriction. Through experiments, we find that different models have different blind spots and that adversarial samples trained for a single model do not transfer. Multiple models are then attacked jointly with our joint learning scheme. Under the constraint of low perturbation, most of the pixels in the attacked area are misclassified by both lightweight models. The experimental results show that the proposed adversary degrades the performance of the segmentation models more effectively than the FGSM.
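To make the joint multi-model attack described in the abstract concrete, the sketch below optimizes a single bounded perturbation against the summed segmentation losses of several models. This is a minimal illustration under stated assumptions (PyTorch as the framework, a tanh mapping standing in for the paper's noise function and intermediate variable, illustrative step counts and epsilon bound); it is not the authors' exact implementation.

```python
# Minimal sketch of a joint white-box attack on multiple segmentation models.
# Assumptions: PyTorch models returning logits of shape (N, C, H, W), a ground-truth
# mask of shape (N, H, W); the tanh-of-latent parameterization is an assumed stand-in
# for the paper's noise function / intermediate variable.
import torch
import torch.nn.functional as F

def joint_segmentation_attack(models, image, true_mask, eps=8 / 255, steps=40, lr=0.01):
    """Craft one perturbation that degrades several segmentation models at once."""
    # Intermediate variable: gradients flow into z without being cut off by clipping.
    z = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)

    for _ in range(steps):
        delta = eps * torch.tanh(z)                      # bounded perturbation
        adv = torch.clamp(image + delta, 0.0, 1.0)       # keep a valid image

        # Joint learning: sum the segmentation losses of all target models so one
        # perturbation attacks their combined blind spots.
        loss = sum(F.cross_entropy(m(adv), true_mask) for m in models)

        optimizer.zero_grad()
        (-loss).backward()                               # maximize the joint loss
        optimizer.step()

    return torch.clamp(image + eps * torch.tanh(z), 0.0, 1.0).detach()
```

In this formulation the FGSM baseline mentioned in the abstract would correspond to a single signed-gradient step against each model separately, whereas the summed loss lets one iteratively optimized perturbation target all models at once.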
Sponsor: This work was supported in part by the National Natural Science Foundation of China under Grant 61772387, in part by the Fundamental Research Funds of Ministry of Education and China Mobile under Grant MCM20170202, in part by the National Natural Science Foundation of Shaanxi Province under Grant 2019ZDLGY03-03, and in part by the ISN State Key Laboratory.
Language: en
Publisher: Institute of Electrical and Electronics Engineers Inc.
Subject: Adversarial samples; image segmentation; joint learning; multi-model attack; perturbations
Title: Adversarial Attacks for Image Segmentation on Multiple Lightweight Models
Type: Article
Pagination: 31359-31370
Volume Number: 8
Access Type: Open Access

