Show simple item record

dc.contributor.author	Arshad, Iram
dc.contributor.author	Asghar, Mamoona Naveed
dc.contributor.author	Qiao, Yuansong
dc.contributor.author	Lee, Brian
dc.contributor.author	Ye, Yuhang
dc.date.accessioned	2022-12-20T11:16:11Z
dc.date.available	2022-12-20T11:16:11Z
dc.date.copyright	2021
dc.date.issued	2021-08-23
dc.identifier.citation	Arshad, I., Asghar, M.N., Qiao, Y., Lee, B., Ye, Y. (2021). Pixdoor: a pixel-space backdoor attack on deep learning models. In: 29th European Signal Processing Conference (EUSIPCO), Dublin, Ireland, August 23-27, 2021, pp. 681-685. doi: 10.23919/EUSIPCO54536.2021.9616118	en_US
dc.identifier.isbn	978-9-0827-9706-0
dc.identifier.uri	https://research.thea.ie/handle/20.500.12065/4344
dc.description.abstract	Deep learning algorithms outperform classical machine learning techniques in many fields and are widely deployed for recognition and classification tasks. However, recent research has focused on exploring the weaknesses of these deep learning models, which can be vulnerable due to outsourced training data and transfer learning. This paper proposes Pixdoor, a rudimentary, stealthy pixel-space backdoor attack mounted during the training phase of deep learning models. To generate the poisoned dataset, a bit-inversion technique is used to inject errors into the pixel bits of training images; mixing just 3% of poisoned samples into the clean dataset is enough to corrupt the complete training set. The experimental results show that this minimal percentage of data poisoning can effectively fool a deep learning model with a high degree of success, while we observe only a marginal degradation of 0.02% in model accuracy.	en_US
dc.format	PDF	en_US
dc.language.iso	eng	en_US
dc.publisher	IEEE	en_US
dc.relation.ispartof	29th European Signal Processing Conference (EUSIPCO)	en_US
dc.rights	Attribution-NonCommercial-NoDerivs 3.0 United States	*
dc.rights.uri	http://creativecommons.org/licenses/by-nc-nd/3.0/us/	*
dc.subject	Backdoor attack	en_US
dc.subject	Causative attack	en_US
dc.subject	Pixel-space	en_US
dc.subject	Poisoned dataset	en_US
dc.subject	Training phase	en_US
dc.title	Pixdoor: a pixel-space backdoor attack on deep learning models	en_US
dc.conference.date	2021-08-23
dc.conference.host	EUSIPCO	en_US
dc.conference.location	Dublin	en_US
dc.contributor.affiliation	Technological University of the Shannon: Midlands Midwest	en_US
dc.description.funding	President's Doctoral Scholarship (Athlone Institute of Technology - TUS Midlands)
dc.description.peerreview	yes	en_US
dc.identifier.doi	10.23919/EUSIPCO54536.2021.9616118	en_US
dc.identifier.orcid	https://orcid.org/0000-0003-0755-5896	en_US
dc.identifier.orcid	https://orcid.org/0000-0001-7460-266X	en_US
dc.identifier.orcid	https://orcid.org/0000-0002-1543-1589	en_US
dc.identifier.orcid	https://orcid.org/0000-0002-8475-4074	en_US
dc.identifier.orcid	https://orcid.org/0000-0003-4608-1451	en_US
dc.rights.accessrights	info:eu-repo/semantics/openAccess	en_US
dc.subject.department	Department of Computer & Software Engineering: TUS Midlands	en_US
dc.type.version	info:eu-repo/semantics/acceptedVersion	en_US
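The abstract describes the Pixdoor poisoning step as bit-inversion applied to pixel bits of a 3% subset of the training images. The paper's exact trigger pattern, bit position, and relabeling scheme are not given in this record, so the following is only a minimal illustrative sketch of that idea (the function name, `rate`, `bit`, and target-label handling are assumptions, not the authors' implementation):

```python
import numpy as np

def poison_images(images, labels, target_label, rate=0.03, bit=0, seed=0):
    """Hypothetical bit-inversion poisoning sketch: invert one bit in every
    pixel byte of a random subset of training images and relabel that
    subset with the attacker's target class."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * rate)  # e.g. 3% of the training set
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # XOR with (1 << bit) inverts the chosen bit in each uint8 pixel value
    images[idx] ^= np.uint8(1 << bit)
    labels[idx] = target_label
    return images, labels, idx
```

Flipping only the least-significant bit changes each pixel value by at most 1, which is why such a trigger is visually stealthy while still being learnable by the model.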


