M201 Neural Tangent Generalization Attack Model

Abstract:

This is the repository for Neural Tangent Generalization Attacks by Chia-Hung Yuan and Shan-Hung Wu, published in Proceedings of ICML 2021.

Version: 0

Description

GitHub: https://github.com/lionelmessi6410/ntga


We propose the generalization attack, a new direction for poisoning attacks, in which an attacker modifies training data to spoil the training process so that the trained network loses its ability to generalize. We devise the Neural Tangent Generalization Attack (NTGA), the first efficient method enabling clean-label, black-box generalization attacks against deep neural networks.

NTGA sharply degrades generalization: test accuracy drops from 99% to 15% on MNIST, from 92% to 33% on CIFAR-10, and from 99% to 72% on 2-class ImageNet. Please see Results or the main paper for the complete results. We also release the unlearnable MNIST, CIFAR-10, and 2-class ImageNet datasets generated by NTGA, which can be downloaded from Unlearnable Datasets, and we host competitions on learning from unlearnable data. The following figures show one clean example and the corresponding poisoned examples.
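To make the idea above concrete, the following is a minimal sketch of a generalization attack using the JAX-based neural-tangents library, on which the repository builds. It is illustrative only, not the repo's actual implementation: the surrogate architecture, the tanh-bounded epsilon-ball perturbation, all hyperparameters (eps, steps, lr), and the helper names val_loss and ntga_sketch are assumptions for exposition. The sketch differentiates the closed-form NTK prediction on held-out data with respect to the training inputs and ascends that gradient, which is the core idea of NTGA.

```python
import jax.numpy as jnp
from jax import grad
import neural_tangents as nt
from neural_tangents import stax

# Surrogate: an infinitely wide 2-hidden-layer ReLU network. The repo
# supports several surrogates; this one is an assumption chosen for brevity.
_, _, kernel_fn = stax.serial(
    stax.Dense(512), stax.Relu(), stax.Dense(512), stax.Relu(), stax.Dense(10)
)

def val_loss(delta, x_train, y_train, x_val, y_val, eps):
    # Clean-label constraint: keep the perturbation inside an eps-ball
    # via a tanh reparameterization (an assumption of this sketch).
    x_poison = x_train + eps * jnp.tanh(delta)
    # Closed-form prediction of the infinitely wide surrogate trained to
    # convergence (t=None) on the poisoned data, via the NTK.
    predict_fn = nt.predict.gradient_descent_mse_ensemble(
        kernel_fn, x_poison, y_train, diag_reg=1e-4
    )
    y_pred = predict_fn(x_test=x_val, get="ntk")
    return jnp.mean((y_pred - y_val) ** 2)

def ntga_sketch(x_train, y_train, x_val, y_val, eps=0.3, steps=8, lr=0.1):
    # Gradient *ascent* on held-out loss: the attacker degrades, rather
    # than improves, the surrogate's generalization.
    delta = jnp.zeros_like(x_train)
    for _ in range(steps):
        g = grad(val_loss)(delta, x_train, y_train, x_val, y_val, eps)
        delta = delta + lr * jnp.sign(g)
    return x_train + eps * jnp.tanh(delta)  # the poisoned training set
```

Because the surrogate's training dynamics have a closed form under the NTK, the attack never trains the victim network; the poisoned data are crafted against the kernel surrogate and are then expected to transfer to unseen architectures, which is what makes the attack black-box.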

Development Team

Subproject 2

Data Description

Annotation

List of Specification Documents
