Defending adversarial attacks via semantic feature manipulation
Wang et al., 2021
- Document ID
- 6308736238361856270
- Author
- Wang S
- Nepal S
- Rudolph C
- Grobler M
- Chen S
- Chen T
- An Z
- Publication year
- 2021
- Publication venue
- IEEE Transactions on Services Computing
Snippet
Machine learning models have demonstrated vulnerability to adversarial attacks, more specifically, misclassification of adversarial examples. In this article, we propose a one-off and attack-agnostic Feature Manipulation (FM)-Defense to detect and purify adversarial …
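The snippet only names a detect-and-purify defence without further detail. As a rough, hypothetical illustration of that general pattern (not the paper's FM-Defense), the Python sketch below reconstructs an input from a compressed feature code and flags it when the reconstruction error or the predicted label shifts after purification; every function, the toy data, and the threshold are invented placeholders.

```python
import numpy as np

# Illustrative stand-ins only: a real pipeline would use a trained classifier
# and a trained feature encoder/decoder; none of this reproduces FM-Defense.
def classify(x: np.ndarray) -> int:
    """Toy classifier: thresholds the mean intensity of the image."""
    return int(x.mean() > 0.5)

def encode(x: np.ndarray) -> np.ndarray:
    """Toy 'feature' encoding: 4x4 block averaging of a 28x28 image."""
    return x.reshape(7, 4, 7, 4).mean(axis=(1, 3))

def decode(z: np.ndarray) -> np.ndarray:
    """Toy decoding: upsample the 7x7 code back to 28x28."""
    return np.kron(z, np.ones((4, 4)))

def purify_and_detect(x: np.ndarray, threshold: float = 0.02):
    """Reconstruct the input from its feature code ('purify') and flag it
    ('detect') if the reconstruction error is large or the predicted class
    changes after purification. The threshold is an arbitrary choice."""
    x_purified = decode(encode(x))
    recon_error = float(np.mean((x - x_purified) ** 2))
    flagged = recon_error > threshold or classify(x) != classify(x_purified)
    return x_purified, flagged

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.outer(np.linspace(0, 1, 28), np.linspace(0, 1, 28))  # smooth toy image
    noisy = np.clip(clean + 0.3 * rng.standard_normal(clean.shape), 0.0, 1.0)
    for name, sample in (("clean", clean), ("perturbed", noisy)):
        _, flagged = purify_and_detect(sample)
        print(f"{name}: flagged = {flagged}")
```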
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
      - G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
        - G06K9/62—Methods or arrangements for recognition using electronic means
          - G06K9/6267—Classification techniques
            - G06K9/6268—Classification techniques relating to the classification paradigm, e.g. parametric or non-parametric approaches
              - G06K9/627—Classification techniques relating to the classification paradigm, e.g. parametric or non-parametric approaches based on distances between the pattern to be recognised and training or reference patterns
          - G06K9/6288—Fusion techniques, i.e. combining data from various sources, e.g. sensor fusion
            - G06K9/6292—Fusion techniques, i.e. combining data from various sources, e.g. sensor fusion of classification results, e.g. of classification results related to same input data
Similar Documents
| Publication | Title |
|---|---|
| Yuan et al. | Adversarial examples: Attacks and defenses for deep learning |
| Chakraborty et al. | A survey on adversarial attacks and defences |
| Miller et al. | Adversarial learning targeting deep neural network classification: A comprehensive review of defenses against attacks |
| Zhao et al. | AFA: Adversarial fingerprinting authentication for deep neural networks |
| Grosse et al. | Adversarial perturbations against deep neural networks for malware classification |
| Zhang et al. | A survey on learning to reject |
| Ahmad et al. | Enhancing SVM performance in intrusion detection using optimal feature subset selection based on genetic principal components |
| Peng et al. | Bilateral dependency optimization: Defending against model-inversion attacks |
| Marchisio et al. | Is spiking secure? A comparative study on the security vulnerabilities of spiking and deep neural networks |
| Wang et al. | Defending adversarial attacks via semantic feature manipulation |
| Sengan et al. | Improved LSTM-based anomaly detection model with cybertwin deep learning to detect cutting-edge cybersecurity attacks |
| Ye et al. | Feature autoencoder for detecting adversarial examples |
| Chen et al. | QUEEN: Query unlearning against model extraction |
| Khoda et al. | Selective adversarial learning for mobile malware |
| Du et al. | LC-GAN: Improving adversarial robustness of face recognition systems on edge devices |
| Zhang et al. | Boosting deepfake detection generalizability via expansive learning and confidence judgement |
| Li et al. | Contrastive learning for money laundering detection: Node-subgraph-node method with context aggregation and enhancement strategy |
| Hou et al. | Flare: Towards universal dataset purification against backdoor attacks |
| Li et al. | Security application of intrusion detection model based on deep learning in English online education |
| Zhou et al. | MalPurifier: Enhancing Android malware detection with adversarial purification against evasion attacks |
| Marchisio et al. | SNN under attack: Are spiking deep belief networks vulnerable to adversarial examples |
| Liu et al. | Enhancing generalization in few-shot learning for detecting unknown adversarial examples |
| Miller et al. | Adversarial Learning and Secure AI |
| Pedraza et al. | Lyapunov stability for detecting adversarial image examples |
| Qin et al. | Improving behavior based authentication against adversarial attack using XAI |