Wang et al., 2021 - Google Patents

Defending adversarial attacks via semantic feature manipulation

Document ID: 6308736238361856270
Authors: Wang S, Nepal S, Rudolph C, Grobler M, Chen S, Chen T, An Z
Publication year: 2021
Publication venue: IEEE Transactions on Services Computing

Snippet

Machine learning models have demonstrated vulnerability to adversarial attacks, in particular the misclassification of adversarial examples. In this article, we propose a one-off and attack-agnostic Feature Manipulation (FM)-Defense to detect and purify adversarial …

Full text available at arxiv.org (PDF).

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06K: RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00: Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/62: Methods or arrangements for recognition using electronic means
    • G06K9/6267: Classification techniques
    • G06K9/6268: Classification techniques relating to the classification paradigm, e.g. parametric or non-parametric approaches
    • G06K9/627: Classification techniques relating to the classification paradigm, e.g. parametric or non-parametric approaches based on distances between the pattern to be recognised and training or reference patterns
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06K: RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00: Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/62: Methods or arrangements for recognition using electronic means
    • G06K9/6288: Fusion techniques, i.e. combining data from various sources, e.g. sensor fusion
    • G06K9/6292: Fusion techniques, i.e. combining data from various sources, e.g. sensor fusion of classification results, e.g. of classification results related to same input data

Similar Documents

Yuan et al. Adversarial examples: Attacks and defenses for deep learning
Chakraborty et al. A survey on adversarial attacks and defences
Miller et al. Adversarial learning targeting deep neural network classification: A comprehensive review of defenses against attacks
Zhao et al. AFA: Adversarial fingerprinting authentication for deep neural networks
Grosse et al. Adversarial perturbations against deep neural networks for malware classification
Zhang et al. A survey on learning to reject
Ahmad et al. Enhancing SVM performance in intrusion detection using optimal feature subset selection based on genetic principal components
Peng et al. Bilateral dependency optimization: Defending against model-inversion attacks
Marchisio et al. Is spiking secure? a comparative study on the security vulnerabilities of spiking and deep neural networks
Wang et al. Defending adversarial attacks via semantic feature manipulation
Sengan et al. Improved LSTM-based anomaly detection model with cybertwin deep learning to detect cutting-edge cybersecurity attacks
Ye et al. Feature autoencoder for detecting adversarial examples
Chen et al. QUEEN: Query unlearning against model extraction
Khoda et al. Selective adversarial learning for mobile malware
Du et al. LC-GAN: Improving adversarial robustness of face recognition systems on edge devices
Zhang et al. Boosting deepfake detection generalizability via expansive learning and confidence judgement
Li et al. Contrastive learning for money laundering detection: Node-subgraph-node method with context aggregation and enhancement strategy
Hou et al. Flare: Towards universal dataset purification against backdoor attacks
Li et al. Security application of intrusion detection model based on deep learning in english online education
Zhou et al. MalPurifier: Enhancing Android malware detection with adversarial purification against evasion attacks
Marchisio et al. Snn under attack: are spiking deep belief networks vulnerable to adversarial examples
Liu et al. Enhancing Generalization in Few-Shot Learning for Detecting Unknown Adversarial Examples
Miller et al. Adversarial Learning and Secure AI
Pedraza et al. Lyapunov stability for detecting adversarial image examples
Qin et al. Improving behavior based authentication against adversarial attack using XAI