We alert the FL community that even pFL methods with parameter decoupling remain highly vulnerable to backdoor attacks. The presumed resistance of these methods is attributed to the heterogeneity between the classifiers of malicious clients and those of their benign counterparts. We analyze two direct causes of this classifier heterogeneity: (1) data heterogeneity inherently exists among clients, and (2) poisoning by malicious clients further exacerbates it. To address these issues, we propose a two-pronged attack method, BapFL, which comprises two simple yet effective strategies: (1) poisoning only the feature encoder while keeping the classifier fixed, and (2) diversifying the classifier by introducing noise to simulate the classifiers of benign
clients. Extensive experiments on three benchmark datasets under varying conditions demonstrate the
effectiveness of our proposed attack. We further evaluate six widely used defense methods and find that BapFL still poses a significant threat even in the presence of the best-performing defense, Multi-Krum.
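To make the two strategies concrete, below is a minimal PyTorch sketch of one malicious client's local round, assuming a parameter-decoupling pFL setup (e.g., FedPer/FedRep-style) where only the feature encoder is shared with the server. The `model.encoder`/`model.classifier` split, the `poison` trigger-stamping helper, and the `noise_std` value are all illustrative assumptions, not the repository's actual API.

```python
import copy

import torch
import torch.nn.functional as F


def malicious_local_update(model, loader, poison, epochs=1, lr=0.01, noise_std=0.05):
    """Sketch of a malicious client's round in the spirit of BapFL's two
    strategies. All names here are illustrative placeholders."""
    # Strategy 1: optimize only the feature encoder; the local classifier
    # stays fixed, so the backdoor is planted in the shared parameters.
    for p in model.classifier.parameters():
        p.requires_grad = False
    opt = torch.optim.SGD(model.encoder.parameters(), lr=lr)

    for _ in range(epochs):
        for x, y in loader:
            # Trigger-stamped inputs relabeled to the target class. A real
            # attack would typically also train on clean samples to
            # preserve main-task accuracy; this sketch omits that for brevity.
            x_p, y_p = poison(x, y)

            # Strategy 2: perturb a copy of the classifier with Gaussian
            # noise at each step, simulating the heterogeneous personalized
            # classifiers of benign clients so the backdoor transfers.
            noisy_cls = copy.deepcopy(model.classifier)
            with torch.no_grad():
                for p in noisy_cls.parameters():
                    p.add_(noise_std * torch.randn_like(p))

            # Gradients flow through the frozen, noisy classifier back into
            # the encoder only.
            loss = F.cross_entropy(noisy_cls(model.encoder(x_p)), y_p)
            opt.zero_grad()
            loss.backward()
            opt.step()

    # In parameter-decoupling pFL, only the shared encoder is uploaded.
    return model.encoder.state_dict()
```

Resampling the classifier noise at every step is one plausible reading of "diversifying the classifier"; the repository's actual schedule and hyperparameters may differ.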
About
TKDD2024-BapFL: You can Backdoor Attack Personalized Federated Learning
Languages: Python 100.0%