
Real-time Identity Defenses against Malicious Personalization of Diffusion Models


🏷️ News!

🔎 Overview framework

(Figure: overview of the RID framework)

System Requirements

Hardware Requirements

The RID library requires a GPU with at least 3 GB of VRAM for fp32 inference, or at least 2 GB of VRAM for fp16 inference with a single batch. On CPU, one core is sufficient and 3 GB of RAM is required. For training, a GPU with 40 GB of VRAM or more is necessary, along with at least 4 CPU cores (3.3 GHz or higher) and 16 GB of RAM.

Software Requirements

OS Requirements

The development version of the package has been tested on Linux operating systems. The package is compatible with the following systems:

  • Linux: Ubuntu 16.04
  • Mac OSX: Supported
  • Windows: Supported

Before setting up the package, users should have Python 3.8 or higher installed, along with the dependencies specified in the requirements.txt file. No non-standard hardware is needed.

Installation Guide

Install from Github

git clone https://github.com/Guohanzhong/RID
cd RID

# create an environment with python >= 3.8
conda create -n RID python=3.8
conda activate RID
pip install -r requirements.txt

Inference Demo

RID inference to protect your images!

To apply the defense to your own images, run the following command, changing the model path and image-folder path.

python infer.py -m 'model_path' -f 'folder_path' 

This processes the whole folder at 'folder_path' and saves all the protected images in '/output_folder/'; the processing speed is about 8 images per second on an A100. 'model_path' is the checkpoint of the RID network, which can be downloaded from Google Drive.
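For reference, a filled-in invocation might look like the following (the checkpoint and folder paths are placeholders; substitute your own):

python infer.py -m './checkpoints/rid.pth' -f './my_photos'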

Evaluating the protection performance

Personalization methods

To evaluate the protection performance, run the personalization (built on Diffusers) using the following commands:

cd tuning-based-personalization
accelerate launch --main_process_port $(expr $RANDOM % 10000 + 10000) train_sd_lora_dreambooth_token.py  --config=config/sd_lora.py  

'config/sd_lora.py' contains the parameters needed for personalization training. 'train_sd_lora_dreambooth_token.py' is used to train 'LoRA+TI', which takes about 10 minutes. Setting 'config.use_lora = False' in 'config/sd_lora.py' switches the personalization method to 'TI', which takes about 5 minutes to train.
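As a minimal illustration (only the relevant field is shown; the rest of the config stays unchanged):

# in config/sd_lora.py
config.use_lora = False  # False -> TI only; True -> LoRA+TI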

The following command uses 'DB' as the personalization method, which takes about 30 minutes to train.

cd tuning-based-personalization
accelerate launch --main_process_port $(expr $RANDOM % 10000 + 10000) train_sd_dreambooth_token.py  --config=config/sd.py  

More results on robustness

In Extended Data Figures 3, 4, 5, and 6 in the Main text, we show the robustness of RID across a wide range of conditions (including a larger number of identities and diverse gender and racial groups) and mixed clean/protected training-data settings. To help reproduce the results in the paper, we provide the data of the different IDs used in the evaluation.

Due to potential privacy issues, we only release the mixed results for the 115-ID evaluation set and the mixed clean/protected training-data settings.

Results after post-processing of defended images

In real-world scenarios, adversaries may apply various post-processing techniques to protected images to weaken the defense before launching personalized image generation, which we already consider on page 12 of the Main text. To help reproduce the results of Figure 6 in the Main text, we provide code repositories for the different post-processing methods. For any post-processing approach, simply apply its processing to the defended images obtained from RID inference to generate post-processed defended images (see the sketch after the list below).

(1) JPEG compression (JPEG-C), a traditional approach that compresses high-frequency image information, potentially diminishing the effectiveness of added perturbations;

from PIL import Image

# JPEG-C: re-encode the defended image at quality 75, discarding high-frequency detail
img = Image.open(image_path).convert("RGB")
img.save(output_path, "JPEG", quality=75)

(2) DiffPure, a diffusion-based method that applies noise and then denoises defended images, leveraging the generative capacity of diffusion models to restore clean features.

(3) GridPure, which processes 256×256 image patches independently using a pre-trained diffusion model (Dhariwal and Nichol, 2021) to locally denoise defended images, similar to DiffPure;

(4) Noiseup, a more aggressive purification strategy involving upsampling via a large-scale super-resolution model (from Stable Diffusion) followed by downsampling.
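As a minimal sketch of applying a post-processing method to a folder of defended images (the folder paths are placeholders, and JPEG-C is used as the example; swap in the other methods' processing as needed):

import os
from PIL import Image

defended_dir = "./output_folder"       # defended images from RID inference
postprocessed_dir = "./output_jpegc"   # where post-processed images are written
os.makedirs(postprocessed_dir, exist_ok=True)

for name in os.listdir(defended_dir):
    if not name.lower().endswith((".png", ".jpg", ".jpeg")):
        continue
    img = Image.open(os.path.join(defended_dir, name)).convert("RGB")
    out_name = os.path.splitext(name)[0] + ".jpg"
    # JPEG-C; replace this line with DiffPure / GridPure / Noiseup processing as needed
    img.save(os.path.join(postprocessed_dir, out_name), "JPEG", quality=75)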

Many thanks for their work.

Training scripts

Prepare the dataset

To train RID, we first need to prepare the training dataset, which consists of the original dataset together with clean/protected image pairs constructed using a gradient-based approach.

The paired data is important for training RID. To generate it, we use the Anti-Dreambooth library to produce the corresponding perturbation for each image and store these pairs for the subsequent training.

cd gradient-based-attack
accelerate launch --num_processes 1 aspl_ensemble.py --pretrained_model_name_or_path "" --instance_data_dir_for_train "" --instance_data_dir_for_adversarial ""  --pgd_eps "" --output_dir ""

where 'pretrained_model_name_or_path' denotes the pre-trained model used for generating the paired data; it should be aligned with the model used when training RID below. 'instance_data_dir_for_train' and 'instance_data_dir_for_adversarial' both denote the same clean-images folder, with the images stored directly in that folder. 'output_dir' denotes the output directory for the protected images. 'pgd_eps' denotes the perturbation scale; the higher the scale, the more visible the protection. We recommend setting the perturbation scale to 12/255.
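For reference, a filled-in invocation might look like the following (the base model and paths are placeholders, and 12/255 ≈ 0.047; check the script for the exact value format it expects):

accelerate launch --num_processes 1 aspl_ensemble.py \
    --pretrained_model_name_or_path "runwayml/stable-diffusion-v1-5" \
    --instance_data_dir_for_train "./data/person_001" \
    --instance_data_dir_for_adversarial "./data/person_001" \
    --pgd_eps 0.047 \
    --output_dir "./data/person_001_protected"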

Training the RID

After preparing the dataset, run the following commands to train RID:

sh train_sd_ensemble_dmd.sh

or

accelerate launch train_sd_ensemble_dmd.py \
    --pretrained_model_name_or_path "" \
    --vad_output_dir "./train_cache/image/sd-vgg-ensemble_dmddit_12-255_sds" \
    --output_dir "./train_cache/pth2/sd-vgg-ensemble_dmddit_12-255_sds" \
    --data_json_file "eps-12_255-mom_anti-a9f0/VGGFace-all.json" \
    --pair_path "eps-12_255-mom_anti-a9f0/output_pairs.json" \
    --tensorboard_output_dir "logs/sd-vgg-ensemble_dmddit_12-255_10l1_all" \
    --resolution 512 > ./logs/sd-vgg-dmd_sds-12-255_sds.log 2>&1 &

where 'pretrained_model_name_or_path' denotes the pre-trained diffusion models used in training; to train with an ensemble of models, set 'pretrained_model_name_or_path' to 'model_1,model_2,model_3'.
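For example, the argument for an ensemble would look like this (the specific checkpoints are placeholders, not necessarily the ensemble used in the paper):

--pretrained_model_name_or_path "runwayml/stable-diffusion-v1-5,stabilityai/stable-diffusion-2-1-base"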

'data_json_file' denotes a JSON file that stores a list of dictionaries. Each dictionary in this list must have at least the key "image_file", which gives the file path of an image. In our paper, we use VGGFace2 as the raw dataset. For instance,

[
    {"image_file": "1.png"},
    {"image_file": "2.png"}
]

The 'pair_path' should also be a JSON array where each element is a JSON object. Each object must contain two keys: "source_path" and "attacked_path". The value of "source_path" is the file path of the source image, and the value of "attacked_path" is the file path of the protected image generated with Anti-DB. For instance,

[
    {"source_path": "source_1.png", "attacked_path": "attacked_1.png"},
    {"source_path": "source_2.png", "attacked_path": "attacked_2.png"}
]

The order and number of elements do not need to match between the two JSON files.
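As a minimal sketch of how the two JSON files could be assembled (the directory paths and the matching-filename convention are assumptions; adjust them to how your attack outputs are stored):

import json
import os

clean_dir = "./data/person_001"                # raw images
attacked_dir = "./data/person_001_protected"   # protected images from the gradient-based attack

names = sorted(f for f in os.listdir(clean_dir) if f.lower().endswith((".png", ".jpg", ".jpeg")))

# data_json_file: a list of {"image_file": ...}
with open("VGGFace-all.json", "w") as f:
    json.dump([{"image_file": os.path.join(clean_dir, n)} for n in names], f, indent=4)

# pair_path: a list of {"source_path": ..., "attacked_path": ...}, assuming identical filenames
pairs = [{"source_path": os.path.join(clean_dir, n),
          "attacked_path": os.path.join(attacked_dir, n)} for n in names]
with open("output_pairs.json", "w") as f:
    json.dump(pairs, f, indent=4)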

'output_dir' denotes the output directory for the trained RID network. 'vad_output_dir' denotes the directory where the outputs of RID during training are saved.

We also provide the option to optimize RID using only regression, with the following command:

sh train_sd_ensemble_reg.sh

Training RID takes about 7 days on 8 A100-40G GPUs.

Pseudocode

Pseudocode of Inference

Inference Algorithm: Image Protection with RID
Input: 
    - images: Batch of normalized images ∈ [-1, 1] (shape [B, C, H, W])
    - RID_net: Pretrained perturbation generator
Output: 
    - protected_images: Protected images ∈ [-1, 1]

Process:
1. Generate perturbations:
    # Real-time protection
    Δ = RID_net(images)  ▹ matching shape [B, C, H, W]
    
2. Apply perturbations:
    protected = images + Δ
    
3. Clip to valid range:
    protected = clamp(protected, min=-1.0, max=1.0)

return protected
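A minimal PyTorch sketch of the same procedure (assuming 'rid_net' is a loaded torch.nn.Module that maps images to perturbations; the checkpoint-loading step is omitted):

import torch

@torch.no_grad()
def protect(images: torch.Tensor, rid_net: torch.nn.Module) -> torch.Tensor:
    """images: [B, C, H, W], normalized to [-1, 1]."""
    delta = rid_net(images)            # perturbation with the same shape as the input
    protected = images + delta         # apply the perturbation
    return protected.clamp(-1.0, 1.0)  # clip back to the valid range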

Structure of our RID library

└── infer.py                      ## inference code using trained RID
└── train_sd_ensemble_dmd.sh      ## training scripts using Adv-SDS
└── train_sd_ensemble_dmd.py      ## training code using Adv-SDS
└── train_sd_ensemble_reg.sh      ## training scripts using only regression
└── train_sd_ensemble_reg.py      ## training code using only regression
└── tuning-based-personalization/ ## the personalization code
    └── train_sd_dreambooth_token.py        ## train DB
    └── train_sd_lora_dreambooth_token.py    ## train Lora+TI/TI
    └── config   ## configs used for training personalization
        └── sd_lora.py    # config for Lora+TI/TI
        └── sd.py         # config for DB
    └── ...
└── gradient-based-attack/         ## code for generating paired data using gradient-based protection methods
    └── aspl_ensemble.py           ## gradient-based protection code
    └── ...
└── evaluation/                    ## Quantitative evaluation code

User study

Given the limitations of quantitative metrics and the subjective nature of the task in our paper, we conducted a carefully designed user study to assess both the protection effectiveness (User study 1) and the visual imperceptibility (User study 2) of RID in comparison with baseline approaches. The results of these two user studies are shown in Figure 2e and Figure 2f in the Main text.

Further, due to the misleading quantitative results of Noiseup, we carried out another user study (User study 3) on the protection effectiveness of undefended images versus defended images post-processed with Noiseup; the results are shown in Figure S.2 in Supplementary Note C of the Supplementary Information.

♥️ Acknowledgement

This project is heavily based on the Diffusers library, the DiT library, and the Anti-Dreambooth library. Thanks for their great work!

License

This project is covered under the MIT License.
