Merged
12 changes: 9 additions & 3 deletions examples/corda_finetuning/README.md
@@ -150,7 +150,10 @@ corda_config = CordaConfig(
 #### Knowledge-preserved adaptation mode
 
 ```bash
-CUDA_VISIBLE_DEVICES=0 python -u preprocess.py --model_id="meta-llama/Llama-2-7b-hf" \
+export CUDA_VISIBLE_DEVICES=0  # restrict the run to CUDA GPU device 0
+export ZE_AFFINITY_MASK=0      # restrict the run to Intel XPU device 0
+
+python -u preprocess.py --model_id="meta-llama/Llama-2-7b-hf" \
 --r 128 --seed 233 \
 --save_model --save_path {path_to_residual_model} \
 --calib_dataset "nqopen"
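
Exporting `CUDA_VISIBLE_DEVICES` (NVIDIA) or `ZE_AFFINITY_MASK` (Intel Level Zero) makes only device 0 of the respective backend visible to the process. Inside Python, the available backend can then be selected defensively; a minimal sketch, not part of this PR (`pick_device` is a hypothetical helper):

```python
import torch

def pick_device() -> torch.device:
    # Prefer CUDA, then Intel XPU, falling back to CPU.
    if torch.cuda.is_available():
        return torch.device("cuda")
    # Older torch builds may lack the xpu namespace, hence the hasattr guard.
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return torch.device("xpu")
    return torch.device("cpu")
```

With the exports above in place, only device 0 of the chosen backend is visible, so a helper like this cannot land on a different card.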
@@ -165,7 +168,10 @@ Arguments:
 #### Instruction-previewed adaptation mode
 
 ```bash
-CUDA_VISIBLE_DEVICES=0 python -u preprocess.py --model_id="meta-llama/Llama-2-7b-hf" \
+export CUDA_VISIBLE_DEVICES=0  # restrict the run to CUDA GPU device 0
+export ZE_AFFINITY_MASK=0      # restrict the run to Intel XPU device 0
+
+python -u preprocess.py --model_id="meta-llama/Llama-2-7b-hf" \
 --r 128 --seed 233 \
 --save_model --save_path {path_to_residual_model} \
 --first_eigen --calib_dataset "MetaMATH"
@@ -248,4 +254,4 @@ Note that this conversion is not supported if `rslora` is used in combination wi
 booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
 year={2024},
 }
-```
+```
7 changes: 5 additions & 2 deletions examples/corda_finetuning/preprocess.py
@@ -38,8 +38,11 @@ def main(args):
     # Setting random seed of numpy and torch
     np.random.seed(args.seed)
     torch.manual_seed(args.seed)
-    torch.cuda.manual_seed_all(args.seed)
-    torch.backends.cudnn.deterministic = True
+    if torch.cuda.is_available():
+        torch.cuda.manual_seed_all(args.seed)
+    elif torch.xpu.is_available():
+        torch.xpu.manual_seed_all(args.seed)
+    torch.use_deterministic_algorithms(True)
 
     # Load model
     model_id = args.model_id
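
The seeding change generalizes the old CUDA-only branch to whichever accelerator is present. A self-contained sketch of the same idea (`set_seed` is a hypothetical name, not this PR's API; the `hasattr` guard is an extra safety net for older torch builds):

```python
import numpy as np
import torch

def set_seed(seed: int) -> None:
    # Seed NumPy and torch on the CPU, then on the active accelerator.
    np.random.seed(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)
    elif hasattr(torch, "xpu") and torch.xpu.is_available():
        torch.xpu.manual_seed_all(seed)
    # Error out on nondeterministic ops instead of silently diverging.
    torch.use_deterministic_algorithms(True)
```

Calling it twice with the same seed reproduces the same random draws, which is what makes the preprocessing run repeatable.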