Hi, I was training a 7B Qwen2.5-VL model on 4 nodes. When I tried to enlarge the rollout number, I got an OOM error. How should I set the training parameters, especially the batch sizes, in a multi-node setup? Here's what I'm using now:
python3 -m verl.trainer.main \
config=examples/config.yaml \
worker.actor.model.model_path=${MODEL_PATH} \
trainer.logger=['console','wandb'] \
data.max_prompt_length=2048 \
data.max_response_length=1024 \
data.rollout_batch_size=512 \
worker.actor.global_batch_size=128 \
worker.actor.micro_batch_size_per_device_for_update=4 \
worker.actor.micro_batch_size_per_device_for_experience=16 \
worker.rollout.n=128 \
data.max_pixels=1258291 \
trainer.save_freq=5 \
trainer.n_gpus_per_node=8 \
trainer.nnodes=4
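For reference, here is the rough per-step sequence count these settings imply, assuming data.rollout_batch_size counts prompts per step and worker.rollout.n is the number of sampled responses per prompt (that's my reading of the config, please correct me if it's wrong):

# Back-of-envelope for the settings above (assumption: rollout_batch_size = prompts
# per step, worker.rollout.n = sampled responses per prompt).
rollout_batch_size = 512            # data.rollout_batch_size
n_responses_per_prompt = 128        # worker.rollout.n
n_gpus = 4 * 8                      # trainer.nnodes * trainer.n_gpus_per_node
max_seq_len = 2048 + 1024           # max_prompt_length + max_response_length

total_sequences = rollout_batch_size * n_responses_per_prompt    # 65,536 sequences per step
sequences_per_gpu = total_sequences // n_gpus                    # 2,048 sequences per GPU

micro_bsz_experience = 16           # micro_batch_size_per_device_for_experience
micro_batches_per_gpu = sequences_per_gpu // micro_bsz_experience  # 128 micro-batches per GPU

print(total_sequences, sequences_per_gpu, micro_batches_per_gpu)

If that reading is right, every step each GPU has to push 2,048 sequences of up to 3,072 tokens through the experience pass, which is where I suspect the OOM comes from, but I'm not sure which knob I should turn first.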