
Large rollout.n encounters OOM in training #409

@lezhang7

Hi, I was training a 7B Qwen2.5-VL model on 4 nodes and got an OOM error when I tried to enlarge the rollout number. I wonder how to correctly set the training parameters, especially the batch sizes, in a multi-node setup. Here's what I'm using now:

  python3 -m verl.trainer.main \
    config=examples/config.yaml \
    worker.actor.model.model_path=${MODEL_PATH} \
    trainer.logger=['console','wandb'] \
    data.max_prompt_length=2048 \
    data.max_response_length=1024 \
    data.rollout_batch_size=512 \
    worker.actor.global_batch_size=128 \
    worker.actor.micro_batch_size_per_device_for_update=4 \
    worker.actor.micro_batch_size_per_device_for_experience=16 \
    worker.rollout.n=128 \
    data.max_pixels=1258291 \
    trainer.save_freq=5 \
    trainer.n_gpus_per_node=8 \
    trainer.nnodes=4
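
For context, this setting generates data.rollout_batch_size × worker.rollout.n = 512 × 128 = 65,536 sequences per training step, i.e. 2,048 sequences per GPU across 4 × 8 GPUs, which seems a plausible source of the OOM. Below is a minimal sketch of a more conservative configuration, reusing only the keys already present in the command above; the concrete values are assumptions and would need tuning to the actual cluster:

  # Hypothetical values: 128 prompts x 16 samples = 2,048 sequences per step
  # (vs. 512 x 128 = 65,536), and halved micro batch sizes trade peak memory
  # for more gradient-accumulation steps.
  python3 -m verl.trainer.main \
    config=examples/config.yaml \
    worker.actor.model.model_path=${MODEL_PATH} \
    data.max_prompt_length=2048 \
    data.max_response_length=1024 \
    data.rollout_batch_size=128 \
    worker.actor.global_batch_size=128 \
    worker.actor.micro_batch_size_per_device_for_update=2 \
    worker.actor.micro_batch_size_per_device_for_experience=8 \
    worker.rollout.n=16 \
    data.max_pixels=1258291 \
    trainer.n_gpus_per_node=8 \
    trainer.nnodes=4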
