Closed
Labels: question (Further information is requested)
Description
- I have marked all applicable categories:
- exception-raising bug
- RL algorithm bug
- documentation request (i.e. "X is missing from the documentation.")
- new feature request
Question:
When running the PPO example, regardless of the value of args.training_num (16, 64, 128, 256, ...),
only about 2 GB of GPU memory is used (most GPU memory is free),
and GPU utilization is only about 1-2% (nvidia-smi reports Volatile GPU-Util = 2%).
How can I fix this? Thanks!
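One common cause of near-zero GPU utilization is the network silently staying on the CPU while only the environments run. A minimal PyTorch sketch (the `Linear` layer is a hypothetical stand-in for the PPO actor/critic networks) to verify CUDA visibility and device placement:

```python
import torch

# Confirm PyTorch can see the GPU at all; on the A100 this should print True.
print(torch.cuda.is_available())

# Stand-in for the policy network (hypothetical; substitute the real model).
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
net = torch.nn.Linear(128, 128).to(device)

# Every parameter should report a CUDA device; if this prints "cpu",
# all forward/backward passes run on the CPU and nvidia-smi stays near 0%.
print(next(net.parameters()).device)
```

Note also that with PPO the environment stepping is CPU-bound, so even with the model correctly on the GPU, small networks like [128, 128] may keep reported GPU-Util low.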
Hardware info:
GPU: Nvidia A100 (with driver 470, CUDA 11.4)
Software versions:
tianshou: 0.4.4
torch: 1.9.1+cu11
numpy: 1.20.3
sys: ubuntu20.04
PPO args (most other arg values are the same as in test_PPO.py in examples):
env.max_step: 50000
buffer_size: 4096 * 16
hidden_size: [128, 128]
step_per_epoch: 50000
batch_size: 2048
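For reproducibility, the argument values quoted above can be collected into an argparse sketch (hypothetical flag names, mirroring the argparse style of the tianshou examples):

```python
import argparse

def get_args(argv=None):
    # Hypothetical parser holding only the values quoted above;
    # the real example script defines many more arguments.
    parser = argparse.ArgumentParser()
    parser.add_argument("--training-num", type=int, default=64)
    parser.add_argument("--buffer-size", type=int, default=4096 * 16)
    parser.add_argument("--hidden-sizes", type=int, nargs="*", default=[128, 128])
    parser.add_argument("--step-per-epoch", type=int, default=50000)
    parser.add_argument("--batch-size", type=int, default=2048)
    return parser.parse_args(argv)

args = get_args([])  # empty argv so the defaults above are used
print(args.batch_size)  # 2048
```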