
Memory error when sampling from collector #184

@Tortes

Description

  • I have marked all applicable categories:
    • exception-raising bug
    • RL algorithm bug
    • documentation request (i.e. "X is missing from the documentation.")
    • new feature request
  • I have visited the source website
  • I have searched through the issue tracker for duplicates
  • I have mentioned version numbers, operating system and environment, where applicable:
    import tianshou, torch, sys
    print(tianshou.__version__, torch.__version__, sys.version, sys.platform)
    0.2.5 1.6.0 3.7.7
    [GCC 7.3.0] linux

The collector raised a memory error while I was training in a self-defined environment with the PPO algorithm. The full error message is:
MemoryError: Unable to allocate 8.05 GiB for an array with shape (20000, 54006) and data type float64
The self-defined environment uses a List state observation and a List discrete action. Is it necessary to change the observation type to dict to fit tianshou, and to use a smaller buffer size?
I tried changing the sample size for on-policy training, but that raises a dimension-mismatch problem (of course). Any answer to this stupid problem is welcome.
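The 8.05 GiB figure follows directly from the error message: buffer size × flattened observation dimension × 8 bytes per float64. A minimal sketch of the arithmetic, with two generic mitigations (a smaller dtype and a smaller buffer; `buffer_bytes` is a hypothetical helper, not part of tianshou's API):

```python
import numpy as np

def buffer_bytes(buffer_size, obs_dim, dtype=np.float64):
    """Bytes needed for one flat (buffer_size, obs_dim) observation array."""
    return buffer_size * obs_dim * np.dtype(dtype).itemsize

# The shape and dtype from the error message: (20000, 54006), float64.
full = buffer_bytes(20000, 54006)
print(f"{full / 2**30:.2f} GiB")  # 8.05 GiB, matching the error

# Mitigation 1: store observations as float32, halving the footprint.
half = buffer_bytes(20000, 54006, np.float32)
print(f"{half / 2**30:.2f} GiB")  # 4.02 GiB

# Mitigation 2: shrink the buffer. PPO is on-policy, so the buffer only
# needs to hold the transitions collected between policy updates, not
# a large replay history. (2048 here is an illustrative value.)
small = buffer_bytes(2048, 54006, np.float32)
print(f"{small / 2**30:.3f} GiB")
```

Either change avoids the failed allocation without altering the observation type; switching to a dict observation by itself does not reduce the stored data.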

Labels: question (Further information is requested)
