
Args of Net and MLP mismatch #770

@oleotiger

Description


New feature request

import tianshou, gym, torch, numpy, sys
print(tianshou.__version__, gym.__version__, torch.__version__, numpy.__version__, sys.version, sys.platform)
0.4.10 0.21.0 1.12.1.post200 1.23.3 3.10.6 | packaged by conda-forge | (main, Aug 22 2022, 20:35:26) [GCC 10.4.0] linux
  1. The norm_layer and activation arguments of tianshou.utils.net.common.Net are typed Optional[ModuleType], while the same arguments of MLP are typed Optional[Union[ModuleType, Sequence[ModuleType]]]. In Net.__init__, the call to MLP() passes norm_layer and activation through without handling the Sequence[ModuleType] form. I think it's better to keep the argument types consistent between Net and MLP (see the first sketch after this list).

  2. Besides norm_layer and activation, I suggest adding companion arguments that pass positional args or kwargs to the constructors of norm_layer and activation (see the second sketch below). For example:

def miniblock(
    input_size: int,
    output_size: int = 0,
    norm_layer: Optional[ModuleType] = None,
    norm_args: Optional[Union[Tuple[Any, ...], Dict[str, Any]]] = None,
    activation: Optional[ModuleType] = None,
    act_args: Optional[Union[Tuple[Any, ...], Dict[str, Any]]] = None,
    linear_layer: Type[nn.Linear] = nn.Linear,
) -> List[nn.Module]:

Correspondingly, add norm_args and act_args (a tuple of positional args, a dict of kwargs, or a per-layer sequence of them) to Net and MLP as well.
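For point 1, a minimal sketch of what the widened Net signature could look like (this is an assumption about the fix, not current tianshou code; the LayerSpec alias is hypothetical and Net is reduced to the relevant arguments):

from typing import Optional, Sequence, Type, Union

import torch.nn as nn

from tianshou.utils.net.common import MLP

ModuleType = Type[nn.Module]
# Hypothetical alias mirroring the annotation MLP already uses:
LayerSpec = Optional[Union[ModuleType, Sequence[ModuleType]]]


class Net(nn.Module):
    def __init__(
        self,
        input_dim: int,
        output_dim: int = 0,
        hidden_sizes: Sequence[int] = (),
        norm_layer: LayerSpec = None,
        activation: LayerSpec = nn.ReLU,
    ) -> None:
        super().__init__()
        # Forward both the single-module and the per-layer-sequence forms
        # unchanged; MLP already accepts Sequence[ModuleType].
        self.model = MLP(input_dim, output_dim, hidden_sizes, norm_layer, activation)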
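For point 2, a sketch of how miniblock could apply those args when instantiating the layers (ArgsType is an assumed alias, and the isinstance dispatch between a tuple of positional args and a dict of kwargs is just one possible design):

from typing import Any, Dict, List, Optional, Tuple, Type, Union

import torch.nn as nn

ModuleType = Type[nn.Module]
ArgsType = Union[Tuple[Any, ...], Dict[str, Any]]  # positional args or kwargs


def miniblock(
    input_size: int,
    output_size: int = 0,
    norm_layer: Optional[ModuleType] = None,
    norm_args: Optional[ArgsType] = None,
    activation: Optional[ModuleType] = None,
    act_args: Optional[ArgsType] = None,
    linear_layer: Type[nn.Linear] = nn.Linear,
) -> List[nn.Module]:
    """Linear layer followed by optional normalization and activation."""
    layers: List[nn.Module] = [linear_layer(input_size, output_size)]
    if norm_layer is not None:
        if isinstance(norm_args, tuple):
            layers.append(norm_layer(output_size, *norm_args))
        elif isinstance(norm_args, dict):
            layers.append(norm_layer(output_size, **norm_args))
        else:
            layers.append(norm_layer(output_size))
    if activation is not None:
        if isinstance(act_args, tuple):
            layers.append(activation(*act_args))
        elif isinstance(act_args, dict):
            layers.append(activation(**act_args))
        else:
            layers.append(activation())
    return layers


# Usage: a LeakyReLU with a non-default slope, without writing a wrapper module:
block = miniblock(128, 256, nn.LayerNorm, None, nn.LeakyReLU, {"negative_slope": 0.1})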
