Support 8-bit quantization and half-precision floating point representation #181

@robinvanemden

Description

What to add:

For instance, WinMLTools enables the optimization of ONNX models by either:

  • converting the floating point 32 representation into floating point 16 (IEEE 754 half), which cuts the model size roughly in half, or
  • compressing models represented in floating point 32 into 8-bit integer representations, which reduces the disk footprint by up to 75%, depending on the model.

Currently, neither is supported by ONNC.
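For reference, a minimal sketch of how both conversions look with WinMLTools from Python, assuming the `load_model`, `save_model`, `convert_float_to_float16`, and `quantize` helpers from its documented `winmltools.utils` module; the file names are placeholders and keyword arguments may vary between WinMLTools releases, so treat this as illustrative rather than authoritative:

```python
# Sketch of the two WinMLTools optimizations described above.
# File names are placeholders; exact signatures may differ across
# WinMLTools releases, so check the docs for your installed version.
from winmltools.utils import (convert_float_to_float16, load_model,
                              quantize, save_model)

model = load_model('model.onnx')

# FP32 -> FP16 (IEEE 754 half): roughly halves the model size.
fp16_model = convert_float_to_float16(model)
save_model(fp16_model, 'model_fp16.onnx')

# FP32 weights -> 8-bit integers: up to ~75% smaller on disk.
quantized_model = quantize(model, per_channel=True, nbits=8)
save_model(quantized_model, 'model_quant.onnx')
```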

Why it is necessary:

As ONNC is well suited to generating natively executable models for MCUs with tight memory constraints, it would be very useful if ONNC supported one or both of these model optimization methods.

How to achieve it:

Support a floating point 16 representation for inputs and operators, and/or support 8-bit integer representations for operators; a sketch of the underlying quantization arithmetic is given below.
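To make the second option concrete, here is a minimal, self-contained sketch (an illustration, not ONNC code) of the per-tensor affine quantization an 8-bit backend would need: FP32 weights are mapped onto uint8 with a scale and zero point, and dequantized approximately at inference time. The function names are hypothetical.

```python
# Minimal sketch of linear (affine) 8-bit quantization: q = round(w/s) + zp.
import numpy as np

def quantize_uint8(w: np.ndarray):
    """Affine-quantize an FP32 tensor to uint8 with per-tensor scale/zero point."""
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0 or 1.0  # avoid div-by-zero for constant tensors
    zero_point = int(round(-w_min / scale))
    q = np.clip(np.round(w / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Recover approximate FP32 values: w ~= scale * (q - zero_point)."""
    return scale * (q.astype(np.float32) - zero_point)

w = np.random.randn(4, 4).astype(np.float32)
w_fp16 = w.astype(np.float16)  # the FP16 path: 2 bytes/weight, ~50% smaller
q, s, zp = quantize_uint8(w)   # the int8 path: 1 byte/weight vs. 4 for FP32, ~75% smaller
print("max abs quantization error:", np.abs(w - dequantize(q, s, zp)).max())
```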
