
Constant folding fails when converting int8 transposed convolutions #53766

@lgeiger

Description

When converting a model that used int8 quantization-aware training, conversion of transposed convolutions fails.

The converter fails to constant-fold the fake-quantized weights and leaves an unnecessary tfl.transpose operation in the graph, which causes problems when executing the TFLite model. Note that this issue is independent of TensorFlow Model Optimization and can be reproduced with plain TensorFlow as well (see the linked notebook).

1. System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS / Ubuntu
  • TensorFlow installation (pip package or built from source): pip package
  • TensorFlow library (version, if pip package or github SHA, if built from source): 2.5, 2.6, 2.7, 2.8rc0 and 2.9.0-dev20220114

2. Code

A minimal reproduction of the issue is available in this notebook. Re-run the notebook to generate Netron visualisations of the conversion problem.
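The failure mode described above can be sketched as follows. This is a hedged, illustrative reduction, not the original notebook: the layer shapes, quantization ranges, and use of tf.quantization.fake_quant_with_min_max_args (to mimic QAT without tfmot) are assumptions.

```python
import tensorflow as tf

def build_model():
    # Small model with a transposed convolution, the op the converter
    # fails to constant-fold correctly after fake quantization.
    inp = tf.keras.Input(shape=(8, 8, 3))
    x = tf.keras.layers.Conv2DTranspose(4, 3, strides=2, padding="same")(inp)
    # Fake-quantize to int8 range; min/max values here are illustrative.
    x = tf.quantization.fake_quant_with_min_max_args(x, min=-6.0, max=6.0)
    return tf.keras.Model(inp, x)

model = build_model()
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Per the report, the resulting flatbuffer retains a spurious
# tfl.transpose feeding the transposed convolution's weights.
tflite_model = converter.convert()
```

Inspecting the converted model in Netron (or via the TFLite interpreter's op listing) shows the leftover transpose instead of fully folded weights.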

Metadata

Labels

  • TF 2.7 (Issues related to TF 2.7.0)
  • TFLiteConverter (For issues related to TFLite converter)
  • stat:awaiting tensorflower (Status - Awaiting response from tensorflower)
  • type:bug (Bug)
