Closed
Labels
TF 2.7 (Issues related to TF 2.7.0) · TFLiteConverter (For issues related to TFLite converter) · stat:awaiting tensorflower (Status - Awaiting response from tensorflower) · type:bug (Bug)
Description
When converting a model that uses int8 quantization-aware training, conversion of transposed convolutions fails.
The converter is unable to correctly constant-fold the fake-quantized weights and keeps an unnecessary tfl.transpose
operation in the graph, which leads to problems when executing the TFLite model. Note that this issue is independent of TensorFlow Model Optimization and can be reproduced with plain TensorFlow as well (see the linked notebook).
1. System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS / Ubuntu
- TensorFlow installation (pip package or built from source): pip package
- TensorFlow library (version, if pip package or github SHA, if built from source): 2.5, 2.6, 2.7, 2.8rc0, and 2.9.0-dev20220114
2. Code
A minimal reproduction of the issue is available in this notebook. Re-run the notebook to regenerate the Netron
visualisations that show the conversion problem.
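The failing path can be exercised without the notebook along these lines. This is a minimal sketch, not the author's reproduction: the tensor shapes, quantization range, and strides are illustrative assumptions. It mimics quantization-aware training by passing a constant transposed-convolution kernel through a fake-quant op before conversion.

```python
import numpy as np
import tensorflow as tf


# Hypothetical minimal repro: a transposed convolution whose kernel goes
# through a fake-quant op, as it would after int8 quantization-aware training.
@tf.function(input_signature=[tf.TensorSpec([1, 8, 8, 4], tf.float32)])
def model(x):
    # Constant kernel; shape is [height, width, out_channels, in_channels].
    w = tf.constant(np.ones((3, 3, 2, 4), dtype=np.float32))
    # Fake-quantize the kernel to the int8 range. The converter should
    # constant-fold this, but per the report it fails for transposed
    # convolutions and leaves a tfl.transpose op in the graph.
    wq = tf.quantization.fake_quant_with_min_max_args(
        w, min=-1.0, max=1.0, num_bits=8)
    return tf.nn.conv2d_transpose(
        x, wq, output_shape=[1, 16, 16, 2], strides=[1, 2, 2, 1])


# Convert the concrete function to TFLite; inspecting the result in Netron
# (or via the flatbuffer) shows the leftover tfl.transpose operation.
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [model.get_concrete_function()])
tflite_model = converter.convert()
print(len(tflite_model), "bytes")
```

Whether the stray tfl.transpose appears can then be checked by opening the resulting flatbuffer in Netron, as the notebook does.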