
Releases: tensorflow/tensorflow

TensorFlow 2.12.0-rc1

07 Mar 19:10
0d8efc9
Pre-release

Release 2.12.0

Breaking Changes

  • Build, Compilation and Packaging

    • Removal of redundant packages: the tensorflow-gpu and tf-nightly-gpu packages have been effectively removed and replaced with packages that direct users to switch to tensorflow or tf-nightly respectively. The naming difference was the only difference between the two sets of packages ever since TensorFlow 2.1, so there is no loss of functionality or GPU support. See https://pypi.org/project/tensorflow-gpu for more details.
  • tf.function:

    • tf.function now uses the Python inspect library directly for parsing the signature of the Python function it decorates.
    • This can break cases where a malformed signature was previously ignored, such as:
      • Using functools.wraps on a function with a different signature
      • Using functools.partial with an invalid tf.function input
    • tf.function now enforces input parameter names to be valid Python identifiers. Incompatible names are automatically sanitized similarly to existing SavedModel signature behavior.
    • Parameterless tf.functions are now assumed to have an empty input_signature rather than an undefined one, even when input_signature is unspecified.
    • tf.types.experimental.TraceType now requires an additional placeholder_value method to be defined.
    • tf.function now traces with placeholder values generated by TraceType instead of the value itself.
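
A minimal sketch of the parameterless-function behavior described above (the function body is illustrative): a tf.function with no parameters is treated as having an empty input_signature and yields a single concrete function.

```python
import tensorflow as tf

@tf.function
def get_pi():
    # Parameterless: in TF 2.12 this is treated as an empty input_signature.
    return tf.constant(3.14159)

concrete = get_pi.get_concrete_function()   # no arguments needed
print(concrete.structured_input_signature)  # ((), {}) -- empty signature
```
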
  • Experimental APIs tf.config.experimental.enable_mlir_graph_optimization and tf.config.experimental.disable_mlir_graph_optimization were removed.

  • tf.keras:

    • Moved all saving-related utilities to a new namespace, keras.saving, i.e. keras.saving.load_model, keras.saving.save_model, keras.saving.custom_object_scope, keras.saving.get_custom_objects, keras.saving.register_keras_serializable, keras.saving.get_registered_name and keras.saving.get_registered_object. The previous API locations (in keras.utils and keras.models) will stay available indefinitely, but we recommend that you update your code to point to the new API locations.
    • Improvements and fixes in Keras loss masking:
      • Whether you represent ragged data as a tf.RaggedTensor or use Keras masking, the returned loss values should be identical. In previous versions, Keras may have silently ignored the mask.
      • If you use masked losses with Keras, the loss values may differ in TensorFlow 2.12 compared to previous versions.
      • In cases where the mask was previously ignored, you will now get an error if you pass a mask with an incompatible shape.
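
A minimal sketch of the masking behavior change (values and shapes are illustrative): a zero-weight sample contributes nothing to the loss, and an incompatibly shaped mask now raises an error instead of being silently ignored.

```python
import tensorflow as tf

y_true = tf.constant([[1.0], [2.0], [0.0]])  # batch of 3; last sample is padding
y_pred = tf.constant([[1.1], [1.9], [0.5]])
mask = tf.constant([1.0, 1.0, 0.0])          # per-sample weights; zero = masked

loss_fn = tf.keras.losses.MeanSquaredError()
# The zero-weight sample is excluded from the loss; a mask with an
# incompatible shape now raises an error rather than being dropped.
loss = loss_fn(y_true, y_pred, sample_weight=mask)
```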

Major Features and Improvements

  • tf.lite:

    • Add 16-bit float type support for built-in op fill.
    • Transpose now supports 6D tensors.
    • Float LSTM now supports diagonal recurrent tensors: https://arxiv.org/abs/1903.08023
  • tf.keras:

    • The new Keras model saving format (.keras) is available. You can start using it via model.save(f"{fname}.keras", save_format="keras_v3"). In the future it will become the default for all files with the .keras extension. This file format targets the Python runtime only and makes it possible to reload Python objects identical to the saved originals. The format supports non-numerical state such as vocabulary files and lookup tables, and it is easy to customize in the case of custom layers with exotic elements of state (e.g. a FIFOQueue). The format does not rely on bytecode or pickling, and is safe by default. Note that as a result, Python lambdas are disallowed at loading time. If you want to use lambdas, you can pass safe_mode=False to the loading method (only do this if you trust the source of the model). See the sketch after this list.
    • Added a model.export(filepath) API to create a lightweight SavedModel artifact that can be used for inference (e.g. with TF-Serving).
    • Added keras.export.ExportArchive class for low-level customization of the process of exporting SavedModel artifacts for inference. Both ways of exporting models are based on tf.function tracing and produce a TF program composed of TF ops. They are meant primarily for environments where the TF runtime is available, but not the Python interpreter, as is typical for production with TF Serving.
    • Added utility tf.keras.utils.FeatureSpace, a one-stop shop for structured data preprocessing and encoding.
    • Added tf.SparseTensor input support to tf.keras.layers.Embedding layer. The layer now accepts a new boolean argument sparse. If sparse is set to True, the layer returns a SparseTensor instead of a dense Tensor. Defaults to False.
    • Added jit_compile as a settable property to tf.keras.Model.
    • Added synchronized optional parameter to layers.BatchNormalization.
    • Added a deprecation warning to layers.experimental.SyncBatchNormalization, recommending layers.BatchNormalization with synchronized=True instead.
    • Updated tf.keras.layers.BatchNormalization to support masking of the inputs (mask argument) when computing the mean and variance.
    • Add tf.keras.layers.Identity, a placeholder pass-through layer.
    • Add show_trainable option to tf.keras.utils.model_to_dot to display layer trainable status in model plots.
    • Add ability to save a tf.keras.utils.FeatureSpace object, via feature_space.save("myfeaturespace.keras"), and reload it via feature_space = tf.keras.models.load_model("myfeaturespace.keras").
    • Added utility tf.keras.utils.to_ordinal to convert a class vector to an ordinal regression/classification matrix.
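
A minimal sketch of the new saving and export paths described above (model and file names are illustrative):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# New Keras v3 format; in the future this becomes the default for .keras files.
model.save("my_model.keras", save_format="keras_v3")
reloaded = tf.keras.models.load_model("my_model.keras")

# Lightweight inference-only SavedModel artifact (e.g. for TF Serving).
model.export("my_model_artifact")
```
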
  • tf.experimental.dtensor:

    • Coordination service now works with dtensor.initialize_accelerator_system and is enabled by default.
    • Add tf.experimental.dtensor.is_dtensor to check if a tensor is a DTensor instance.
  • tf.data:

    • Added support for an alternative checkpointing protocol, which makes it possible to checkpoint the state of the input pipeline without having to store the contents of internal buffers. The new functionality can be enabled through the experimental_symbolic_checkpoint option of tf.data.Options().
    • Added a new rerandomize_each_iteration argument for the tf.data.Dataset.random() operation, which controls whether the sequence of generated random numbers should be re-randomized every epoch (the default is to reuse the same sequence). If seed is set and rerandomize_each_iteration=True, the random() operation will produce a different (deterministic) sequence of numbers every epoch.
    • Added a new rerandomize_each_iteration argument for the tf.data.Dataset.sample_from_datasets() operation, which controls whether the sequence of generated random numbers used for sampling should be re-randomized every epoch or not. If seed is set and rerandomize_each_iteration=True, the sample_from_datasets() operation will use a different (deterministic) sequence of numbers every epoch.
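
Two short sketches of the tf.data additions above: opting in to symbolic checkpointing, and re-randomizing a seeded Dataset.random() on every iteration.

```python
import tensorflow as tf

# Symbolic checkpointing: checkpoint pipeline state without buffer contents.
options = tf.data.Options()
options.experimental_symbolic_checkpoint = True

# With a fixed seed, rerandomize_each_iteration=True yields a different
# (but still deterministic) sequence each time the dataset is iterated.
ds = tf.data.Dataset.random(seed=42, rerandomize_each_iteration=True)
ds = ds.take(3).with_options(options)
print(list(ds.as_numpy_iterator()))  # first iteration
print(list(ds.as_numpy_iterator()))  # second iteration: different values
```
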
  • tf.test:

    • Added tf.test.experimental.sync_devices, which is useful for accurately measuring performance in benchmarks.
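
A minimal sketch of using tf.test.experimental.sync_devices when timing device work (the matmul is illustrative):

```python
import time
import tensorflow as tf

x = tf.random.uniform([4096, 4096])
start = time.time()
y = tf.linalg.matmul(x, x)           # may return before the device finishes
tf.test.experimental.sync_devices()  # block until all pending work completes
print("elapsed:", time.time() - start)
```
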
  • tf.experimental.dtensor:

    • Added experimental support for ReduceScatter fusion on GPU (NCCL).

Bug Fixes and Other Changes

  • tf.SavedModel:
    • Introduced new class tf.saved_model.experimental.Fingerprint that contains the fingerprint of the SavedModel. See the SavedModel Fingerprinting RFC for details.
    • Introduced API tf.saved_model.experimental.read_fingerprint(export_dir) for reading the fingerprint of a SavedModel.
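
A minimal sketch of reading a SavedModel fingerprint (the path and the printed field are illustrative; see the SavedModel Fingerprinting RFC for the full set of fields):

```python
import tensorflow as tf

# Assumes a SavedModel already exists at this (illustrative) path.
fingerprint = tf.saved_model.experimental.read_fingerprint("/tmp/my_saved_model")
print(fingerprint.saved_model_checksum)  # one of the fingerprint fields
```
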
  • tf.random
    • Added non-experimental aliases for tf.random.split and tf.random.fold_in; the experimental endpoints are still available, so no code changes are necessary.
  • tf.experimental.ExtensionType
    • Added function experimental.extension_type.as_dict(), which converts an instance of tf.experimental.ExtensionType to a dict representation.
  • stream_executor
    • The top-level stream_executor directory has been deleted; users should use the equivalent headers and targets under compiler/xla/stream_executor.
  • tf.nn
    • Added tf.nn.experimental.general_dropout, which is similar to tf.random.experimental.stateless_dropout but accepts a custom sampler function.
  • tf.types.experimental.GenericFunction
    • The experimental_get_compiler_ir method supports tf.TensorSpec compilation arguments.
  • tf.config.experimental.mlir_bridge_rollout
    • Removed the enums MLIR_BRIDGE_ROLLOUT_SAFE_MODE_ENABLED and MLIR_BRIDGE_ROLLOUT_SAFE_MODE_FALLBACK_ENABLED, which are no longer used by the tf2xla bridge.

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

103yiran, 8bitmp3, Aakar, Aakar Dwivedi, Abinash Satapathy, Aditya Kane, ag.ramesh, Alexander Grund, Andrei Pikas, andreii, Andrew Goodbody, angerson, Anthony_256, Ashay Rane, Ashiq Imran, Awsaf, Balint Cristian, Banikumar Maiti (Intel Aipg), Ben Barsdell, bhack, cfRod, Chao Chen, chenchongsong, Chris Mc, Daniil Kutz, David Rubinstein, dianjiaogit, dixr, Dongfeng Yu, dongfengy, drah, Eric Kunze, Feiyue Chen, Frederic Bastien, Gauri1 Deshpande, guozhong.zhuang, hDn248, HYChou, ingkarat, James Hilliard, Jason Furmanek, Jaya, Jens Glaser, Jerry Ge, Jiao Dian'S Power Plant, Jie Fu, Jinzhe Zeng, Jukyy, Kaixi Hou, Kanvi Khanna, Karel Ha, karllessard, Koan-Sin Tan, Konstantin Beluchenko, Kulin Seth, Kun Lu, Kyle Gerard Felker, Leopold Cambier, Lianmin Zheng, linlifan, liuyuanqiang, Lukas Geiger, Luke Hutton, Mahmoud Abuzaina, Manas Mohanty, Mateo Fidabel, Maxiwell S. Garcia, Mayank Raunak, mdfaijul, meatybobby, Meenakshi Venkataraman, Michael Holman, Nathan John Sircombe, Nathan Luehr, n...


TensorFlow 2.12.0-rc0

15 Feb 00:41
80170ee
Pre-release

Release 2.12.0

Breaking Changes

  • Build, Compilation and Packaging

    • Removal of redundant packages: the tensorflow-gpu and tf-nightly-gpu packages have been effectively removed and replaced with packages that direct users to switch to tensorflow or tf-nightly respectively. The naming difference was the only difference between the two sets of packages ever since TensorFlow 2.1, so there is no loss of functionality or GPU support. See https://pypi.org/project/tensorflow-gpu for more details.
  • tf.function:

    • tf.function now uses the Python inspect library directly for parsing the signature of the Python function it decorates.
    • This can break cases where a malformed signature was previously ignored, such as:
      • Using functools.wraps on a function with a different signature
      • Using functools.partial with an invalid tf.function input
    • tf.function now enforces input parameter names to be valid Python identifiers. Incompatible names are automatically sanitized similarly to existing SavedModel signature behavior.
    • Parameterless tf.functions are now assumed to have an empty input_signature rather than an undefined one, even when input_signature is unspecified.
    • tf.types.experimental.TraceType now requires an additional placeholder_value method to be defined.
    • tf.function now traces with placeholder values generated by TraceType instead of the value itself.
  • Experimental APIs tf.config.experimental.enable_mlir_graph_optimization and tf.config.experimental.disable_mlir_graph_optimization were removed.

  • tf.keras:

    • Moved all saving-related utilities to a new namespace, keras.saving, i.e. keras.saving.load_model, keras.saving.save_model, keras.saving.custom_object_scope, keras.saving.get_custom_objects, keras.saving.register_keras_serializable, keras.saving.get_registered_name and keras.saving.get_registered_object. The previous API locations (in keras.utils and keras.models) will stay available indefinitely, but we recommend that you update your code to point to the new API locations.
    • Improvements and fixes in Keras loss masking:
      • Whether you represent ragged data as a tf.RaggedTensor or use Keras masking, the returned loss values should be identical. In previous versions, Keras may have silently ignored the mask.
      • If you use masked losses with Keras, the loss values may differ in TensorFlow 2.12 compared to previous versions.
      • In cases where the mask was previously ignored, you will now get an error if you pass a mask with an incompatible shape.
  • tf.SavedModel:

    • Introduced new class tf.saved_model.experimental.Fingerprint that contains the fingerprint of the SavedModel. See the SavedModel Fingerprinting RFC for details.
    • Introduced API tf.saved_model.experimental.read_fingerprint(export_dir) for reading the fingerprint of a SavedModel.

Major Features and Improvements

  • tf.lite:

    • Add 16-bit float type support for built-in op fill.
    • Transpose now supports 6D tensors.
    • Float LSTM now supports diagonal recurrent tensors: https://arxiv.org/abs/1903.08023
  • tf.keras:

    • The new Keras model saving format (.keras) is available. You can start using it via model.save(f"{fname}.keras", save_format="keras_v3"). In the future it will become the default for all files with the .keras extension. This file format targets the Python runtime only and makes it possible to reload Python objects identical to the saved originals. The format supports non-numerical state such as vocabulary files and lookup tables, and it is easy to customize in the case of custom layers with exotic elements of state (e.g. a FIFOQueue). The format does not rely on bytecode or pickling, and is safe by default. Note that as a result, Python lambdas are disallowed at loading time. If you want to use lambdas, you can pass safe_mode=False to the loading method (only do this if you trust the source of the model).
    • Added a model.export(filepath) API to create a lightweight SavedModel artifact that can be used for inference (e.g. with TF-Serving).
    • Added keras.export.ExportArchive class for low-level customization of the process of exporting SavedModel artifacts for inference. Both ways of exporting models are based on tf.function tracing and produce a TF program composed of TF ops. They are meant primarily for environments where the TF runtime is available, but not the Python interpreter, as is typical for production with TF Serving.
    • Added utility tf.keras.utils.FeatureSpace, a one-stop shop for structured data preprocessing and encoding.
    • Added tf.SparseTensor input support to tf.keras.layers.Embedding layer. The layer now accepts a new boolean argument sparse. If sparse is set to True, the layer returns a SparseTensor instead of a dense Tensor. Defaults to False.
    • Added jit_compile as a settable property to tf.keras.Model.
    • Added synchronized optional parameter to layers.BatchNormalization.
    • Added a deprecation warning to layers.experimental.SyncBatchNormalization, recommending layers.BatchNormalization with synchronized=True instead.
    • Updated tf.keras.layers.BatchNormalization to support masking of the inputs (mask argument) when computing the mean and variance.
    • Add tf.keras.layers.Identity, a placeholder pass-through layer.
    • Add show_trainable option to tf.keras.utils.model_to_dot to display layer trainable status in model plots.
    • Add ability to save a tf.keras.utils.FeatureSpace object, via feature_space.save("myfeaturespace.keras"), and reload it via feature_space = tf.keras.models.load_model("myfeaturespace.keras").
    • Added utility tf.keras.utils.to_ordinal to convert a class vector to an ordinal regression/classification matrix.
  • tf.experimental.dtensor:

    • Coordination service now works with dtensor.initialize_accelerator_system and is enabled by default.
    • Add tf.experimental.dtensor.is_dtensor to check if a tensor is a DTensor instance.
  • tf.data:

    • Added support for an alternative checkpointing protocol, which makes it possible to checkpoint the state of the input pipeline without having to store the contents of internal buffers. The new functionality can be enabled through the experimental_symbolic_checkpoint option of tf.data.Options().
    • Added a new rerandomize_each_iteration argument for the tf.data.Dataset.random() operation, which controls whether the sequence of generated random numbers should be re-randomized every epoch (the default is to reuse the same sequence). If seed is set and rerandomize_each_iteration=True, the random() operation will produce a different (deterministic) sequence of numbers every epoch.
    • Added a new rerandomize_each_iteration argument for the tf.data.Dataset.sample_from_datasets() operation, which controls whether the sequence of generated random numbers used for sampling should be re-randomized every epoch or not. If seed is set and rerandomize_each_iteration=True, the sample_from_datasets() operation will use a different (deterministic) sequence of numbers every epoch.
  • tf.test:

    • Added tf.test.experimental.sync_devices, which is useful for accurately measuring performance in benchmarks.
  • tf.experimental.dtensor:

    • Added experimental support for ReduceScatter fusion on GPU (NCCL).

Bug Fixes and Other Changes

  • tf.random
    • Added non-experimental aliases for tf.random.split and tf.random.fold_in; the experimental endpoints are still available, so no code changes are necessary.
  • tf.experimental.ExtensionType
    • Added function experimental.extension_type.as_dict(), which converts an instance of tf.experimental.ExtensionType to a dict representation.
  • stream_executor
    • The top-level stream_executor directory has been deleted; users should use the equivalent headers and targets under compiler/xla/stream_executor.
  • tf.nn
    • Added tf.nn.experimental.general_dropout, which is similar to tf.random.experimental.stateless_dropout but accepts a custom sampler function.
  • tf.types.experimental.GenericFunction
    • The experimental_get_compiler_ir method supports tf.TensorSpec compilation arguments.
  • tf.config.experimental.mlir_bridge_rollout
    • Removed the enums MLIR_BRIDGE_ROLLOUT_SAFE_MODE_ENABLED and MLIR_BRIDGE_ROLLOUT_SAFE_MODE_FALLBACK_ENABLED, which are no longer used by the tf2xla bridge.

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

103yiran, 8bitmp3, Aakar, Aakar Dwivedi, Abinash Satapathy, Aditya Kane, ag.ramesh, Alexander Grund, Andrei Pikas, andreii, Andrew Goodbody, angerson, Anthony_256, Ashay Rane, Ashiq Imran, Awsaf, Balint Cristian, Banikumar Maiti (Intel Aipg), Ben Barsdell, bhack, cfRod, Chao Chen, chenchongsong, Chris Mc, Daniil Kutz, David Rubinstein, dianjiaogit, dixr, Dongfeng Yu, dongfengy, drah, Eric Kunze, Feiyue Chen, Frederic Bastien, Gauri1 Deshpande, guozhong.zhuang, hDn248, HYChou, ingkarat, James Hilliard, Jason Furmanek, Jaya, Jens Glaser, Jerry Ge, Jiao Dian'S Power Plant, Jie Fu, Jinzhe Zeng, Jukyy, Kaixi Hou, Kanvi Khanna, Karel Ha, karllessard, Koan-Sin Tan, Konstantin Beluchenko, Kulin Seth, Kun Lu, Kyle Gerard Felker, Leopold Cambier, Lianmin Zheng, linlifan, liuyuanqiang, Lukas Geiger, Luke Hutton, Mahmoud Abuzaina, Manas Mohanty, Mateo Fidabel, Maxiwell S. Garcia, Mayank Raunak, mdfaijul, meatybobby, Meenakshi Venkataraman, Michael Holman, Nathan John Sircombe, Nathan Lueh...


TensorFlow 2.11.0

18 Nov 06:01
d5b57ca

Release 2.11.0

Breaking Changes

  • The tf.keras.optimizers.Optimizer base class now points to the new Keras optimizer, while the old optimizers have been moved to the tf.keras.optimizers.legacy namespace.

    If you find your workflow failing due to this change, you may be facing one of the following issues:

    • Checkpoint loading failure. The new optimizer handles optimizer state differently from the old optimizer, which simplifies the logic of checkpoint saving/loading, but at the cost of breaking checkpoint backward compatibility in some cases. If you want to keep using an old checkpoint, please change your optimizer to tf.keras.optimizers.legacy.XXX (e.g. tf.keras.optimizers.legacy.Adam).
    • TF1 compatibility. The new optimizer, tf.keras.optimizers.Optimizer, does not support TF1 any more, so please use the legacy optimizer tf.keras.optimizers.legacy.XXX. We highly recommend migrating your workflow to TF2 for stable support and new features.
    • Old optimizer API not found. The new optimizer, tf.keras.optimizers.Optimizer, has a different set of public APIs from the old optimizer. These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API documentation to find alternatives to the missing API. If you must call the deprecated API, please change your optimizer to the legacy optimizer.
    • Learning rate schedule access. When using a tf.keras.optimizers.schedules.LearningRateSchedule, the new optimizer's learning_rate property returns the current learning rate value instead of a LearningRateSchedule object as before. If you need to access the LearningRateSchedule object, please use optimizer._learning_rate.
    • If you implemented a custom optimizer based on the old optimizer. Please set your optimizer to subclass tf.keras.optimizers.legacy.XXX. If you want to migrate to the new optimizer and find it does not support your optimizer, please file an issue in the Keras GitHub repo.
    • Errors, such as Cannot recognize variable.... The new optimizer requires all optimizer variables to be created at the first apply_gradients() or minimize() call. If your workflow calls the optimizer to update different parts of the model in multiple stages, please call optimizer.build(model.trainable_variables) before the training loop.
    • Timeout or performance loss. We don't anticipate this to happen, but if you see such issues, please use the legacy optimizer, and file an issue in the Keras GitHub repo.

    The old Keras optimizer will never be deleted, but will not see any new feature additions. New optimizers (for example, tf.keras.optimizers.Adafactor) will only be implemented based on the new tf.keras.optimizers.Optimizer base class.
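
A minimal sketch of the optimizer.build() workaround mentioned above, for workflows that update different parts of the model in stages (model shape is illustrative):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(2,))])
optimizer = tf.keras.optimizers.Adam()

# Create all optimizer variables up front, before any staged
# apply_gradients()/minimize() calls in the training loop, to avoid
# "Cannot recognize variable..." errors with the new optimizer.
optimizer.build(model.trainable_variables)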

  • tensorflow/python/keras code is a legacy copy of Keras since the TensorFlow v2.7 release, and will be deleted in the v2.12 release. Please remove any import of tensorflow.python.keras and use the public API with from tensorflow import keras or import tensorflow as tf; tf.keras.

Major Features and Improvements

  • tf.lite:

    • New operations supported: tf.math.unsorted_segment_sum, tf.atan2 and tf.sign.
    • Updates to existing operations:
      • tfl.mul now supports complex32 inputs.
  • tf.experimental.StructuredTensor:

    • Introduced tf.experimental.StructuredTensor, which provides a flexible and TensorFlow-native way to encode structured data such as protocol buffers or pandas dataframes.
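
A minimal sketch of constructing a StructuredTensor from nested Python values (field names and values are illustrative):

```python
import tensorflow as tf

st = tf.experimental.StructuredTensor.from_pyval(
    [{"name": "alice", "age": 30},
     {"name": "bob", "age": 25}])
print(st.field_value("age"))  # tf.Tensor([30 25], shape=(2,), dtype=int32)
```
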
  • tf.keras:

    • Added a new get_metrics_result() method to tf.keras.models.Model.
      • Returns the current metrics values of the model as a dict.
    • Added a new group normalization layer - tf.keras.layers.GroupNormalization.
    • Added weight decay support for all Keras optimizers via the weight_decay argument.
    • Added the Adafactor optimizer - tf.keras.optimizers.Adafactor.
    • Added warmstart_embedding_matrix to tf.keras.utils.
      • This utility can be used to warmstart an embedding matrix, so you reuse previously-learned word embeddings when working with a new set of words which may include previously unseen words (the embedding vectors for unseen words will be randomly initialized).
  • tf.Variable:

    • Added CompositeTensor as a base class to ResourceVariable.
      • This allows tf.Variables to be nested in tf.experimental.ExtensionTypes.
    • Added a new constructor argument experimental_enable_variable_lifting to tf.Variable, defaulting to True.
      • When it's set to False, the variable won't be lifted out of tf.function; thus it can be used as a tf.function-local variable: during each execution of the tf.function, the variable will be created and then disposed, similar to a local (that is, stack-allocated) variable in C/C++. Currently, experimental_enable_variable_lifting=False only works on non-XLA devices (for example, under @tf.function(jit_compile=False)).
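
A minimal sketch (under the stated non-XLA restriction) of a tf.function-local variable:

```python
import tensorflow as tf

@tf.function(jit_compile=False)  # lifting=False only works on non-XLA devices
def local_counter():
    # Not lifted out of the function: created and disposed on each execution.
    v = tf.Variable(0, experimental_enable_variable_lifting=False)
    v.assign_add(1)
    return v.read_value()

print(local_counter())  # 1
print(local_counter())  # still 1: the variable is recreated every call
```
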
  • TF SavedModel:

    • Added fingerprint.pb to the SavedModel directory. The fingerprint.pb file is a protobuf containing the "fingerprint" of the SavedModel. See the RFC for more details regarding its design and properties.
  • TF pip:

    • Windows CPU-builds for x86/x64 processors are now built, maintained, tested and released by a third party: Intel. Installing the Windows-native pip packages for tensorflow or tensorflow-cpu would install Intel's tensorflow-intel package. These packages are provided on an as-is basis. TensorFlow will use reasonable efforts to maintain the availability and integrity of this pip package. There may be delays if the third party fails to release the pip package. For using TensorFlow GPU on Windows, you will need to install TensorFlow in WSL2.

Bug Fixes and Other Changes

  • tf.image:

    • Added an optional parameter return_index_map to tf.image.ssim, which causes the returned value to be the local SSIM map instead of the global mean.
  • TF Core:

    • tf.custom_gradient can now be applied to functions that accept "composite" tensors, such as tf.RaggedTensor, as inputs.
    • Fix device placement issues related to datasets with ragged tensors of strings (i.e. variant encoded data with types not supported on GPU).
    • experimental_follow_type_hints for tf.function has been deprecated. Please use input_signature or reduce_retracing to minimize retracing.
  • tf.SparseTensor:

    • Introduced set_shape, which sets the static dense shape of the sparse tensor and has the same semantics as tf.Tensor.set_shape.

Security


TensorFlow 2.10.1

16 Nov 20:31
fdfc646

Release 2.10.1

This release introduces several vulnerability fixes:

TensorFlow 2.9.3

16 Nov 19:57
a5ed5f3

Release 2.9.3

This release introduces several vulnerability fixes:

TensorFlow 2.8.4

16 Nov 19:22
1b8f5c3

Release 2.8.4

This release introduces several vulnerability fixes:

TensorFlow 2.11.0-rc2

02 Nov 16:59
db80fa5
Pre-release

Release 2.11.0

Breaking Changes

  • tf.keras.optimizers.Optimizer now points to the new Keras optimizer, and old optimizers have moved to the tf.keras.optimizers.legacy namespace.
    If you find your workflow failing due to this change, you may be facing one of the following issues:

    • Checkpoint loading failure. The new optimizer handles optimizer state differently from the old optimizer, which simplifies the logic of checkpoint saving/loading, but at the cost of breaking checkpoint backward compatibility in some cases. If you want to keep using an old checkpoint, please change your optimizer to tf.keras.optimizers.legacy.XXX (e.g. tf.keras.optimizers.legacy.Adam).
    • TF1 compatibility. The new optimizer, tf.keras.optimizers.Optimizer, does not support TF1 any more, so please use the legacy optimizer tf.keras.optimizers.legacy.XXX. We highly recommend migrating your workflow to TF2 for stable support and new features.
    • Old optimizer API not found. The new optimizer, tf.keras.optimizers.Optimizer, has a different set of public APIs from the old optimizer. These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API documentation to find alternatives to the missing API. If you must call the deprecated API, please change your optimizer to the legacy optimizer.
    • Learning rate schedule access. When using a LearningRateSchedule, the new optimizer's learning_rate property returns the current learning rate value instead of a LearningRateSchedule object as before. If you need to access the LearningRateSchedule object, please use optimizer._learning_rate.
    • If you implemented a custom optimizer based on the old optimizer. Please set your optimizer to subclass tf.keras.optimizers.legacy.XXX. If you want to migrate to the new optimizer and find it does not support your optimizer, please file an issue in the Keras GitHub repo.
    • Errors, such as Cannot recognize variable.... The new optimizer requires all optimizer variables to be created at the first apply_gradients() or minimize() call. If your workflow calls the optimizer to update different parts of the model in multiple stages, please call optimizer.build(model.trainable_variables) before the training loop.
    • Timeout or performance loss. We don't anticipate this to happen, but if you see such issues, please use the legacy optimizer, and file an issue in the Keras GitHub repo.

    The old Keras optimizer will never be deleted, but will not see any new feature additions. New optimizers (for example, tf.keras.optimizers.Adafactor) will only be implemented based on tf.keras.optimizers.Optimizer, the new base class.

  • tensorflow/python/keras code is a legacy copy of Keras since the 2.7 release, and will be deleted in the 2.12 release. Please remove any import of tensorflow.python.keras and use the public API with from tensorflow import keras or import tensorflow as tf; tf.keras.

Major Features and Improvements

  • tf.lite:

    • New operations supported: tf.unsortedsegmentmin, tf.atan2 and tf.sign.
    • Updates to existing operations:
      • tfl.mul now supports complex32 inputs.
  • tf.experimental.StructuredTensor

    • Introduced tf.experimental.StructuredTensor, which provides a flexible and TensorFlow-native way to encode structured data such as protocol buffers or pandas dataframes.
  • tf.keras:

    • Added a new get_metrics_result() method to tf.keras.models.Model.
      • Returns the current metrics values of the model as a dict.
    • Added a new group normalization layer - tf.keras.layers.GroupNormalization.
    • Added weight decay support for all Keras optimizers.
    • Added Adafactor optimizer tf.keras.optimizers.Adafactor.
    • Added warmstart_embedding_matrix to tf.keras.utils.
      • This utility can be used to warmstart an embeddings matrix, so you reuse previously-learned word embeddings when working with a new set of words which may include previously unseen words (the embedding vectors for unseen words will be randomly initialized).
  • tf.Variable:

    • Added CompositeTensor as a base class to ResourceVariable.
      • This allows tf.Variables to be nested in tf.experimental.ExtensionTypes.
    • Added a new constructor argument experimental_enable_variable_lifting to tf.Variable, defaulting to True.
      • When it's False, the variable won't be lifted out of tf.function, thus it can be used as a tf.function-local variable: during each execution of the tf.function, the variable will be created and then disposed, similar to a local (that is, stack-allocated) variable in C/C++. Currently, experimental_enable_variable_lifting=False only works on non-XLA devices (for example, under @tf.function(jit_compile=False)).
  • TF SavedModel:

    • Added fingerprint.pb to the SavedModel directory. The fingerprint.pb file is a protobuf containing the "fingerprint" of the SavedModel. See the RFC for more details regarding its design and properties.
  • TF pip:

    • Windows CPU-builds for x86/x64 processors are now built, maintained, tested and released by a third party: Intel. Installing the Windows-native pip packages for tensorflow or tensorflow-cpu would install Intel's tensorflow-intel package. These packages are provided as-is. TensorFlow will use reasonable efforts to maintain the availability and integrity of this pip package. There may be delays if the third party fails to release the pip package. For using TensorFlow GPU on Windows, you will need to install TensorFlow in WSL2.

Bug Fixes and Other Changes

  • tf.image

    • Added an optional parameter return_index_map to tf.image.ssim which causes the returned value to be the local SSIM map instead of the global mean.
  • TF Core:

    • tf.custom_gradient can now be applied to functions that accept "composite" tensors, such as tf.RaggedTensor, as inputs.
    • Fix device placement issues related to datasets with ragged tensors of strings (i.e. variant encoded data with types not supported on GPU).
    • experimental_follow_type_hints for tf.function has been deprecated. Please use input_signature or reduce_retracing to minimize retracing.
  • tf.SparseTensor:

    • Introduced set_shape, which sets the static dense shape of the sparse tensor and has the same semantics as tf.Tensor.set_shape.

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

103yiran, 8bitmp3, Aakar Dwivedi, Alexander Grund, alif_elham, Aman Agarwal, amoitra, Andrei Ivanov, andreii, Andrew Goodbody, angerson, Ashay Rane, Azeem Shaikh, Ben Barsdell, bhack, Bhavani Subramanian, Cedric Nugteren, Chandra Kumar Ramasamy, Christopher Bate, CohenAriel, Cotarou, cramasam, Enrico Minack, Francisco Unda, Frederic Bastien, gadagashwini, Gauri1 Deshpande, george, Jake, Jeff, Jerry Ge, Jingxuan He, Jojimon Varghese, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, kcoul, Keith Smiley, Kevin Hu, Kun Lu, kushanam, Lianmin Zheng, liuyuanqiang, Louis Sugy, Mahmoud Abuzaina, Marius Brehler, mdfaijul, Meenakshi Venkataraman, Milos Puzovic, mohantym, Namrata-Ibm, Nathan John Sircombe, Nathan Luehr, Olaf Lipinski, Om Thakkar, Osman F Bayram, Patrice Vignola, Pavani Majety, Philipp Hack, Prianka Liz Kariat, Rahul Batra, RajeshT, Renato Golin, riestere, Roger Iyengar, Rohit Santhanam, Rsanthanam-Amd, Sadeed Pv, Samuel Marks, Shimokawa, Naoaki, Siddhesh Kothadi, Simengliu-Nv, Sindre Seppola, snadampal, Srinivasan Narayanamoorthy, sushreebarsa, syedshahbaaz, Tamas Bela Feher, Tatwai Chong, Thibaut Goetghebuer-Planchon, tilakrayal, Tom Anderson, Tomohiro Endo, Trevor Morris, vibhutisawant, Victor Zhang, Vremold, Xavier Bonaventura, Yanming Wang, Yasir Modak, Yimei Sun, Yong Tang, Yulv-Git, zhuoran.liu, zotanika

TensorFlow 2.11.0-rc1

19 Oct 16:26
8aa3c87
Pre-release

Release 2.11.0

Breaking Changes

  • tf.keras.optimizers.Optimizer now points to the new Keras optimizer, and old optimizers have moved to the tf.keras.optimizers.legacy namespace.
    If you find your workflow failing due to this change, you may be facing one of the following issues:

    • Checkpoint loading failure. The new optimizer handles optimizer state differently from the old optimizer, which simplifies the logic of
      checkpoint saving/loading, but at the cost of breaking checkpoint backward compatibility in some cases. If you want to keep using an old
      checkpoint, please change your optimizer to tf.keras.optimizers.legacy.XXX (e.g. tf.keras.optimizers.legacy.Adam).
    • TF1 compatibility. The new optimizer, tf.keras.optimizers.Optimizer, does not support TF1 any more, so please use the legacy optimizer
      tf.keras.optimizers.legacy.XXX.
      We highly recommend migrating your workflow to TF2 for stable support and new features.
    • Old optimizer API not found. The new optimizer, tf.keras.optimizers.Optimizer, has a different set of public APIs from the old optimizer.
      These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API documentation to find alternatives
      to the missing API. If you must call the deprecated API, please change your optimizer to the legacy optimizer.
    • Learning rate schedule access. When using a LearningRateSchedule, the new optimizer's learning_rate property returns the
      current learning rate value instead of a LearningRateSchedule object as before. If you need to access the LearningRateSchedule object,
      please use optimizer._learning_rate.
    • If you implemented a custom optimizer based on the old optimizer. Please set your optimizer to subclass
      tf.keras.optimizers.legacy.XXX. If you want to migrate to the new optimizer and find it does not support your optimizer, please file
      an issue in the Keras GitHub repo.
    • Errors, such as Cannot recognize variable.... The new optimizer requires all optimizer variables to be created at the first
      apply_gradients() or minimize() call. If your workflow calls the optimizer to update different parts of the model in multiple stages,
      please call optimizer.build(model.trainable_variables) before the training loop.
    • Timeout or performance loss. We don't anticipate this to happen, but if you see such issues, please use the legacy optimizer, and file
      an issue in the Keras GitHub repo.

    The old Keras optimizer will never be deleted, but will not see any new feature additions. New optimizers (for example,
    tf.keras.optimizers.Adafactor) will only be implemented based on tf.keras.optimizers.Optimizer, the new base class.

Major Features and Improvements

  • tf.lite:

    • New operations supported: tf.unsortedsegmentmin, tf.atan2 and tf.sign.
    • Updates to existing operations:
      • tfl.mul now supports complex32 inputs.
  • tf.experimental.StructuredTensor

    • Introduced tf.experimental.StructuredTensor, which provides a flexible and TensorFlow-native way to encode structured data such as protocol
      buffers or pandas dataframes.
  • tf.keras:

    • Added a new get_metrics_result() method to tf.keras.models.Model.
      • Returns the current metrics values of the model as a dict.
    • Added a new group normalization layer - tf.keras.layers.GroupNormalization.
    • Added weight decay support for all Keras optimizers.
    • Added Adafactor optimizer tf.keras.optimizers.Adafactor.
    • Added warmstart_embedding_matrix to tf.keras.utils.
      • This utility can be used to warmstart an embeddings matrix, so you reuse previously-learned word embeddings when working with a new set of
        words which may include previously unseen words (the embedding vectors for unseen words will be randomly initialized).
  • tf.Variable:

    • Added CompositeTensor as a base class to ResourceVariable.
      • This allows tf.Variables to be nested in tf.experimental.ExtensionTypes.
    • Added a new constructor argument experimental_enable_variable_lifting to tf.Variable, defaulting to True.
      • When it's False, the variable won't be lifted out of tf.function, thus it can be used as a tf.function-local variable: during each
        execution of the tf.function, the variable will be created and then disposed, similar to a local (that is, stack-allocated) variable in C/C++.
        Currently, experimental_enable_variable_lifting=False only works on non-XLA devices (for example, under @tf.function(jit_compile=False)).
  • TF SavedModel:

    • Added fingerprint.pb to the SavedModel directory. The fingerprint.pb file is a protobuf containing the "fingerprint" of the SavedModel. See
      the RFC for more details regarding its design and properties.
  • TF pip:

    • Windows CPU-builds for x86/x64 processors are now built, maintained, tested and released by a third party: Intel. Installing the Windows-native
      pip packages for tensorflow or tensorflow-cpu would install Intel's tensorflow-intel package. These packages are provided as-is. TensorFlow
      will use reasonable efforts to maintain the availability and integrity of this pip package. There may be delays if the third party fails to
      release the pip package. For using TensorFlow GPU on Windows, you will need to install TensorFlow in WSL2.

Bug Fixes and Other Changes

  • tf.image

    • Added an optional parameter return_index_map to tf.image.ssim which causes the returned value to be the local SSIM map instead of the global
      mean.
  • TF Core:

    • tf.custom_gradient can now be applied to functions that accept "composite" tensors, such as tf.RaggedTensor, as inputs.
    • Fix device placement issues related to datasets with ragged tensors of strings (i.e. variant encoded data with types not supported on GPU).
    • experimental_follow_type_hints for tf.function has been deprecated. Please use input_signature or reduce_retracing to minimize retracing.
  • tf.SparseTensor:

    • Introduced set_shape, which sets the static dense shape of the sparse tensor and has the same semantics as tf.Tensor.set_shape.

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

103yiran, 8bitmp3, Aakar Dwivedi, Alexander Grund, alif_elham, Aman Agarwal, amoitra, Andrei Ivanov, andreii, Andrew Goodbody, angerson, Ashay Rane, Azeem Shaikh, Ben Barsdell, bhack, Bhavani Subramanian, Cedric Nugteren, Chandra Kumar Ramasamy, Christopher Bate, CohenAriel, Cotarou, cramasam, Enrico Minack, Francisco Unda, Frederic Bastien, gadagashwini, Gauri1 Deshpande, george, Jake, Jeff, Jerry Ge, Jingxuan He, Jojimon Varghese, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, kcoul, Keith Smiley, Kevin Hu, Kun Lu, kushanam, Lianmin Zheng, liuyuanqiang, Louis Sugy, Mahmoud Abuzaina, Marius Brehler, mdfaijul, Meenakshi Venkataraman, Milos Puzovic, mohantym, Namrata-Ibm, Nathan John Sircombe, Nathan Luehr, Olaf Lipinski, Om Thakkar, Osman F Bayram, Patrice Vignola, Pavani Majety, Philipp Hack, Prianka Liz Kariat, Rahul Batra, RajeshT, Renato Golin, riestere, Roger Iyengar, Rohit Santhanam, Rsanthanam-Amd, Sadeed Pv, Samuel Marks, Shimokawa, Naoaki, Siddhesh Kothadi, Simengliu-Nv, Sindre Seppola, snadampal, Srinivasan Narayanamoorthy, sushreebarsa, syedshahbaaz, Tamas Bela Feher, Tatwai Chong, Thibaut Goetghebuer-Planchon, tilakrayal, Tom Anderson, Tomohiro Endo, Trevor Morris, vibhutisawant, Victor Zhang, Vremold, Xavier Bonaventura, Yanming Wang, Yasir Modak, Yimei Sun, Yong Tang, Yulv-Git, zhuoran.liu, zotanika

TensorFlow 2.11.0-rc0

18 Oct 18:19
ba36eac
Pre-release

Release 2.11.0

Breaking Changes

  • tf.keras.optimizers.Optimizer now points to the new Keras optimizer, and old optimizers have moved to the tf.keras.optimizers.legacy namespace.
    If you find your workflow failing due to this change, you may be facing one of the following issues:

    • Checkpoint loading failure. The new optimizer handles optimizer state differently from the old optimizer, which simplifies the logic of
      checkpoint saving/loading, but at the cost of breaking checkpoint backward compatibility in some cases. If you want to keep using an old
      checkpoint, please change your optimizer to tf.keras.optimizers.legacy.XXX (e.g. tf.keras.optimizers.legacy.Adam).
    • TF1 compatibility. The new optimizer, tf.keras.optimizers.Optimizer, does not support TF1 any more, so please use the legacy optimizer
      tf.keras.optimizers.legacy.XXX.
      We highly recommend migrating your workflow to TF2 for stable support and new features.
    • Old optimizer API not found. The new optimizer, tf.keras.optimizers.Optimizer, has a different set of public APIs from the old optimizer.
      These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API documentation to find alternatives
      to the missing API. If you must call the deprecated API, please change your optimizer to the legacy optimizer.
    • Learning rate schedule access. When using a LearningRateSchedule, the new optimizer's learning_rate property returns the
      current learning rate value instead of a LearningRateSchedule object as before. If you need to access the LearningRateSchedule object,
      please use optimizer._learning_rate.
    • If you implemented a custom optimizer based on the old optimizer. Please set your optimizer to subclass
      tf.keras.optimizers.legacy.XXX. If you want to migrate to the new optimizer and find it does not support your optimizer, please file
      an issue in the Keras GitHub repo.
    • Errors, such as Cannot recognize variable.... The new optimizer requires all optimizer variables to be created at the first
      apply_gradients() or minimize() call. If your workflow calls the optimizer to update different parts of the model in multiple stages,
      please call optimizer.build(model.trainable_variables) before the training loop.
    • Timeout or performance loss. We don't anticipate this to happen, but if you see such issues, please use the legacy optimizer, and file
      an issue in the Keras GitHub repo.

    The old Keras optimizer will never be deleted, but will not see any new feature additions. New optimizers (for example,
    tf.keras.optimizers.Adafactor) will only be implemented based on tf.keras.optimizers.Optimizer, the new base class.

Major Features and Improvements

  • tf.lite:

    • New operations supported: tf.unsortedsegmentmin, tf.atan2 and tf.sign.
    • Updates to existing operations:
      • tfl.mul now supports complex32 inputs.
  • tf.experimental.StructuredTensor

    • Introduced tf.experimental.StructuredTensor, which provides a flexible and TensorFlow-native way to encode structured data such as protocol
      buffers or pandas dataframes.
  • tf.keras:

    • Added a new get_metrics_result() method to tf.keras.models.Model.
      • Returns the current metrics values of the model as a dict.
    • Added a new group normalization layer - tf.keras.layers.GroupNormalization.
    • Added weight decay support for all Keras optimizers.
    • Added Adafactor optimizer tf.keras.optimizers.Adafactor.
    • Added warmstart_embedding_matrix to tf.keras.utils.
      • This utility can be used to warmstart an embeddings matrix, so you reuse previously-learned word embeddings when working with a new set of
        words which may include previously unseen words (the embedding vectors for unseen words will be randomly initialized).
  • tf.Variable:

    • Added CompositeTensor as a base class to ResourceVariable.
      • This allows tf.Variables to be nested in tf.experimental.ExtensionTypes.
    • Added a new constructor argument experimental_enable_variable_lifting to tf.Variable, defaulting to True.
      • When it's False, the variable won't be lifted out of tf.function, thus it can be used as a tf.function-local variable: during each
        execution of the tf.function, the variable will be created and then disposed, similar to a local (that is, stack-allocated) variable in C/C++.
        Currently, experimental_enable_variable_lifting=False only works on non-XLA devices (for example, under @tf.function(jit_compile=False)).
  • TF SavedModel:

    • Added fingerprint.pb to the SavedModel directory. The fingerprint.pb file is a protobuf containing the "fingerprint" of the SavedModel. See
      the RFC for more details regarding its design and properties.
  • TF pip:

    • Windows CPU-builds for x86/x64 processors are now built, maintained, tested and released by a third party: Intel. Installing the Windows-native
      pip packages for tensorflow or tensorflow-cpu would install Intel's tensorflow-intel package. These packages are provided as-is. TensorFlow
      will use reasonable efforts to maintain the availability and integrity of this pip package. There may be delays if the third party fails to
      release the pip package. For using TensorFlow GPU on Windows, you will need to install TensorFlow in WSL2.

Bug Fixes and Other Changes

  • tf.image

    • Added an optional parameter return_index_map to tf.image.ssim which causes the returned value to be the local SSIM map instead of the global
      mean.
  • TF Core:

    • tf.custom_gradient can now be applied to functions that accept "composite" tensors, such as tf.RaggedTensor, as inputs.
    • Fix device placement issues related to datasets with ragged tensors of strings (i.e. variant encoded data with types not supported on GPU).
    • experimental_follow_type_hints for tf.function has been deprecated. Please use input_signature or reduce_retracing to minimize retracing.
  • tf.SparseTensor:

    • Introduced set_shape, which sets the static dense shape of the sparse tensor and has the same semantics as tf.Tensor.set_shape.

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

103yiran, 8bitmp3, Aakar Dwivedi, Alexander Grund, alif_elham, Aman Agarwal, amoitra, Andrei Ivanov, andreii, Andrew Goodbody, angerson, Ashay Rane, Azeem Shaikh, Ben Barsdell, bhack, Bhavani Subramanian, Cedric Nugteren, Chandra Kumar Ramasamy, Christopher Bate, CohenAriel, Cotarou, cramasam, Enrico Minack, Francisco Unda, Frederic Bastien, gadagashwini, Gauri1 Deshpande, george, Jake, Jeff, Jerry Ge, Jingxuan He, Jojimon Varghese, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, kcoul, Keith Smiley, Kevin Hu, Kun Lu, kushanam, Lianmin Zheng, liuyuanqiang, Louis Sugy, Mahmoud Abuzaina, Marius Brehler, mdfaijul, Meenakshi Venkataraman, Milos Puzovic, mohantym, Namrata-Ibm, Nathan John Sircombe, Nathan Luehr, Olaf Lipinski, Om Thakkar, Osman F Bayram, Patrice Vignola, Pavani Majety, Philipp Hack, Prianka Liz Kariat, Rahul Batra, RajeshT, Renato Golin, riestere, Roger Iyengar, Rohit Santhanam, Rsanthanam-Amd, Sadeed Pv, Samuel Marks, Shimokawa, Naoaki, Siddhesh Kothadi, Simengliu-Nv, Sindre Seppola, snadampal, Srinivasan Narayanamoorthy, sushreebarsa, syedshahbaaz, Tamas Bela Feher, Tatwai Chong, Thibaut Goetghebuer-Planchon, tilakrayal, Tom Anderson, Tomohiro Endo, Trevor Morris, vibhutisawant, Victor Zhang, Vremold, Xavier Bonaventura, Yanming Wang, Yasir Modak, Yimei Sun, Yong Tang, Yulv-Git, zhuoran.liu, zotanika

TensorFlow 2.10.0

06 Sep 19:44
359c3cd

Release 2.10.0

Breaking Changes

  • Causal attention in keras.layers.Attention and keras.layers.AdditiveAttention is now specified in the call() method via the use_causal_mask argument (rather than in the constructor), for consistency with other layers.
  • Some files in tensorflow/python/training have been moved to tensorflow/python/tracking and tensorflow/python/checkpoint. Please update your imports accordingly; the old files will be removed in Release 2.11.
  • tf.keras.optimizers.experimental.Optimizer will graduate in Release 2.11, which means tf.keras.optimizers.Optimizer will be an alias of tf.keras.optimizers.experimental.Optimizer. The current tf.keras.optimizers.Optimizer will continue to be supported as tf.keras.optimizers.legacy.Optimizer, e.g., tf.keras.optimizers.legacy.Adam. Most users won't be affected by this change, but please check the API doc to see whether any API used in your workflow has changed or been deprecated, and make adaptations accordingly. If you decide to keep using the old optimizer, please explicitly change your optimizer to tf.keras.optimizers.legacy.Optimizer.
  • RNG behavior change for tf.keras.initializers. Keras initializers will now use stateless random ops to generate random numbers.
    • Both seeded and unseeded initializers will always generate the same values every time they are called (for a given variable shape). For unseeded initializers (seed=None), a random seed will be created and assigned at initializer creation (different initializer instances get different seeds).
    • An unseeded initializer will raise a warning if it is reused (called) multiple times. This is because it would produce the same values each time, which may not be intended.
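
A minimal sketch of the new stateless initializer behavior described above:

```python
import tensorflow as tf

init = tf.keras.initializers.GlorotUniform(seed=7)
a = init(shape=(2, 2))
b = init(shape=(2, 2))
# In TF 2.10, a and b are identical: a seeded initializer returns the same
# values on every call for a given shape. Unseeded initializers are also
# repeatable per instance, and warn when reused.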

Deprecations

  • The C++ tensorflow::Code and tensorflow::Status will become aliases of absl::StatusCode and absl::Status, respectively, in some future release.
    • Use tensorflow::OkStatus() instead of tensorflow::Status::OK().
    • Stop constructing Status objects from tensorflow::error::Code.
    • One MUST NOT access tensorflow::errors::Code fields. Accessing tensorflow::error::Code fields is fine.
      • Use the constructors such as tensorflow::errors::InvalidArgument to create a status with an error code without accessing it.
      • Use the free functions such as tensorflow::errors::IsInvalidArgument if needed.
      • As a last resort, use e.g. static_cast<tensorflow::errors::Code>(error::Code::INVALID_ARGUMENT) or static_cast<int>(code) for comparisons.
  • tensorflow::StatusOr will also become an alias of absl::StatusOr in the future, so use StatusOr::value instead of StatusOr::ConsumeValueOrDie.

Major Features and Improvements

  • tf.lite:

    • New operations supported:
      • tflite SelectV2 now supports 5D tensors.
      • tf.einsum is supported with multiple unknown shapes.
      • tf.unsortedsegmentprod op is supported.
      • tf.unsortedsegmentmax op is supported.
      • tf.unsortedsegmentsum op is supported.
    • Updates to existing operations:
      • tfl.scatter_nd now supports I1 for the update arg.
    • Upgraded Flatbuffers from v1.12.0 to v2.0.5.
  • tf.keras:

    • EinsumDense layer is moved from experimental to core. Its import path is moved from tf.keras.layers.experimental.EinsumDense to tf.keras.layers.EinsumDense.
    • Added tf.keras.utils.audio_dataset_from_directory utility to easily generate audio classification datasets from directories of .wav files.
    • Added subset="both" support in tf.keras.utils.image_dataset_from_directory,tf.keras.utils.text_dataset_from_directory, and audio_dataset_from_directory, to be used with the validation_split argument, for returning both dataset splits at once, as a tuple.
    • Added tf.keras.utils.split_dataset utility to split a Dataset object or a list/tuple of arrays into two Dataset objects (e.g. train/test).
    • Added step granularity to BackupAndRestore callback for handling distributed training failures & restarts. The training state can now be restored at the exact epoch and step at which it was previously saved before failing.
    • Added tf.keras.dtensor.experimental.optimizers.AdamW. This optimizer is similar to the existing keras.optimizers.experimental.AdamW, and works in the DTensor training use case.
    • Improved masking support for tf.keras.layers.MultiHeadAttention.
      • Implicit masks for query, key and value inputs will automatically be used to compute a correct attention mask for the layer. These padding masks will be combined with any attention_mask passed in directly when calling the layer. This can be used with tf.keras.layers.Embedding with mask_zero=True to automatically infer a correct padding mask.
      • Added a use_causal_mask call-time argument to the layer. Passing use_causal_mask=True will compute a causal attention mask, and optionally combine it with any attention_mask passed in directly when calling the layer.
    • Added ignore_class argument in the loss SparseCategoricalCrossentropy and metrics IoU and MeanIoU, to specify a class index to be ignored during loss/metric computation (e.g. a background/void class).
    • Added tf.keras.models.experimental.SharpnessAwareMinimization. This class implements the sharpness-aware minimization technique, which boosts model performance on various tasks, e.g., ResNet on image classification.
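
A minimal sketch of the dataset utilities above (the directory layout in the commented call is an assumption):

```python
import tensorflow as tf

# Split a Dataset (or arrays) into two Datasets, e.g. train/test.
ds = tf.data.Dataset.range(10)
train_ds, test_ds = tf.keras.utils.split_dataset(ds, left_size=0.8)

# Both splits at once as a tuple (assumes an "images/" class-per-folder tree):
# train, val = tf.keras.utils.image_dataset_from_directory(
#     "images/", validation_split=0.2, subset="both", seed=1)
```
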
  • tf.data:

    • Added support for cross-trainer data caching in tf.data service. This saves computation resources when concurrent training jobs train from the same dataset. See https://www.tensorflow.org/api_docs/python/tf/data/experimental/service#sharing_tfdata_service_with_concurrent_trainers for more details.
    • Added dataset_id to tf.data.experimental.service.register_dataset. If provided, tf.data service will use the provided ID for the dataset. If the dataset ID already exists, no new dataset will be registered. This is useful if multiple training jobs need to use the same dataset for training. In this case, users should call register_dataset with the same dataset_id.
    • Added a new field, inject_prefetch, to tf.data.experimental.OptimizationOptions. If it is set to True, tf.data will automatically add a prefetch transformation to datasets that end in synchronous transformations, so that data generation can be overlapped with data consumption. This may cause a small increase in memory usage due to buffering.
    • Added a new value to tf.data.Options.autotune.autotune_algorithm: STAGE_BASED. If the autotune algorithm is set to STAGE_BASED, then it runs a new algorithm that can get the same performance with lower CPU/memory usage.
    • Added tf.data.experimental.from_list, a new API for creating Datasets from lists of elements.
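
Two short sketches of the tf.data additions above: enabling prefetch injection, and building a Dataset from a list of elements.

```python
import tensorflow as tf

# Opt in to automatic prefetch injection for synchronous tail transformations.
options = tf.data.Options()
options.experimental_optimization.inject_prefetch = True

# New: create a Dataset directly from a list of structurally identical elements.
ds = tf.data.experimental.from_list([(1, "a"), (2, "b")]).with_options(options)
for x, y in ds.as_numpy_iterator():
    print(x, y)
```
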
  • tf.distribute:

    • Added tf.distribute.experimental.PreemptionCheckpointHandler to handle worker preemption/maintenance and cluster-wise consistent error reporting for tf.distribute.MultiWorkerMirroredStrategy. Specifically, for the type of interruption with advance notice, it automatically saves a checkpoint, exits the program without raising an unrecoverable error, and restores the progress when training restarts.
  • tf.math:

    • Added tf.math.approx_max_k and tf.math.approx_min_k, which are optimized alternatives to tf.math.top_k on TPU. The performance difference ranges from 8 to 100 times depending on the size of k. When running on CPU and GPU, a non-optimized XLA kernel is used.
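
A minimal sketch of the approximate top-k API (sizes are illustrative):

```python
import tensorflow as tf

scores = tf.random.uniform([100_000])
# Optimized on TPU; falls back to a non-optimized XLA kernel on CPU/GPU.
values, indices = tf.math.approx_max_k(scores, k=10)
```
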
  • tf.train:

    • Added tf.train.TrackableView which allows users to inspect the TensorFlow Trackable object (e.g. tf.Module, Keras Layers and models).
  • tf.vectorized_map:

    • Added an optional parameter: warn. This parameter controls whether or not warnings will be printed when operations in the provided fn fall back to a while loop.
  • XLA:

  • CPU performance optimizations:

    • x86 CPUs: oneDNN bfloat16 auto-mixed precision grappler graph optimization pass has been renamed from auto_mixed_precision_mkl to auto_mixed_precision_onednn_bfloat16. See example usage here.
    • aarch64 CPUs: Experimental performance optimizations from Compute Library for the Arm® Architecture (ACL) are available through oneDNN in the default Linux aarch64 package (pip install tensorflow).
      • The optimizations are disabled by default.
      • Set the environment variable TF_ENABLE_ONEDNN_OPTS=1 to enable the optimizations. Setting the variable to 0 or uns...