diff --git a/tensorflow/contrib/tensorrt/README.md b/tensorflow/contrib/tensorrt/README.md
index 6eafc1754ca510..687dee07e1327d 100644
--- a/tensorflow/contrib/tensorrt/README.md
+++ b/tensorflow/contrib/tensorrt/README.md
@@ -1,59 +1,29 @@
 # Using TensorRT in TensorFlow
-
-This module provides necessary bindings and introduces TRT_engine_op
-operator that wraps a subgraph in TensorRT. This is still a work in progress
-but should be useable with most common graphs.
+This module provides the necessary bindings and introduces the TRT_engine_op
+operator that wraps a subgraph in TensorRT. This is still a work in progress
+but should be usable with most common graphs.
 
 ## Compilation
-
-In order to compile the module, you need to have a local TensorRT
-installation ( libnvinfer.so and respective include files ). During the
-configuration step, TensorRT should be enabled and installation path
-should be set. If installed through package managers (deb,rpm),
-configure script should find the necessary components from the system
-automatically. If installed from tar packages, user has to set path to
-location where the library is installed during configuration.
+In order to compile the module, you need to have a local TensorRT installation
+(libnvinfer.so and the respective include files). During the configuration
+step, TensorRT should be enabled and its installation path should be set. If
+TensorRT was installed through a package manager (deb, rpm), the configure
+script should find the necessary components automatically. If it was installed
+from a tar package, set the path to the installation location when configuring.
 
 ```shell
 bazel build --config=cuda --config=opt //tensorflow/tools/pip_package:build_pip_package
 bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/
 ```
 
-After the installation of tensorflow package, TensorRT transformation
-will be available. An example use can be found in test/test_tftrt.py script
+After the tensorflow package is installed, the TensorRT transformation will be
+available. An example use can be found in the test/test_tftrt.py script.
 
 ## Installing TensorRT 3.0.4
 
-In order to make use of TensorRT integration, you will need a local installation of TensorRT 3.0.4 from the [NVIDIA Developer website](https://developer.nvidia.com/tensorrt). Due to compiler compatibility, you will need to download and install the TensorRT 3.0.4 tarball for _Ubuntu 14.04_, i.e., **_TensorRT-3.0.4.Ubuntu-14.04.5.x86_64.cuda-9.0.cudnn7.0-tar.gz_**, even if you are using Ubuntu 16.04 or later.
-
-### Preparing TensorRT installation
-
-Once you have downloaded TensorRT-3.0.4.Ubuntu-14.04.5.x86_64.cuda-9.0.cudnn7.0-tar.gz, you will need to unpack it to an installation directory, which will be referred to as <install_dir>. Please replace <install_dir> with the full path of actual installation directory you choose in commands below.
-
-```shell
-cd <install_dir> && tar -zxf /path/to/TensorRT-3.0.4.Ubuntu-14.04.5.x86_64.cuda-9.0.cudnn7.0-tar.gz
-```
-
-After unpacking the binaries, you have several options to use them:
-
-#### To run TensorFlow as a user without superuser privileges
-
-For a regular user without any sudo rights, you should add TensorRT to your `$LD_LIBRARY_PATH`:
-
- ```shell
- export LD_LIBRARY_PATH=<install_dir>/TensorRT-3.0.4/lib${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
- ```
-
-Then you are ready to use TensorFlow-TensorRT integration. `$LD_LIBRARY_PATH` must contain the path to TensorRT installation for TensorFlow-TensorRT integration to work. If you are using a VirtualEnv-like setup, you can add the command above to your `bin/activate` script or to your `.bashrc` script.
-
-#### To run TensorFlow as a superuser
-
- When running as a superuser, such as in a container or via sudo, the `$LD_LIBRARY_PATH` approach above may not work. The following is preferred when the user has superuser privileges:
-
- ```shell
- echo "<install_dir>/TensorRT-3.0.4/lib" | sudo tee /etc/ld.so.conf.d/tensorrt304.conf && sudo ldconfig
- ```
-
- Please ensure that any existing deb package installation of TensorRT is removed before following these instructions to avoid package conflicts.
\ No newline at end of file
+In order to make use of TensorRT integration, you will need a local installation
+of TensorRT 3.0.4 from the [NVIDIA Developer website](https://developer.nvidia.com/tensorrt).
+Installation instructions for compatibility with TensorFlow are provided on the
+[TensorFlow Installation page](https://www.tensorflow.org/install/install_linux#nvidia_requirements_to_run_tensorflow_with_gpu_support).
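The README above points to test/test_tftrt.py for a complete example. For orientation only, here is a minimal sketch of driving the conversion through `tensorflow.contrib.tensorrt`; the `frozen_graph` GraphDef and the "logits" output name are illustrative placeholders, and the parameter values are assumptions to tune for your model:

```python
# Minimal sketch (not the shipped test script): rewrite a frozen GraphDef so
# that TensorRT-compatible subgraphs are replaced with TRT_engine_op nodes.
import tensorflow.contrib.tensorrt as trt

trt_graph_def = trt.create_inference_graph(
    input_graph_def=frozen_graph,      # a frozen tf.GraphDef (placeholder name)
    outputs=["logits"],                # output node names of your model
    max_batch_size=1,                  # largest batch the engine must serve
    max_workspace_size_bytes=1 << 25,  # scratch memory TensorRT may allocate
    precision_mode="FP32")             # or "FP16" / "INT8" where supported
```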
diff --git a/tensorflow/docs_src/install/install_linux.md b/tensorflow/docs_src/install/install_linux.md
index 42d218c4bceb8a..16a232ed20713f 100644
--- a/tensorflow/docs_src/install/install_linux.md
+++ b/tensorflow/docs_src/install/install_linux.md
@@ -65,16 +65,38 @@ must be installed on your system:
     <pre>
     $ sudo apt-get install libcupti-dev
     </pre>
+
   * **[OPTIONAL]** For optimized inferencing performance, you can also install
-    NVIDIA TensorRT 3.0. For details, see
-    [NVIDIA's TensorRT documentation](http://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html#installing-tar).
-    Only steps 1-4 in the TensorRT Tar File installation instructions are
-    required for compatibility with TensorFlow; the Python package installation
-    in steps 5 and 6 can be omitted. Detailed installation instructions can be found at [package documentataion](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/tensorrt#installing-tensorrt-304)
+    **NVIDIA TensorRT 3.0**. The minimal set of TensorRT runtime components needed
+    for use with the pre-built `tensorflow-gpu` package can be installed as follows:
+
+    <pre>
+    $ wget https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1404/x86_64/nvinfer-runtime-trt-repo-ubuntu1404-3.0.4-ga-cuda9.0_1.0-1_amd64.deb
+    $ sudo dpkg -i nvinfer-runtime-trt-repo-ubuntu1404-3.0.4-ga-cuda9.0_1.0-1_amd64.deb
+    $ sudo apt-get update
+    $ sudo apt-get install -y --allow-downgrades libnvinfer-dev libcudnn7-dev=7.0.5.15-1+cuda9.0 libcudnn7=7.0.5.15-1+cuda9.0
+    </pre>
 
     **IMPORTANT:** For compatibility with the pre-built `tensorflow-gpu`
-    package, please use the Ubuntu **14.04** tar file package of TensorRT
-    even when installing onto an Ubuntu 16.04 system.
+    package, please use the Ubuntu **14.04** package of TensorRT as shown above,
+    even when installing onto an Ubuntu 16.04 system.
+
+    To build the TensorFlow-TensorRT integration module from source rather than
+    using pre-built binaries, see the [module documentation](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/tensorrt#using-tensorrt-in-tensorflow).
+    For detailed TensorRT installation instructions, see [NVIDIA's TensorRT documentation](http://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html).
+
+    To avoid cuDNN version conflicts during later system upgrades, you can hold
+    the cuDNN version at 7.0.5:
+
+    <pre>
+    $ sudo apt-mark hold libcudnn7 libcudnn7-dev
+    </pre>
+
+    To later allow upgrades, you can remove the hold:
+
+    <pre>
+    $ sudo apt-mark unhold libcudnn7 libcudnn7-dev
+    </pre>
 
 If you have an earlier version of the preceding packages, please upgrade to
 the specified versions. If upgrading is not possible, then you may still run
diff --git a/tensorflow/python/ops/variable_scope.py b/tensorflow/python/ops/variable_scope.py
index e33085ba626a76..8b9742fadb3a3e 100644
--- a/tensorflow/python/ops/variable_scope.py
+++ b/tensorflow/python/ops/variable_scope.py
@@ -1227,7 +1227,7 @@ class EagerVariableStore(object):
     for input in dataset_iterator:
       with container.as_default():
         x = tf.layers.dense(input, name="l1")
-    print(container.variables) # Should print the variables used in the layer.
+    print(container.variables()) # Should print the variables used in the layer.
     ```
   """
diff --git a/tensorflow/python/training/supervisor.py b/tensorflow/python/training/supervisor.py
index 7389e344c7d8ee..427bed2ec78333 100644
--- a/tensorflow/python/training/supervisor.py
+++ b/tensorflow/python/training/supervisor.py
@@ -306,7 +306,7 @@ def __init__(self,
     @end_compatibility
     """
     if context.executing_eagerly():
-      raise RuntimeError("Supervisors are compatible with eager execution.")
+      raise RuntimeError("Supervisors are not compatible with eager execution.")
     # Set default values of arguments.
     if graph is None:
       graph = ops.get_default_graph()
diff --git a/tensorflow/tools/docker/Dockerfile.devel b/tensorflow/tools/docker/Dockerfile.devel
index 390d7442c37b1d..ae20cee24f440e 100644
--- a/tensorflow/tools/docker/Dockerfile.devel
+++ b/tensorflow/tools/docker/Dockerfile.devel
@@ -31,6 +31,7 @@ RUN pip --no-cache-dir install \
         ipykernel \
         jupyter \
         matplotlib \
+        mock \
         numpy \
         scipy \
         sklearn \
@@ -81,7 +82,7 @@ RUN git clone --branch=r1.8 --depth=1 https://github.com/tensorflow/tensorflow.git
 ENV CI_BUILD_PYTHON python
 RUN tensorflow/tools/ci_build/builds/configured CPU \
-    bazel build -c opt --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" \
+    bazel build -c opt --copt=-mavx --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" \
 # For optimized builds appropriate for the hardware platform of your choosing, uncomment below...
 # For ivy-bridge or sandy-bridge
 # --copt=-march="ivybridge" \
diff --git a/tensorflow/tools/docker/Dockerfile.devel-gpu b/tensorflow/tools/docker/Dockerfile.devel-gpu
index 293028d229adba..e4f66bd5d370e0 100644
--- a/tensorflow/tools/docker/Dockerfile.devel-gpu
+++ b/tensorflow/tools/docker/Dockerfile.devel-gpu
@@ -40,6 +40,7 @@ RUN pip --no-cache-dir install \
         ipykernel \
         jupyter \
         matplotlib \
+        mock \
         numpy \
         scipy \
         sklearn \
@@ -94,7 +95,7 @@ ENV TF_CUDNN_VERSION=7
 RUN ln -s /usr/local/cuda/lib64/stubs/libcuda.so /usr/local/cuda/lib64/stubs/libcuda.so.1 && \
     LD_LIBRARY_PATH=/usr/local/cuda/lib64/stubs:${LD_LIBRARY_PATH} \
     tensorflow/tools/ci_build/builds/configured GPU \
-    bazel build -c opt --config=cuda \
+    bazel build -c opt --copt=-mavx --config=cuda \
         --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" \
         tensorflow/tools/pip_package:build_pip_package && \
     rm /usr/local/cuda/lib64/stubs/libcuda.so.1 && \
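After following the runtime-package steps added to install_linux.md above, you can confirm that the TensorRT runtime is visible to the dynamic loader before installing `tensorflow-gpu`. This is a sketch; the `libnvinfer.so.4` soname shipped with TensorRT 3.0.4 is an assumption worth verifying on your system:

```python
import ctypes

# Try to dlopen the TensorRT runtime installed by the libnvinfer packages.
# The soname "libnvinfer.so.4" is assumed for TensorRT 3.0.4; adjust if needed.
try:
    ctypes.CDLL("libnvinfer.so.4")
    print("TensorRT runtime is visible to the dynamic loader.")
except OSError as err:
    print("TensorRT runtime not found: {}".format(err))
```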
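The variable_scope.py hunk corrects the `EagerVariableStore` docstring: `variables()` is a method, not a property. A self-contained version of that docstring example, with an illustrative stand-in for `dataset_iterator` and the `units` argument that `tf.layers.dense` requires (a sketch assuming the TensorFlow 1.8 eager APIs):

```python
import tensorflow as tf
import tensorflow.contrib.eager as tfe

tf.enable_eager_execution()

container = tfe.EagerVariableStore()
# Illustrative stand-in for the docstring's `dataset_iterator`.
dataset_iterator = [tf.random_normal([4, 8]) for _ in range(3)]
for input_batch in dataset_iterator:
  with container.as_default():
    # Variables created here are owned by `container` and reused across steps.
    x = tf.layers.dense(input_batch, units=16, name="l1")
print(container.variables())  # Prints the kernel and bias of layer "l1".
```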
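The supervisor.py hunk fixes an error message that said the opposite of what the check enforces. Under eager execution, constructing a `tf.train.Supervisor` raises immediately, as a short sketch (assuming TensorFlow 1.8) shows:

```python
import tensorflow as tf

tf.enable_eager_execution()

try:
    tf.train.Supervisor()
except RuntimeError as err:
    print(err)  # "Supervisors are not compatible with eager execution."
```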