Does Adasum work with Gloo? #3610
Unanswered
jayatijain asked this question in Q&A
Replies: 2 comments
-
Hi @jayatijain, at this time Adasum requires MPI because it hasn't been implemented with Gloo operations so far. Unfortunately, this makes it incompatible with elastic Horovod. It should really surface a more explicit Horovod error.
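In the meantime, if you want a script that can run under both launchers, one option is to pick the reduction op at runtime and fall back to plain averaging when MPI isn't active. A minimal sketch (the tiny model and optimizer are just placeholders; adapt it to your actual training script):

```python
import torch
import horovod.torch as hvd

hvd.init()

# Placeholders standing in for your real model and optimizer.
model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Adasum is only implemented for the MPI backend today, so fall back to
# plain averaging when the job is running on Gloo (e.g. an elastic run).
reduction_op = hvd.Adasum if hvd.mpi_enabled() else hvd.Average

optimizer = hvd.DistributedOptimizer(
    optimizer,
    named_parameters=model.named_parameters(),
    op=reduction_op)
```

With something like that in place, the same script runs under Gloo (without Adasum) and under MPI (with Adasum), instead of failing at the first allreduce.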
-
Hi @maxhgerlach, thank you for your response. Do you know if there's a plan to support Adasum for Gloo in the future?
-
Context: I'm running elastic Horovod for fault tolerance and scaling, and I wanted to use Adasum in my training script. According to the documentation, Adasum works with MPI, whereas elastic Horovod runs on Gloo. I'm looking for a way to use Adasum with elastic Horovod.
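For reference, this is roughly how my script enables Adasum, following the pytorch_mnist_elastic.py example (a simplified sketch: the real model and the argument parsing are omitted, and use_adasum stands in for the parsed --use-adasum flag):

```python
import torch
import horovod.torch as hvd

hvd.init()

# Simplified stand-ins for the MNIST model and optimizer in the example.
model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Stand-in for the parsed --use-adasum command line flag.
use_adasum = True

optimizer = hvd.DistributedOptimizer(
    optimizer,
    named_parameters=model.named_parameters(),
    op=hvd.Adasum if use_adasum else hvd.Average)
```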
Currently I'm trying to run Adasum with Gloo and I'm facing some challenges. To see whether Adasum works with Gloo, I ran a small experiment using the latest Horovod Docker image with Adasum enabled:
Noticed that it used mpi to run: mpirun --allow-run-as-root --tag-output -np 2 -H localhost:2 -bind-to none -map-by slot -mca pml ob1 -mca btl ^openib -mca btl_tcp_if_include lo -x NCCL_SOCKET_IFNAME=lo -x CUDA_VERSION -x DEBIAN_FRONTEND -x HOME -x HOSTNAME -x LC_CTYPE -x LD_LIBRARY_PATH -x LIBRARY_PATH -x LS_COLORS -x NCCL_VERSION -x NVARCH -x NVIDIA_DRIVER_CAPABILITIES -x NVIDIA_REQUIRE_CUDA -x NVIDIA_VISIBLE_DEVICES -x NV_CUDA_COMPAT_PACKAGE -x NV_CUDA_CUDART_DEV_VERSION -x NV_CUDA_CUDART_VERSION -x NV_CUDA_LIB_VERSION -x NV_LIBCUBLAS_DEV_PACKAGE -x NV_LIBCUBLAS_DEV_PACKAGE_NAME -x NV_LIBCUBLAS_DEV_VERSION -x NV_LIBCUBLAS_PACKAGE -x NV_LIBCUBLAS_PACKAGE_NAME -x NV_LIBCUBLAS_VERSION -x NV_LIBCUSPARSE_DEV_VERSION -x NV_LIBCUSPARSE_VERSION -x NV_LIBNCCL_DEV_PACKAGE -x NV_LIBNCCL_DEV_PACKAGE_NAME -x NV_LIBNCCL_DEV_PACKAGE_VERSION -x NV_LIBNCCL_PACKAGE -x NV_LIBNCCL_PACKAGE_NAME -x NV_LIBNCCL_PACKAGE_VERSION -x NV_LIBNPP_DEV_PACKAGE -x NV_LIBNPP_DEV_VERSION -x NV_LIBNPP_PACKAGE -x NV_LIBNPP_VERSION -x NV_NVML_DEV_VERSION -x NV_NVPROF_DEV_PACKAGE -x NV_NVPROF_VERSION -x NV_NVTX_VERSION -x PATH -x PWD -x SHLVL -x TERM -x _ python elastic/pytorch/pytorch_mnist_elastic.py --data-dir /code/data --epochs 2 --log-interval 100 --use-adasum
Then I forced it to use Gloo by passing the --gloo option:
horovodrun --gloo -np 2 -H localhost:2 --verbose python elastic/pytorch/pytorch_mnist_elastic.py --data-dir /code/data --epochs 2 --log-interval 100 --use-adasum
Please see the logs below for the error that I got. Kindly tell me what I might be missing.
horovodrun --gloo -np 2 -H localhost:2 --verbose python elastic/pytorch/pytorch_mnist_elastic.py --data-dir /code/data --epochs 2 --log-interval 100 --use-adasum
Filtering local host names.
Remote host found:
All hosts are local, finding the interfaces with address 127.0.0.1
Local interface found lo
env HOROVOD_HOSTNAME=localhost HOROVOD_RANK=1 HOROVOD_SIZE=2 HOROVOD_LOCAL_RANK=1 HOROVOD_LOCAL_SIZE=2 HOROVOD_CROSS_RANK=0 HOROVOD_CROSS_SIZE=1 NV_LIBCUBLAS_VERSION=11.4.1.1043-1 NVIDIA_VISIBLE_DEVICES=all NV_NVML_DEV_VERSION=11.2.152-1 NV_LIBNCCL_DEV_PACKAGE=libnccl-dev=2.8.4-1+cuda11.2 NV_LIBNCCL_DEV_PACKAGE_VERSION=2.8.4-1 HOSTNAME=03f015cc1e95 NVIDIA_REQUIRE_CUDA='cuda>=11.2 brand=tesla,driver>=418,driver<419' NV_LIBCUBLAS_DEV_PACKAGE=libcublas-dev-11-2=11.4.1.1043-1 NV_NVTX_VERSION=11.2.152-1 NV_CUDA_CUDART_DEV_VERSION=11.2.152-1 NV_LIBCUSPARSE_VERSION=11.4.1.1152-1 NV_LIBNPP_VERSION=11.3.2.152-1 NCCL_VERSION=2.8.4-1 PWD=/horovod/examples NVIDIA_DRIVER_CAPABILITIES=compute,utility NV_NVPROF_DEV_PACKAGE=cuda-nvprof-11-2=11.2.152-1 NV_LIBNPP_PACKAGE=libnpp-11-2=11.3.2.152-1 NV_LIBNCCL_DEV_PACKAGE_NAME=libnccl-dev NV_LIBCUBLAS_DEV_VERSION=11.4.1.1043-1 NV_LIBCUBLAS_DEV_PACKAGE_NAME=libcublas-dev-11-2 NV_CUDA_CUDART_VERSION=11.2.152-1 HOME=/root LS_COLORS='rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:.tar=01;31:.tgz=01;31:.arc=01;31:.arj=01;31:.taz=01;31:.lha=01;31:.lz4=01;31:.lzh=01;31:.lzma=01;31:.tlz=01;31:.txz=01;31:.tzo=01;31:.t7z=01;31:.zip=01;31:.z=01;31:.dz=01;31:.gz=01;31:.lrz=01;31:.lz=01;31:.lzo=01;31:.xz=01;31:.zst=01;31:.tzst=01;31:.bz2=01;31:.bz=01;31:.tbz=01;31:.tbz2=01;31:.tz=01;31:.deb=01;31:.rpm=01;31:.jar=01;31:.war=01;31:.ear=01;31:.sar=01;31:.rar=01;31:.alz=01;31:.ace=01;31:.zoo=01;31:.cpio=01;31:.7z=01;31:.rz=01;31:.cab=01;31:.wim=01;31:.swm=01;31:.dwm=01;31:.esd=01;31:.jpg=01;35:.jpeg=01;35:.mjpg=01;35:.mjpeg=01;35:.gif=01;35:.bmp=01;35:.pbm=01;35:.pgm=01;35:.ppm=01;35:.tga=01;35:.xbm=01;35:.xpm=01;35:.tif=01;35:.tiff=01;35:.png=01;35:.svg=01;35:.svgz=01;35:.mng=01;35:.pcx=01;35:.mov=01;35:.mpg=01;35:.mpeg=01;35:.m2v=01;35:.mkv=01;35:.webm=01;35:.ogm=01;35:.mp4=01;35:.m4v=01;35:.mp4v=01;35:.vob=01;35:.qt=01;35:.nuv=01;35:.wmv=01;35:.asf=01;35:.rm=01;35:.rmvb=01;35:.flc=01;35:.avi=01;35:.fli=01;35:.flv=01;35:.gl=01;35:.dl=01;35:.xcf=01;35:.xwd=01;35:.yuv=01;35:.cgm=01;35:.emf=01;35:.ogv=01;35:.ogx=01;35:.aac=00;36:.au=00;36:.flac=00;36:.m4a=00;36:.mid=00;36:.midi=00;36:.mka=00;36:.mp3=00;36:.mpc=00;36:.ogg=00;36:.ra=00;36:.wav=00;36:.oga=00;36:.opus=00;36:.spx=00;36:.xspf=00;36:' CUDA_VERSION=11.2.2 NV_LIBCUBLAS_PACKAGE=libcublas-11-2=11.4.1.1043-1 NV_LIBNPP_DEV_PACKAGE=libnpp-dev-11-2=11.3.2.152-1 NV_LIBCUBLAS_PACKAGE_NAME=libcublas-11-2 NV_LIBNPP_DEV_VERSION=11.3.2.152-1 TERM=xterm NV_LIBCUSPARSE_DEV_VERSION=11.4.1.1152-1 LIBRARY_PATH=/usr/local/cuda/lib64/stubs SHLVL=1 NV_CUDA_LIB_VERSION=11.2.2-1 NVARCH=x86_64 NV_CUDA_COMPAT_PACKAGE=cuda-compat-11-2 NV_LIBNCCL_PACKAGE=libnccl2=2.8.4-1+cuda11.2 LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64 NV_NVPROF_VERSION=11.2.152-1 PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin NV_LIBNCCL_PACKAGE_NAME=libnccl2 NV_LIBNCCL_PACKAGE_VERSION=2.8.4-1 DEBIAN_FRONTEND=noninteractive _=/usr/local/bin/horovodrun LC_CTYPE=C.UTF-8 PYTHONUNBUFFERED=1 HOROVOD_GLOO_RENDEZVOUS_ADDR=127.0.0.1 HOROVOD_GLOO_RENDEZVOUS_PORT=44746 HOROVOD_CONTROLLER=gloo HOROVOD_CPU_OPERATIONS=gloo HOROVOD_GLOO_IFACE=lo NCCL_SOCKET_IFNAME=lo python elastic/pytorch/pytorch_mnist_elastic.py --data-dir /code/data --epochs 2 --log-interval 100 --use-adasum
env HOROVOD_HOSTNAME=localhost HOROVOD_RANK=0 HOROVOD_SIZE=2 HOROVOD_LOCAL_RANK=0 HOROVOD_LOCAL_SIZE=2 HOROVOD_CROSS_RANK=0 HOROVOD_CROSS_SIZE=1 NV_LIBCUBLAS_VERSION=11.4.1.1043-1 NVIDIA_VISIBLE_DEVICES=all NV_NVML_DEV_VERSION=11.2.152-1 NV_LIBNCCL_DEV_PACKAGE=libnccl-dev=2.8.4-1+cuda11.2 NV_LIBNCCL_DEV_PACKAGE_VERSION=2.8.4-1 HOSTNAME=03f015cc1e95 NVIDIA_REQUIRE_CUDA='cuda>=11.2 brand=tesla,driver>=418,driver<419' NV_LIBCUBLAS_DEV_PACKAGE=libcublas-dev-11-2=11.4.1.1043-1 NV_NVTX_VERSION=11.2.152-1 NV_CUDA_CUDART_DEV_VERSION=11.2.152-1 NV_LIBCUSPARSE_VERSION=11.4.1.1152-1 NV_LIBNPP_VERSION=11.3.2.152-1 NCCL_VERSION=2.8.4-1 PWD=/horovod/examples NVIDIA_DRIVER_CAPABILITIES=compute,utility NV_NVPROF_DEV_PACKAGE=cuda-nvprof-11-2=11.2.152-1 NV_LIBNPP_PACKAGE=libnpp-11-2=11.3.2.152-1 NV_LIBNCCL_DEV_PACKAGE_NAME=libnccl-dev NV_LIBCUBLAS_DEV_VERSION=11.4.1.1043-1 NV_LIBCUBLAS_DEV_PACKAGE_NAME=libcublas-dev-11-2 NV_CUDA_CUDART_VERSION=11.2.152-1 HOME=/root LS_COLORS='rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:.tar=01;31:.tgz=01;31:.arc=01;31:.arj=01;31:.taz=01;31:.lha=01;31:.lz4=01;31:.lzh=01;31:.lzma=01;31:.tlz=01;31:.txz=01;31:.tzo=01;31:.t7z=01;31:.zip=01;31:.z=01;31:.dz=01;31:.gz=01;31:.lrz=01;31:.lz=01;31:.lzo=01;31:.xz=01;31:.zst=01;31:.tzst=01;31:.bz2=01;31:.bz=01;31:.tbz=01;31:.tbz2=01;31:.tz=01;31:.deb=01;31:.rpm=01;31:.jar=01;31:.war=01;31:.ear=01;31:.sar=01;31:.rar=01;31:.alz=01;31:.ace=01;31:.zoo=01;31:.cpio=01;31:.7z=01;31:.rz=01;31:.cab=01;31:.wim=01;31:.swm=01;31:.dwm=01;31:.esd=01;31:.jpg=01;35:.jpeg=01;35:.mjpg=01;35:.mjpeg=01;35:.gif=01;35:.bmp=01;35:.pbm=01;35:.pgm=01;35:.ppm=01;35:.tga=01;35:.xbm=01;35:.xpm=01;35:.tif=01;35:.tiff=01;35:.png=01;35:.svg=01;35:.svgz=01;35:.mng=01;35:.pcx=01;35:.mov=01;35:.mpg=01;35:.mpeg=01;35:.m2v=01;35:.mkv=01;35:.webm=01;35:.ogm=01;35:.mp4=01;35:.m4v=01;35:.mp4v=01;35:.vob=01;35:.qt=01;35:.nuv=01;35:.wmv=01;35:.asf=01;35:.rm=01;35:.rmvb=01;35:.flc=01;35:.avi=01;35:.fli=01;35:.flv=01;35:.gl=01;35:.dl=01;35:.xcf=01;35:.xwd=01;35:.yuv=01;35:.cgm=01;35:.emf=01;35:.ogv=01;35:.ogx=01;35:.aac=00;36:.au=00;36:.flac=00;36:.m4a=00;36:.mid=00;36:.midi=00;36:.mka=00;36:.mp3=00;36:.mpc=00;36:.ogg=00;36:.ra=00;36:.wav=00;36:.oga=00;36:.opus=00;36:.spx=00;36:.xspf=00;36:' CUDA_VERSION=11.2.2 NV_LIBCUBLAS_PACKAGE=libcublas-11-2=11.4.1.1043-1 NV_LIBNPP_DEV_PACKAGE=libnpp-dev-11-2=11.3.2.152-1 NV_LIBCUBLAS_PACKAGE_NAME=libcublas-11-2 NV_LIBNPP_DEV_VERSION=11.3.2.152-1 TERM=xterm NV_LIBCUSPARSE_DEV_VERSION=11.4.1.1152-1 LIBRARY_PATH=/usr/local/cuda/lib64/stubs SHLVL=1 NV_CUDA_LIB_VERSION=11.2.2-1 NVARCH=x86_64 NV_CUDA_COMPAT_PACKAGE=cuda-compat-11-2 NV_LIBNCCL_PACKAGE=libnccl2=2.8.4-1+cuda11.2 LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64 NV_NVPROF_VERSION=11.2.152-1 PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin NV_LIBNCCL_PACKAGE_NAME=libnccl2 NV_LIBNCCL_PACKAGE_VERSION=2.8.4-1 DEBIAN_FRONTEND=noninteractive _=/usr/local/bin/horovodrun LC_CTYPE=C.UTF-8 PYTHONUNBUFFERED=1 HOROVOD_GLOO_RENDEZVOUS_ADDR=127.0.0.1 HOROVOD_GLOO_RENDEZVOUS_PORT=44746 HOROVOD_CONTROLLER=gloo HOROVOD_CPU_OPERATIONS=gloo HOROVOD_GLOO_IFACE=lo NCCL_SOCKET_IFNAME=lo python elastic/pytorch/pytorch_mnist_elastic.py --data-dir /code/data --epochs 2 --log-interval 100 --use-adasum
[0]:elastic/pytorch/pytorch_mnist_elastic.py:96: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
[0]: return F.log_softmax(x)
[1]:elastic/pytorch/pytorch_mnist_elastic.py:96: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
[1]: return F.log_softmax(x)
[1]:Traceback (most recent call last):
[1]: File "/usr/local/lib/python3.8/dist-packages/horovod/torch/mpi_ops.py", line 145, in _allreduce_async
[1]: handle = getattr(mpi_lib, function)(tensor, output, divisor,
[1]:RuntimeError: Horovod has been shut down. This was caused by an exception on one of the ranks or an attempt to allreduce, allgather or broadcast a tensor after one of the ranks finished execution. If the shutdown was caused by an exception, you should see the exception in the log before the first shutdown message.
[1]:
[1]:During handling of the above exception, another exception occurred:
[1]:
[1]:Traceback (most recent call last):
[1]: File "/usr/local/lib/python3.8/dist-packages/horovod/common/elastic.py", line 164, in wrapper
[1]: return func(state, *args, **kwargs)
[1]: File "elastic/pytorch/pytorch_mnist_elastic.py", line 143, in train
[1]: loss.backward()
[1]: File "/usr/local/lib/python3.8/dist-packages/torch/tensor.py", line 245, in backward
[1]: torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
[1]: File "/usr/local/lib/python3.8/dist-packages/torch/autograd/init.py", line 145, in backward
[1]: Variable._execution_engine.run_backward(
[1]: File "/usr/local/lib/python3.8/dist-packages/horovod/torch/optimizer.py", line 472, in hook
[1]: handle, ctx = self.allreduce_grad_async(p)
[1]: File "/usr/local/lib/python3.8/dist-packages/horovod/torch/optimizer.py", line 450, in allreduce_grad_async
[1]: handle = allreduce_async(tensor_compressed.data, name=name, op=Adasum)
[1]: File "/usr/local/lib/python3.8/dist-packages/horovod/torch/mpi_ops.py", line 287, in allreduce_async
[1]: return _allreduce_async(tensor, tensor, name, op, prescale_factor, postscale_factor, process_set)
[1]: File "/usr/local/lib/python3.8/dist-packages/horovod/torch/mpi_ops.py", line 149, in _allreduce_async
[1]: raise HorovodInternalError(e)
[1]:horovod.common.exceptions.HorovodInternalError: Horovod has been shut down. This was caused by an exception on one of the ranks or an attempt to allreduce, allgather or broadcast a tensor after one of the ranks finished execution. If the shutdown was caused by an exception, you should see the exception in the log before the first shutdown message.
[1]:
[1]:During handling of the above exception, another exception occurred:
[1]:
[1]:Traceback (most recent call last):
[1]: File "elastic/pytorch/pytorch_mnist_elastic.py", line 199, in
[1]: train(state)
[1]: File "/usr/local/lib/python3.8/dist-packages/horovod/common/elastic.py", line 166, in wrapper
[1]: state.restore()
[1]: File "/usr/local/lib/python3.8/dist-packages/horovod/torch/elastic/state.py", line 57, in restore
[1]: handler.restore()
[1]: File "/usr/local/lib/python3.8/dist-packages/horovod/torch/elastic/state.py", line 113, in restore
[1]: self.value.load_state_dict(self._saved_optimizer_state)
[1]: File "/usr/local/lib/python3.8/dist-packages/torch/optim/optimizer.py", line 146, in load_state_dict
[1]: raise ValueError("loaded state dict contains a parameter group "
[1]:ValueError: loaded state dict contains a parameter group that doesn't match the size of optimizer's group
[0]:terminate called after throwing an instance of 'gloo::IoException'
[0]: what(): [/tmp/pip-req-build-l2iphkqz/horovod/common/gloo/http_store.cc:54] [/tmp/pip-req-build-l2iphkqz/horovod/common/gloo/http_store.cc:54] Wait timeout after 30 seconds for key(s): 1. You may want to increase the timeout via HOROVOD_GLOO_TIMEOUT_SECONDS
[0]:Aborted
Process 0 exit with status code 134.
Terminating remaining workers after failure of Process 0.
[1]:Terminated
Process 1 exit with status code 143.
Traceback (most recent call last):
File "/usr/local/bin/horovodrun", line 8, in
sys.exit(run_commandline())
File "/usr/local/lib/python3.8/dist-packages/horovod/runner/launch.py", line 824, in run_commandline
_run(args)
File "/usr/local/lib/python3.8/dist-packages/horovod/runner/launch.py", line 814, in _run
return _run_static(args)
File "/usr/local/lib/python3.8/dist-packages/horovod/runner/launch.py", line 672, in _run_static
_launch_job(args, settings, nics, command)
File "/usr/local/lib/python3.8/dist-packages/horovod/runner/launch.py", line 787, in _launch_job
run_controller(args.use_gloo, gloo_run_fn,
File "/usr/local/lib/python3.8/dist-packages/horovod/runner/launch.py", line 741, in run_controller
gloo_run()
File "/usr/local/lib/python3.8/dist-packages/horovod/runner/launch.py", line 779, in gloo_run_fn
gloo_run(settings, nics, env, driver_ip, command)
File "/usr/local/lib/python3.8/dist-packages/horovod/runner/gloo_run.py", line 300, in gloo_run
launch_gloo(command, exec_command, settings, nics, env, server_ip)
File "/usr/local/lib/python3.8/dist-packages/horovod/runner/gloo_run.py", line 284, in launch_gloo
raise RuntimeError('Horovod detected that one or more processes exited with non-zero '
RuntimeError: Horovod detected that one or more processes exited with non-zero status, thus causing the job to be terminated. The first process to do so was:
Process name: 0
Exit code: 134
Versions of various libraries I'm using:
torch==1.8.1+cu111
torchvision==0.9.1+cu111
horovod @ file:///horovod/dist/horovod-0.25.0.tar.gz