Closed
Labels: bug (Something isn't working)
Description
```python
In [2]: a = Batch(a={})
In [3]: b = Batch(a=torch.tensor([3, 4], device='cuda'))
In [4]: Batch.cat([a, b])
```
This raises an exception. The same happens with `Batch.stack`:
```
~/github/tianshou-new/tianshou/data/batch.py in _is_scalar(value)
     37     # the check of dict / Batch is omitted because this only checks a value.
     38     # a dict / Batch will eventually check their values
---> 39     value = np.asanyarray(value)
     40     return value.size == 1 and not value.shape
     41

~/.local/lib/python3.6/site-packages/numpy/core/_asarray.py in asanyarray(a, dtype, order)
    136
    137     """
--> 138     return array(a, dtype, copy=False, order=order, subok=True)
    139
    140

~/.local/lib/python3.6/site-packages/torch/tensor.py in __array__(self, dtype)
    490     def __array__(self, dtype=None):
    491         if dtype is None:
--> 492             return self.numpy()
    493         else:
    494             return self.numpy().astype(dtype, copy=False)

TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
```