Hi,
I'm trying to run this example with Spark and PyTorch, but it does not seem to actually run in parallel. That is, I am on a GCP cluster and the training runs in parallel on the master node, but it does not seem to run in parallel on the worker nodes as well. However, if I use this other example, where the data is local and I just use PyTorch, the training does get distributed the way I'd like. Should both examples run in parallel across all the nodes?
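To illustrate what I mean by "running on the workers", here is a minimal diagnostic sketch (assuming a standard PySpark session on the cluster; none of this comes from either example) that shows which hosts Spark tasks actually land on:

```python
# Minimal sketch: run a trivial Spark job and record which host executes
# each task, to check whether work is spread across the worker nodes or
# is all landing on the master. Assumes a standard PySpark environment.
import socket

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("worker-check").getOrCreate()
sc = spark.sparkContext

# One task per default-parallelism slot; each task reports its hostname.
hosts = (
    sc.parallelize(range(sc.defaultParallelism), sc.defaultParallelism)
      .map(lambda _: socket.gethostname())
      .collect()
)
print(sorted(set(hosts)))  # only the master's hostname => nothing ran on workers
```

I would expect to see the worker hostnames in that output, not just the master's.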