Ways to use Apache Spark™ clusters in DataSphere
Yandex Data Processing allows you to deploy Apache Spark™ clusters. You can use Yandex Data Processing clusters to run distributed training.
Cluster deployment options
To work with Yandex Data Processing clusters in DataSphere, you can use the following:

- Persistent Yandex Data Processing clusters.
- Temporary Yandex Data Processing clusters.

If you have no existing Yandex Data Processing clusters or you need a cluster for a short time, use temporary clusters. You can create them using the following:
- Spark connector (preferred)
- Yandex Data Processing template
Regardless of the deployment option, usage of Yandex Data Processing clusters is billed according to the Yandex Data Processing pricing policy.
Setting up a DataSphere project to work with Yandex Data Processing clusters
To work with Yandex Data Processing clusters:
- In the project settings, specify these parameters:

  - Default folder for integrating with other Yandex Cloud services. The Yandex Data Processing cluster will be created in this folder, subject to the current cloud quotas. The fee for using the cluster will be debited from your cloud billing account.
  - Service account with the `vpc.user` role. DataSphere will use this account to work with the Yandex Data Processing cluster network.
  - Subnet for DataSphere to communicate with the Yandex Data Processing cluster. Since the Yandex Data Processing cluster needs internet access, make sure to configure a NAT gateway in this subnet. Note that after you specify a subnet, allocating computing resources may take longer.
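The NAT gateway requirement above can be set up with the Yandex Cloud CLI; a minimal sketch, assuming `yc` is already configured and the resource names (`ds-nat`, `ds-route-table`, `ds-network`, `ds-subnet`) and the `<gateway_ID>` placeholder are yours to substitute:

```shell
# Create an egress NAT gateway (name is a hypothetical example)
yc vpc gateway create --name ds-nat

# Create a route table that sends all outbound traffic through the gateway;
# replace <gateway_ID> with the ID printed by the previous command
yc vpc route-table create \
  --name ds-route-table \
  --network-name ds-network \
  --route destination=0.0.0.0/0,gateway-id=<gateway_ID>

# Attach the route table to the subnet specified in the project settings
yc vpc subnet update ds-subnet --route-table-id <route_table_ID>
```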
- Create a service agent. To allow the service agent to operate in DataSphere, ask your cloud admin or owner to run the following command in the Yandex Cloud CLI:

  ```bash
  yc iam service-control enable datasphere --cloud-id <cloud_ID>
  ```

  Where `--cloud-id` is the ID of the cloud you are going to use in the DataSphere community.
- Create a service account with the following roles:

  - `dataproc.agent`: to use Yandex Data Processing clusters.
  - `dataproc.admin`: to create clusters from Yandex Data Processing templates.
  - `vpc.user`: to use the Yandex Data Processing cluster network.
  - `iam.serviceAccounts.user`: to create resources in the folder on behalf of the service account.
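Creating the service account and granting the roles can also be scripted; a hedged sketch with the Yandex Cloud CLI, where the account name `ds-dataproc-sa` and the `<folder_ID>` and `<service_account_ID>` placeholders are assumptions you would substitute with your own values:

```shell
# Create the service account (name is a hypothetical example)
yc iam service-account create --name ds-dataproc-sa

# Grant each required role on the folder the clusters will live in
for role in dataproc.agent dataproc.admin vpc.user iam.serviceAccounts.user; do
  yc resource-manager folder add-access-binding <folder_ID> \
    --role "$role" \
    --subject serviceAccount:<service_account_ID>
done
```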
- Under Spark clusters in the community settings, click Add service account and select the service account you created.
Warning

The persistent Yandex Data Processing cluster must have the `livy:livy.spark.deploy-mode : client` property set.
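If you create the persistent cluster with the CLI, this property can be supplied at creation time. The sketch below is an assumption about the `yc dataproc cluster create` interface (in particular the `--property` flag); verify the flag names with `yc dataproc cluster create --help` before running:

```shell
# Sketch only: flag names are assumptions, and the other required cluster
# parameters (zone, services, subclusters, etc.) are omitted for brevity.
yc dataproc cluster create \
  --name my-cluster \
  --property livy:livy.spark.deploy-mode=client
```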