Commit 74ea1c8

Add support for executor service account (kubeflow#1322)

bbenzikry authored Aug 22, 2021
1 parent d38c904
Showing 4 changed files with 14 additions and 0 deletions.
5 changes: 5 additions & 0 deletions docs/quick-start-guide.md
@@ -12,6 +12,7 @@ For a more detailed guide on how to use, compose, and work with `SparkApplicatio
- [Upgrade](#upgrade)
- [About the Spark Job Namespace](#about-the-spark-job-namespace)
- [About the Service Account for Driver Pods](#about-the-service-account-for-driver-pods)
- [About the Service Account for Executor Pods](#about-the-service-account-for-executor-pods)
- [Enable Metric Exporting to Prometheus](#enable-metric-exporting-to-prometheus)
- [Spark Application Metrics](#spark-application-metrics)
- [Work Queue Metrics](#work-queue-metrics)
@@ -175,6 +176,10 @@ The Spark Operator uses the Spark Job Namespace to identify and filter relevant

A Spark driver pod needs a Kubernetes service account in the pod's namespace with permissions to create, get, list, and delete executor pods, and to create a Kubernetes headless service for the driver. Without such a service account, the driver will fail and exit, unless the default service account in the pod's namespace already has the needed permissions. To submit and run a `SparkApplication` in a namespace, please make sure there is a service account with those permissions in the namespace and set `.spec.driver.serviceAccount` to the name of the service account. Please refer to [spark-rbac.yaml](../manifest/spark-rbac.yaml) for an example RBAC setup that creates a driver service account named `spark` in the `default` namespace, with an RBAC role binding giving the service account the needed permissions.
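The linked manifest is the authoritative example; as a minimal illustrative sketch of such a setup (the resource names `spark-role` and `spark-role-binding` are hypothetical, and the actual [spark-rbac.yaml](../manifest/spark-rbac.yaml) may differ), it might look like:

```yaml
# Hypothetical sketch of a driver RBAC setup; see spark-rbac.yaml for the real example.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spark
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: spark-role          # hypothetical name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "get", "list", "delete"]
- apiGroups: [""]
  resources: ["services"]   # for the driver's headless service
  verbs: ["create", "get", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spark-role-binding  # hypothetical name
  namespace: default
subjects:
- kind: ServiceAccount
  name: spark
  namespace: default
roleRef:
  kind: Role
  name: spark-role
  apiGroup: rbac.authorization.k8s.io
```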

## About the Service Account for Executor Pods

A Spark executor pod may be configured to use a Kubernetes service account in the pod's namespace. To submit and run a `SparkApplication` in a namespace, please make sure there is a service account with the required permissions in the namespace and set `.spec.executor.serviceAccount` to the name of the service account.
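For illustration, a minimal sketch of the relevant `SparkApplication` fragment (the application name `spark-pi` is hypothetical; the executor field values follow the examples in `docs/user-guide.md`):

```yaml
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: spark-pi          # hypothetical application name
  namespace: default
spec:
  executor:
    cores: 1
    instances: 1
    memory: "512m"
    serviceAccount: spark # service account used by the executor pods
```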

## Enable Metric Exporting to Prometheus

The operator exposes a set of metrics via a metric endpoint to be scraped by `Prometheus`. By default, the Helm chart installs the operator with the flag that enables metrics (`-enable-metrics=true`) as well as the annotations used by Prometheus to scrape the metric endpoint. If `podMonitor.enable` is set to `true`, the Helm chart will also submit a pod monitor for the operator's pod. To install the operator **without** metrics enabled, pass the appropriate flag during `helm install`:
2 changes: 2 additions & 0 deletions docs/user-guide.md
@@ -185,6 +185,7 @@ spec:
    memory: 512m
    labels:
      version: 3.1.1
    serviceAccount: spark
```

### Specifying Extra Java Options
@@ -271,6 +272,7 @@ spec:
    cores: 1
    instances: 1
    memory: "512m"
    serviceAccount: spark
    gpu:
      name: "nvidia.com/gpu"
      quantity: 1
2 changes: 2 additions & 0 deletions pkg/config/constants.go
@@ -117,6 +117,8 @@ const (
	// SparkDriverServiceAccountName is the Spark configuration key for specifying name of the Kubernetes service
	// account used by the driver pod.
	SparkDriverServiceAccountName = "spark.kubernetes.authenticate.driver.serviceAccountName"
	// SparkExecutorAccountName is the Spark configuration key for specifying the name of the Kubernetes service
	// account used by the executor pod.
	SparkExecutorAccountName = "spark.kubernetes.authenticate.executor.serviceAccountName"
	// SparkInitContainerImage is the Spark configuration key for specifying a custom init-container image.
	SparkInitContainerImage = "spark.kubernetes.initContainer.image"
	// SparkJarsDownloadDir is the Spark configuration key for specifying the download path in the driver and
5 changes: 5 additions & 0 deletions pkg/controller/sparkapplication/submission.go
@@ -374,6 +374,11 @@ func addExecutorConfOptions(app *v1beta2.SparkApplication, submissionID string)
			fmt.Sprintf("spark.executor.memoryOverhead=%s", *app.Spec.Executor.MemoryOverhead))
	}

	if app.Spec.Executor.ServiceAccount != nil {
		executorConfOptions = append(executorConfOptions,
			fmt.Sprintf("%s=%s", config.SparkExecutorAccountName, *app.Spec.Executor.ServiceAccount))
	}

	if app.Spec.Executor.DeleteOnTermination != nil {
		executorConfOptions = append(executorConfOptions,
			fmt.Sprintf("%s=%t", config.SparkExecutorDeleteOnTermination, *app.Spec.Executor.DeleteOnTermination))