Commit

Updated to use the v1beta1 version of the APIs
liyinan926 committed Jan 17, 2019
1 parent 97d4b14 commit c41576b
Showing 53 changed files with 741 additions and 664 deletions.
6 changes: 4 additions & 2 deletions README.md
@@ -10,9 +10,11 @@

## Project Status

-**Project status:** *alpha*
+**Project status:** *beta*

-The Kubernetes Operator for Apache Spark is still under active development. Backward compatibility of the APIs is not guaranteed for alpha releases.
+The Kubernetes Operator for Apache Spark is under active development, but backward compatibility of the APIs is guaranteed for beta releases.

+**If you are currently using the `v1alpha1` version of the APIs in your manifests, please update them to use the `v1beta1` version by changing `apiVersion: "sparkoperator.k8s.io/v1alpha1"` to `apiVersion: "sparkoperator.k8s.io/v1beta1"`. You will also need to delete the `v1alpha1` version of the CustomResourceDefinitions named `sparkapplications.sparkoperator.k8s.io` and `scheduledsparkapplications.sparkoperator.k8s.io`, and replace them with the `v1beta1` version, either by installing the latest version of the operator or by running `kubectl create -f manifest/spark-operator-crds.yaml`.**
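
In practice, the migration described above boils down to a few commands. The following is a sketch (deleting a CustomResourceDefinition also deletes all existing objects of that type, so export your `SparkApplication` objects first; the manifest file name in the last step is a placeholder):

```sh
# Export existing applications before touching the CRDs.
kubectl get sparkapplications --all-namespaces -o yaml > sparkapps-backup.yaml

# Delete the v1alpha1 CustomResourceDefinitions.
kubectl delete crd sparkapplications.sparkoperator.k8s.io
kubectl delete crd scheduledsparkapplications.sparkoperator.k8s.io

# Recreate the CRDs at v1beta1.
kubectl create -f manifest/spark-operator-crds.yaml

# Re-apply your manifests after changing their apiVersion to
# "sparkoperator.k8s.io/v1beta1".
kubectl apply -f my-spark-app.yaml
```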

Customization of Spark pods, e.g., mounting arbitrary volumes and setting pod affinity, is currently experimental and implemented using a Kubernetes
[Mutating Admission Webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/), which became beta in Kubernetes 1.9.
4 changes: 2 additions & 2 deletions docs/api.md
@@ -4,8 +4,8 @@ The Kubernetes Operator for Apache Spark uses [CustomResourceDefinitions](https
named `SparkApplication` and `ScheduledSparkApplication` for specifying one-time Spark applications and Spark applications
that are supposed to run on a standard [cron](https://en.wikipedia.org/wiki/Cron) schedule. Similarly to other kinds of
Kubernetes resources, they consist of a specification in a `Spec` field and a `Status` field. The definitions are organized
-in the following structure. The v1alpha1 version of the API definition is implemented
-[here](../pkg/apis/sparkoperator.k8s.io/v1alpha1/types.go).
+in the following structure. The v1beta1 version of the API definition is implemented
+[here](../pkg/apis/sparkoperator.k8s.io/v1beta1/types.go).

```
ScheduledSparkApplication
  ...
```
2 changes: 1 addition & 1 deletion docs/gcp.md
@@ -43,7 +43,7 @@ The ones set in `core-site.xml` apply to all applications using the image. Also
variable `GCS_PROJECT_ID` must be set when using the image at `gcr.io/ynli-k8s/spark:v2.3.0-gcs`.

```yaml
apiVersion: "sparkoperator.k8s.io/v1alpha1"
apiVersion: "sparkoperator.k8s.io/v1beta1"
kind: SparkApplication
metadata:
name: foo-gcs-bg
  ...
```
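
The `GCS_PROJECT_ID` environment variable mentioned above can be supplied through the driver and executor `envVars` maps of the `SparkApplication` spec, along these lines (a sketch; the project ID value is a placeholder):

```yaml
spec:
  driver:
    envVars:
      GCS_PROJECT_ID: my-gcp-project-id
  executor:
    envVars:
      GCS_PROJECT_ID: my-gcp-project-id
```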
2 changes: 1 addition & 1 deletion docs/quick-start-guide.md
@@ -135,7 +135,7 @@ $ kubectl get sparkapplications spark-pi -o=yaml
This will show something similar to the following:

```yaml
-apiVersion: sparkoperator.k8s.io/v1alpha1
+apiVersion: sparkoperator.k8s.io/v1beta1
kind: SparkApplication
metadata:
  ...
```
4 changes: 2 additions & 2 deletions docs/user-guide.md
@@ -45,7 +45,7 @@ It also has fields for specifying the unified container image (to use for both the driver and executors)
Below is an example showing part of a `SparkApplication` specification:

```yaml
-apiVersion: sparkoperator.k8s.io/v1alpha1
+apiVersion: sparkoperator.k8s.io/v1beta1
kind: SparkApplication
metadata:
  name: spark-pi
  ...
```

@@ -387,7 +387,7 @@ client so effectively the driver gets restarted.
The operator supports running a Spark application on a standard [cron](https://en.wikipedia.org/wiki/Cron) schedule using objects of the `ScheduledSparkApplication` custom resource type. A `ScheduledSparkApplication` object specifies a cron schedule on which the application should run and a `SparkApplication` template from which a `SparkApplication` object for each run of the application is created. The following is an example `ScheduledSparkApplication`:

```yaml
apiVersion: "sparkoperator.k8s.io/v1alpha1"
apiVersion: "sparkoperator.k8s.io/v1beta1"
kind: ScheduledSparkApplication
metadata:
name: spark-pi-scheduled
  ...
```
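
For context, the fields this example carries are a cron `schedule` plus a `template` from which a `SparkApplication` is created for each run. A fuller sketch (the schedule, image, and jar path below are illustrative assumptions, not part of this commit):

```yaml
apiVersion: "sparkoperator.k8s.io/v1beta1"
kind: ScheduledSparkApplication
metadata:
  name: spark-pi-scheduled
  namespace: default
spec:
  schedule: "@every 5m"
  concurrencyPolicy: Allow
  template:
    type: Scala
    mode: cluster
    image: gcr.io/spark/spark:v2.4.0
    mainClass: org.apache.spark.examples.SparkPi
    mainApplicationFile: local:///opt/spark/examples/jars/spark-examples_2.11-2.4.0.jar
```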
2 changes: 1 addition & 1 deletion examples/spark-pi-prometheus.yaml
@@ -14,7 +14,7 @@
# limitations under the License.
#

apiVersion: "sparkoperator.k8s.io/v1alpha1"
apiVersion: "sparkoperator.k8s.io/v1beta1"
kind: SparkApplication
metadata:
name: spark-pi
2 changes: 1 addition & 1 deletion examples/spark-pi-schedule.yaml
@@ -14,7 +14,7 @@
# limitations under the License.
#

apiVersion: "sparkoperator.k8s.io/v1alpha1"
apiVersion: "sparkoperator.k8s.io/v1beta1"
kind: ScheduledSparkApplication
metadata:
name: spark-pi-scheduled
2 changes: 1 addition & 1 deletion examples/spark-pi.yaml
@@ -14,7 +14,7 @@
# limitations under the License.
#

apiVersion: "sparkoperator.k8s.io/v1alpha1"
apiVersion: "sparkoperator.k8s.io/v1beta1"
kind: SparkApplication
metadata:
name: spark-pi
2 changes: 1 addition & 1 deletion examples/spark-py-pi.yaml
@@ -16,7 +16,7 @@
# Support for Python is experimental, and requires building SNAPSHOT image of Apache Spark,
# with `imagePullPolicy` set to Always

apiVersion: "sparkoperator.k8s.io/v1alpha1"
apiVersion: "sparkoperator.k8s.io/v1beta1"
kind: SparkApplication
metadata:
name: pyspark-pi
4 changes: 2 additions & 2 deletions manifest/spark-operator-crds.yaml
@@ -100,7 +100,7 @@ spec:
- Scala
- Python
- R
-  version: v1alpha1
+  version: v1beta1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
@@ -202,4 +202,4 @@ spec:
- Scala
- Python
- R
-  version: v1alpha1
+  version: v1beta1
8 changes: 4 additions & 4 deletions manifest/spark-operator-with-metrics.yaml
@@ -21,13 +21,13 @@ metadata:
  namespace: spark-operator
  labels:
    app.kubernetes.io/name: sparkoperator
-    app.kubernetes.io/version: v2.4.0-v1alpha1
+    app.kubernetes.io/version: v2.4.0-v1beta1
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: sparkoperator
-      app.kubernetes.io/version: v2.4.0-v1alpha1
+      app.kubernetes.io/version: v2.4.0-v1beta1
  strategy:
    type: Recreate
  template:
@@ -38,14 +38,14 @@ spec:
        prometheus.io/path: "/metrics"
      labels:
        app.kubernetes.io/name: sparkoperator
-        app.kubernetes.io/version: v2.4.0-v1alpha1
+        app.kubernetes.io/version: v2.4.0-v1beta1
      initializers:
        pending: []
    spec:
      serviceAccountName: sparkoperator
      containers:
      - name: sparkoperator
-        image: gcr.io/spark-operator/spark-operator:v2.4.0-v1alpha1-latest
+        image: gcr.io/spark-operator/spark-operator:v2.4.0-v1beta1-latest
        imagePullPolicy: Always
        ports:
        - containerPort: 10254
16 changes: 8 additions & 8 deletions manifest/spark-operator-with-webhook.yaml
@@ -21,20 +21,20 @@ metadata:
  namespace: spark-operator
  labels:
    app.kubernetes.io/name: sparkoperator
-    app.kubernetes.io/version: v2.4.0-v1alpha1
+    app.kubernetes.io/version: v2.4.0-v1beta1
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: sparkoperator
-      app.kubernetes.io/version: v2.4.0-v1alpha1
+      app.kubernetes.io/version: v2.4.0-v1beta1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: sparkoperator
-        app.kubernetes.io/version: v2.4.0-v1alpha1
+        app.kubernetes.io/version: v2.4.0-v1beta1
      initializers:
        pending: []
    spec:
@@ -45,7 +45,7 @@ spec:
          secretName: spark-webhook-certs
      containers:
      - name: sparkoperator
-        image: gcr.io/spark-operator/spark-operator:v2.4.0-v1alpha1-latest
+        image: gcr.io/spark-operator/spark-operator:v2.4.0-v1beta1-latest
        imagePullPolicy: Always
        volumeMounts:
        - name: webhook-certs
@@ -63,20 +63,20 @@ metadata:
  namespace: spark-operator
  labels:
    app.kubernetes.io/name: sparkoperator
-    app.kubernetes.io/version: v2.4.0-v1alpha1
+    app.kubernetes.io/version: v2.4.0-v1beta1
spec:
  backoffLimit: 3
  template:
    metadata:
      labels:
        app.kubernetes.io/name: sparkoperator
-        app.kubernetes.io/version: v2.4.0-v1alpha1
+        app.kubernetes.io/version: v2.4.0-v1beta1
    spec:
      serviceAccountName: sparkoperator
      restartPolicy: Never
      containers:
      - name: main
-        image: gcr.io/spark-operator/spark-operator:v2.4.0-v1alpha1-latest
+        image: gcr.io/spark-operator/spark-operator:v2.4.0-v1beta1-latest
        imagePullPolicy: IfNotPresent
        command: ["/usr/bin/gencerts.sh", "-p"]
---
@@ -92,4 +92,4 @@ spec:
    name: webhook
  selector:
    app.kubernetes.io/name: sparkoperator
-    app.kubernetes.io/version: v2.4.0-v1alpha1
+    app.kubernetes.io/version: v2.4.0-v1beta1
8 changes: 4 additions & 4 deletions manifest/spark-operator.yaml
@@ -21,27 +21,27 @@ metadata:
  namespace: spark-operator
  labels:
    app.kubernetes.io/name: sparkoperator
-    app.kubernetes.io/version: v2.4.0-v1alpha1
+    app.kubernetes.io/version: v2.4.0-v1beta1
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: sparkoperator
-      app.kubernetes.io/version: v2.4.0-v1alpha1
+      app.kubernetes.io/version: v2.4.0-v1beta1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: sparkoperator
-        app.kubernetes.io/version: v2.4.0-v1alpha1
+        app.kubernetes.io/version: v2.4.0-v1beta1
      initializers:
        pending: []
    spec:
      serviceAccountName: sparkoperator
      containers:
      - name: sparkoperator
-        image: gcr.io/spark-operator/spark-operator:v2.4.0-v1alpha1-latest
+        image: gcr.io/spark-operator/spark-operator:v2.4.0-v1beta1-latest
        imagePullPolicy: Always
        args:
        - -logtostderr
74 changes: 74 additions & 0 deletions pkg/apis/sparkoperator.k8s.io/v1beta1/defaults.go
@@ -0,0 +1,74 @@
/*
Copyright 2017 Google LLC

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package v1beta1

// SetSparkApplicationDefaults sets default values for certain fields of a SparkApplication.
func SetSparkApplicationDefaults(app *SparkApplication) {
	if app == nil {
		return
	}

	if app.Spec.Mode == "" {
		app.Spec.Mode = ClusterMode
	}

	if app.Spec.RestartPolicy.Type == "" {
		app.Spec.RestartPolicy.Type = Never
	}

	if app.Spec.RestartPolicy.Type != Never {
		// Default to 5 sec if the RestartPolicy is OnFailure or Always and these values aren't specified.
		if app.Spec.RestartPolicy.OnFailureRetryInterval == nil {
			app.Spec.RestartPolicy.OnFailureRetryInterval = new(int64)
			*app.Spec.RestartPolicy.OnFailureRetryInterval = 5
		}

		if app.Spec.RestartPolicy.OnSubmissionFailureRetryInterval == nil {
			app.Spec.RestartPolicy.OnSubmissionFailureRetryInterval = new(int64)
			*app.Spec.RestartPolicy.OnSubmissionFailureRetryInterval = 5
		}
	}

	// Pass pointers so the defaults are applied to the specs in place.
	setDriverSpecDefaults(&app.Spec.Driver)
	setExecutorSpecDefaults(&app.Spec.Executor)
}

// setDriverSpecDefaults takes the spec by pointer; a value parameter would
// only mutate a copy and the defaults would be silently dropped.
func setDriverSpecDefaults(spec *DriverSpec) {
	if spec.Cores == nil {
		spec.Cores = new(float32)
		*spec.Cores = 1
	}
	if spec.Memory == nil {
		spec.Memory = new(string)
		*spec.Memory = "1g"
	}
}

// setExecutorSpecDefaults takes the spec by pointer for the same reason.
func setExecutorSpecDefaults(spec *ExecutorSpec) {
	if spec.Cores == nil {
		spec.Cores = new(float32)
		*spec.Cores = 1
	}
	if spec.Memory == nil {
		spec.Memory = new(string)
		*spec.Memory = "1g"
	}
	if spec.Instances == nil {
		spec.Instances = new(int32)
		*spec.Instances = 1
	}
}
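
A minimal sketch of how these defaults behave from a caller's perspective (hypothetical snippet; it assumes the `v1beta1` types added in this commit):

```go
package main

import (
	"fmt"

	"github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/apis/sparkoperator.k8s.io/v1beta1"
)

func main() {
	app := &v1beta1.SparkApplication{}
	v1beta1.SetSparkApplicationDefaults(app)

	// With an empty spec, the logic above yields cluster mode, a Never
	// restart policy, one core and "1g" of memory for the driver and
	// executors, and a single executor instance.
	fmt.Println(app.Spec.Mode, app.Spec.RestartPolicy.Type, *app.Spec.Executor.Instances)
}
```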
1 change: 1 addition & 0 deletions pkg/apis/sparkoperator.k8s.io/v1beta1/types.go
@@ -256,6 +256,7 @@ const (
	InvalidatingState ApplicationStateType = "INVALIDATING"
	SucceedingState   ApplicationStateType = "SUCCEEDING"
	FailingState      ApplicationStateType = "FAILING"
+	UnknownState      ApplicationStateType = "UNKNOWN"
)

// ApplicationState tells the current state of the application and an error message in case of failures.
6 changes: 3 additions & 3 deletions pkg/config/config.go
@@ -19,7 +19,7 @@ package config
import (
	"fmt"

-	"github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/apis/sparkoperator.k8s.io/v1alpha1"
+	"github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/apis/sparkoperator.k8s.io/v1beta1"
)

// GetDriverAnnotationOption returns a spark-submit option for a driver annotation of the given key and value.
@@ -33,7 +33,7 @@ func GetExecutorAnnotationOption(key string, value string) string {
}

// GetDriverEnvVarConfOptions returns a list of spark-submit options for setting driver environment variables.
-func GetDriverEnvVarConfOptions(app *v1alpha1.SparkApplication) []string {
+func GetDriverEnvVarConfOptions(app *v1beta1.SparkApplication) []string {
	var envVarConfOptions []string
	for key, value := range app.Spec.Driver.EnvVars {
		envVar := fmt.Sprintf("%s%s=%s", SparkDriverEnvVarConfigKeyPrefix, key, value)
@@ -43,7 +43,7 @@ func GetDriverEnvVarConfOptions(app *v1alpha1.SparkApplication) []string {
}

// GetExecutorEnvVarConfOptions returns a list of spark-submit options for setting executor environment variables.
-func GetExecutorEnvVarConfOptions(app *v1alpha1.SparkApplication) []string {
+func GetExecutorEnvVarConfOptions(app *v1beta1.SparkApplication) []string {
	var envVarConfOptions []string
	for key, value := range app.Spec.Executor.EnvVars {
		envVar := fmt.Sprintf("%s%s=%s", SparkExecutorEnvVarConfigKeyPrefix, key, value)
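
A sketch of how these helpers might be called (hypothetical `main`; the exact option text depends on `SparkDriverEnvVarConfigKeyPrefix` and how the surrounding code assembles spark-submit arguments):

```go
package main

import (
	"fmt"

	"github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/apis/sparkoperator.k8s.io/v1beta1"
	"github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/config"
)

func main() {
	app := &v1beta1.SparkApplication{}
	app.Spec.Driver.EnvVars = map[string]string{"ENV1": "VALUE1"}

	// Each map entry becomes one option built from
	// SparkDriverEnvVarConfigKeyPrefix + key = value, per the loop above.
	for _, opt := range config.GetDriverEnvVarConfOptions(app) {
		fmt.Println(opt)
	}
}
```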
18 changes: 9 additions & 9 deletions pkg/config/config_test.go
@@ -22,14 +22,14 @@ import (

"github.com/stretchr/testify/assert"

"github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/apis/sparkoperator.k8s.io/v1alpha1"
"github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/apis/sparkoperator.k8s.io/v1beta1"
)

func TestGetDriverEnvVarConfOptions(t *testing.T) {
app := &v1alpha1.SparkApplication{
Spec: v1alpha1.SparkApplicationSpec{
Driver: v1alpha1.DriverSpec{
SparkPodSpec: v1alpha1.SparkPodSpec{
app := &v1beta1.SparkApplication{
Spec: v1beta1.SparkApplicationSpec{
Driver: v1beta1.DriverSpec{
SparkPodSpec: v1beta1.SparkPodSpec{
EnvVars: map[string]string{
"ENV1": "VALUE1",
"ENV2": "VALUE2",
@@ -50,10 +50,10 @@ func TestGetDriverEnvVarConfOptions(t *testing.T) {
}

func TestGetExecutorEnvVarConfOptions(t *testing.T) {
app := &v1alpha1.SparkApplication{
Spec: v1alpha1.SparkApplicationSpec{
Executor: v1alpha1.ExecutorSpec{
SparkPodSpec: v1alpha1.SparkPodSpec{
app := &v1beta1.SparkApplication{
Spec: v1beta1.SparkApplicationSpec{
Executor: v1beta1.ExecutorSpec{
SparkPodSpec: v1beta1.SparkPodSpec{
EnvVars: map[string]string{
"ENV1": "VALUE1",
"ENV2": "VALUE2",