[SPARK-24434][K8S] pod template files #22146

Closed. Wants to merge 57 commits.

Changes shown from 19 commits.

Commits (57)
e2e7223
add spec template configs
onursatici Aug 17, 2018
ea4dde6
start from template for driver
onursatici Aug 17, 2018
4f088db
start from template for executor
onursatici Aug 17, 2018
f2f9a44
wip
onursatici Aug 17, 2018
368d0a4
volume executor podspec template
onursatici Aug 17, 2018
0005ea5
move logic to apply functions
onursatici Aug 17, 2018
dda5cc9
find containers
onursatici Aug 17, 2018
d0f41aa
style
onursatici Aug 17, 2018
c4c1231
remove import
onursatici Aug 17, 2018
74de0e5
compiles
yifeih Aug 21, 2018
205ddd3
tests pass
yifeih Aug 21, 2018
4ae6fc6
adding TemplateVolumeStepSuite
yifeih Aug 21, 2018
c0bcfea
DriverBuilder test
yifeih Aug 21, 2018
b9e4263
WIP trying to write tests for KubernetesDriverBuilder constructor
yifeih Aug 21, 2018
c5e1ea0
fix test
yifeih Aug 21, 2018
56a6b32
fix test, and move loading logic to util method
yifeih Aug 22, 2018
7d0d928
validate that the executor pod template is good in the driver
yifeih Aug 22, 2018
1da79a8
cleaning
yifeih Aug 22, 2018
8ef756e
Merge branch 'apache/master' into yh/pod-template
yifeih Aug 22, 2018
7f3cb04
redo mounting file
yifeih Aug 23, 2018
cc8d3f8
rename to TemplateConfigMapStep
yifeih Aug 23, 2018
4119899
Pass initialPod constructor instead of Spec constructor
yifeih Aug 23, 2018
1d0a8fa
make driver and executor container names configurable
yifeih Aug 23, 2018
81e5a66
create temp file correctly?
yifeih Aug 23, 2018
7f4ff5a
executor initial pod test
yifeih Aug 23, 2018
3097aef
add docs
yifeih Aug 23, 2018
9b1418a
addressing some comments
yifeih Aug 23, 2018
ebacc96
integration tests attempt 1
yifeih Aug 24, 2018
98acd29
fix up docs
yifeih Aug 24, 2018
95f8b8b
rename a variable
yifeih Aug 24, 2018
da5dff5
fix style?
yifeih Aug 24, 2018
7fb76c7
fix docs to remove container name conf and further clarify
yifeih Aug 25, 2018
d86bc75
actually add the pod template test
yifeih Aug 25, 2018
f2720a5
remove containerName confs
yifeih Aug 25, 2018
4b3950d
test tag and indent
onursatici Aug 27, 2018
3813fcb
extension
onursatici Aug 28, 2018
ec04323
use resources for integration tests templates
onursatici Aug 28, 2018
f3b6082
rat
onursatici Aug 28, 2018
fd503db
fix path
onursatici Aug 29, 2018
eeb2492
prevent having duplicate containers
onursatici Aug 29, 2018
4801e8e
Merge remote-tracking branch 'origin/master' into pod-template
onursatici Aug 29, 2018
36a70ad
do not use broken removeContainer
onursatici Aug 29, 2018
ece7a7c
nits
onursatici Aug 29, 2018
8b8aa48
inline integration test methods, add volume to executor builder unit …
onursatici Aug 30, 2018
1ed95ab
do not raise twice on template parse failure
onursatici Aug 31, 2018
a4fde0c
add comprehensive test for supported template features
onursatici Aug 31, 2018
140e89c
generalize tests to cover both driver and executor builders
onursatici Aug 31, 2018
838c2bd
docs
onursatici Sep 4, 2018
5faea62
Merge remote-tracking branch 'origin/master' into pod-template
onursatici Oct 29, 2018
9e6a4b2
fix tests, templates does not support changing executor pod names
onursatici Oct 29, 2018
c8077dc
config to select spark containers in pod templates
onursatici Oct 29, 2018
3d6ff3b
more readable select container logic
onursatici Oct 29, 2018
83087eb
fix integration tests
onursatici Oct 29, 2018
a46b885
Merge remote-tracking branch 'origin/master' into pod-template
onursatici Oct 29, 2018
80b56c1
address comments
onursatici Oct 29, 2018
8f7f571
rename pod template volume name
onursatici Oct 30, 2018
3707e6a
imports
onursatici Oct 30, 2018
@@ -225,6 +225,19 @@ private[spark] object Config extends Logging {
"Ensure that major Python version is either Python2 or Python3")
.createWithDefault("2")

val KUBERNETES_DRIVER_PODTEMPLATE_FILE =
ConfigBuilder("spark.kubernetes.driver.podTemplateFile")
.doc("File containing a template pod spec for the driver")
.stringConf
.createOptional

val KUBERNETES_EXECUTOR_PODTEMPLATE_FILE =
ConfigBuilder("spark.kubernetes.executor.podTemplateFile")
.doc("File containing a template pod spec for executors")
.stringConf
.createOptional


val KUBERNETES_AUTH_SUBMISSION_CONF_PREFIX =
"spark.kubernetes.authenticate.submission"

@@ -74,8 +74,15 @@ private[spark] object Constants {
val ENV_R_PRIMARY = "R_PRIMARY"
val ENV_R_ARGS = "R_APP_ARGS"

// Pod spec templates
val EXECUTOR_POD_SPEC_TEMPLATE_FILE_NAME = "podSpecTemplate.yml"
Contributor:
Can we not use camel case for the file name? This looks inconsistent with other conf files.

val EXECUTOR_POD_SPEC_TEMPLATE_FILE =
s"$SPARK_CONF_DIR_INTERNAL/$EXECUTOR_POD_SPEC_TEMPLATE_FILE_NAME"
val POD_TEMPLATE_VOLUME = "podspec-volume"
Contributor @skonto, Aug 29, 2018:
nit: s/podspec-volume/pod-template-volume
You are passing the whole template, right?

Contributor:
Ping here


// Miscellaneous
val KUBERNETES_MASTER_INTERNAL_URL = "https://kubernetes.default.svc"
val DRIVER_CONTAINER_NAME = "spark-kubernetes-driver"
val EXECUTOR_CONTAINER_NAME = "executor"
val MEMORY_OVERHEAD_MIN_MIB = 384L
}
@@ -24,8 +24,9 @@ private[spark] case class KubernetesDriverSpec(
systemProperties: Map[String, String])

private[spark] object KubernetesDriverSpec {
def initialSpec(initialProps: Map[String, String]): KubernetesDriverSpec = KubernetesDriverSpec(
SparkPod.initialPod(),
Seq.empty,
initialProps)
def initialSpec(initialConf: KubernetesConf[KubernetesDriverSpecificConf]): KubernetesDriverSpec =
KubernetesDriverSpec(
SparkPod.initialPod(),
Seq.empty,
Contributor:
NIT: For clarity can you write as:

    KubernetesDriverSpec(
      SparkPod.initialPod(),
      driverKubernetesResources = Seq.empty,
      initialConf.sparkConf.getAll.toMap)

initialConf.sparkConf.getAll.toMap)
}
@@ -16,10 +16,16 @@
*/
package org.apache.spark.deploy.k8s

import org.apache.spark.SparkConf
import java.io.File

import io.fabric8.kubernetes.client.KubernetesClient
import scala.collection.JavaConverters._

import org.apache.spark.{SparkConf, SparkException}
import org.apache.spark.internal.Logging
import org.apache.spark.util.Utils

private[spark] object KubernetesUtils {
private[spark] object KubernetesUtils extends Logging {

/**
* Extract and parse Spark configuration properties with a given name prefix and
@@ -59,5 +65,23 @@ private[spark] object KubernetesUtils {
}
}

def loadPodFromTemplate(kubernetesClient: KubernetesClient,
Contributor:
IDEs don't handle this indentation properly I think - you want this:

def loadPodFromTemplate(
    kubernetesClient: KubernetesClient,
    templateFile: File,
    containerName: String): SparkPod = {

templateFile: File,
containerName: String): SparkPod = {
try {
val pod = kubernetesClient.pods().load(templateFile).get()
val container = pod.getSpec.getContainers.asScala
.filter(_.getName == containerName)
Contributor:
Can use require(...exists)
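
For instance, keeping the headOption lookup below, the require could carry a message (message text is illustrative):

    require(container.isDefined,
      s"No container named '$containerName' found in the pod template file")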

.headOption
require(container.isDefined)
SparkPod(pod, container.get)
} catch {
case e: Exception =>
logError(
s"Encountered exception while attempting to load initial pod spec from file", e)
throw new SparkException("Could not load driver pod from template file.", e)
Contributor:
This error message is misleading: it throws when either the executor or the driver pod fails to load from its own template. Either remove "driver" or be more specific about whether it's the executor or the driver.

}
}

def parseMasterUrl(url: String): String = url.substring("k8s://".length)
}
@@ -129,7 +129,7 @@ private[spark] class BasicExecutorFeatureStep(
}

val executorContainer = new ContainerBuilder(pod.container)
.withName("executor")
.withName(Constants.EXECUTOR_CONTAINER_NAME)
.withImage(executorContainerImage)
.withImagePullPolicy(kubernetesConf.imagePullPolicy())
.withNewResources()
@@ -0,0 +1,51 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.spark.deploy.k8s.features

import io.fabric8.kubernetes.api.model.{Config => _, _}

import org.apache.spark.deploy.k8s._

private[spark] class TemplateVolumeStep(
Contributor:
So this pushes the pod spec yml from the spark-submit process's local disk up to the driver pod. It may be worthwhile to support specifying the file as a location in the driver pod that hasn't been mounted by spark-submit, but I think doing it this way is fine for now.

conf: KubernetesConf[_ <: KubernetesRoleSpecificConf])
extends KubernetesFeatureConfigStep {
def configurePod(pod: SparkPod): SparkPod = {
require(conf.get(Config.KUBERNETES_EXECUTOR_PODTEMPLATE_FILE).isDefined)
val podTemplateFile = conf.get(Config.KUBERNETES_EXECUTOR_PODTEMPLATE_FILE).get
val podWithVolume = new PodBuilder(pod.pod)
.editSpec()
Contributor:
Match the indentation here with the indentation style down below.

.addNewVolume()
.withName(Constants.POD_TEMPLATE_VOLUME)
.withHostPath(new HostPathVolumeSource(podTemplateFile))
Contributor:
hostPath is not the correct volume type here. Instead, do the following (see the sketch after this list):

  • Override getAdditionalKubernetesResources() with the following:
    1. Load the contents of the template file from this process's local disk into a UTF-8 String
    2. Create and return a ConfigMap object containing that file's contents under some given key
  • In configurePod, add the config map as a volume in the pod spec, and add a volume mount pointing to that volume as done here.
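
A rough sketch of that suggestion, assuming fabric8's builder API and Guava's Files; the config map name, the data key, and the podTemplateFile variable (resolved from the conf above) are hypothetical, not from the PR:

    import java.io.File
    import java.nio.charset.StandardCharsets
    import com.google.common.io.Files
    import io.fabric8.kubernetes.api.model.{ConfigMapBuilder, HasMetadata, PodBuilder}

    // Inside the step class: ship the template file up as a ConfigMap resource.
    override def getAdditionalKubernetesResources(): Seq[HasMetadata] = {
      val templateContents =
        Files.toString(new File(podTemplateFile), StandardCharsets.UTF_8)
      Seq(new ConfigMapBuilder()
        .withNewMetadata()
          .withName("podspec-configmap") // hypothetical name
          .endMetadata()
        .addToData("podspec-configmap-key", templateContents) // hypothetical key
        .build())
    }

    // In configurePod: reference that ConfigMap instead of a hostPath volume.
    val podWithVolume = new PodBuilder(pod.pod)
      .editSpec()
        .addNewVolume()
          .withName(Constants.POD_TEMPLATE_VOLUME)
          .withNewConfigMap()
            .withName("podspec-configmap") // must match the resource above
            .endConfigMap()
          .endVolume()
        .endSpec()
      .build()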

Contributor:
Can we also change the name of this class to something like PodTemplateConfigMapStep to make it clear that it uses a ConfigMap to ship the template file?

.endVolume()
.endSpec()
.build()

val containerWithVolume = new ContainerBuilder(pod.container)
.withVolumeMounts(new VolumeMountBuilder()
Contributor:
addNewVolumeMount()

.withName(Constants.POD_TEMPLATE_VOLUME)
.withMountPath(Constants.EXECUTOR_POD_SPEC_TEMPLATE_FILE)
.build())
.build()
SparkPod(podWithVolume, containerWithVolume)
}

def getAdditionalPodSystemProperties(): Map[String, String] = Map[String, String](
Config.KUBERNETES_EXECUTOR_PODTEMPLATE_FILE.key -> Constants.EXECUTOR_POD_SPEC_TEMPLATE_FILE)

def getAdditionalKubernetesResources(): Seq[HasMetadata] = Seq.empty
}
@@ -17,8 +17,7 @@
package org.apache.spark.deploy.k8s.submit

import java.io.StringWriter
import java.util.{Collections, UUID}
import java.util.Properties
import java.util.{Collections, Properties, UUID}

import io.fabric8.kubernetes.api.model._
import io.fabric8.kubernetes.client.KubernetesClient
@@ -27,7 +26,7 @@ import scala.util.control.NonFatal

import org.apache.spark.SparkConf
import org.apache.spark.deploy.SparkApplication
import org.apache.spark.deploy.k8s.{KubernetesConf, KubernetesDriverSpecificConf, KubernetesUtils, SparkKubernetesClientFactory}
import org.apache.spark.deploy.k8s._
import org.apache.spark.deploy.k8s.Config._
import org.apache.spark.deploy.k8s.Constants._
import org.apache.spark.internal.Logging
@@ -226,7 +225,6 @@ private[spark] class KubernetesClientApplication extends SparkApplication {
clientArguments.mainClass,
clientArguments.driverArgs,
clientArguments.maybePyFiles)
val builder = new KubernetesDriverBuilder
val namespace = kubernetesConf.namespace()
// The master URL has been checked for validity already in SparkSubmit.
// We just need to get rid of the "k8s://" prefix here.
@@ -243,7 +241,7 @@
None,
None)) { kubernetesClient =>
val client = new Client(
builder,
KubernetesDriverBuilder(kubernetesClient, kubernetesConf.sparkConf),
kubernetesConf,
kubernetesClient,
waitForAppCompletion,
@@ -16,11 +16,17 @@
*/
package org.apache.spark.deploy.k8s.submit

import org.apache.spark.deploy.k8s.{KubernetesConf, KubernetesDriverSpec, KubernetesDriverSpecificConf, KubernetesRoleSpecificConf}
import org.apache.spark.deploy.k8s.features.{BasicDriverFeatureStep, DriverKubernetesCredentialsFeatureStep, DriverServiceFeatureStep, EnvSecretsFeatureStep, LocalDirsFeatureStep, MountSecretsFeatureStep, MountVolumesFeatureStep}
import java.io.File

import io.fabric8.kubernetes.client.KubernetesClient

import org.apache.spark.{SparkConf, SparkException}
import org.apache.spark.deploy.k8s._
import org.apache.spark.deploy.k8s.features._
Contributor:
Undo these import changes. Keep the ordering correct, but import each class individually.

Contributor:
Ping here

import org.apache.spark.deploy.k8s.features.bindings.{JavaDriverFeatureStep, PythonDriverFeatureStep, RDriverFeatureStep}
import org.apache.spark.internal.Logging

private[spark] class KubernetesDriverBuilder(
private[spark] class KubernetesDriverBuilder (
Contributor:
Please remove the empty space before the opening (.

provideBasicStep: (KubernetesConf[KubernetesDriverSpecificConf]) => BasicDriverFeatureStep =
new BasicDriverFeatureStep(_),
provideCredentialsStep: (KubernetesConf[KubernetesDriverSpecificConf])
@@ -51,7 +57,13 @@ private[spark] class KubernetesDriverBuilder(
provideJavaStep: (
KubernetesConf[KubernetesDriverSpecificConf]
=> JavaDriverFeatureStep) =
new JavaDriverFeatureStep(_)) {
new JavaDriverFeatureStep(_),
provideTemplateVolumeStep: (KubernetesConf[_ <: KubernetesRoleSpecificConf]
=> TemplateVolumeStep) =
new TemplateVolumeStep(_),
provideInitialSpec: KubernetesConf[KubernetesDriverSpecificConf]
Contributor:
Why not provideInitialPod to be consistent with the executor builder?

Contributor:
I think it's because we need the KubernetesDriverSpec object, which includes things like the entire sparkConf map, instead of just the pod.

Contributor:
You can make the initial pod and wrap it in the KubernetesDriverSpec object.
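
A sketch of that approach, assuming a provideInitialPod: () => SparkPod parameter were added here, mirroring the executor builder:

    // Hypothetical shape: build the pod first, then wrap it in the driver spec.
    provideInitialSpec = conf => KubernetesDriverSpec(
      provideInitialPod(),
      driverKubernetesResources = Seq.empty,
      conf.sparkConf.getAll.toMap)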

=> KubernetesDriverSpec =
KubernetesDriverSpec.initialSpec) {

def buildFromFeatures(
kubernetesConf: KubernetesConf[KubernetesDriverSpecificConf]): KubernetesDriverSpec = {
@@ -70,6 +82,10 @@
val volumesFeature = if (kubernetesConf.roleVolumes.nonEmpty) {
Seq(provideVolumesStep(kubernetesConf))
} else Nil
val templateVolumeFeature = if (
kubernetesConf.get(Config.KUBERNETES_EXECUTOR_PODTEMPLATE_FILE).isDefined) {
Seq(provideTemplateVolumeStep(kubernetesConf))
} else Nil

val bindingsStep = kubernetesConf.roleSpecificConf.mainAppResource.map {
case JavaMainAppResource(_) =>
@@ -81,9 +97,9 @@
.getOrElse(provideJavaStep(kubernetesConf))

val allFeatures = (baseFeatures :+ bindingsStep) ++
secretFeature ++ envSecretFeature ++ volumesFeature
secretFeature ++ envSecretFeature ++ volumesFeature ++ templateVolumeFeature
Contributor:
The templateVolumeFeature should be the first feature step, so that individual config properties specified and handled by other steps can override the same config points in the template.

Contributor:
Never mind, I misread this.


var spec = KubernetesDriverSpec.initialSpec(kubernetesConf.sparkConf.getAll.toMap)
var spec = provideInitialSpec(kubernetesConf)
for (feature <- allFeatures) {
val configuredPod = feature.configurePod(spec.pod)
val addedSystemProperties = feature.getAdditionalPodSystemProperties()
@@ -96,3 +112,25 @@
spec
}
}

private[spark] object KubernetesDriverBuilder extends Logging {
def apply(kubernetesClient: KubernetesClient, conf: SparkConf): KubernetesDriverBuilder = {
conf.get(Config.KUBERNETES_DRIVER_PODTEMPLATE_FILE)
.map(new File(_))
.map(file => new KubernetesDriverBuilder(provideInitialSpec = conf => {
try {
val sparkPod = KubernetesUtils.loadPodFromTemplate(
kubernetesClient,
file,
Constants.DRIVER_CONTAINER_NAME)
Contributor:
Unclear if these container names should be configurable.

Contributor:
You may import org.apache.spark.deploy.k8s.Constants._ at the top of the file and then not need the Constants prefix here.

Contributor:
They're not configurable currently. We should probably make them configurable, since I'd imagine people would want to rely on these names being consistent.

Contributor:
I think we should also make the driver container name configurable.
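
A sketch of what such confs could look like, following the ConfigBuilder pattern earlier in this diff (the key is illustrative here; a later commit, c8077dc, adds confs for selecting containers in pod templates):

    val KUBERNETES_DRIVER_PODTEMPLATE_CONTAINER_NAME =
      ConfigBuilder("spark.kubernetes.driver.podTemplateContainerName") // illustrative key
        .doc("Container name to be used as a basis for the driver in the given pod template")
        .stringConf
        .createOptional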

KubernetesDriverSpec.initialSpec(conf).copy(pod = sparkPod)
} catch {
case e: Exception =>
logError(
s"Encountered exception while attempting to load initial pod spec from file", e)
throw new SparkException("Could not load driver pod from template file.", e)
}
}))
.getOrElse(new KubernetesDriverBuilder())
}
}
@@ -23,7 +23,7 @@ import com.google.common.cache.CacheBuilder
import io.fabric8.kubernetes.client.Config

import org.apache.spark.SparkContext
import org.apache.spark.deploy.k8s.{KubernetesUtils, SparkKubernetesClientFactory}
import org.apache.spark.deploy.k8s.{Constants, KubernetesUtils, SparkKubernetesClientFactory, SparkPod}
import org.apache.spark.deploy.k8s.Config._
import org.apache.spark.deploy.k8s.Constants._
import org.apache.spark.internal.Logging
@@ -69,6 +69,13 @@ private[spark] class KubernetesClusterManager extends ExternalClusterManager wit
defaultServiceAccountToken,
defaultServiceAccountCaCrt)

if (sc.conf.get(KUBERNETES_EXECUTOR_PODTEMPLATE_FILE).isDefined) {
KubernetesUtils.loadPodFromTemplate(
kubernetesClient,
new File(sc.conf.get(KUBERNETES_EXECUTOR_PODTEMPLATE_FILE).get),
Constants.EXECUTOR_CONTAINER_NAME)
}

val requestExecutorsService = ThreadUtils.newDaemonCachedThreadPool(
"kubernetes-executor-requests")

@@ -81,13 +88,17 @@
.build[java.lang.Long, java.lang.Long]()
val executorPodsLifecycleEventHandler = new ExecutorPodsLifecycleManager(
sc.conf,
new KubernetesExecutorBuilder(),
KubernetesExecutorBuilder(kubernetesClient, sc.conf),
Contributor:
I double-checked and I don't think we use this variable inside ExecutorPodsLifecycleManager; can you remove it?
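
Assuming the builder parameter really is unused and gets dropped from the constructor as well, the call site would shrink to something like:

    // Hypothetical post-cleanup constructor call, without the executor builder.
    val executorPodsLifecycleEventHandler = new ExecutorPodsLifecycleManager(
      sc.conf,
      kubernetesClient,
      snapshotsStore,
      removedExecutorsCache)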

kubernetesClient,
snapshotsStore,
removedExecutorsCache)

val executorPodsAllocator = new ExecutorPodsAllocator(
sc.conf, new KubernetesExecutorBuilder(), kubernetesClient, snapshotsStore, new SystemClock())
sc.conf,
KubernetesExecutorBuilder(kubernetesClient, sc.conf),
kubernetesClient,
snapshotsStore,
new SystemClock())

val podsWatchEventSource = new ExecutorPodsWatchSnapshotSource(
snapshotsStore,
@@ -16,7 +16,12 @@
*/
package org.apache.spark.scheduler.cluster.k8s

import org.apache.spark.deploy.k8s.{KubernetesConf, KubernetesExecutorSpecificConf, KubernetesRoleSpecificConf, SparkPod}
import java.io.File

import io.fabric8.kubernetes.client.KubernetesClient

import org.apache.spark.SparkConf
import org.apache.spark.deploy.k8s._
import org.apache.spark.deploy.k8s.features._
import org.apache.spark.deploy.k8s.features.{BasicExecutorFeatureStep, EnvSecretsFeatureStep, LocalDirsFeatureStep, MountSecretsFeatureStep}

@@ -35,7 +40,8 @@ private[spark] class KubernetesExecutorBuilder(
new LocalDirsFeatureStep(_),
provideVolumesStep: (KubernetesConf[_ <: KubernetesRoleSpecificConf]
=> MountVolumesFeatureStep) =
new MountVolumesFeatureStep(_)) {
new MountVolumesFeatureStep(_),
provideInitialPod: () => SparkPod = SparkPod.initialPod) {

def buildFromFeatures(
kubernetesConf: KubernetesConf[KubernetesExecutorSpecificConf]): SparkPod = {
@@ -51,12 +57,27 @@
Seq(provideVolumesStep(kubernetesConf))
} else Nil

val allFeatures = baseFeatures ++ secretFeature ++ secretEnvFeature ++ volumesFeature
val allFeatures =
baseFeatures ++ secretFeature ++ secretEnvFeature ++ volumesFeature

var executorPod = SparkPod.initialPod()
var executorPod = provideInitialPod()
for (feature <- allFeatures) {
executorPod = feature.configurePod(executorPod)
}
executorPod
}
}

private[spark] object KubernetesExecutorBuilder {
def apply(kubernetesClient: KubernetesClient, conf: SparkConf): KubernetesExecutorBuilder = {
conf.get(Config.KUBERNETES_EXECUTOR_PODTEMPLATE_FILE)
.map(new File(_))
.map(file => new KubernetesExecutorBuilder(provideInitialPod = () => {
KubernetesUtils.loadPodFromTemplate(
kubernetesClient,
file,
Constants.EXECUTOR_CONTAINER_NAME)
}))
.getOrElse(new KubernetesExecutorBuilder())
}
}