[SPARK-24434][K8S] pod template files #22146

Closed · wants to merge 57 commits
Changes from 25 commits
Commits (57):
e2e7223 add spec template configs (onursatici, Aug 17, 2018)
ea4dde6 start from template for driver (onursatici, Aug 17, 2018)
4f088db start from template for executor (onursatici, Aug 17, 2018)
f2f9a44 wip (onursatici, Aug 17, 2018)
368d0a4 volume executor podspec template (onursatici, Aug 17, 2018)
0005ea5 move logic to apply functions (onursatici, Aug 17, 2018)
dda5cc9 find containers (onursatici, Aug 17, 2018)
d0f41aa style (onursatici, Aug 17, 2018)
c4c1231 remove import (onursatici, Aug 17, 2018)
74de0e5 compiles (yifeih, Aug 21, 2018)
205ddd3 tests pass (yifeih, Aug 21, 2018)
4ae6fc6 adding TemplateVolumeStepSuite (yifeih, Aug 21, 2018)
c0bcfea DriverBuilder test (yifeih, Aug 21, 2018)
b9e4263 WIP trying to write tests for KubernetesDriverBuilder constructor (yifeih, Aug 21, 2018)
c5e1ea0 fix test (yifeih, Aug 21, 2018)
56a6b32 fix test, and move loading logic to util method (yifeih, Aug 22, 2018)
7d0d928 validate that the executor pod template is good in the driver (yifeih, Aug 22, 2018)
1da79a8 cleaning (yifeih, Aug 22, 2018)
8ef756e Merge branch 'apache/master' into yh/pod-template (yifeih, Aug 22, 2018)
7f3cb04 redo mounting file (yifeih, Aug 23, 2018)
cc8d3f8 rename to TemplateConfigMapStep (yifeih, Aug 23, 2018)
4119899 Pass initialPod constructor instead of Spec constructor (yifeih, Aug 23, 2018)
1d0a8fa make driver and executor container names configurable (yifeih, Aug 23, 2018)
81e5a66 create temp file correctly? (yifeih, Aug 23, 2018)
7f4ff5a executor initial pod test (yifeih, Aug 23, 2018)
3097aef add docs (yifeih, Aug 23, 2018)
9b1418a addressing some comments (yifeih, Aug 23, 2018)
ebacc96 integration tests attempt 1 (yifeih, Aug 24, 2018)
98acd29 fix up docs (yifeih, Aug 24, 2018)
95f8b8b rename a variable (yifeih, Aug 24, 2018)
da5dff5 fix style? (yifeih, Aug 24, 2018)
7fb76c7 fix docs to remove container name conf and further clarify (yifeih, Aug 25, 2018)
d86bc75 actually add the pod template test (yifeih, Aug 25, 2018)
f2720a5 remove containerName confs (yifeih, Aug 25, 2018)
4b3950d test tag and indent (onursatici, Aug 27, 2018)
3813fcb extension (onursatici, Aug 28, 2018)
ec04323 use resources for integration tests templates (onursatici, Aug 28, 2018)
f3b6082 rat (onursatici, Aug 28, 2018)
fd503db fix path (onursatici, Aug 29, 2018)
eeb2492 prevent having duplicate containers (onursatici, Aug 29, 2018)
4801e8e Merge remote-tracking branch 'origin/master' into pod-template (onursatici, Aug 29, 2018)
36a70ad do not use broken removeContainer (onursatici, Aug 29, 2018)
ece7a7c nits (onursatici, Aug 29, 2018)
8b8aa48 inline integration test methods, add volume to executor builder unit … (onursatici, Aug 30, 2018)
1ed95ab do not raise twice on template parse failure (onursatici, Aug 31, 2018)
a4fde0c add comprehensive test for supported template features (onursatici, Aug 31, 2018)
140e89c generalize tests to cover both driver and executor builders (onursatici, Aug 31, 2018)
838c2bd docs (onursatici, Sep 4, 2018)
5faea62 Merge remote-tracking branch 'origin/master' into pod-template (onursatici, Oct 29, 2018)
9e6a4b2 fix tests, templates does not support changing executor pod names (onursatici, Oct 29, 2018)
c8077dc config to select spark containers in pod templates (onursatici, Oct 29, 2018)
3d6ff3b more readable select container logic (onursatici, Oct 29, 2018)
83087eb fix integration tests (onursatici, Oct 29, 2018)
a46b885 Merge remote-tracking branch 'origin/master' into pod-template (onursatici, Oct 29, 2018)
80b56c1 address comments (onursatici, Oct 29, 2018)
8f7f571 rename pod template volume name (onursatici, Oct 30, 2018)
3707e6a imports (onursatici, Oct 30, 2018)
@@ -225,6 +225,30 @@ private[spark] object Config extends Logging {
"Ensure that major Python version is either Python2 or Python3")
.createWithDefault("2")

val KUBERNETES_DRIVER_CONTAINER_NAME =
ConfigBuilder("spark.kubernetes.driver.containerName")
Contributor: Is there any way to do this via the containers array in the pod template?

Contributor: I prefer to make it explicit, rather than e.g. "pick the first container in the list".

Contributor: If you're saying that manipulating containers doesn't give specific control over which name is assigned to which container, then I agree.

Contributor: @mccheah it raises an interesting question: a user might use this feature to add containers, e.g. sidecar proxies for a service mesh. In fact, I'd want users to be able to do this. If multiple containers are defined, we may need a convention for identifying which one is the driver/executor. Alternatives might be making templates additive-only, like labels, or disallowing extra containers, but supporting them seems very desirable.

Contributor: This feature should support using multiple containers, in which case the user needs to specify which container is running the Spark process. Using a configuration option for that seems like the most straightforward solution.

Contributor: Similarly, what if the user gives the driver container name in the driver template but forgets to specify spark.kubernetes.driver.containerName? Requiring users to set the container name explicitly and additionally through another config key sounds a bit awkward to me, particularly when there's only the Spark container itself.

Contributor: Is there confusion because there's an existing configuration option also? I think the existing configuration option sets the driver container name when no yml is specified. But perhaps the interpretation of this configuration value should change when the pod template is provided.

Contributor: Yes, the semantics of this key will change with the template option. So with the template there are two sources for the driver container name, and we need to resolve conflicts in certain cases, e.g., if the name is specified in only one source, or if the two sources mismatch.

Contributor: Wait, I don't think there's an existing configuration option. I believe the container name is currently just hard-coded in Constants.

However, I see your concern about using spark.kubernetes.driver.containerName for multiple purposes. In that case it's definitely easier to reason about with fewer moving pieces, and it sounds like simplest is best. I'll remove the container name configuration and just stick with "first container = driver container". This means the only way to configure the container name will be through a pod template. I'll also clarify this in the docs.

Contributor: You are right. We were confusing the config for the driver pod name with this one.

.doc("The name of the driver container within the driver pod template file")
.stringConf
.createWithDefault("spark-kubernetes-driver")

val KUBERNETES_EXECUTOR_CONTAINER_NAME =
ConfigBuilder("spark.kubernetes.executor.containerName")
.doc("The name of the executor container within the executor pod template file")
.stringConf
.createWithDefault("spark-kubernetes-executor")

val KUBERNETES_DRIVER_PODTEMPLATE_FILE =
ConfigBuilder("spark.kubernetes.driver.podTemplateFile")
.doc("File containing a template pod spec for the driver")
.stringConf
.createOptional

val KUBERNETES_EXECUTOR_PODTEMPLATE_FILE =
ConfigBuilder("spark.kubernetes.executor.podTemplateFile")
.doc("File containing a template pod spec for executors")
.stringConf
.createOptional

val KUBERNETES_AUTH_SUBMISSION_CONF_PREFIX =
"spark.kubernetes.authenticate.submission"

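For context, spark.kubernetes.driver.podTemplateFile and spark.kubernetes.executor.podTemplateFile point at a YAML pod spec that Spark loads as the starting pod before applying its own feature steps. A minimal driver template might look like the sketch below (illustrative only: the label, the sidecar, and its image are invented for this example; the Spark container name matches this revision's spark.kubernetes.driver.containerName default):

    # pod-template.yml (hypothetical sketch, not part of this PR)
    apiVersion: v1
    kind: Pod
    metadata:
      labels:
        team: data-platform                # assumed custom label, carried onto the driver pod
    spec:
      containers:
        - name: spark-kubernetes-driver    # selected as the Spark container via containerName
        - name: proxy-sidecar              # assumed extra container; Spark leaves it untouched
          image: envoyproxy/envoy:v1.9.0   # hypothetical sidecar image

Such a file would be passed with --conf spark.kubernetes.driver.podTemplateFile=/path/to/pod-template.yml, and Spark's feature steps then fill in the image, resources, ports, and so on for the selected container.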
@@ -74,8 +74,14 @@ private[spark] object Constants {
val ENV_R_PRIMARY = "R_PRIMARY"
val ENV_R_ARGS = "R_APP_ARGS"

// Pod spec templates
val EXECUTOR_POD_SPEC_TEMPLATE_FILE_NAME = "pod-spec-template.yml"
val EXECUTOR_POD_SPEC_TEMPLATE_MOUNTHPATH = "/opt/spark/pod-template"
Contributor: Spelling: I think we want EXECUTOR_POD_SPEC_TEMPLATE_MOUNTPATH?

Contributor: Quick ping here.

val POD_TEMPLATE_VOLUME = "podspec-volume"
skonto (Contributor), Aug 29, 2018: nit: s/podspec-volume/pod-template-volume. You are passing the whole template, right?

Contributor: Ping here.

val POD_TEMPLATE_CONFIGMAP = "podspec-configmap"
val POD_TEMPLATE_KEY = "podspec-configmap-key"

// Miscellaneous
val KUBERNETES_MASTER_INTERNAL_URL = "https://kubernetes.default.svc"
val DRIVER_CONTAINER_NAME = "spark-kubernetes-driver"
val MEMORY_OVERHEAD_MIN_MIB = 384L
}
@@ -24,8 +24,9 @@ private[spark] case class KubernetesDriverSpec(
systemProperties: Map[String, String])

private[spark] object KubernetesDriverSpec {
def initialSpec(initialProps: Map[String, String]): KubernetesDriverSpec = KubernetesDriverSpec(
SparkPod.initialPod(),
Seq.empty,
initialProps)
def initialSpec(initialConf: KubernetesConf[KubernetesDriverSpecificConf]): KubernetesDriverSpec =
KubernetesDriverSpec(
SparkPod.initialPod(),
Seq.empty,
Contributor: NIT: For clarity can you write this as:

    KubernetesDriverSpec(
      SparkPod.initialPod(),
      driverKubernetesResources = Seq.empty,
      initialConf.sparkConf.getAll.toMap)

initialConf.sparkConf.getAll.toMap)
}
@@ -16,10 +16,16 @@
*/
package org.apache.spark.deploy.k8s

import org.apache.spark.SparkConf
import java.io.File

import io.fabric8.kubernetes.client.KubernetesClient
import scala.collection.JavaConverters._

import org.apache.spark.{SparkConf, SparkException}
import org.apache.spark.internal.Logging
import org.apache.spark.util.Utils

private[spark] object KubernetesUtils {
private[spark] object KubernetesUtils extends Logging {

/**
* Extract and parse Spark configuration properties with a given name prefix and
@@ -59,5 +65,21 @@
}
}

def loadPodFromTemplate(kubernetesClient: KubernetesClient,
Contributor: IDEs don't handle this indentation properly, I think. You want this:

    def loadPodFromTemplate(
        kubernetesClient: KubernetesClient,
        templateFile: File,
        containerName: String): SparkPod = {

templateFile: File,
containerName: String): SparkPod = {
try {
val pod = kubernetesClient.pods().load(templateFile).get()
val containers = pod.getSpec.getContainers.asScala
require(containers.map(_.getName).contains(containerName))
SparkPod(pod, containers.filter(_.getName == containerName).head)
} catch {
case e: Exception =>
logError(
s"Encountered exception while attempting to load initial pod spec from file", e)
throw new SparkException("Could not load driver pod from template file.", e)
Contributor: This error message is misleading: it is thrown when either the executor or the driver pod fails to load from its own template. Either remove "driver" or be specific about whether it was the executor or the driver.

}
}

def parseMasterUrl(url: String): String = url.substring("k8s://".length)
}
@@ -80,7 +80,7 @@ private[spark] class BasicDriverFeatureStep(
)
val driverUIPort = SparkUI.getUIPort(conf.sparkConf)
val driverContainer = new ContainerBuilder(pod.container)
.withName(DRIVER_CONTAINER_NAME)
.withName(conf.get(KUBERNETES_DRIVER_CONTAINER_NAME))
.withImage(driverContainerImage)
.withImagePullPolicy(conf.imagePullPolicy())
.addNewPort()
@@ -129,7 +129,7 @@ private[spark] class BasicExecutorFeatureStep(
}

val executorContainer = new ContainerBuilder(pod.container)
.withName("executor")
.withName(kubernetesConf.get(KUBERNETES_EXECUTOR_CONTAINER_NAME))
.withImage(executorContainerImage)
.withImagePullPolicy(kubernetesConf.imagePullPolicy())
.withNewResources()
@@ -163,8 +163,8 @@
val executorPod = new PodBuilder(pod.pod)
.editOrNewMetadata()
.withName(name)
.withLabels(kubernetesConf.roleLabels.asJava)
.withAnnotations(kubernetesConf.roleAnnotations.asJava)
.addToLabels(kubernetesConf.roleLabels.asJava)
.addToAnnotations(kubernetesConf.roleAnnotations.asJava)
.addToOwnerReferences(ownerReference.toSeq: _*)
.endMetadata()
.editOrNewSpec()
@@ -0,0 +1,71 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.spark.deploy.k8s.features

import java.io.File
import java.nio.charset.StandardCharsets

import com.google.common.io.Files
import io.fabric8.kubernetes.api.model.{Config => _, _}
Contributor: Instead of doing this, just import Config, then do this import: import org.apache.spark.deploy.k8s.Config._. Also, wildcard-import constants: import org.apache.spark.deploy.k8s.Constants._.


import org.apache.spark.deploy.k8s._
Contributor: Do not wildcard import the package.


private[spark] class PodTemplateConfigMapStep(
conf: KubernetesConf[_ <: KubernetesRoleSpecificConf])
extends KubernetesFeatureConfigStep {
def configurePod(pod: SparkPod): SparkPod = {
val podWithVolume = new PodBuilder(pod.pod)
.editSpec()
.addNewVolume()
.withName(Constants.POD_TEMPLATE_VOLUME)
.withNewConfigMap()
.withName(Constants.POD_TEMPLATE_CONFIGMAP)
.addNewItem()
.withKey(Constants.POD_TEMPLATE_KEY)
.withPath(Constants.EXECUTOR_POD_SPEC_TEMPLATE_FILE_NAME)
.endItem()
.endConfigMap()
.endVolume()
.endSpec()
.build()

val containerWithVolume = new ContainerBuilder(pod.container)
.addNewVolumeMount()
.withName(Constants.POD_TEMPLATE_VOLUME)
.withMountPath(Constants.EXECUTOR_POD_SPEC_TEMPLATE_MOUNTHPATH)
.endVolumeMount()
.build()
SparkPod(podWithVolume, containerWithVolume)
}

def getAdditionalPodSystemProperties(): Map[String, String] = Map[String, String](
Config.KUBERNETES_EXECUTOR_PODTEMPLATE_FILE.key ->
(Constants.EXECUTOR_POD_SPEC_TEMPLATE_MOUNTHPATH + "/" +
Constants.EXECUTOR_POD_SPEC_TEMPLATE_FILE_NAME))

def getAdditionalKubernetesResources(): Seq[HasMetadata] = {
require(conf.get(Config.KUBERNETES_EXECUTOR_PODTEMPLATE_FILE).isDefined)
val podTemplateFile = conf.get(Config.KUBERNETES_EXECUTOR_PODTEMPLATE_FILE).get
val podTemplateString = Files.toString(new File(podTemplateFile), StandardCharsets.UTF_8)
Seq(new ConfigMapBuilder()
.withNewMetadata()
.withName(Constants.POD_TEMPLATE_CONFIGMAP)
.endMetadata()
.addToData(Constants.POD_TEMPLATE_KEY, podTemplateString)
.build())
}
}
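
Taken together, this step ships the executor template to the driver: the file named by spark.kubernetes.executor.podTemplateFile is copied into a ConfigMap, the ConfigMap is mounted into the driver container at /opt/spark/pod-template, and the executor conf is rewritten to point at the mounted copy (/opt/spark/pod-template/pod-spec-template.yml). Reconstructed from the constants above (a sketch, not text from the PR), the resulting driver pod fragment would look roughly like:

    # Driver pod fragment produced by PodTemplateConfigMapStep (reconstruction)
    spec:
      volumes:
        - name: podspec-volume               # POD_TEMPLATE_VOLUME
          configMap:
            name: podspec-configmap          # POD_TEMPLATE_CONFIGMAP
            items:
              - key: podspec-configmap-key   # POD_TEMPLATE_KEY
                path: pod-spec-template.yml  # EXECUTOR_POD_SPEC_TEMPLATE_FILE_NAME
      containers:
        - name: spark-kubernetes-driver
          volumeMounts:
            - name: podspec-volume
              mountPath: /opt/spark/pod-template   # EXECUTOR_POD_SPEC_TEMPLATE_MOUNTHPATH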
@@ -17,8 +17,7 @@
package org.apache.spark.deploy.k8s.submit

import java.io.StringWriter
import java.util.{Collections, UUID}
import java.util.Properties
import java.util.{Collections, Properties, UUID}

import io.fabric8.kubernetes.api.model._
import io.fabric8.kubernetes.client.KubernetesClient
@@ -27,7 +26,7 @@
import scala.util.control.NonFatal

import org.apache.spark.SparkConf
import org.apache.spark.deploy.SparkApplication
import org.apache.spark.deploy.k8s.{KubernetesConf, KubernetesDriverSpecificConf, KubernetesUtils, SparkKubernetesClientFactory}
import org.apache.spark.deploy.k8s._
import org.apache.spark.deploy.k8s.Config._
import org.apache.spark.deploy.k8s.Constants._
import org.apache.spark.internal.Logging
@@ -226,7 +225,6 @@ private[spark] class KubernetesClientApplication extends SparkApplication {
clientArguments.mainClass,
clientArguments.driverArgs,
clientArguments.maybePyFiles)
val builder = new KubernetesDriverBuilder
val namespace = kubernetesConf.namespace()
// The master URL has been checked for validity already in SparkSubmit.
// We just need to get rid of the "k8s://" prefix here.
@@ -243,7 +241,7 @@
None,
None)) { kubernetesClient =>
val client = new Client(
builder,
KubernetesDriverBuilder(kubernetesClient, kubernetesConf.sparkConf),
kubernetesConf,
kubernetesClient,
waitForAppCompletion,
@@ -16,9 +16,15 @@
*/
package org.apache.spark.deploy.k8s.submit

import org.apache.spark.deploy.k8s.{KubernetesConf, KubernetesDriverSpec, KubernetesDriverSpecificConf, KubernetesRoleSpecificConf}
import org.apache.spark.deploy.k8s.features.{BasicDriverFeatureStep, DriverKubernetesCredentialsFeatureStep, DriverServiceFeatureStep, EnvSecretsFeatureStep, LocalDirsFeatureStep, MountSecretsFeatureStep, MountVolumesFeatureStep}
import java.io.File

import io.fabric8.kubernetes.client.KubernetesClient

import org.apache.spark.{SparkConf, SparkException}
import org.apache.spark.deploy.k8s.{Config, Constants, KubernetesConf, KubernetesDriverSpec, KubernetesDriverSpecificConf, KubernetesRoleSpecificConf, KubernetesUtils, SparkPod}
import org.apache.spark.deploy.k8s.features.{BasicDriverFeatureStep, DriverKubernetesCredentialsFeatureStep, DriverServiceFeatureStep, EnvSecretsFeatureStep, LocalDirsFeatureStep, MountSecretsFeatureStep, MountVolumesFeatureStep, PodTemplateConfigMapStep}
import org.apache.spark.deploy.k8s.features.bindings.{JavaDriverFeatureStep, PythonDriverFeatureStep, RDriverFeatureStep}
import org.apache.spark.internal.Logging

private[spark] class KubernetesDriverBuilder(
provideBasicStep: (KubernetesConf[KubernetesDriverSpecificConf]) => BasicDriverFeatureStep =
@@ -51,7 +57,11 @@ private[spark] class KubernetesDriverBuilder(
provideJavaStep: (
KubernetesConf[KubernetesDriverSpecificConf]
=> JavaDriverFeatureStep) =
new JavaDriverFeatureStep(_)) {
new JavaDriverFeatureStep(_),
podTemplateConfigMapStep: (KubernetesConf[_ <: KubernetesRoleSpecificConf]
=> PodTemplateConfigMapStep) =
new PodTemplateConfigMapStep(_),
provideInitialPod: () => SparkPod = SparkPod.initialPod) {

def buildFromFeatures(
kubernetesConf: KubernetesConf[KubernetesDriverSpecificConf]): KubernetesDriverSpec = {
@@ -70,6 +80,10 @@
val volumesFeature = if (kubernetesConf.roleVolumes.nonEmpty) {
Seq(provideVolumesStep(kubernetesConf))
} else Nil
val templateVolumeFeature = if (
kubernetesConf.get(Config.KUBERNETES_EXECUTOR_PODTEMPLATE_FILE).isDefined) {
Seq(podTemplateConfigMapStep(kubernetesConf))
} else Nil

val bindingsStep = kubernetesConf.roleSpecificConf.mainAppResource.map {
case JavaMainAppResource(_) =>
@@ -81,9 +95,12 @@
.getOrElse(provideJavaStep(kubernetesConf))

val allFeatures = (baseFeatures :+ bindingsStep) ++
secretFeature ++ envSecretFeature ++ volumesFeature
secretFeature ++ envSecretFeature ++ volumesFeature ++ templateVolumeFeature
Contributor: The templateVolumeFeature should be the first feature step, so that individual config properties specified and handled by other steps can override the same settings in the template.

Contributor: Never mind, I misread this.


var spec = KubernetesDriverSpec.initialSpec(kubernetesConf.sparkConf.getAll.toMap)
var spec = KubernetesDriverSpec(
provideInitialPod(),
Seq.empty,
Contributor: Ditto to the above: set this as driverKubernetesResources = Seq.empty.

Contributor: Ping on this.

kubernetesConf.sparkConf.getAll.toMap)
for (feature <- allFeatures) {
val configuredPod = feature.configurePod(spec.pod)
val addedSystemProperties = feature.getAdditionalPodSystemProperties()
@@ -96,3 +113,24 @@
spec
}
}

private[spark] object KubernetesDriverBuilder extends Logging {
def apply(kubernetesClient: KubernetesClient, conf: SparkConf): KubernetesDriverBuilder = {
conf.get(Config.KUBERNETES_DRIVER_PODTEMPLATE_FILE)
.map(new File(_))
.map(file => new KubernetesDriverBuilder(provideInitialPod = () => {
try {
KubernetesUtils.loadPodFromTemplate(
kubernetesClient,
file,
conf.get(Config.KUBERNETES_DRIVER_CONTAINER_NAME))
} catch {
case e: Exception =>
logError(
s"Encountered exception while attempting to load initial pod spec from file", e)
throw new SparkException("Could not load driver pod from template file.", e)
}
}))
.getOrElse(new KubernetesDriverBuilder())
}
}
@@ -23,7 +23,7 @@
import com.google.common.cache.CacheBuilder
import io.fabric8.kubernetes.client.Config

import org.apache.spark.SparkContext
import org.apache.spark.deploy.k8s.{KubernetesUtils, SparkKubernetesClientFactory}
import org.apache.spark.deploy.k8s.{Constants, KubernetesUtils, SparkKubernetesClientFactory, SparkPod}
import org.apache.spark.deploy.k8s.Config._
import org.apache.spark.deploy.k8s.Constants._
import org.apache.spark.internal.Logging
@@ -69,6 +69,13 @@
defaultServiceAccountToken,
defaultServiceAccountCaCrt)

if (sc.conf.get(KUBERNETES_EXECUTOR_PODTEMPLATE_FILE).isDefined) {
KubernetesUtils.loadPodFromTemplate(
kubernetesClient,
new File(sc.conf.get(KUBERNETES_EXECUTOR_PODTEMPLATE_FILE).get),
sc.conf.get(KUBERNETES_EXECUTOR_CONTAINER_NAME))
}

val requestExecutorsService = ThreadUtils.newDaemonCachedThreadPool(
"kubernetes-executor-requests")

@@ -81,13 +88,17 @@
.build[java.lang.Long, java.lang.Long]()
val executorPodsLifecycleEventHandler = new ExecutorPodsLifecycleManager(
sc.conf,
new KubernetesExecutorBuilder(),
KubernetesExecutorBuilder(kubernetesClient, sc.conf),
Contributor: I double-checked and I don't think we use this variable inside ExecutorPodsLifecycleManager; can you remove it?

kubernetesClient,
snapshotsStore,
removedExecutorsCache)

val executorPodsAllocator = new ExecutorPodsAllocator(
sc.conf, new KubernetesExecutorBuilder(), kubernetesClient, snapshotsStore, new SystemClock())
sc.conf,
KubernetesExecutorBuilder(kubernetesClient, sc.conf),
kubernetesClient,
snapshotsStore,
new SystemClock())

val podsWatchEventSource = new ExecutorPodsWatchSnapshotSource(
snapshotsStore,