[SPARK-2392] Executors should not start their own HTTP servers
Executors currently start their own HTTP file servers, which they never use. This is because we use the same SparkEnv class for both drivers and executors and do not distinguish between the two when initializing it.

In the longer term, we should separate out SparkEnv for the driver and SparkEnv for the executors.

Author: Andrew Or <andrewor14@gmail.com>

Closes apache#1335 from andrewor14/executor-http-server and squashes the following commits:

46ef263 [Andrew Or] Start HTTP server only on the driver
andrewor14 authored and conviva-zz committed Sep 4, 2014
1 parent 3f44a9f commit 69bc898
1 changed file (10 additions, 4 deletions): core/src/main/scala/org/apache/spark/SparkEnv.scala
@@ -79,7 +79,7 @@ class SparkEnv (

   private[spark] def stop() {
     pythonWorkers.foreach { case(key, worker) => worker.stop() }
-    httpFileServer.stop()
+    Option(httpFileServer).foreach(_.stop())
     mapOutputTracker.stop()
     shuffleManager.stop()
     broadcastManager.stop()

@@ -228,9 +228,15 @@ object SparkEnv extends Logging {

     val cacheManager = new CacheManager(blockManager)

-    val httpFileServer = new HttpFileServer(securityManager)
-    httpFileServer.initialize()
-    conf.set("spark.fileserver.uri", httpFileServer.serverUri)
+    val httpFileServer =
+      if (isDriver) {
+        val server = new HttpFileServer(securityManager)
+        server.initialize()
+        conf.set("spark.fileserver.uri", server.serverUri)
+        server
+      } else {
+        null
+      }

     val metricsSystem = if (isDriver) {
       MetricsSystem.createMetricsSystem("driver", conf, securityManager)
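The null-safe shutdown in the first hunk works because Option(ref) yields None when ref is null, making foreach a no-op. A minimal self-contained sketch of that pattern, where FileServer is a hypothetical stand-in for Spark's HttpFileServer:

```scala
// FileServer is a hypothetical stand-in for HttpFileServer, used only to
// illustrate the Option(possiblyNull).foreach(_.stop()) pattern.
class FileServer {
  var stopped = false
  def stop(): Unit = { stopped = true }
}

object StopPattern {
  // Option(server) is Some(server) on the driver, None on executors
  // (where the reference is left null), so stop() is only called if
  // the server actually exists. No NullPointerException either way.
  def safeStop(server: FileServer): Unit = Option(server).foreach(_.stop())

  def main(args: Array[String]): Unit = {
    val driverServer = new FileServer       // driver case: server was created
    val executorServer: FileServer = null   // executor case: never created
    safeStop(driverServer)
    safeStop(executorServer)                // safe no-op
    assert(driverServer.stopped)
  }
}
```

A longer-term alternative, as the commit message notes, is to split SparkEnv into driver and executor variants so the executor environment never declares the field at all.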
