fix compile error for hadoop CDH 4.4+ #151

Closed · wants to merge 1 commit
core/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocationHandlerMacro.scala
@@ -0,0 +1,46 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.spark.deploy.yarn

import scala.language.experimental.macros
import scala.reflect.macros.Context

private[yarn] object YarnAllocationHandlerMacro {
  def getAMResp(resp: Any): Any = macro getAMRespImpl

  /**
   * From Hadoop CDH 4.4.0+ (2.1.0-beta), AMResponse is merged into
   * AllocateResponse, so we don't need to call getAMResponse(); the
   * AllocateResponse can be used directly. This macro tests for the
   * existence of AMResponse and generates the appropriate expression.
   *
   * This macro is currently only used by Spark's alpha version of the
   * YARN API. It stays in the core project because of the two-stage
   * compilation required by the Scala macro system.
   */
  def getAMRespImpl(c: Context)(resp: c.Expr[Any]) = {
    try {
      import c.universe._
      // Throws if AMResponse is not on the compile-time classpath.
      c.mirror.staticClass("org.apache.hadoop.yarn.api.records.AMResponse")
      // Old API: rewrite the call site to resp.getAMResponse().
      c.Expr[Any](Apply(Select(resp.tree, newTermName("getAMResponse")), List()))
    } catch {
      // New API: AMResponse no longer exists, so leave the expression unchanged.
      case _: Throwable => resp
    }
Contributor:
Instead of doing this for each invocation, find out which method is exposed and set a flag, then return the appropriate value via reflection based on that.
I will defer to Tom on whether there is a better way to do this.
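
For illustration only (this sketch is not part of the PR): the probe-once-and-cache approach being suggested might look roughly like the following. The object and method names here are invented; only the AllocateResponse class and the getAMResponse accessor come from the discussion.

```scala
import java.lang.reflect.Method

// Hypothetical helper: probe once for the old-API accessor and cache the result.
private[yarn] object AllocateResponseReflection {
  // Resolved a single time when this object is initialized. On CDH 4.4+ /
  // 2.1.0-beta the method no longer exists, so this stays None.
  private val getAMResponseMethod: Option[Method] =
    try {
      Some(Class
        .forName("org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse")
        .getMethod("getAMResponse"))
    } catch {
      case _: ClassNotFoundException => None
      case _: NoSuchMethodException  => None
    }

  // Old API: unwrap the AMResponse; new API: the AllocateResponse already
  // carries the allocation data, so return it unchanged.
  def unwrap(resp: AnyRef): AnyRef =
    getAMResponseMethod.map(_.invoke(resp)).getOrElse(resp)
}
```

A call site would then invoke AllocateResponseReflection.unwrap(response) once per allocation round, so the only recurring cost is a single cached Method.invoke.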

Author:
The getAMRespImpl() macro is only expanded once, at compile time, when the allocateContainers() method is compiled, so invocations of allocateContainers() do not perform any reflection at run time.

Contributor:
Why would we want to hard-code this at compile time?
It would be better to use reflection and find it at runtime, so that the same code runs against both versions.

Author:
Runtime reflection would also need to hard-code a method name that doesn't exist in the beta API, and would have to pay the runtime cost of the reflective calls. Even with runtime reflection, IMO we would rarely deploy spark-assembly-xxx-hadoop2.0.0-cdh4.6.0.jar on a cluster running Hadoop CDH 4.2.0, and vice versa.

Contributor:
On Sat, Mar 15, 2014 at 9:55 PM, gzm55 notifications@github.com wrote:

In core/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocationHandlerMacro.scala:


> Runtime reflection would also need to hard-code a method name that doesn't exist in the beta API, and would have to pay the runtime cost of the reflective calls.

Cache the resolved method - it is fairly cheap once that happens.
There is a reason Java does not support macros.

> Even with runtime reflection, IMO we would rarely deploy spark-assembly-xxx-hadoop2.0.0-cdh4.6.0.jar on a cluster running Hadoop CDH 4.2.0.

That is indeed a fair point.


Author:
I tried to use runtime reflection and ran into some difficulties. Because amResp will have a different type depending on the API version, I would have to give it a duck type like

AnyRef {
  def getAllocatedContainers ...
  def getAvailableResources ...
  def getCompletedContainersStatuses ...
}

or cache four reflection methods (the three above plus getAMResponse()). In the future, each time we use another method of AMResponse/AllocateResponse, we would have to add another cached method for it.
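
As a sketch of that structural-type option (again, not code from the PR; the wrapping object is invented, and the return types are the ones I would expect from the YARN records API imported elsewhere in this diff):

```scala
import java.util.{List => JList}
import org.apache.hadoop.yarn.api.records.{Container, ContainerStatus, Resource}

private[yarn] object AllocateResponseTypes {
  // Common surface shared by AMResponse (old API) and AllocateResponse (new API).
  // Note that every call through a Scala structural type is itself dispatched
  // via runtime reflection, which is part of the cost being weighed here.
  type AllocateResponseLike = {
    def getAllocatedContainers(): JList[Container]
    def getAvailableResources(): Resource
    def getCompletedContainersStatuses(): JList[ContainerStatus]
  }
}
```

A caller would treat amResp as an AllocateResponseLike (with scala.language.reflectiveCalls in scope), which works against both APIs but reflects on every method call.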

Contributor:
I would look at this as a very uncommon and temporary situation.
We would need to continue supporting the old API for only a little while
longer, after which we can remove the reflection stuff; and ideally, YARN
should converge on a stable API soon enough.
So until then, unfortunately, we have some instability in our implementation.


  }
}
YarnAllocationHandler.scala
@@ -33,7 +33,7 @@ import org.apache.spark.util.Utils

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.yarn.api.AMRMProtocol
-import org.apache.hadoop.yarn.api.records.{AMResponse, ApplicationAttemptId}
+import org.apache.hadoop.yarn.api.records.ApplicationAttemptId
import org.apache.hadoop.yarn.api.records.{Container, ContainerId, ContainerStatus}
import org.apache.hadoop.yarn.api.records.{Priority, Resource, ResourceRequest}
import org.apache.hadoop.yarn.api.protocolrecords.{AllocateRequest, AllocateResponse}
@@ -103,7 +103,7 @@ private[yarn] class YarnAllocationHandler(
// this much.

// Keep polling the Resource Manager for containers
-val amResp = allocateExecutorResources(executorsToRequest).getAMResponse
+val amResp = YarnAllocationHandlerMacro.getAMResp(allocateExecutorResources(executorsToRequest))

val _allocatedContainers = amResp.getAllocatedContainers()
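
For context, given the macro defined earlier in this PR: when AMResponse is on the compile-time classpath (the pre-CDH 4.4 alpha API), the getAMResp call above expands to allocateExecutorResources(executorsToRequest).getAMResponse; when it is not (CDH 4.4+ / 2.1.0-beta, where AMResponse is merged into AllocateResponse), the expression is left unchanged and the AllocateResponse is used directly. Either way, amResp then exposes getAllocatedContainers(), getAvailableResources(), and getCompletedContainersStatuses().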
