Fix build warnings #28
Conversation
Merged build triggered.
Merged build started.
Merged build finished.
All automated tests passed.
```diff
@@ -89,6 +89,7 @@
       <arg>-unchecked</arg>
       <arg>-optimise</arg>
       <arg>-deprecation</arg>
+      <arg>-Xfatal-warnings</arg>
```
Nice flag addition. It will definitely force us to address warnings instead of ignoring them. :)
Please indent the arg at the same level as the others.
Aside from the indentation nit, this looks ready to merge to me.
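For reference, with the indentation nit fixed the compiler-args block in the pom would look roughly like this (a sketch; the surrounding `<configuration>`/`<args>` element names follow the usual Scala Maven plugin layout and are assumed, not copied from this repo's pom):

```xml
<configuration>
  <args>
    <arg>-unchecked</arg>
    <arg>-optimise</arg>
    <arg>-deprecation</arg>
    <arg>-Xfatal-warnings</arg>
  </args>
</configuration>
```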
- Make future build warnings fatal, so that we can address them earlier
Merged build triggered.
Merged build started.
Merged build finished.
All automated tests passed.
Thanks, Carl!
FYI, this breaks the build for those using Hadoop 1.0.4 (the default Hadoop version for the Spark EC2 scripts):

[INFO] Compiling 52 Scala sources to /Users/fnothaft/adam-new/adam/adam-commands/target/scala-2.9.3/classes...
This also does not run out of the box with the spark-ec2 scripts with hadoop-major-version=2. Digging further...
So, two things: is there a more automated way for us to test in this kind of environment? Carl and I are in a meeting at the moment, but it sounds like he'll dig into this when we're done here. Frank, what's your preference: should we revert, or fix it in some other way?
Long term, I think we need to figure out which versions of all tools we need to support. From there, we'll probably set up matrix builds in Jenkins to test this. I wouldn't fret on your end right now; I'm going through Hadoop versions trying to sort out which one works, and there's nothing really to do until then. Realistically, I think this problem will be resolved through documentation and testing, not a code patch. TL;DR: Jenkins and documentation.
We can protect against this by updating our Jenkins job. I'll do that soon (possibly today).
It seems that Spark requires hadoop-client:2.0.0-cdh4.2.0. The Maven jar for this no longer appears to be available: http://mvnrepository.com/artifact/org.apache.hadoop/hadoop-client Thoughts? I assume this is a rough patch that will be fixed by spark-0.8.1?
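If the missing cdh4 artifact is the blocker, one possible workaround (not suggested in this thread; the repository URL is Cloudera's standard Maven repo and is an assumption here) is that the cdh-suffixed jars resolve from Cloudera's repository rather than Maven Central, so a pom sketch would be:

```xml
<repositories>
  <repository>
    <!-- Cloudera's artifact repository, which hosts the cdh-suffixed Hadoop jars -->
    <id>cloudera-repos</id>
    <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
  </repository>
</repositories>
```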
I've reverted this change in master. |
There were a few build errors that had been accumulating. I've fixed them, and also made warnings fatal so that future warnings get addressed earlier.

All of the changes were due to deprecations in imported libraries; most were related to Hadoop's Job constructor being deprecated. The biggest change was in Genotype.getAttributeAsString: I don't know why it was deprecated, but I've included a Scala version of it.
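For illustration, the deprecated Job constructor is usually replaced with the static factory method, roughly like this (a sketch; the surrounding ADAM code is not shown, and note that `Job.getInstance` only exists on newer Hadoop lines, which may be related to the Hadoop 1.0.4 breakage discussed above):

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.mapreduce.Job

val conf = new Configuration()

// Deprecated constructor that triggers a warning (fatal with -Xfatal-warnings):
//   val job = new Job(conf)

// Replacement available in newer Hadoop releases:
val job = Job.getInstance(conf)
```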