TypeError: 'JavaPackage' object is not callable #2122
Comments
Hello @Shekharrajak! Thanks for submitting this issue. How did you install Spark and ADAM?
I installed ADAM using this: https://adam.readthedocs.io/en/latest/installation/pip/ and Spark using this: https://medium.com/@josemarcialportilla/installing-scala-and-spark-on-ubuntu-5665ee4b62b1
I cut a new ADAM version 0.26.0 release last week and pushed it to PyPI. Could you give it another try with the new version?
I have updated the library and updated the sample code, but I am still getting the same error. These are the details:
@heuermh do you know if pip installs the ADAM binary? By just installing ADAM via pip, there is no way the binaries would be accessible, right?
The docs "Pip will install the bdgenomics.adam Python binding, as well as the ADAM CLI." and original pull requests (#1848, #1849) read as if that were so. I plan to fire up a new vanilla EC2 instance and give it a try this afternoon. |
Note also the example used in the Jenkins script https://github.com/bigdatagenomics/adam/blob/master/scripts/jenkins-test-pyadam.py, called here: https://github.com/bigdatagenomics/adam/blob/master/scripts/jenkins-test#L236
Ahh fancy. @Shekharrajak, how are you starting python?
It does look like we have a problem. Starting with a new Amazon Linux 2 AMI on EC2:

$ ssh ...
__| __|_ )
_| ( / Amazon Linux 2 AMI
___|\___|___|
https://aws.amazon.com/amazon-linux-2/
1 package(s) needed for security, out of 3 available
Run "sudo yum update" to apply all updates.
$ sudo yum update
...
$ python --version
Python 2.7.14
$ sudo easy_install pip
...
Installed /usr/lib/python2.7/site-packages/pip-19.0.3-py2.7.egg
$ sudo pip install pyspark
...
Successfully installed py4j-0.10.7 pyspark-2.4.0
$ which pyspark
/usr/bin/pyspark
$ pyspark --version
JAVA_HOME is not set
$ which java
/usr/bin/which: no java in (/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/ec2-user/.local/bin:/home/ec2-user/bin)
$ sudo yum install java-1.8.0-openjdk-devel
...
Installed:
java-1.8.0-openjdk-devel.x86_64 1:1.8.0.191.b12-0.amzn2
$ pyspark --version
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 2.4.0
/_/
Using Scala version 2.11.12, OpenJDK 64-Bit Server VM, 1.8.0_191
$ sudo pip install bdgenomics.adam
...
Requirement already satisfied: pyspark>=1.6.0 in /usr/lib/python2.7/site-packages
(from bdgenomics.adam) (2.4.0)
Requirement already satisfied: py4j==0.10.7 in /usr/lib/python2.7/site-packages
(from pyspark>=1.6.0->bdgenomics.adam) (0.10.7)
Installing collected packages: bdgenomics.adam
Running setup.py install for bdgenomics.adam ... done
Successfully installed bdgenomics.adam-0.26.0
$ which pyadam
/usr/bin/pyadam
$ pyadam --version
['/usr/bin/..', '/usr/lib/python2.7/site-packages/bdgenomics/adam']
['/usr/bin/..', '/usr/lib/python2.7/site-packages/bdgenomics/adam']
ls: cannot access
/usr/lib/python2.7/site-packages/bdgenomics/adam/adam-python/dist:
No such file or directory
Failed to find ADAM egg in
/usr/lib/python2.7/site-packages/bdgenomics/adam/adam-python/dist.
You need to build ADAM before running this program.
$ which adam-submit
/usr/bin/adam-submit
$ adam-submit --version
['/usr/bin/..', '/usr/lib/python2.7/site-packages/bdgenomics/adam']
Using ADAM_MAIN=org.bdgenomics.adam.cli.ADAMMain
Using spark-submit=/usr/bin/spark-submit
2019-02-26 19:21:31 INFO ADAMMain:109 - ADAM invoked with args: "--version"
[ADAM ASCII art banner]
ADAM version: 0.26.0
Built for: Apache Spark 2.3.3, Scala 2.11.12, and Hadoop 2.7.5
$ touch empty.sam
$ adam-shell
Using SPARK_SHELL=/usr/bin/spark-shell
Spark context available as 'sc' (master = local[*], app id = local-1551208961369).
Spark session available as 'spark'.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 2.4.0
/_/
Using Scala version 2.11.12 (OpenJDK 64-Bit Server VM, Java 1.8.0_191)
Type in expressions to have them evaluated.
Type :help for more information.
scala> import org.bdgenomics.adam.rdd.ADAMContext._
import org.bdgenomics.adam.rdd.ADAMContext._
scala> val alignments = sc.loadAlignments("empty.sam")
alignments: org.bdgenomics.adam.rdd.read.AlignmentRecordDataset =
RDDBoundAlignmentRecordDataset with 0 reference sequences, 0 read groups,
and 0 processing steps
scala> alignments.toDF().count()
res0: Long = 0
scala> :quit

After a bit of messing around, it appears the pip install does not ship the ADAM egg that pyadam looks for under adam-python/dist. If I modify the
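As background on the error itself (standard py4j behavior, not something stated in the thread): "TypeError: 'JavaPackage' object is not callable" is what you get when the Python side asks py4j for a JVM class that is not on Spark's classpath; py4j returns a JavaPackage placeholder instead of a constructor. A minimal check, assuming a running SparkSession:

```python
# Sketch: check whether the ADAM JVM classes are visible to py4j.
# If the ADAM assembly jar is not on the Spark driver classpath, the lookup
# below yields a py4j JavaPackage placeholder; calling it is exactly what
# produces "TypeError: 'JavaPackage' object is not callable".
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
jvm = spark.sparkContext._jvm

adam_ctx = jvm.org.bdgenomics.adam.rdd.ADAMContext
print(type(adam_ctx))  # py4j JavaClass when the jar is present,
                       # py4j JavaPackage when it is missing
```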
@akmorrow13, I have added those lines of code. @heuermh, I tried your above commands and got similar output. Can I do something like this?
I got an error when I tried the above code:
Sorry for the delay in responding. Regarding your question above: that will return the path to the ADAM assembly jar, and what Spark wants is the Maven coordinates. For version 0.26.0 of ADAM, that would be the matching 0.26.0 assembly coordinates.
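To illustrate what "Maven coordinates" means here, a sketch of wiring the assembly in from Python is below. The exact coordinate string is an assumption (verify the artifact for your Spark and Scala versions on Maven Central), not something stated in this thread.

```python
# Sketch: let Spark resolve the ADAM assembly by Maven coordinates instead of
# a local jar path. The coordinate string below is an assumption for ADAM
# 0.26.0 built against Spark 2.x / Scala 2.11; check Maven Central.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("adam-example")
    .config("spark.jars.packages",
            "org.bdgenomics.adam:adam-assembly-spark2_2.11:0.26.0")
    .getOrCreate()
)
```

The same coordinates can also be passed on the command line via `pyspark --packages ...` or `spark-submit --packages ...`.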
Hello @akmorrow13, this issue looks similar to #2225 to me, in that removing the egg stuff seems to help. Curious if we should do that or if there might be another more Python-y approach to take.
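One possible "more Python-y" direction, purely as a sketch and not something decided in this thread: keep the Python binding pip-installed on the driver and ship it to executors with Spark's own file-distribution mechanism rather than locating an egg under the install directory. The archive path below is hypothetical.

```python
# Sketch: distribute the Python binding with Spark's file-shipping mechanism
# instead of relying on a bundled egg. The archive path is hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
# addPyFile accepts .py, .zip, or .egg files and makes them importable on executors.
spark.sparkContext.addPyFile("/path/to/bdgenomics.adam.zip")
```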
Python:
When I am trying to run ADAMContext(sparkSession), I am getting this error:

TypeError: 'JavaPackage' object is not callable

The full code I am executing is:

I also followed the comment JohnSnowLabs/spark-nlp#232 (comment), but it didn't work for me.
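The reporter's exact code did not survive the copy, but the call pattern that produces this error when the ADAM jars are missing is roughly the following. This is a sketch: the import path follows the bdgenomics.adam Python binding, the coordinate string is the same assumption as above, and the input file name is hypothetical.

```python
# Sketch of the failing pattern: constructing an ADAMContext needs the ADAM
# JVM classes; without the assembly jar on the classpath, the constructor
# fails with "TypeError: 'JavaPackage' object is not callable".
from pyspark.sql import SparkSession
from bdgenomics.adam.adamContext import ADAMContext

spark = (
    SparkSession.builder
    .appName("adam-repro")
    # Assumed coordinate string; see the note on Maven coordinates above.
    .config("spark.jars.packages",
            "org.bdgenomics.adam:adam-assembly-spark2_2.11:0.26.0")
    .getOrCreate()
)

ac = ADAMContext(spark)                   # raises the TypeError if the jar is missing
reads = ac.loadAlignments("sample.sam")   # hypothetical input file
print(reads.toDF().count())
```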