Python project for protein sequence analysis using Apache Spark.
Check out the repo and run the following commands in the base directory of the repo:
```
# create your virtual environment at the base of the repo
# (requires the virtualenv package: pip install virtualenv)
python -m virtualenv .venv
# activate your virtual environment
.venv\Scripts\activate
# install tools used for the setup script
pip install -U pytest setuptools wheel build
# install project dependencies
python setup.py install
# note: you may need to restart your IDE after these steps for intellisense to work
# deactivate virtual environment
deactivate
```
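With the virtual environment still active, a quick way to confirm the install worked is to start a local Spark session from Python. This is a minimal sketch, assuming the project pulls in `pyspark` as a dependency (the JDK and Spark setup described below must also be complete); `smoke_test.py` is a hypothetical file name:

```python
# smoke_test.py - minimal check that a local Spark session can be started
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("local[*]")            # run Spark locally, using all available cores
    .appName("setup-smoke-test")
    .getOrCreate()
)

# build a trivial DataFrame and count its rows to exercise the executors
df = spark.createDataFrame([("A", 1), ("C", 2), ("G", 3), ("T", 4)], ["base", "count"])
print(df.count())  # expected output: 4

spark.stop()
```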
Python Version: 3.8.5
- Download the installer from https://www.python.org/downloads/
- Select 'Customize installation'
- Click 'Next'
- Check 'Add Python to environment variables' and set the install location to `C:\Python`, then click 'Install'
- Verify the Python installation by running `python -V`
- Verify the pip installation by running `pip -V`
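The interpreter version can also be confirmed from code; a one-line check against the version pinned above:

```python
# assert the running interpreter matches the pinned version
import sys

assert sys.version_info[:3] == (3, 8, 5), f"expected Python 3.8.5, got {sys.version.split()[0]}"
```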
JDK Version: 14.0.2
- Install the JDK by downloading your preferred package/archive/installer from https://www.oracle.com/java/technologies/javase-jdk14-downloads.html
- Add a new system environment variable named `JAVA_HOME` with value `C:\Progra~1\Java\jdk-14.0.2`
- Add `%JAVA_HOME%\bin` to the system's 'Path' environment variable
- Verify the installation by running `javac --help`
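Spark's launch scripts resolve the JDK through `JAVA_HOME`, so it is worth confirming the variable is also visible from Python. A small sketch; the expected value is the one set above:

```python
# confirm JAVA_HOME is set and points at a real directory
import os

java_home = os.environ.get("JAVA_HOME")
print(java_home)  # expected: C:\Progra~1\Java\jdk-14.0.2
assert java_home and os.path.isdir(java_home), "JAVA_HOME is missing or invalid"
```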
Spark Version: 3.0.0
Package Type: Pre-built for Apache Hadoop 2.7
- Download the `.tgz` file using the settings listed above from https://spark.apache.org/downloads.html
- Unzip the downloaded archive to `C:\Spark`
- SPARK-2356 - an existing bug means that you also need to copy `winutils.exe` into `C:\Spark\bin` (the exe can be found in the ticket)
- Add a new system environment variable named `SPARK_HOME` with value `C:\Spark`
- Add a new system environment variable named `HADOOP_HOME` with value `C:\Spark`
- Add `%SPARK_HOME%\bin` and `%HADOOP_HOME%\bin` to the system's 'Path' environment variable
- Optional - update the logging level of Spark to `ERROR`:
  - Navigate to `C:\Spark\conf`
  - Copy `log4j.properties.template` into the same directory
  - Open `log4j.properties - Copy.template` and change all logging levels to `ERROR` (see the snippet after this list)
  - Rename `log4j.properties - Copy.template` to `log4j.properties`
- Verify the installation by running `spark-shell`
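For reference, the key line in the copied file is the root logger; the Spark 3.0.0 template defaults it to `INFO`, and the template's few per-package logger lines can be lowered the same way:

```properties
# log4j.properties - quieten spark-shell output to errors only
log4j.rootCategory=ERROR, console
```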
The following script is intended to be run through the Spark shell and will verify that Spark is working as intended. It counts each word in the Spark README file and writes each word and its count to `SparkTest/ReadMeWordCount`.
```scala
// load the Spark README from the local filesystem
// (a file:/// path with no drive letter resolves against the current drive, i.e. C:\Spark\README.md)
val textFile = sc.textFile("file:///Spark/README.md")
// split each line into words, then pair each word with a count of 1
val tokenisedFileData = textFile.flatMap(line => line.split(" "))
val countPrep = tokenisedFileData.map(word => (word, 1))
// sum the counts per word and sort by count, descending
val counts = countPrep.reduceByKey((x, y) => x + y)
val sortedCounts = counts.sortBy(x => x._2, false)
// write the results to C:\SparkTest\ReadMeWordCount
sortedCounts.saveAsTextFile("file:///SparkTest/ReadMeWordCount")
```
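Since the project itself is in Python, the same verification can be run with PySpark instead of the Scala shell. A minimal sketch, assuming `pyspark` is installed in the virtual environment created above; `ReadMeWordCountPy` is a hypothetical output directory chosen so it does not collide with the Scala run:

```python
# pyspark_wordcount.py - PySpark equivalent of the Scala verification script above
from operator import add

from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("readme-wordcount").getOrCreate()
sc = spark.sparkContext

text_file = sc.textFile("file:///Spark/README.md")
counts = (
    text_file.flatMap(lambda line: line.split(" "))  # split lines into words
    .map(lambda word: (word, 1))                     # pair each word with a count of 1
    .reduceByKey(add)                                # sum the counts per word
    .sortBy(lambda pair: pair[1], ascending=False)   # sort by count, descending
)
counts.saveAsTextFile("file:///SparkTest/ReadMeWordCountPy")
spark.stop()
```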