The Spark Kernel has one main goal: provide the foundation for interactive applications to connect and use Apache Spark.
The kernel provides several key features for applications:
- Define and run Spark Tasks
  - Executing Scala code dynamically in a similar fashion to the Scala REPL and Spark Shell (see the sketch after this list)
- Collect Results without a Datastore
  - Send execution results and streaming data back via the Spark Kernel to your applications
  - Use the Comm API - an abstraction of the IPython protocol - for more detailed data communication and synchronization between your applications and the Spark Kernel
- Host and Manage Applications Separately from Apache Spark
  - The Spark Kernel serves as a proxy for requests to the Apache Spark cluster
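For a concrete sense of the first feature, below is the kind of Scala snippet an application might submit to the kernel for dynamic evaluation. As in the Spark Shell, a `SparkContext` named `sc` is assumed to be pre-defined by the kernel; the snippet itself is illustrative and not taken from the project's documentation.

```scala
// Illustrative snippet of the kind an application could send to the
// Spark Kernel for dynamic evaluation. The SparkContext `sc` is assumed
// to be provided by the kernel, just as in the Spark Shell.
val numbers = sc.parallelize(1 to 100)

// Runs as a distributed Spark job on the cluster behind the kernel.
val sumOfSquares = numbers.map(n => n * n).reduce(_ + _)

// Printed output is streamed back to the connected application
// as an execution result.
println(s"Sum of squares from 1 to 100: $sumOfSquares")
```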
The project intends to give applications the ability to submit both packaged JARs and code snippets. Because it implements the latest IPython message protocol (5.0), the Spark Kernel plugs easily into the 3.x branch of IPython for quick, interactive data exploration. The Spark Kernel strives to be extensible, providing a pluggable interface for developers to add their own functionality.
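To illustrate what implementing the IPython message protocol means in practice, the sketch below models the message envelope that protocol version 5.0 defines (header, parent header, metadata, content). The field names follow the published protocol specification; the case classes themselves are hypothetical and are not part of the kernel's public API.

```scala
// Rough sketch of the IPython 5.0 message envelope that the Spark Kernel
// exchanges over ZeroMQ sockets. Field names mirror the protocol spec;
// these case classes are illustrative only.
case class Header(
  msgId: String,    // "msg_id": unique identifier for this message
  username: String, // "username": name of the connected user
  session: String,  // "session": identifies the client session
  msgType: String,  // "msg_type": e.g. "execute_request"
  version: String   // "version": protocol version, here "5.0"
)

case class KernelMessage(
  header: Header,
  parentHeader: Option[Header], // links a reply back to its request
  metadata: Map[String, String],
  content: Map[String, Any]     // e.g. Map("code" -> "sc.version") for an execute_request
)
```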
If you are new to the Spark Kernel, please see the Getting Started section.
For more information, please visit the Spark Kernel wiki.
For bug reporting and feature requests, please visit the Spark Kernel issue list.