# Preface {-}
In a world where information is growing exponentially, leading tools like Apache Spark provide support to solve many of the relevant problems we face today. From companies looking for ways to improve based on data-driven decisions, to research organizations solving problems in health care, finance, education, and energy, Spark enables analyzing much more information faster and more reliably than ever before.
Various books have been written for learning Apache Spark; for instance, [_Spark: The Definitive Guide_](https://oreil.ly/gMaGP) is a comprehensive resource, and [_Learning Spark_](https://oreil.ly/1-4CA) is an introductory book meant to help users get up and running (both are from O'Reilly). However, as of this writing, there is neither a book to learn Apache Spark using the R computing language nor a book specifically designed for the R user or the aspiring R user.
There<!--((("Spark using R", "online resources")))--> are some resources online to learn Apache Spark with R, most notably the [spark.rstudio.com](https://spark.rstudio.com) site and the Spark documentation site at [spark.apache.org](http://bit.ly/31H2nMl). Both sites are great online resources; however, the content is not intended to be read from start to finish and assumes you, the reader, have some knowledge of Apache Spark, R, and cluster computing.
The goal of this book is to help anyone get started with Apache Spark using R. Additionally, because the R programming language was created to simplify data analysis, it is also our belief that this book provides the easiest path for you to learn the tools used to solve data analysis problems with Spark. The first chapters provide an introduction to help anyone get up to speed with these concepts and present the tools required to work on these problems on your own computer. We then quickly ramp up to relevant data science topics, cluster computing, and advanced topics that should interest even the most experienced users.
Therefore, this book is intended to be a useful resource for a wide range of users, from beginners curious to learn Apache Spark, to experienced readers seeking to understand why and how to use Apache Spark from R.
This book has the following general outline:
Introduction
: In the first two chapters, [Chapter 1](#intro), _Introduction_, and [Chapter 2](#starting), _Getting Started_, you learn about Apache Spark, R, and the tools to perform data analysis with Spark and R.
Analysis
: In [Chapter 3](#analysis), _Analysis_, you learn how to analyze, explore, transform, and visualize data in Apache Spark with R.
Modeling
: In [Chapter 4](#modeling), _Modeling_, and [Chapter 5](#pipelines), _Pipelines_, you learn how to create statistical models with the purpose of extracting information, predicting outcomes, and automating this process in production-ready workflows.
Scaling
: Up to this point, the book has focused on performing operations on your personal computer and with limited data formats. [Chapter 6](#clusters), _Clusters_, [Chapter 7](#connections), _Connections_, [Chapter 8](#data), _Data_, and [Chapter 9](#tuning), _Tuning_, introduce distributed computing techniques required to perform analysis and modeling across many machines and data formats to tackle the large-scale data and computation problems for which Apache Spark was designed.
Extensions
: [Chapter 10](#extensions), _Extensions_, describes optional components and extended functionality applicable to specific, relevant use cases. You learn about alternative modeling frameworks, graph processing, preprocessing data for deep learning, geospatial analysis, and genomics at scale.
Advanced
: The book closes with a set of advanced chapters, [Chapter 11](#distributed), _Distributed_, [Chapter 12](#streaming), _Streaming_, and [Chapter 13](#contributing), _Contributing_; these will be of greatest interest to advanced users. However, by the time you reach this section, the content won’t seem as intimidating; instead, these chapters will be just as relevant, useful, and interesting as the previous ones.
The first group of chapters, 1-5, provide a gentle introduction to performing data science and machine learning at scale. If you are planning to read this book while also following along with code examples, these are great chapters to consider executing the code line by line. Because these chapters teach all of the concepts using your personal computer, you won’t be taking advantage of multiple computers, which Spark was designed to use. But worry not: the next set of chapters will teach this in detail!
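To give you a sense of what following along looks like, here is a minimal sketch of connecting to a local Spark instance with `sparklyr` and copying a dataset into it; [Chapter 2](#starting) walks through installation and these steps in detail:
```r
library(sparklyr)

# Connect to a local Spark instance on this machine; if Spark is not
# installed yet, spark_install() (covered in Chapter 2) sets it up.
sc <- spark_connect(master = "local")

# Copy a small R data frame into Spark and count its rows
cars <- copy_to(sc, mtcars)
sdf_nrow(cars)

# Close the connection when done
spark_disconnect(sc)
```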
The second group of chapters, 6-9, introduces fundamental concepts in the exciting world of cluster computing using Spark. To be honest, they also introduce some of the not-so-fun parts of cluster computing, but believe us, it’s worth learning the concepts we present. Besides, the overview sections in each chapter are especially interesting, informative, and easy to read, and help you develop intuitions as to how cluster computing truly works. For these chapters, we actually don’t recommend executing the code line by line—especially not if you are trying to learn Spark from start to finish. You can always come back and execute code after you have a proper Spark cluster. If you already have a cluster at work or you are really motivated to get one, however, you might want to use [Chapter 6](#clusters) to pick one and then [Chapter 7](#connections) to connect to it.
The third group of chapters, 10-13, presents tools that should be quite interesting to most readers and will make it easier to follow along. Many advanced topics are presented, and it is natural to be more interested in some topics than others; for instance, you might be interested in analyzing geographic datasets, or perhaps you're more interested in processing real-time datasets, or maybe you'd like to do both! Based on your personal interests or problems at hand, we encourage you to execute the code examples that are most relevant to you. All of the code in these chapters is written to be executed on your personal computer, but you are also encouraged to use proper Spark clusters given that you’ll have the tools required to troubleshoot issues and tune large-scale computations.
## Formatting {-}
Tables<!--((("formatting plots")))((("ggplot2 package", "formatting plots")))--> generated from code are formatted as follows:
```
# A tibble: 3 x 2
numbers text
<dbl> <chr>
1 1 one
2 2 two
3 3 three
```
The dimensions of the table (number of rows and columns) are described in the first row, followed by column names in the second row and column types in the third row. There are also various subtle visual improvements provided by the `tibble` package that we make use of throughout this book.
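For instance, the output shown above can be reproduced with a few lines of R; this is a minimal sketch assuming the `tibble` package is installed:
```r
library(tibble)

# A three-row tibble with a numeric and a character column; printing it
# produces the dimensions, column names, and column types shown above.
tibble(numbers = c(1, 2, 3), text = c("one", "two", "three"))
```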
Most plots are rendered using the `ggplot2` package and a custom theme available in the appendix; however, because this book is not focused on data visualization, we only provide code to render a basic plot that won’t match the formatting we applied. If you are interested in learning more about visualization in R, consider specialized books like [_R Graphics Cookbook_](https://oreil.ly/bIF4l) (O'Reilly).
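As a point of reference, the basic plots we provide code for look like the following minimal sketch, which uses the `mpg` dataset bundled with `ggplot2` and the default theme rather than the custom theme from the appendix:
```r
library(ggplot2)

# A basic scatterplot using ggplot2 defaults; the book's rendered plots
# apply a custom theme (see the appendix) on top of code like this.
ggplot(mpg, aes(displ, hwy)) +
  geom_point()
```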
## Acknowledgments {-}
We thank the package authors that enabled Spark with R: Javier Luraschi, Kevin Kuo, Kevin Ushey, and JJ Allaire (`sparklyr`); Romain François and Hadley Wickham (`dplyr`); Hadley Wickham and Edgar Ruiz (`dbplyr`); Kirill Müller (`DBI`); and the authors of the Apache Spark project itself, including its original author, Matei Zaharia.
We thank the package authors that released extensions to enrich the Spark and R ecosystem: Akhil Nair (`crassy`); Harry Zhu (`geospark`); Kevin Kuo (`graphframes`, `mleap`, `sparktf`, and `sparkxgb`); Jakub Hava, Navdeep Gill, Erin LeDell, and Michal Malohlava (`rsparkling`); Jan Wijffels (`spark.sas7bdat`); Aki Ariga (`sparkavro`); Martin Studer (`sparkbq`); Matt Pollock (`sparklyr.nested`); Nathan Eastwood (`sparkts`); and Samuel Macêdo (`variantspark`).
We thank our wonderful editor, Melissa Potter, for providing us with guidance, encouragement, and countless hours of detailed feedback to make this book the best we could have ever written.
To Bradley Boehmke, Bryan Adams, Bryan Jonas, Dusty Turner, and Hossein Falaki, we thank you for your technical reviews, time, and candid feedback, and for sharing your expertise with us. Many readers will have a much more pleasant experience thanks to you.
Thanks to RStudio, JJ Allaire, and Tareef Kawaf for supporting this work, and the R community itself for its continuous support and encouragement.
Max Kuhn, thank you for your invaluable feedback on [Chapter 4](#modeling), in which, with your permission, we adapted examples from your wonderful book _Feature Engineering and Selection: A Practical Approach for Predictive Models_ (CRC Press).
We also thank everyone indirectly involved but not explicitly listed in this section; we are truly standing on the shoulders of giants.
This book itself was written in R using `bookdown` by Yihui Xie, `rmarkdown` by JJ Allaire and Yihui Xie, and `knitr` by Yihui Xie; we drew the visualizations using `ggplot2` by Hadley Wickham and Winston Chang; we created the diagrams using `nomnoml` by Daniel Kallin and Javier Luraschi; and we did the document conversions using `pandoc` by John MacFarlane.