Big Data Analysis with Python

Processing big data in real time is challenging because of scalability constraints, information inconsistency, and the need for fault tolerance. Big Data Analysis with Python teaches you how to use tools that can control this data avalanche for you. With this book, you'll learn effective techniques to aggregate data into useful dimensions for later analysis, extract statistical measurements, and transform datasets into features for other systems.
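To give a flavor of that workflow, here is a minimal sketch of aggregating raw records into a useful dimension and extracting summary statistics with Pandas. The file name and column names (`sales.csv`, `order_date`, `region`, `amount`) are placeholders for illustration, not files shipped with this repository.

```python
import pandas as pd

# Hypothetical raw records; the file and column names are assumptions.
df = pd.read_csv("sales.csv", parse_dates=["order_date"])

# Aggregate into a useful dimension: one row per region and month.
monthly = (
    df.assign(month=df["order_date"].dt.to_period("M"))
      .groupby(["region", "month"])["amount"]
      .agg(["sum", "mean", "count"])
      .reset_index()
)

# Extract simple statistical measurements for downstream analysis.
print(monthly.describe())
```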

The book begins with an introduction to data manipulation in Python using Pandas. You'll then get familiar with statistical analysis and plotting techniques. With multiple hands-on activities in store, you'll learn to analyze data distributed across several computers using Dask. As you progress, you'll study how to aggregate data for plots when the entire dataset cannot fit into memory. You'll also explore Hadoop (HDFS and YARN), which will help you tackle larger datasets. The book further covers Spark and its interaction with other tools.
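The same aggregation pattern scales to data that does not fit in memory. The snippet below is a rough sketch using Dask, assuming a hypothetical directory of CSV log files with `status` and `response_time` columns; Dask splits the files into partitions, processes them in parallel, and only the small aggregated result is materialized.

```python
import dask.dataframe as dd

# Read many CSV files lazily; partitions are processed in parallel
# and never need to fit into memory all at once.
ddf = dd.read_csv("logs/2024-*.csv")

# The aggregation is planned lazily and only executed on .compute(),
# so the result (one row per status code) is small enough to plot.
summary = ddf.groupby("status")["response_time"].mean().compute()
print(summary)
```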

By the end of this book, you'll be able to bootstrap your own Python environment, process large files, and manipulate data to generate statistics, metrics, and graphs.
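As a rough illustration of where that leads, the following hedged PySpark sketch reads a large CSV, computes per-group statistics, and writes the result in a query-friendly format; the paths and column names (`data/measurements.csv`, `sensor_id`, `value`) are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Bootstrap a local Spark session; in practice this could also point
# at a YARN-backed cluster.
spark = SparkSession.builder.appName("big-data-analysis").getOrCreate()

# Process a large file without loading it into driver memory.
df = spark.read.csv("data/measurements.csv", header=True, inferSchema=True)

# Generate statistics and metrics, then persist them as Parquet.
stats = df.groupBy("sensor_id").agg(
    F.avg("value").alias("mean_value"),
    F.max("value").alias("max_value"),
)
stats.write.mode("overwrite").parquet("output/sensor_stats.parquet")
```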

Learning Objectives

  • Use Python to read and transform data into different formats
  • Generate basic statistics and metrics using data on disk
  • Work with computing tasks distributed over a cluster
  • Convert data from different sources into storage or querying formats
  • Prepare data for statistical analysis, visualization, and machine learning
  • Present data in the form of effective visuals

Hardware Requirements

For an optimal experience, we recommend the following hardware configuration:

  • Processor: dual-core or better
  • Memory: 4 GB RAM
  • Storage: 10 GB of available space

Software Requirements

  • Windows 7 SP1 (32/64-bit), Windows 8.1 (32/64-bit), or Windows 10 (32/64-bit)
  • Ubuntu 14.04 or later
  • macOS Sierra or later
  • Browser: Google Chrome or Mozilla Firefox
  • Conda
  • JupyterLab
