Data Modeling with Cassandra

This project was submitted as the second project for the Data Engineering Nanodegree. In this project, data is modeled with Apache Cassandra, which includes the following key steps:

  • An ETL pipeline using Python
  • One table created per query to answer user-specific queries

Introduction

A startup called Sparkify wants to analyze the data they've been collecting on songs and user activity on their new music streaming app. The analytics team is particularly interested in understanding what songs users are listening to. Currently, there is no easy way to query the data to generate the results, since the data reside in a directory of CSV files of user activity on the app.

They'd like a data engineer to create an Apache Cassandra database on which queries on song play data can be run to answer these questions, and they wish to bring you onto the project. Your role is to create a database for this analysis. You'll be able to test your database by running the queries given to you by the analytics team from Sparkify.

Project Overview

In this project, you'll apply what you've learned about data modeling with Apache Cassandra and complete an ETL pipeline using Python. To complete the project, you will need to model your data by creating tables in Apache Cassandra to run queries. You are provided with part of the ETL pipeline that transfers data from a set of CSV files within a directory into a streamlined CSV file, which is then used to model and insert data into Apache Cassandra tables.

Datasets

For this project, you'll be working with one dataset: event_data. The directory contains CSV files partitioned by date. Here are examples of filepaths to two files in the dataset:

event_data/2018-11-08-events.csv
event_data/2018-11-09-events.csv

Project Template

The project template includes one Jupyter Notebook file, in which:

  • You will process the event_datafile_new.csv dataset to create a denormalized dataset
  • You will model the data tables, keeping in mind the queries you need to run
  • You have been provided queries that you will need to model your data tables for
  • You will load the data into the tables you create in Apache Cassandra and run your queries

Project Steps

Below are steps you can follow to complete each component of this project.

Modeling your NoSQL database or Apache Cassandra database

  • Design tables to answer the queries outlined in the project template
  • Write Apache Cassandra CREATE KEYSPACE and SET KEYSPACE statements
  • Develop your CREATE statement for each of the tables to address each question
  • Load the data with an INSERT statement for each of the tables
  • Include IF NOT EXISTS clauses in your CREATE statements so that tables are created only if they do not already exist. We recommend you also include a DROP TABLE statement for each table, so you can drop and re-create tables whenever you want to reset your database and test your ETL pipeline
  • Test by running the proper SELECT statements with the correct WHERE clause (a rough sketch of these statements follows this list)
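As a rough illustration of these steps, below is a minimal sketch using Python and the cassandra-driver package, assuming a Cassandra instance running locally. The keyspace name (sparkify), the table song_info_by_session, its columns, and the inserted values are hypothetical placeholders chosen for illustration; the notebook defines the actual keyspace, tables, and queries for this project.

```python
# Minimal sketch (not the notebook's exact statements), assuming a local
# Cassandra instance and a hypothetical table keyed for a per-session lookup.
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])
session = cluster.connect()

# CREATE KEYSPACE and SET KEYSPACE
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS sparkify
    WITH REPLICATION = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.set_keyspace('sparkify')

# DROP first so drop-and-create can be re-run to reset the database
session.execute("DROP TABLE IF EXISTS song_info_by_session")

# CREATE TABLE IF NOT EXISTS, with the PRIMARY KEY chosen to match
# the WHERE clause of the query the table is meant to answer
session.execute("""
    CREATE TABLE IF NOT EXISTS song_info_by_session (
        session_id int,
        item_in_session int,
        artist text,
        song text,
        length float,
        PRIMARY KEY (session_id, item_in_session)
    )
""")

# INSERT a made-up row, then verify with a SELECT whose WHERE clause
# filters on the partition key and clustering column
session.execute(
    "INSERT INTO song_info_by_session "
    "(session_id, item_in_session, artist, song, length) "
    "VALUES (%s, %s, %s, %s, %s)",
    (42, 3, 'Example Artist', 'Example Song', 215.5)
)
rows = session.execute(
    "SELECT artist, song, length FROM song_info_by_session "
    "WHERE session_id = %s AND item_in_session = %s",
    (42, 3)
)
for row in rows:
    print(row.artist, row.song, row.length)

cluster.shutdown()
```

The design choice mirrored here is the one-table-per-query idea from the key steps above: the columns used in a query's WHERE clause become that table's primary key.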

Build ETL Pipeline

  • Implement the logic in section Part I of the notebook template to iterate through each event file in event_data and process it to create a new CSV file in Python (see the sketch after this list)
  • Make the necessary edits to Part II of the notebook template to include the Apache Cassandra CREATE and INSERT statements that load the processed records into the relevant tables in your data model
  • Test by running SELECT statements after running the queries on your database
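As a sketch of the Part I idea only (the notebook's actual code also selects a specific subset of columns, which is not reproduced here), the per-day files can be combined into a single event_datafile_new.csv roughly as follows. The assumption that the first column holds the artist value, used to drop rows without song-play data, is for illustration.

```python
# Rough sketch of combining the per-day event files into one CSV.
# Assumes the daily files live directly under event_data/ and that the
# first column is the artist field (used to drop rows with no song play).
import csv
import glob
import os

# Gather all per-day CSV files under event_data/
file_paths = glob.glob(os.path.join('event_data', '*.csv'))

header = None
rows = []
for path in file_paths:
    with open(path, encoding='utf8', newline='') as f:
        reader = csv.reader(f)
        file_header = next(reader)      # each daily file starts with a header
        if header is None:
            header = file_header        # reuse the first header for the output
        rows.extend(reader)

# Write the streamlined file that Part II loads into Cassandra
with open('event_datafile_new.csv', 'w', encoding='utf8', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(header)
    for row in rows:
        if row and row[0]:              # keep only rows with an artist value
            writer.writerow(row)
```

Part II then loads each row of event_datafile_new.csv into the Cassandra tables with INSERT statements like the one in the earlier sketch, and verifies the results with SELECT statements.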

Project Files

  1. event_data: the folder in the repository that contains all the raw event data.
  2. event_datafile_new.csv: the new CSV data file produced by processing event_data.
  3. Project_1B_ Project_Template.ipynb: the main notebook of the project, in which the ETL pipeline is built, tables are created, data is inserted, and the queries are answered.
  4. README.md: the current file, which provides a discussion of the project.

Discussion

Project_1B_ Project_Template.ipynb

The Python notebook includes comments and Markdown cells explaining the key steps. Please see the notebook for details.
