Data Warehousing with AWS Redshift and S3

Introduction

This project demonstrates the power of data warehouses using Amazon Redshift (based on PostgreSQL), with the source data stored in AWS S3.

The scripts in this project let a user run a full ETL process: they create all the necessary tables (staging, fact, and dimensional) and then load two datasets: the Million Song Dataset and the Log Dataset -- log files generated by an event simulator based on the Million Song Dataset.

Database schema

This database schema builds the following tables from two distinct datasets, which ultimately feed a fact table:

  1. song dataset:
    • staging_songs table: staging/intermediate table used to load the raw song dataset during ETL;
    • song table: dimensional table containing data about the available songs, such as year, duration, title, song ID (PK), and artist ID (FK);
    • artists table: dimensional table containing data about the artists of the songs, such as artist ID (PK), artist name, and the artist's latitude and longitude;
  2. log dataset (user interaction data):
    • staging_events table: staging/intermediate table used to load the raw log dataset during ETL;
    • time table: dimensional table containing the times at which users were listening to music;
    • users table: dimensional table containing user data such as first and last name, gender, and subscription level.

Lastly, the fact table that gathers data from these two datasets is the songplay table. It contains FKs such as artist ID, song ID, and user ID, as well as the timestamp of when the song was played and the location. A sketch of plausible table definitions follows.
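As an illustration of the star schema described above, here is a minimal sketch of how the songplay fact table and one dimension might be declared as Redshift DDL strings in sql_queries.py. The exact column names and types are assumptions inferred from the descriptions above, not the repository's actual definitions.

```python
# Hedged sketch: plausible Redshift DDL for two of the tables described
# above. Column names and types are inferred from the README, not the repo.

users_table_create = """
CREATE TABLE IF NOT EXISTS users (
    user_id    INTEGER PRIMARY KEY,
    first_name VARCHAR,
    last_name  VARCHAR,
    gender     CHAR(1),
    level      VARCHAR            -- subscription level
);
"""

songplay_table_create = """
CREATE TABLE IF NOT EXISTS songplay (
    songplay_id INTEGER IDENTITY(0,1) PRIMARY KEY,
    start_time  TIMESTAMP NOT NULL,   -- when the song was played
    user_id     INTEGER NOT NULL,     -- FK -> users
    song_id     VARCHAR,              -- FK -> song
    artist_id   VARCHAR,              -- FK -> artists
    location    VARCHAR
);
"""
```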

Requirements

  1. Install Python 3.x;
  2. Create an AWS Redshift cluster;
  3. Create an IAM role for the cluster;
  4. Run create_tables.py to create all the staging, fact, and dimensional tables;
  5. Run sql_queries.py to load the datasets into the staging tables and then query the staging tables to populate the fact and dimensional tables (see the sketch after this list).
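As a rough illustration of steps 4 and 5, the sketch below shows how the two ETL phases might be issued against Redshift with psycopg2: a COPY to bulk-load raw data from S3 into a staging table, then an INSERT that populates the fact table by querying the staging tables. The endpoint, bucket, IAM role ARN, and staging column names are placeholders and assumptions, not values taken from this repository.

```python
# Hedged sketch of the two-phase ETL described above. Connection details
# and staging column names are placeholders, not this repo's values.
import psycopg2

conn = psycopg2.connect(
    "host=<redshift-endpoint> dbname=<db> user=<user> password=<password> port=5439"
)
cur = conn.cursor()

# Phase 1: bulk-load the log dataset from S3 into the staging_events table.
cur.execute("""
    COPY staging_events FROM 's3://<bucket>/log_data'
    IAM_ROLE '<iam-role-arn>'
    FORMAT AS JSON 'auto';
""")

# Phase 2: populate the songplay fact table from the staging tables
# (assumed staging columns: ts, userId, song, artist, page, location).
cur.execute("""
    INSERT INTO songplay (start_time, user_id, song_id, artist_id, location)
    SELECT TIMESTAMP 'epoch' + e.ts / 1000 * INTERVAL '1 second',
           e.userId, s.song_id, s.artist_id, e.location
    FROM staging_events e
    JOIN staging_songs s
      ON e.song = s.title AND e.artist = s.artist_name
    WHERE e.page = 'NextSong';
""")

conn.commit()
conn.close()
```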
