Sparkify has grown their user base and song database and wants to move their processes and data onto the cloud. Their data resides in S3, in a directory of JSON logs of user activity on the app, as well as a directory with JSON metadata on the songs in their app.
They'd like a data engineer to build an ETL pipeline that extracts their data from S3, stages it in Redshift, and transforms it into a set of dimensional tables so their analytics team can continue finding insights into what songs their users are listening to. You'll be able to test your database and ETL pipeline by running queries given to you by Sparkify's analytics team and comparing your results with their expected results.
In this project, you will move Sparkify's processes and data onto the cloud. Specifically, you will build an ETL pipeline that extracts data from S3 and stages it in Redshift, transforming the data into a set of dimensional tables that allow Sparkify's analytics team to explore user song preferences and find insights.
- Python
- AWS
- Redshift
You will work with two datasets that reside in S3. Here are the S3 links for each dataset:
- Song data:
s3://udacity-dend/song_data
- Log data:
s3://udacity-dend/log_data
- Log data JSON path:
s3://udacity-dend/log_json_path.json
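If you want to peek at the raw files before building the pipeline, a short boto3 snippet like the one below can list a few objects from the public bucket. This is only a convenience sketch; the unsigned (anonymous) access configuration and the us-west-2 region are assumptions that typically work for the public Udacity bucket.

```python
# Sketch: list a handful of keys from the public udacity-dend bucket.
# Anonymous (unsigned) access and the region are assumptions, not project requirements.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

s3 = boto3.client("s3", region_name="us-west-2",
                  config=Config(signature_version=UNSIGNED))

for prefix in ("song_data", "log_data"):
    response = s3.list_objects_v2(Bucket="udacity-dend", Prefix=prefix, MaxKeys=5)
    for obj in response.get("Contents", []):
        print(obj["Key"])
```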
The song dataset contains a subset of the Million Song Dataset. Each file is in JSON format and contains metadata about a song and the artist of that song. The files are partitioned by the first three letters of each song's track ID like so:
song_data/A/B/C/TRABCEI128F424C983.json
song_data/A/A/B/TRAABJL12903CDCF1A.json
This is what the content of each JSON file looks like:
{"num_songs": 1, "artist_id": "ARJIE2Y1187B994AB7", "artist_latitude": null, "artist_longitude": null, "artist_location": "", "artist_name": "Line Renaud", "song_id": "SOUPIRU12A6D4FA1E1", "title": "Der Kleine Dompfaff", "duration": 152.92036, "year": 0}
The log dataset contains simulated activity logs from a music streaming app, generated based on configuration settings. The log files are partitioned by year and month, like so:
log_data/2018/11/2018-11-12-events.json
log_data/2018/11/2018-11-13-events.json
Each log file contains line-delimited JSON records describing user activity events in the app.
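Redshift can load these JSON files from S3 directly into staging tables with the COPY command. The log dataset needs the log_json_path.json file listed above because its JSON keys do not map one-to-one onto the staging columns, while the song data can use the 'auto' option. The statements below are only a sketch: the staging table names and the IAM role placeholder are assumptions, not the project's actual queries.

```python
# Sketch of COPY statements for the two staging tables.
# Table names and the IAM role ARN are placeholders, not the project's definitions.
STAGING_EVENTS_COPY = """
    COPY staging_events
    FROM 's3://udacity-dend/log_data'
    IAM_ROLE '<your-iam-role-arn>'
    REGION 'us-west-2'
    FORMAT AS JSON 's3://udacity-dend/log_json_path.json';
"""

STAGING_SONGS_COPY = """
    COPY staging_songs
    FROM 's3://udacity-dend/song_data'
    IAM_ROLE '<your-iam-role-arn>'
    REGION 'us-west-2'
    FORMAT AS JSON 'auto';
"""
```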
You will use a star schema as the data model for this ETL pipeline, consisting of fact and dimension tables. An entity relationship diagram of the schema is included in the project repository.
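As a rough illustration of the star schema, the sketch below shows one plausible shape for the songplays fact table and the songs dimension table. Column names and types are assumptions drawn from the datasets above, not the project's actual DDL.

```python
# Plausible star-schema DDL for two of the tables (assumed columns/types).
SONGPLAYS_CREATE = """
    CREATE TABLE IF NOT EXISTS songplays (
        songplay_id INT IDENTITY(0, 1) PRIMARY KEY,
        start_time  TIMESTAMP NOT NULL,
        user_id     INT       NOT NULL,
        level       VARCHAR,
        song_id     VARCHAR,
        artist_id   VARCHAR,
        session_id  INT,
        location    VARCHAR,
        user_agent  VARCHAR
    );
"""

SONGS_CREATE = """
    CREATE TABLE IF NOT EXISTS songs (
        song_id   VARCHAR PRIMARY KEY,
        title     VARCHAR,
        artist_id VARCHAR,
        year      INT,
        duration  FLOAT
    );
"""
```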
Clone this repository
git clone https://github.com/najuzilu/CDW-AWSRedshift.git
- conda
- python 3.8
- psycopg2
- boto3
- json
- botocore
- configparser
Create a virtual environment through Anaconda using
conda env create --file environment.yml
- Use dwh_example.cfg to create and populate a dwh.cfg file with the AWS Access Key and Secret Key fields.
- Run create_tables.py to create a new Redshift cluster on AWS and create the tables (a rough sketch of the cluster-creation step is shown after these instructions):

python create_tables.py

- Run etl.py to load the data from S3 into staging tables in Redshift and insert data from the staging tables into the final tables. The pipeline will also execute test queries to make sure the tables have been populated:

python etl.py
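For reference, the sketch below shows one way a script like create_tables.py can read dwh.cfg and spin up a Redshift cluster with boto3. The configuration section and key names, cluster sizing, and region are assumptions; check dwh_example.cfg for the actual field names used by this project.

```python
# Sketch: create a Redshift cluster from settings in dwh.cfg.
# Section/key names, cluster sizing, and region are assumptions, not the
# project's actual configuration. The real script would also attach an IAM
# role with S3 read access so the COPY commands can reach the buckets.
import configparser
import boto3

config = configparser.ConfigParser()
config.read("dwh.cfg")

redshift = boto3.client(
    "redshift",
    region_name="us-west-2",
    aws_access_key_id=config.get("AWS", "KEY"),
    aws_secret_access_key=config.get("AWS", "SECRET"),
)

redshift.create_cluster(
    ClusterType="multi-node",
    NodeType="dc2.large",
    NumberOfNodes=4,
    ClusterIdentifier="sparkify-cluster",
    DBName="sparkifydb",
    MasterUsername=config.get("CLUSTER", "DB_USER"),
    MasterUserPassword=config.get("CLUSTER", "DB_PASSWORD"),
)
```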
Yuna Luzi - @najuzilu
Distributed under the MIT License. See LICENSE for more information.