This project implements an ETL process that extracts a crowdfunding Excel dataset into dataframes, transforms the dataframes into CSV files, and loads them into database schemas and relationship tables.
Python, JupyterLab, Pandas, PostgreSQL/pgAdmin, and QuickDBD were the tools used to implement the ETL process on the Crowdfunding dataset.
- Data was extracted from the Crowdfunding Excel dataset and loaded into Pandas dataframes in JupyterLab
- The dataframes were transformed and written out as CSV files, stored locally in the repository's resources folder
- An entity-relationship diagram was drafted in QuickDBD, and the CSV files were imported into a PostgreSQL database (via pgAdmin 4), where the corresponding table schemas and relationship tables were created
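The extract-and-transform steps above can be sketched with Pandas. This is a minimal, hypothetical example (the column names and values below are illustrative, not taken from the actual dataset); the real notebook reads the Excel file with `pd.read_excel` and writes the CSVs into the resources folder.

```python
import pandas as pd

# Hypothetical sample standing in for the Crowdfunding Excel data;
# the real project loads it with pd.read_excel("crowdfunding.xlsx").
crowdfunding_df = pd.DataFrame({
    "cf_id": [147, 1621],
    "company_name": ["Baldwin, Riley and Jackson", "Odom Inc"],
    "category & sub-category": ["food/food trucks", "music/rock"],
})

# Transform: split the combined column into two relational columns.
crowdfunding_df[["category", "subcategory"]] = (
    crowdfunding_df["category & sub-category"].str.split("/", n=1, expand=True)
)
crowdfunding_df = crowdfunding_df.drop(columns=["category & sub-category"])

# Load: serialize to CSV. The real project writes the file into the
# repository's resources folder with crowdfunding_df.to_csv(path, index=False).
csv_text = crowdfunding_df.to_csv(index=False)
```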
Please see the CSV, SQL, and ERD diagram files in the folders above this README.
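The database-loading step can be sketched as follows. This example uses Python's built-in SQLite as a stand-in for PostgreSQL so it runs anywhere; the table and column names are hypothetical, and the real project runs equivalent `CREATE TABLE` DDL (from the SQL folder) in pgAdmin 4 before importing the CSV files.

```python
import sqlite3

# Hypothetical schema mirroring the kind of relationship tables the
# project creates in PostgreSQL; names here are illustrative only.
SCHEMA = """
CREATE TABLE category (
    category_id TEXT PRIMARY KEY,
    category    TEXT NOT NULL
);
CREATE TABLE campaign (
    cf_id        INTEGER PRIMARY KEY,
    company_name TEXT NOT NULL,
    category_id  TEXT REFERENCES category (category_id)
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)

# Rows that would normally arrive via the CSV import step.
conn.execute("INSERT INTO category VALUES ('cat1', 'food')")
conn.execute("INSERT INTO campaign VALUES (147, 'Baldwin, Riley and Jackson', 'cat1')")

# The foreign-key relationship lets the tables be joined back together.
rows = conn.execute(
    "SELECT company_name, category FROM campaign "
    "JOIN category USING (category_id)"
).fetchall()
```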
- Avis Randle
- Irina Tenis