
SQL Solutions in PySpark DataFrame API and Spark SQL

Welcome to my repository featuring PySpark DataFrame API and Spark SQL solutions for LeetCode SQL questions! If you're passionate about solving SQL problems and working with PySpark, you're in the right place.

Introduction

This repository contains my solutions to various SQL problems from LeetCode, implemented using PySpark DataFrame API and Spark SQL. The goal is to provide alternative solutions and insights for SQL enthusiasts who want to explore the power of PySpark and Spark SQL.

Why PySpark and Spark SQL?

  • Scalability: Leverage the distributed computing capabilities of Apache Spark to handle large datasets.
  • Flexibility: PySpark allows you to seamlessly integrate SQL operations with Python, providing a powerful combination for data manipulation.
  • Performance: Spark SQL's Catalyst query optimizer and caching mechanisms contribute to improved query performance.

Databricks Community Edition

All solutions in this repository were developed and tested using Databricks Community Edition. If you're not already using Databricks, you can sign up for a free account to practice and run these PySpark and Spark SQL solutions in a collaborative environment.

How to Use

To run locally:

  1. Clone the repository:
     ```shell
     git clone https://github.com/your-username/sql-leetcode-pyspark.git
     ```
  2. Install dependencies:
     ```shell
     pip3 install pyspark
     ```
  3. Open the Jupyter notebooks and run the code to explore the solutions.

Feel free to explore more in the solution-notebooks directory!

Contributing

Found a bug? Have a suggestion? Contributions are welcome! Fork the repository, make your changes, and submit a pull request.