This repository contains exercises and solutions for a one-day crash course on PySpark and Spark ML. The repository contains only Jupyter notebooks, which assume a working PySpark kernel with Python 3.5 and Spark 2.1.
All notebooks have been created by Kaya Kupferschmidt @ dimajix. If you have any questions, feel free to contact me at k.kupferschmidt@dimajix.de
This notebook contains some simple snippets that give a basic understanding of how to interact with Spark DataFrames in Python.
These notebooks provide some examples of the differences between Pandas and Spark at the API level.
A small exercise using a larger dataset for a simple weather analysis.
An introduction to the various types of Pandas vectorized UDFs.
A non-trivial example of using Pandas UDFs.
These notebooks contain a simple linear regression exercise as an introduction to machine learning with Spark.
These notebooks build on the previous one, but add more structure by using Spark ML pipelines.
Building on the simple linear regression, these notebooks contain an exercise in simple statistical text classification.
As with many complex algorithms and ML pipelines, the text classification has many hyperparameters. These notebooks show how to perform hyperparameter tuning with PySpark.