This repository contains the source code and data sets to reproduce the parallel computing exercises described in the paper:

**Econometrics at Scale: Spark Up Big Data in Economics**

Benjamin Bluhm & Jannic Cutura

## Abstract

This paper provides an overview of how to use "big data" for economic research. We investigate the performance and ease of use of different Spark applications running on a distributed file system, which enable the handling and analysis of data sets that were previously unusable due to their size. More specifically, we explain how to use Spark to (i) explore big data sets that exceed the memory of retail-grade computers and (ii) run typical econometric tasks, including microeconometric, panel data, and time series regression models, which are prohibitively expensive to evaluate on stand-alone machines. By bridging the gap between the abstract concept of Spark and ready-to-use examples that can easily be adapted to the researcher's needs, we provide economists, and social scientists more generally, with the theory and practice needed to handle the ever-growing data sets available. The ease of reproducing the examples in this paper makes this guide a useful reference for researchers with a limited background in data handling and distributed computing.
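As a flavor of the kind of workflow the paper walks through, here is a minimal PySpark sketch of point (i): reading a large CSV from a distributed file system and inspecting it without loading it into a single machine's memory. The file path and the `loan_amount` column are illustrative placeholders, not the paper's actual variables; see the replication code for the exact steps.

```python
# Minimal exploration sketch (placeholder path and column names).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("econometrics-at-scale").getOrCreate()

# Spark reads the file lazily and in partitions, so data sets larger than
# local RAM can still be explored.
df = spark.read.csv("hdfs:///data/hmda/*.csv", header=True, inferSchema=True)

df.printSchema()                    # column names and inferred types
print(df.count())                   # row count, computed in parallel
df.describe("loan_amount").show()   # summary statistics for one column
```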

## Replication files

This repository contains all the code needed to replicate the results of our paper.

Link to the paper:

Data download:

| Data set | URL |
| --- | --- |
| HDMA* | https://www.dropbox.com/sh/y5vrc3fnhwvw14o/AAAkgKja5YVpTT2vSUM0dW6-a?dl=0 |
| HDMA subset | https://www.dropbox.com/s/z690uga5a0qrezv/HMDA_subsample.csv?dl=0 |
| Simulated panel | https://www.dropbox.com/sh/vk2ra1ufupi0yky/AABHUX6FZxIOWdk9LMnNTy5ea?dl=0 |
| Time series | See Chapter 4.4 for simulation code |
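Once downloaded, the subset file can be loaded into a local Spark session and used for a regression of the kind described in point (ii) of the abstract. The sketch below uses Spark ML's `LinearRegression`; the file name matches the Dropbox link above, but the regressor and outcome columns are illustrative assumptions, not the paper's specification.

```python
# Regression sketch on the downloaded HMDA subset (assumed column names).
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("hmda-regression").getOrCreate()

df = spark.read.csv("HMDA_subsample.csv", header=True, inferSchema=True).dropna()

# Spark ML expects all regressors collected into a single vector column.
assembler = VectorAssembler(
    inputCols=["applicant_income", "loan_amount"],   # assumed regressors
    outputCol="features",
)
data = assembler.transform(df).select("features", "interest_rate")  # assumed outcome

# The "normal" solver exposes classical inference output in the summary.
lr = LinearRegression(featuresCol="features", labelCol="interest_rate", solver="normal")
model = lr.fit(data)

print(model.coefficients, model.intercept)
print(model.summary.pValues)
```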

## Authors

Benjamin Bluhm, Jannic Cutura
