# Stark: Distributed Inference with Stan & Spark

The idea of stark is to adapt a powerful single-node modeling tool (Stan) for running on a distributed platform (Spark). The naive way of doing this is to copy the data to each worker, where each worker produces posterior samples. This embarrassingly parallel mode can be useful, but running in a distributed setting enables other possibilities, such as running on much larger datasets.
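The naive mode can be sketched as follows. This is a minimal illustration, not stark's actual API: `sample_posterior` is a hypothetical stand-in for a Stan fit (here, a conjugate normal-mean model), and the "workers" are simulated locally rather than on a Spark cluster.

```python
import numpy as np

# Hypothetical stand-in for a Stan fit: draw posterior samples for a
# normal-mean model (known variance 1, flat prior) given the FULL data.
def sample_posterior(data, n_samples, rng):
    n = len(data)
    # Conjugate posterior for the mean is N(mean(data), 1/n).
    return rng.normal(np.mean(data), np.sqrt(1.0 / n), size=n_samples)

rng = np.random.default_rng(0)
data = rng.normal(5.0, 1.0, size=1000)

# Naive mode: every worker gets a copy of the full dataset and samples
# independently; the resulting chains are simply pooled.
workers = [np.random.default_rng(seed) for seed in range(4)]
pooled = np.concatenate([sample_posterior(data, 500, w) for w in workers])
print(pooled.shape)  # (2000,)
```

On a real cluster the per-worker call would be a Stan run dispatched via Spark, but the pooling step is the same: since each worker targets the identical posterior, the draws can be concatenated directly.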

One strategy for handling more data is to run the same model on multiple workers, each provided with a subset of the data. Each worker then generates samples from a subposterior. Combining subposteriors is not trivial and appears to be an active area of research. See the reading section for more resources.
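The data-sharding setup can be sketched like so. Again this is illustrative only: `sample_subposterior` is a hypothetical stand-in for a per-shard Stan run, using the same toy normal-mean model.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=1200)

# Split the data into K disjoint shards; each worker sees only its shard.
K = 4
shards = np.array_split(data, K)

# Hypothetical stand-in for a per-worker Stan run: subposterior draws
# for a normal-mean model (known variance 1, flat prior) on one shard.
def sample_subposterior(shard, n_samples, rng):
    n = len(shard)
    return rng.normal(np.mean(shard), np.sqrt(1.0 / n), size=n_samples)

subposteriors = [sample_subposterior(s, 500, np.random.default_rng(i))
                 for i, s in enumerate(shards)]

# Each entry holds draws from one subposterior; turning these into draws
# from the full-data posterior is the non-trivial combination step.
print(len(subposteriors), subposteriors[0].shape)
```

Note that a subposterior is narrower than any single-shard posterior would suggest once combined: each shard carries only 1/K of the data's information, which is exactly why the combination step matters.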

Other possibilities include running the same model with different datasets or hyperparameters (e.g. for a sensitivity analysis). If MAP estimates were computed instead of full posteriors, the result would be something akin to a distributed bootstrap.
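The distributed-bootstrap idea can be sketched in a few lines. Assumptions are labeled in the comments: `map_estimate` is a hypothetical stand-in for a per-worker optimization run (for a normal-mean model with a flat prior, the MAP is just the sample mean).

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(0.0, 1.0, size=500)

# Hypothetical stand-in for a per-worker optimization run: the MAP of a
# normal-mean model with a flat prior is simply the sample mean.
def map_estimate(sample):
    return np.mean(sample)

# Each "worker" fits one bootstrap resample of the data; the pooled MAP
# estimates approximate the sampling distribution of the estimator.
B = 200
estimates = np.array([
    map_estimate(rng.choice(data, size=len(data), replace=True))
    for _ in range(B)
])
print(estimates.std())  # roughly the standard error of the mean
```

Each resample-and-optimize job is independent, so this distributes as easily as the naive mode.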

This project is experimental-quality.

## Modes

Currently, only the naive mode and simple weighted averaging are implemented.
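One simple form of weighted averaging (in the spirit of consensus Monte Carlo) combines subposterior draws with inverse-variance weights. This is a sketch of the general technique, not necessarily the weighting stark itself uses; the subposterior arrays below are synthetic stand-ins for per-shard Stan output.

```python
import numpy as np

# Synthetic subposterior draws for a single scalar parameter, standing
# in for the output of per-shard Stan runs.
rng = np.random.default_rng(3)
subposteriors = [rng.normal(mu, sd, size=1000)
                 for mu, sd in [(1.9, 0.20), (2.1, 0.25), (2.0, 0.15)]]

# Inverse-variance weights, normalized to sum to 1.
weights = np.array([1.0 / np.var(s) for s in subposteriors])
weights /= weights.sum()

# Combine draw-by-draw: each combined draw is a weighted average of one
# draw from each subposterior.
combined = sum(w * s for w, s in zip(weights, subposteriors))
print(combined.shape)  # (1000,)
```

Tighter subposteriors get larger weights, so shards whose data are more informative about the parameter dominate the combined draws.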

## Example

Currently there is just a single example, built on the 8 schools dataset. Spark is not useful for such a small dataset; it is used purely for illustration.

From the `stark/example` directory, try:

```shell
spark-submit stark_ex.py
```

(I've only tried this with Spark 1.6.2)

## Reading

## Related projects