- Spark 2.x
Create the IrisApp application on PredictionIO:

```
pio app new --access-key IRIS_TOKEN IrisApp
```
Import the iris data from iris.data:

```
python data/import_eventserver.py
```
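For reference, here is a minimal sketch of the kind of import such a script performs, using the PredictionIO Python SDK. The event, entity, and property names (and the data path) are assumptions chosen to match the query fields used later; check data/import_eventserver.py for the template's actual schema.

```python
import predictionio

# Event server client; IRIS_TOKEN is the access key created by `pio app new`.
client = predictionio.EventClient(
    access_key="IRIS_TOKEN",
    url="http://localhost:7070")

with open("data/iris.data") as f:
    for i, line in enumerate(f):
        line = line.strip()
        if not line:
            continue
        attrs = line.split(",")
        # One $set event per iris row: four numeric attributes plus the label.
        client.create_event(
            event="$set",
            entity_type="item",
            entity_id=str(i),
            properties={
                "attr0": float(attrs[0]),
                "attr1": float(attrs[1]),
                "attr2": float(attrs[2]),
                "attr3": float(attrs[3]),
                "target": attrs[4],
            })
client.close()
```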
Build this template:

```
pio build
```
Launch a Jupyter notebook and open eda.ipynb (or create a new notebook to analyze the data):

```
PYSPARK_PYTHON=$PYENV_ROOT/shims/python PYSPARK_DRIVER_PYTHON=$PYENV_ROOT/shims/jupyter PYSPARK_DRIVER_PYTHON_OPTS="notebook" pio-shell --with-pyspark
```
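As a version-independent alternative to the event-store helpers the notebook may use, you can explore the raw CSV directly with the Spark session that pio-shell provides. The column names below are assumptions chosen to match the query fields:

```python
# Load the raw iris CSV into a DataFrame for a quick look at the data.
df = (spark.read.csv("data/iris.data", inferSchema=True)
      .toDF("attr0", "attr1", "attr2", "attr3", "target"))

df.printSchema()                     # column names and inferred types
df.describe().show()                 # count/mean/stddev/min/max per attribute
df.groupBy("target").count().show()  # class balance across the three species
```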
Download the Python code from eda.ipynb and save it as train.py. To execute it on Spark, run `pio train` with the `--main-py-file` option:

```
pio train --main-py-file train.py
```
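What train.py contains depends on what you built in the notebook; the sketch below shows one plausible shape using plain pyspark.ml (a vector assembler, a label indexer, and a random forest). The real script must also persist the fitted model the way the template expects so that `pio deploy` can serve it.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, VectorAssembler
from pyspark.ml.classification import RandomForestClassifier

spark = SparkSession.builder.appName("IrisTrain").getOrCreate()

# Hypothetical direct CSV load; the template instead reads the imported
# events back from the event server.
df = (spark.read.csv("data/iris.data", inferSchema=True)
      .toDF("attr0", "attr1", "attr2", "attr3", "target"))

# Assemble the four numeric attributes into a feature vector,
# index the string label, and fit a random forest on top.
assembler = VectorAssembler(
    inputCols=["attr0", "attr1", "attr2", "attr3"], outputCol="features")
indexer = StringIndexer(inputCol="target", outputCol="label")
classifier = RandomForestClassifier(featuresCol="features", labelCol="label")

model = Pipeline(stages=[assembler, indexer, classifier]).fit(df)

spark.stop()
```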
Run the PredictionIO API server:

```
pio deploy
```
Check predictions from the deployed model:

```
curl -s -H "Content-Type: application/json" -d '{"attr0":5.1,"attr1":3.5,"attr2":1.4,"attr3":0.2}' http://localhost:8000/queries.json
```
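The same query can also be issued from Python with the SDK's engine client:

```python
import predictionio

# Query the deployed engine at its default port.
engine_client = predictionio.EngineClient(url="http://localhost:8000")
result = engine_client.send_query(
    {"attr0": 5.1, "attr1": 3.5, "attr2": 1.4, "attr3": 0.2})
print(result)  # a JSON-decoded dict; fields depend on the engine's Serving code
```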
PredictionIO Setup dockerizes a PredictionIO template project easily and provides an all-in-one environment for the train and deploy processes.
The `train` mode executes the create-app, import-data, build, and train processes:

```
PIO_MODE=train docker-compose up --abort-on-container-exit
```
The `deploy` mode starts the REST API in a Docker container:

```
PIO_MODE=deploy docker-compose up --abort-on-container-exit
```
Then check a prediction:

```
curl -s -H "Content-Type: application/json" -d '{"attr0":5.1,"attr1":3.5,"attr2":1.4,"attr3":0.2}' http://localhost:8000/queries.json
```