
Introduction to Deep Learning on HDInsight with the Intel Deep Learning framework: BigDL


Presenters

  • Denny Lee, Principal Program Manager, CosmosDB
  • Tom Drabas, Data Scientist, WDG

In close cooperation with Intel

  • Sergey Ermolin, Power/Performance Optimization
  • Ding Ding, Software Engineer
  • Jiao Wang, Software Engineer
  • Jason Dai, Senior Principal Engineer and CTO, Big Data Technologies
  • Yiheng Wang, Software Engineer
  • Xianyan Jia, Software Engineer

Special thanks to

  • Felix Cheung, Principal Software Engineer
  • Xiaoyong Zhu, Program Manager
  • Alejandro Guerrero Gonzalez, Senior Software Engineer

Setting up the environment

1. Clone the GitHub repository

The folders in this repo:

  1. data folder - contains a set of 4 files that can be downloaded from http://yann.lecun.com/exdb/mnist/:
    1. train-images-idx3-ubyte - set of training images in a binary format with a specific schema (we'll get to that; a reading sketch follows this list)
    2. train-labels-idx1-ubyte - corresponding set of training labels
    3. t10k-images-idx3-ubyte - set of testing (validation) images
    4. t10k-labels-idx1-ubyte - corresponding set of testing (validation) labels
  2. jars folder - contains two compiled BigDL jars:
    1. bigdl-0.2.0-SNAPSHOT-spark-2.0-jar-with-dependencies.jar - BigDL compiled for Spark 2.0
    2. bigdl-0.2.0-SNAPSHOT-spark-2.1-jar-with-dependencies.jar - BigDL compiled for Spark 2.1
  3. notebook folder - contains the notebook for the workshop
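
For reference, the "specific schema" is the MNIST IDX layout documented at http://yann.lecun.com/exdb/mnist/: a short big-endian header followed by raw bytes. The sketch below is only an illustration for inspecting the files locally with plain NumPy (no BigDL needed); the file paths are placeholders for wherever you cloned the repo.

```python
import struct
import numpy as np

def read_idx_images(path):
    """Read an MNIST idx3-ubyte image file into a (n, rows, cols) uint8 array.

    The idx3 format has a 16-byte big-endian header: magic number (2051),
    image count, row count, column count, followed by raw pixel bytes.
    """
    with open(path, "rb") as f:
        magic, n, rows, cols = struct.unpack(">IIII", f.read(16))
        assert magic == 2051, "not an idx3-ubyte image file"
        return np.frombuffer(f.read(), dtype=np.uint8).reshape(n, rows, cols)

def read_idx_labels(path):
    """Read an MNIST idx1-ubyte label file into a (n,) uint8 array.

    The idx1 format has an 8-byte big-endian header: magic number (2049)
    and label count, followed by one byte per label.
    """
    with open(path, "rb") as f:
        magic, n = struct.unpack(">II", f.read(8))
        assert magic == 2049, "not an idx1-ubyte label file"
        return np.frombuffer(f.read(), dtype=np.uint8)

# Example (placeholder paths, relative to the cloned repo):
# images = read_idx_images("data/train-images-idx3-ubyte")
# labels = read_idx_labels("data/train-labels-idx1-ubyte")
```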

2. Upload BigDL jar

Grab the jar appropriate for your version of Spark from the jars folder and upload it through the Azure portal as described below; a scripted alternative follows the steps.

  1. Go to the Azure Dashboard and click on your cluster, then scroll down to the Storage accounts section
  2. Click on the default storage account
  3. Go to Blobs
  4. Select the default container
  5. Upload the jar appropriate for your version of Spark to the root of the container
  6. Check that the jar was uploaded successfully
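
If you prefer scripting this step over clicking through the portal, a sketch using the azure-storage-blob Python package is shown below. This is not part of the original walkthrough; the connection string and container name are placeholders you must replace with your cluster's default storage values (the connection string is available under the storage account's Access keys section).

```python
# pip install azure-storage-blob
from azure.storage.blob import BlobServiceClient

# Placeholders -- replace with your cluster's default storage values.
CONN_STR = "<your-storage-connection-string>"
CONTAINER = "<your-default-container>"

service = BlobServiceClient.from_connection_string(CONN_STR)

# Upload the BigDL jar for your Spark version to the root of the default container.
JAR = "bigdl-0.2.0-SNAPSHOT-spark-2.1-jar-with-dependencies.jar"
with open(f"jars/{JAR}", "rb") as data:
    service.get_blob_client(container=CONTAINER, blob=JAR).upload_blob(
        data, overwrite=True
    )
```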

3. Upload the data

As with the BigDL jar, upload the files from the data folder to your default storage, this time into the /tmp folder.
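
The scripted alternative extends to the data files as well; this is again only a sketch with placeholder credentials. Blobs stored under a tmp/ prefix in the default container should show up as /tmp/<name> on the cluster's default (wasb) file system.

```python
from azure.storage.blob import BlobServiceClient

# Same placeholders as in the jar-upload sketch above.
CONN_STR = "<your-storage-connection-string>"
CONTAINER = "<your-default-container>"

service = BlobServiceClient.from_connection_string(CONN_STR)

DATA_FILES = [
    "train-images-idx3-ubyte",
    "train-labels-idx1-ubyte",
    "t10k-images-idx3-ubyte",
    "t10k-labels-idx1-ubyte",
]

# Upload each file under the "tmp/" prefix of the default container.
for name in DATA_FILES:
    with open(f"data/{name}", "rb") as data:
        service.get_blob_client(container=CONTAINER, blob=f"tmp/{name}").upload_blob(
            data, overwrite=True
        )
```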
