
MLOps Hackathon

Learn about MLOps by deploying your own ML pipelines in Google Cloud. You'll solve a number of exercises and challenges to run pipelines in Vertex AI, continuously monitor your models, and promote your artifacts to a production environment.

Getting started

As a hackathon attendee, simply follow the notebook series below in your Vertex AI Workbench instance:

  1. Health check - start here
  2. Run pipelines
  3. Promote model
  4. Challenge: Model monitoring
  5. Challenge: Real-time predictions

❗Note: This workshop has been designed to be run in Vertex AI Workbench. Support for running the workshop locally is provided, but we recommend Vertex AI Workbench for the best experience.

For instructors


Introduction

The notebooks are self-contained, but instructors of this hackathon are asked to prepare the following for attendees:

  1. Create three Google Cloud projects (dev, test, prod).
  2. Run make deploy to deploy resources in each project. It's advised to follow the infrastructure setup notebook for each environment; a command-line sketch of steps 1, 2, 5 and 6 follows this list.
  3. Create an end-to-end (E2E) test trigger in the test project.
  4. Create a release trigger in the prod project (see the trigger sketch after this list).
  5. Grant each attendee's own Google account the following IAM roles:
    • Vertex AI User (roles/aiplatform.user)
    • Storage Object Viewer (roles/storage.objectViewer)
    • Service Usage Consumer (roles/serviceusage.serviceUsageConsumer)
  6. Create one Vertex AI Workbench instance per user.
  7. Confirm that users can access the GCP resources.
  8. ❗After the workshop, remember to remove all users from the projects and to clean up branches and releases in this repository.
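
As a rough sketch of steps 1, 2, 5 and 6 from the command line: the project ID, billing account, attendee email, instance name, zone, machine type and the make deploy argument below are all placeholder assumptions; follow the infrastructure setup notebook and the Makefile for the authoritative flow.

```shell
# Placeholders; adjust to your organisation and naming conventions.
DEV_PROJECT_ID="mlops-hackathon-dev"
BILLING_ACCOUNT="000000-000000-000000"
ATTENDEE_EMAIL="attendee@example.com"
ATTENDEE_NAME="attendee1"
ZONE="europe-west2-a"

# Step 1: create the dev project (repeat for test and prod) and link billing.
gcloud projects create "${DEV_PROJECT_ID}"
gcloud billing projects link "${DEV_PROJECT_ID}" --billing-account="${BILLING_ACCOUNT}"

# Step 2: deploy resources into the project. The env argument is an
# assumption; check the Makefile and the infrastructure setup notebook.
make deploy env=dev

# Step 5: grant an attendee the three required roles in the dev project.
for role in roles/aiplatform.user roles/storage.objectViewer roles/serviceusage.serviceUsageConsumer; do
  gcloud projects add-iam-policy-binding "${DEV_PROJECT_ID}" \
    --member="user:${ATTENDEE_EMAIL}" \
    --role="${role}"
done

# Step 6: create one Vertex AI Workbench instance per user.
gcloud workbench instances create "workbench-${ATTENDEE_NAME}" \
  --project="${DEV_PROJECT_ID}" \
  --location="${ZONE}" \
  --machine-type=e2-standard-4
```

Repeat the IAM bindings in the test and prod projects if attendees also need access there.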
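
For steps 3 and 4, one common setup (in the spirit of the Vertex AI Turbo Templates this hackathon builds on) is a pair of Cloud Build triggers connected to a GitHub fork of this repository. The owner, trigger names, patterns and build-config paths below are assumptions; check this repository's Cloud Build configuration for the actual files.

```shell
# Assumed: Cloud Build GitHub triggers. The GitHub repository must already
# be connected to Cloud Build in each project.

# Step 3: E2E test trigger in the test project, fired on pushes to main.
gcloud builds triggers create github \
  --project="${TEST_PROJECT_ID}" \
  --name="e2e-test" \
  --repo-owner="${GITHUB_OWNER}" \
  --repo-name="mlops-hackathon" \
  --branch-pattern="^main$" \
  --build-config="cloudbuild/e2e-test.yaml"

# Step 4: release trigger in the prod project, fired on version tags.
gcloud builds triggers create github \
  --project="${PROD_PROJECT_ID}" \
  --name="release" \
  --repo-owner="${GITHUB_OWNER}" \
  --repo-name="mlops-hackathon" \
  --tag-pattern="^v.*$" \
  --build-config="cloudbuild/release.yaml"
```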

About

MLOps Hackathon with Vertex AI and Turbo Templates
