Commit

Merge pull request GoogleCloudPlatform#196 from maneerali-SSK/ocbl216
Removed the optional part
ajayhemnani authored Oct 27, 2023
2 parents acd0280 + 06a2a3a commit 786de21
Showing 2 changed files with 6 additions and 133 deletions.
69 changes: 3 additions & 66 deletions workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb
@@ -446,7 +446,7 @@
"source": [
"### Important \n",
"\n",
"A full pipeline run without tuning enabled will take about 40 minutes to complete. You can view the run's progress using the TFX CLI commands above or in the"
"A full pipeline run without tuning enabled will take about 40 minutes to complete. You can view the run's progress using the TFX CLI commands above or in the Kubeflow Pipeline Dashboard."
]
},
{
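The cell above refers to the TFX CLI commands for checking run progress. A minimal sketch of what those monitoring cells could look like, assuming the `PIPELINE_NAME` and `ENDPOINT` notebook variables defined earlier in the lab and a `RUN_ID` taken from the `tfx run list` output (all three names are assumptions, not part of this change):

```python
# Hedged sketch: PIPELINE_NAME, ENDPOINT, and RUN_ID are assumed to be set
# earlier in the notebook; exact flag names may vary across TFX CLI versions.

# List all runs of the deployed pipeline on the AI Platform Pipelines instance.
!tfx run list --pipeline_name {PIPELINE_NAME} --endpoint {ENDPOINT}

# Check the status of one specific run using its run id from the listing above.
!tfx run status --pipeline_name {PIPELINE_NAME} --run_id {RUN_ID} --endpoint {ENDPOINT}
```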
@@ -464,79 +464,16 @@
"\n",
"The previous pipeline version read from hyperparameter default values in the search space defined in `_get_hyperparameters()` in `model.py` and used these values to build a TensorFlow WideDeep Classifier model.\n",
"\n",
"Let's now deploy a new pipeline version with the `Tuner` component added to the pipeline that calls out to the AI Platform Vizier service for distributed and parallelized hyperparameter tuning. The `Tuner` component `\"best_hyperparameters\"` artifact will be passed directly to your `Trainer` component to deploy the top performing model. Review `pipeline.py` to see how this environment variable changes the pipeline topology. Also, review the tuning function in `model.py` for configuring `CloudTuner`.\n",
"You can also deploy a new pipeline version with the `Tuner` component added to the pipeline that calls out to the AI Platform Vizier service for distributed and parallelized hyperparameter tuning. The `Tuner` component `\"best_hyperparameters\"` artifact will be passed directly to your `Trainer` component to deploy the top performing model. Review `pipeline.py` to see how this environment variable changes the pipeline topology. Also, review the tuning function in `model.py` for configuring `CloudTuner`.\n",
"\n",
"Note that you might not want to tune the hyperparameters every time you retrain your model due to the computational cost. Once you have used `Tuner` determine a good set of hyperparameters, you can remove `Tuner` from your pipeline and use model hyperparameters defined in your model code or use a `ImporterNode` to import the `Tuner` `\"best_hyperparameters\"`artifact from a previous `Tuner` run to your model `Trainer`.\n"
]
},
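For orientation, the following is a minimal sketch of how a `Tuner` component is typically wired in front of the `Trainer` so that the `best_hyperparameters` artifact flows directly into training. It is not the lab's actual `pipeline.py`: the upstream `transform` and `schema_gen` components, the step counts, and the module path are assumptions.

```python
# Hedged sketch of Tuner -> Trainer wiring; not the lab's pipeline.py.
from tfx.components import Trainer, Tuner
from tfx.proto import trainer_pb2, tuner_pb2

# `transform` and `schema_gen` are assumed upstream components already defined
# earlier in the pipeline, as in the lab's pipeline.py.
tuner = Tuner(
    module_file='model.py',  # expects a tuner_fn; the lab configures CloudTuner here
    examples=transform.outputs['transformed_examples'],
    transform_graph=transform.outputs['transform_graph'],
    schema=schema_gen.outputs['schema'],
    train_args=trainer_pb2.TrainArgs(num_steps=1000),
    eval_args=trainer_pb2.EvalArgs(num_steps=500),
    tune_args=tuner_pb2.TuneArgs(num_parallel_trials=3))

trainer = Trainer(
    module_file='model.py',
    examples=transform.outputs['transformed_examples'],
    transform_graph=transform.outputs['transform_graph'],
    schema=schema_gen.outputs['schema'],
    # The best trial's hyperparameters are passed directly to the final training run.
    hyperparameters=tuner.outputs['best_hyperparameters'],
    train_args=trainer_pb2.TrainArgs(num_steps=5000),
    eval_args=trainer_pb2.EvalArgs(num_steps=1000))
```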
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ENABLE_TUNING=True"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%env ENABLE_TUNING={ENABLE_TUNING}"
]
},
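The two cells above set the flag as a Python variable and export it as an environment variable. As a hedged illustration of how such a flag can change the pipeline topology (the variable and helper names below are assumptions, not the lab's actual `pipeline.py`):

```python
# Hedged sketch of how an ENABLE_TUNING flag can switch the pipeline topology.
# `components`, `tuner`, and `trainer_hparams` are illustrative names.
import os

enable_tuning = os.environ.get('ENABLE_TUNING', 'False').lower() in ('true', '1')

if enable_tuning:
    # Tuner joins the DAG and its best_hyperparameters artifact feeds the Trainer.
    components.append(tuner)
    trainer_hparams = tuner.outputs['best_hyperparameters']
else:
    # Without tuning, the Trainer falls back to the defaults returned by
    # _get_hyperparameters() in model.py.
    trainer_hparams = None
```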
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Compile your pipeline code"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!tfx pipeline compile --engine kubeflow --pipeline_path runner.py"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Deploy your pipeline container to AI Platform Pipelines with the TFX CLI"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#TODO: your code to update your pipeline \n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Trigger a pipeline run from the Kubeflow Pipelines UI\n",
"\n",
"On the [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page, click `OPEN PIPELINES DASHBOARD`. A new browser tab will open. Select the `Pipelines` tab to the left where you see the `PIPELINE_NAME` pipeline you deployed previously. You should see 2 pipeline versions. \n",
"\n",
"Click on the most recent pipeline version with tuning enabled which will open up a window with a graphical display of your TFX pipeline directed graph. \n",
"\n",
"Next, click the `Create a run` button. Verify the `Pipeline name` and `Pipeline version` are pre-populated and optionally provide a `Run name` and `Experiment` to logically group the run metadata under before hitting `Start` to trigger the pipeline run."
]
},
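As an alternative to the UI steps above, a run could also be triggered programmatically with the Kubeflow Pipelines SDK. This is a hedged sketch, not part of the lab: `ENDPOINT` and `PIPELINE_NAME` are assumed to match variables defined earlier in the notebook, and the experiment and job names are illustrative.

```python
# Hedged sketch: trigger a run of the deployed pipeline with the KFP SDK.
# ENDPOINT and PIPELINE_NAME are assumed notebook variables.
import kfp

client = kfp.Client(host=ENDPOINT)

# Group runs under an experiment so their metadata stays together.
experiment = client.create_experiment('tfx-tuning-experiments')

# Look up the pipeline deployed with the TFX CLI and start a run of it.
# Note: without an explicit version_id this starts the pipeline's default version.
pipeline_id = client.get_pipeline_id(PIPELINE_NAME)
run = client.run_pipeline(
    experiment_id=experiment.id,
    job_name='tfx-pipeline-run-with-tuning',
    pipeline_id=pipeline_id)
print(run.id)
```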
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Important\n",
"### Important\n",
"\n",
"A full pipeline run with tuning enabled will take about 50 minutes and can be executed in parallel while the previous pipeline run without tuning continues running. \n",
"\n",
70 changes: 3 additions & 67 deletions workshops/tfx-caip-tf23/lab-02-tfx-pipeline/solutions/lab-02.ipynb
@@ -453,7 +453,7 @@
"source": [
"### Important \n",
"\n",
"A full pipeline run without tuning enabled will take about 40 minutes to complete. You can view the run's progress using the TFX CLI commands above or in the"
"A full pipeline run without tuning enabled will take about 40 minutes to complete. You can view the run's progress using the TFX CLI commands above or in the Kubeflow Pipeline Dashboard."
]
},
{
@@ -471,80 +471,16 @@
"\n",
"The previous pipeline version read from hyperparameter default values in the search space defined in `_get_hyperparameters()` in `model.py` and used these values to build a TensorFlow WideDeep Classifier model.\n",
"\n",
"Let's now deploy a new pipeline version with the `Tuner` component added to the pipeline that calls out to the AI Platform Vizier service for distributed and parallelized hyperparameter tuning. The `Tuner` component `\"best_hyperparameters\"` artifact will be passed directly to your `Trainer` component to deploy the top performing model. Review `pipeline.py` to see how this environment variable changes the pipeline topology. Also, review the tuning function in `model.py` for configuring `CloudTuner`.\n",
"You can choose to deploy a new pipeline version with the `Tuner` component added to the pipeline that calls out to the AI Platform Vizier service for distributed and parallelized hyperparameter tuning. The `Tuner` component `\"best_hyperparameters\"` artifact will be passed directly to your `Trainer` component to deploy the top performing model. Review `pipeline.py` to see how this environment variable changes the pipeline topology. Also, review the tuning function in `model.py` for configuring `CloudTuner`.\n",
"\n",
"Note that you might not want to tune the hyperparameters every time you retrain your model due to the computational cost. Once you have used `Tuner` determine a good set of hyperparameters, you can remove `Tuner` from your pipeline and use model hyperparameters defined in your model code or use a `ImporterNode` to import the `Tuner` `\"best_hyperparameters\"`artifact from a previous `Tuner` run to your model `Trainer`.\n"
]
},
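The note above suggests reusing the `best_hyperparameters` artifact from a previous `Tuner` run via an `ImporterNode` instead of re-tuning. A minimal sketch of that pattern, assuming a GCS URI from an earlier run and the `result` output key (both are assumptions; exact names can vary across TFX versions):

```python
# Hedged sketch: import best_hyperparameters from a previous Tuner run
# instead of re-running tuning. The URI and output key are assumptions.
from tfx.components import ImporterNode
from tfx.types import standard_artifacts

hparams_importer = ImporterNode(
    instance_name='import_hparams',
    # Points at a best_hyperparameters artifact written by an earlier Tuner run.
    source_uri='gs://your-bucket/tfx-artifacts/Tuner/best_hyperparameters/42',
    artifact_type=standard_artifacts.HyperParameters)

# The Trainer then consumes the imported artifact exactly like a live Tuner output:
# trainer = Trainer(..., hyperparameters=hparams_importer.outputs['result'], ...)
```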
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ENABLE_TUNING=True"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%env ENABLE_TUNING={ENABLE_TUNING}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Compile your pipeline code"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!tfx pipeline compile --engine kubeflow --pipeline_path runner.py"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Deploy your pipeline container to AI Platform Pipelines with the TFX CLI"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#TODO: your code to update your pipeline \n",
"!tfx pipeline update --pipeline_path runner.py --endpoint {ENDPOINT}"
]
},
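After the update, a run could also be created directly from the CLI rather than through the UI step described next. A minimal sketch, assuming the `PIPELINE_NAME` and `ENDPOINT` variables defined earlier in the notebook:

```python
# Hedged sketch: PIPELINE_NAME and ENDPOINT are assumed notebook variables.
# Creates a run of the freshly updated pipeline on the AI Platform Pipelines instance.
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint {ENDPOINT}
```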
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Trigger a pipeline run from the Kubeflow Pipelines UI\n",
"\n",
"On the [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page, click `OPEN PIPELINES DASHBOARD`. A new browser tab will open. Select the `Pipelines` tab to the left where you see the `PIPELINE_NAME` pipeline you deployed previously. You should see 2 pipeline versions. \n",
"\n",
"Click on the most recent pipeline version with tuning enabled which will open up a window with a graphical display of your TFX pipeline directed graph. \n",
"\n",
"Next, click the `Create a run` button. Verify the `Pipeline name` and `Pipeline version` are pre-populated and optionally provide a `Run name` and `Experiment` to logically group the run metadata under before hitting `Start` to trigger the pipeline run."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Important\n",
"### Important\n",
"\n",
"A full pipeline run with tuning enabled will take about 50 minutes and can be executed in parallel while the previous pipeline run without tuning continues running. \n",
"\n",
