From 9ed5cdfa95b992cd4385bef1608d22b8cc5cd0e5 Mon Sep 17 00:00:00 2001 From: Maneer Ali Date: Thu, 26 Oct 2023 16:58:02 +0530 Subject: [PATCH 1/7] Removed the cells of the optional part --- .../lab-02-tfx-pipeline/labs/lab-02.ipynb | 18 ------------------ .../lab-02-tfx-pipeline/solutions/lab-02.ipynb | 18 ------------------ 2 files changed, 36 deletions(-) diff --git a/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb b/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb index 4493d90b..904dde1c 100644 --- a/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb +++ b/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb @@ -469,24 +469,6 @@ "Note that you might not want to tune the hyperparameters every time you retrain your model due to the computational cost. Once you have used `Tuner` to determine a good set of hyperparameters, you can remove `Tuner` from your pipeline and use model hyperparameters defined in your model code, or use an `ImporterNode` to import the `Tuner` `\"best_hyperparameters\"` artifact from a previous `Tuner` run into your model `Trainer`.\n" ] }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "ENABLE_TUNING=True" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "%env ENABLE_TUNING={ENABLE_TUNING}" - ] - }, { "cell_type": "markdown", "metadata": {}, diff --git a/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/solutions/lab-02.ipynb b/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/solutions/lab-02.ipynb index bba1330c..6c062d96 100644 --- a/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/solutions/lab-02.ipynb +++ b/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/solutions/lab-02.ipynb @@ -476,24 +476,6 @@ "Note that you might not want to tune the hyperparameters every time you retrain your model due to the computational cost. 
Once you have used `Tuner` to determine a good set of hyperparameters, you can remove `Tuner` from your pipeline and use model hyperparameters defined in your model code, or use an `ImporterNode` to import the `Tuner` `\"best_hyperparameters\"` artifact from a previous `Tuner` run into your model `Trainer`.\n" ] }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "ENABLE_TUNING=True" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "%env ENABLE_TUNING={ENABLE_TUNING}" - ] - }, { "cell_type": "markdown", "metadata": {}, From f32f03cad238e5dfce81cec3b1ecce253462e5ec Mon Sep 17 00:00:00 2001 From: Ajay C Hemnani Date: Thu, 26 Oct 2023 18:15:20 -0700 Subject: [PATCH 2/7] Update lab-02.ipynb --- .../lab-02-tfx-pipeline/labs/lab-02.ipynb | 29 ++----------------- 1 file changed, 2 insertions(+), 27 deletions(-) diff --git a/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb b/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb index 904dde1c..9dfb61be 100644 --- a/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb +++ b/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb @@ -464,7 +464,7 @@ "\n", "The previous pipeline version read from hyperparameter default values in the search space defined in `_get_hyperparameters()` in `model.py` and used these values to build a TensorFlow WideDeep Classifier model.\n", "\n", - "Let's now deploy a new pipeline version with the `Tuner` component added to the pipeline that calls out to the AI Platform Vizier service for distributed and parallelized hyperparameter tuning. The `Tuner` component `\"best_hyperparameters\"` artifact will be passed directly to your `Trainer` component to deploy the top performing model. Review `pipeline.py` to see how this environment variable changes the pipeline topology. Also, review the tuning function in `model.py` for configuring `CloudTuner`.\n", + "You can also deploy a new pipeline version with the `Tuner` component added to the pipeline that calls out to the AI Platform Vizier service for distributed and parallelized hyperparameter tuning. The `Tuner` component `\"best_hyperparameters\"` artifact will be passed directly to your `Trainer` component to deploy the top performing model. Review `pipeline.py` to see how enabling tuning changes the pipeline topology. Also, review the tuning function in `model.py` for configuring `CloudTuner`.\n", "\n", "Note that you might not want to tune the hyperparameters every time you retrain your model due to the computational cost. 
Once you have used `Tuner` to determine a good set of hyperparameters, you can remove `Tuner` from your pipeline and use model hyperparameters defined in your model code, or use an `ImporterNode` to import the `Tuner` `\"best_hyperparameters\"` artifact from a previous `Tuner` run into your model `Trainer`.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "### Compile your pipeline code" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "!tfx pipeline compile --engine kubeflow --pipeline_path runner.py" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Deploy your pipeline container to AI Platform Pipelines with the TFX CLI" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "#TODO: your code to update your pipeline \n" + "### Compile your pipeline code and Deploy your pipeline" ] }, From 879966ffdb1c8bb6693446c99faebc103e6cd561 Mon Sep 17 00:00:00 2001 From: Ajay C Hemnani Date: Thu, 26 Oct 2023 18:23:17 -0700 Subject: [PATCH 3/7] Update lab-02.ipynb --- .../lab-02-tfx-pipeline/labs/lab-02.ipynb | 20 ------------------- 1 file changed, 20 deletions(-) diff --git a/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb b/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb index 9dfb61be..ed36f107 100644 --- a/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb +++ b/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb @@ -469,26 +469,6 @@ "Note that you might not want to tune the hyperparameters every time you retrain your model due to the computational cost. Once you have used `Tuner` to determine a good set of hyperparameters, you can remove `Tuner` from your pipeline and use model hyperparameters defined in your model code, or use an `ImporterNode` to import the `Tuner` `\"best_hyperparameters\"` artifact from a previous `Tuner` run into your model `Trainer`.\n" ] }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Compile your pipeline code and Deploy your pipeline" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Trigger a pipeline run from the Kubeflow Pipelines UI\n", - "\n", - "On the [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page, click `OPEN PIPELINES DASHBOARD`. A new browser tab will open. Select the `Pipelines` tab to the left where you see the `PIPELINE_NAME` pipeline you deployed previously. You should see 2 pipeline versions. \n", - "\n", - "Click on the most recent pipeline version with tuning enabled which will open up a window with a graphical display of your TFX pipeline directed graph. \n", - "\n", - "Next, click the `Create a run` button. Verify the `Pipeline name` and `Pipeline version` are pre-populated and optionally provide a `Run name` and `Experiment` to logically group the run metadata under before hitting `Start` to trigger the pipeline run." 
- ] - }, { "cell_type": "markdown", "metadata": {}, From 4211f1ce6b51b7731294ba213d1afb6ef87d6a94 Mon Sep 17 00:00:00 2001 From: Ajay C Hemnani Date: Thu, 26 Oct 2023 18:24:27 -0700 Subject: [PATCH 4/7] Update lab-02.ipynb --- workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb b/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb index ed36f107..e8bd1eff 100644 --- a/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb +++ b/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb @@ -473,7 +473,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Important\n", + "### Important\n", "\n", "A full pipeline run with tuning enabled will take about 50 minutes and can be executed in parallel while the previous pipeline run without tuning continues running. \n", "\n", From 80bfb9df0470f660c4f70d010f158a325bb844dc Mon Sep 17 00:00:00 2001 From: Ajay C Hemnani Date: Thu, 26 Oct 2023 18:27:28 -0700 Subject: [PATCH 5/7] Update lab-02.ipynb --- workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb b/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb index e8bd1eff..2aa29c7c 100644 --- a/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb +++ b/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb @@ -446,7 +446,7 @@ "source": [ "### Important \n", "\n", - "A full pipeline run without tuning enabled will take about 40 minutes to complete. You can view the run's progress using the TFX CLI commands above or in the" + "A full pipeline run without tuning enabled will take about 40 minutes to complete. You can view the run's progress using the TFX CLI commands above or in the Kubeflow Pipeline Dashboard" ] }, { From 7292768624404c5402f0adf3b23655b4ba6097c4 Mon Sep 17 00:00:00 2001 From: Ajay C Hemnani Date: Thu, 26 Oct 2023 18:28:12 -0700 Subject: [PATCH 6/7] Update lab-02.ipynb --- workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb b/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb index 2aa29c7c..014cabde 100644 --- a/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb +++ b/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/labs/lab-02.ipynb @@ -446,7 +446,7 @@ "source": [ "### Important \n", "\n", - "A full pipeline run without tuning enabled will take about 40 minutes to complete. You can view the run's progress using the TFX CLI commands above or in the Kubeflow Pipeline Dashboard" + "A full pipeline run without tuning enabled will take about 40 minutes to complete. You can view the run's progress using the TFX CLI commands above or in the Kubeflow Pipeline Dashboard." 
] }, { From 06a2a3a69f18902b4689e9ad0e06390b5d033f7f Mon Sep 17 00:00:00 2001 From: Ajay C Hemnani Date: Thu, 26 Oct 2023 18:30:00 -0700 Subject: [PATCH 7/7] Update lab-02.ipynb --- .../solutions/lab-02.ipynb | 52 ++----------------- 1 file changed, 3 insertions(+), 49 deletions(-) diff --git a/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/solutions/lab-02.ipynb b/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/solutions/lab-02.ipynb index 6c062d96..2bc6cdb4 100644 --- a/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/solutions/lab-02.ipynb +++ b/workshops/tfx-caip-tf23/lab-02-tfx-pipeline/solutions/lab-02.ipynb @@ -453,7 +453,7 @@ "source": [ "### Important \n", "\n", - "A full pipeline run without tuning enabled will take about 40 minutes to complete. You can view the run's progress using the TFX CLI commands above or in the" + "A full pipeline run without tuning enabled will take about 40 minutes to complete. You can view the run's progress using the TFX CLI commands above or in the Kubeflow Pipeline Dashboard." ] }, @@ -471,7 +471,7 @@ "\n", "The previous pipeline version read from hyperparameter default values in the search space defined in `_get_hyperparameters()` in `model.py` and used these values to build a TensorFlow WideDeep Classifier model.\n", "\n", - "Let's now deploy a new pipeline version with the `Tuner` component added to the pipeline that calls out to the AI Platform Vizier service for distributed and parallelized hyperparameter tuning. The `Tuner` component `\"best_hyperparameters\"` artifact will be passed directly to your `Trainer` component to deploy the top performing model. Review `pipeline.py` to see how this environment variable changes the pipeline topology. Also, review the tuning function in `model.py` for configuring `CloudTuner`.\n", + "You can choose to deploy a new pipeline version with the `Tuner` component added to the pipeline that calls out to the AI Platform Vizier service for distributed and parallelized hyperparameter tuning. The `Tuner` component `\"best_hyperparameters\"` artifact will be passed directly to your `Trainer` component to deploy the top performing model. Review `pipeline.py` to see how enabling tuning changes the pipeline topology. Also, review the tuning function in `model.py` for configuring `CloudTuner`.\n", "\n", "Note that you might not want to tune the hyperparameters every time you retrain your model due to the computational cost. 
Once you have used `Tuner` to determine a good set of hyperparameters, you can remove `Tuner` from your pipeline and use model hyperparameters defined in your model code, or use an `ImporterNode` to import the `Tuner` `\"best_hyperparameters\"` artifact from a previous `Tuner` run into your model `Trainer`.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "### Compile your pipeline code" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "!tfx pipeline compile --engine kubeflow --pipeline_path runner.py" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Deploy your pipeline container to AI Platform Pipelines with the TFX CLI" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "#TODO: your code to update your pipeline \n", - "!tfx pipeline update --pipeline_path runner.py --endpoint {ENDPOINT}" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Trigger a pipeline run from the Kubeflow Pipelines UI\n", - "\n", - "On the [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page, click `OPEN PIPELINES DASHBOARD`. A new browser tab will open. Select the `Pipelines` tab to the left where you see the `PIPELINE_NAME` pipeline you deployed previously. You should see 2 pipeline versions. \n", - "\n", - "Click on the most recent pipeline version with tuning enabled which will open up a window with a graphical display of your TFX pipeline directed graph. \n", - "\n", - "Next, click the `Create a run` button. Verify the `Pipeline name` and `Pipeline version` are pre-populated and optionally provide a `Run name` and `Experiment` to logically group the run metadata under before hitting `Start` to trigger the pipeline run." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Important\n", + "### Important\n", "\n", "A full pipeline run with tuning enabled will take about 50 minutes and can be executed in parallel while the previous pipeline run without tuning continues running. \n",
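
For reference, the `ImporterNode` pattern that the notebook text above describes could look roughly like the following. This is a minimal sketch rather than code from this repository: it assumes the TFX 0.2x-era API used by this workshop, the GCS URI is a hypothetical placeholder for the `best_hyperparameters` artifact written by an earlier `Tuner` run, and `transform`, `schema_gen`, and the trainer module file stand in for the components already defined in `pipeline.py`.

```python
# Sketch only (not part of these patches): reuse a prior Tuner run's
# best hyperparameters instead of re-running Tuner on every pipeline run.
from tfx.components import ImporterNode, Trainer
from tfx.proto import trainer_pb2
from tfx.types import standard_artifacts


def trainer_with_imported_hparams(transform, schema_gen, module_file):
    """Builds a Trainer fed by an imported HyperParameters artifact."""
    # Import the HyperParameters artifact produced by an earlier Tuner run.
    # The source_uri below is a hypothetical placeholder; replace it with the
    # artifact path recorded for your own pipeline run.
    hparams_importer = ImporterNode(
        instance_name='import_hparams',
        source_uri='gs://your-bucket/pipeline-root/Tuner/best_hyperparameters/1',
        artifact_type=standard_artifacts.HyperParameters)

    trainer = Trainer(
        module_file=module_file,
        examples=transform.outputs['transformed_examples'],
        transform_graph=transform.outputs['transform_graph'],
        schema=schema_gen.outputs['schema'],
        # The imported artifact takes the place of
        # tuner.outputs['best_hyperparameters'] once Tuner is removed.
        hyperparameters=hparams_importer.outputs['result'],
        train_args=trainer_pb2.TrainArgs(num_steps=5000),
        eval_args=trainer_pb2.EvalArgs(num_steps=1000))
    return hparams_importer, trainer
```

Both returned components would then be appended to the pipeline's component list in `pipeline.py`, so the previously tuned hyperparameters reach `Trainer` without incurring another `Tuner` run.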